refactor(ux): consolidate BMAD skills, update design system, and clean up Prisma generated client
32
.agent/skills/bmad-module-builder/SKILL.md
Normal file
@@ -0,0 +1,32 @@
---
name: bmad-module-builder
description: Plans, creates, and validates BMad modules. Use when the user requests to 'ideate module', 'plan a module', 'create module', 'build a module', or 'validate module'.
---

# BMad Module Builder

## Overview

This skill helps you bring BMad modules to life — from the first spark of an idea to a fully scaffolded, installable module. It offers three paths:

- **Ideate Module (IM)** — A creative brainstorming session that helps you imagine what your module could be, decide on the right architecture (agent vs. workflow vs. both), and produce a detailed plan document. The plan then guides you through building each piece with the Agent Builder and Workflow Builder.
- **Create Module (CM)** — Takes an existing folder of built skills (or a single skill) and scaffolds the module infrastructure that makes it installable. For multi-skill modules, generates a dedicated `-setup` skill. For single skills, embeds self-registration directly into the skill. Supports `--headless` / `-H`.
- **Validate Module (VM)** — Checks that a module's structure is complete and correct — every skill has its capabilities registered, entries are accurate and well-crafted, and structural integrity is sound. Handles both multi-skill and standalone modules. Supports `--headless` / `-H`.

**Args:** Accepts `--headless` / `-H` for CM and VM paths, an initial description for IM, or a path to a skills folder or single SKILL.md file for CM/VM.

## On Activation

Load available config from `{project-root}/_bmad/config.yaml` and `{project-root}/_bmad/config.user.yaml` (root level and `bmb` section). If neither exists, fall back to `{project-root}/_bmad/bmb/config.yaml` (legacy per-module format). If still missing, let the user know `bmad-builder-setup` can configure the module at any time. Use sensible defaults for anything not configured.
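The fallback order above can be sketched as a pure merge over already-parsed YAML dicts. This is an illustrative helper, not part of the skill; the `bmb` section name follows the convention described above:

```python
def resolve_bmb_config(config: dict, user_config: dict, legacy: dict) -> dict:
    """Apply the fallback order: consolidated config (root keys overlaid by its
    'bmb' section), then personal settings from config.user.yaml on top; else
    the legacy per-module config; else {} so the caller uses sensible defaults."""
    if config or user_config:
        merged = dict(config)
        merged.update(merged.pop("bmb", {}))  # module section overlays root keys
        merged.update(user_config)            # personal settings win
        return merged
    return dict(legacy)  # legacy per-module format, or {} -> defaults
```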
Detect user's intent:

- **Ideate / Plan** keywords or no path argument → Load `./references/ideate-module.md`
- **Create / Scaffold** keywords, a folder path, or a path to a single SKILL.md file → Load `./references/create-module.md`
- **Validate / Check** keywords → Load `./references/validate-module.md`
- **Unclear** → Present options:
  - **Ideate Module (IM)** — "I have an idea for a module and want to brainstorm and plan it"
  - **Create Module (CM)** — "I've already built my skills and want to package them as a module"
  - **Validate Module (VM)** — "I want to check that my module's setup skill is complete and correct"

If `--headless` or `-H` is passed, route to CM with headless mode.
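The routing above is performed by the LLM, but as a rough sketch it amounts to keyword matching with a menu fallback. The function name and the `"menu"` sentinel are illustrative only:

```python
def route_intent(message: str, has_path: bool = False) -> str:
    """Map a user request to the reference file to load; "menu" means present options."""
    text = message.lower()
    if "--headless" in text or " -h" in f" {text}":
        return "./references/create-module.md"  # headless routes to CM
    if any(k in text for k in ("create", "scaffold")) or has_path:
        return "./references/create-module.md"
    if any(k in text for k in ("validate", "check")):
        return "./references/validate-module.md"
    if any(k in text for k in ("ideate", "plan")):
        return "./references/ideate-module.md"
    return "menu"  # unclear: present IM / CM / VM options
```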
128
.agent/skills/bmad-module-builder/assets/module-plan-template.md
Normal file
@@ -0,0 +1,128 @@
---
title: 'Module Plan'
status: 'ideation'
module_name: ''
module_code: ''
module_description: ''
architecture: ''
standalone: true
expands_module: ''
skills_planned: []
config_variables: []
created: ''
updated: ''
---

# Module Plan

## Vision

<!-- What this module does, who it's for, and why it matters -->

## Architecture

<!-- Architecture decision and rationale -->
<!-- Options: single agent with capabilities, multiple agents, hybrid, orchestrator pattern -->
<!-- Document WHY this architecture was chosen — future builders need the reasoning -->

### Memory Architecture

<!-- Which pattern: personal memory only, personal + shared, or single shared memory? -->
<!-- If single shared memory: include the full folder structure -->
<!-- If shared memory: define the memory contract below -->

### Memory Contract

<!-- For each curated file in the memory folder, document: -->
<!-- - Filename and purpose -->
<!-- - What agents read it -->
<!-- - What agents write to it -->
<!-- - Key content/structure -->

### Cross-Agent Patterns

<!-- How do agents hand off work to each other? -->
<!-- Is the user the router? Is there an orchestrator? Service-layer relationships? -->
<!-- How does shared memory enable cross-domain awareness? -->

## Skills

<!-- For each planned skill, create a self-contained brief below. -->
<!-- Each brief should be usable by the Agent Builder or Workflow Builder WITHOUT conversation context. -->

### {skill-name}

**Type:** {agent | workflow}

**Persona:** <!-- For agents: who is this? Communication style, expertise, personality -->

**Core Outcome:** <!-- What does success look like? -->

**The Non-Negotiable:** <!-- The one thing this skill must get right -->

**Capabilities:**

| Capability | Outcome | Inputs | Outputs |
| ---------- | ------- | ------ | ------- |
| | | | |

<!-- For outputs: note where HTML reports, dashboards, or structured artifacts would add value -->

**Memory:** <!-- What does this agent read on activation? Write to? Daily log tag? -->

**Init Responsibility:** <!-- What happens on first run? Shared memory creation? Domain onboarding? -->

**Activation Modes:** <!-- Interactive, headless, or both? -->

**Tool Dependencies:** <!-- External tools with technical specifics -->

**Design Notes:** <!-- Non-obvious considerations, the "why" behind decisions -->

---

## Configuration

<!-- Module-level config variables for the setup skill. -->
<!-- If none needed, explicitly state: "This module requires no custom configuration beyond core BMad settings." -->

| Variable | Prompt | Default | Result Template | User Setting |
| -------- | ------ | ------- | --------------- | ------------ |
| | | | | |

## External Dependencies

<!-- CLI tools, MCP servers, or other external software that skills depend on -->
<!-- For each: what it is, which skills need it, and how the setup skill should handle it -->

## UI and Visualization

<!-- Does the module include dashboards, progress views, interactive interfaces, or a web app? -->
<!-- If yes: what it shows, which skills feed into it, how it's served/installed -->

## Setup Extensions

<!-- Beyond config collection: web app installation, directory scaffolding, external service configuration, starter files, etc. -->
<!-- These will need to be manually added to the setup skill after scaffolding -->

## Integration

<!-- Standalone: how it provides independent value -->
<!-- Expansion: parent module, cross-module capability relationships, skills that may reference parent module ordering -->

## Creative Use Cases

<!-- Beyond the primary workflow — unexpected combinations, power-user scenarios, creative applications discovered during brainstorming -->

## Ideas Captured

<!-- Raw ideas from brainstorming — preserved for context even if not all made it into the plan -->
<!-- Write here freely during phases 1-2. Don't write structured sections until phase 3+. -->

## Build Roadmap

<!-- Recommended build order with rationale for why each skill should be built in that order -->

**Next steps:**

1. Build each skill using **Build an Agent (BA)** or **Build a Workflow (BW)** — share this plan document as context
2. When all skills are built, return to **Create Module (CM)** to scaffold the module infrastructure
@@ -0,0 +1,76 @@
---
name: "{setup-skill-name}"
description: Sets up {module-name} module in a project. Use when the user requests to 'install {module-code} module', 'configure {module-name}', or 'setup {module-name}'.
---

# Module Setup

## Overview

Installs and configures a BMad module into a project. Module identity (name, code, version) comes from `./assets/module.yaml`. Collects user preferences and writes them to three files:

- **`{project-root}/_bmad/config.yaml`** — shared project config: core settings at root (e.g. `output_folder`, `document_output_language`) plus a section per module with metadata and module-specific values. User-only keys (`user_name`, `communication_language`) are **never** written here.
- **`{project-root}/_bmad/config.user.yaml`** — personal settings intended to be gitignored: `user_name`, `communication_language`, and any module variable marked `user_setting: true` in `./assets/module.yaml`. These values live exclusively here.
- **`{project-root}/_bmad/module-help.csv`** — registers module capabilities for the help system.

Both config scripts use an anti-zombie pattern — existing entries for this module are removed before writing fresh ones, so stale values never persist.
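The anti-zombie pattern reduces to delete-then-insert on the module's own section. A minimal sketch (an illustrative helper, not the actual merge-config.py code):

```python
def merge_module_section(config: dict, module_code: str, new_values: dict) -> dict:
    """Replace the module's section wholesale so keys removed from the current
    schema cannot survive as zombies from a previous install."""
    updated = dict(config)
    updated.pop(module_code, None)           # drop the stale section entirely
    updated[module_code] = dict(new_values)  # write fresh values only
    return updated
```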
`{project-root}` is a **literal token** in config values — never substitute it with an actual path. It signals to the consuming LLM that the value is relative to the project root, not the skill root.

## On Activation

1. Read `./assets/module.yaml` for module metadata and variable definitions (the `code` field is the module identifier)
2. Check if `{project-root}/_bmad/config.yaml` exists — if a section matching the module's code is already present, inform the user this is an update
3. Check for per-module configuration at `{project-root}/_bmad/{module-code}/config.yaml` and `{project-root}/_bmad/core/config.yaml`. If either file exists:
   - If `{project-root}/_bmad/config.yaml` does **not** yet have a section for this module: this is a **fresh install**. Inform the user that installer config was detected and values will be consolidated into the new format.
   - If `{project-root}/_bmad/config.yaml` **already** has a section for this module: this is a **legacy migration**. Inform the user that legacy per-module config was found alongside existing config, and legacy values will be used as fallback defaults.
   - In both cases, per-module config files and directories will be cleaned up after setup.

If the user provides arguments (e.g. `accept all defaults`, `--headless`, or inline values like `user name is BMad, I speak Swahili`), map any provided values to config keys, use defaults for the rest, and skip interactive prompting. Still display the full confirmation summary at the end.

## Collect Configuration

Ask the user for values. Show defaults in brackets. Present all values together so the user can respond once with only the values they want to change (e.g. "change language to Swahili, rest are fine"). Never tell the user to "press enter" or "leave blank" — in a chat interface they must type something to respond.

**Default priority** (highest wins): existing new config values > legacy config values > `./assets/module.yaml` defaults. When legacy configs exist, read them and use matching values as defaults instead of `module.yaml` defaults. Only keys that match the current schema are carried forward — changed or removed keys are ignored.
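That priority chain is a first-match lookup across the three sources. A minimal sketch, assuming each source has already been loaded into a dict (the helper name is illustrative):

```python
def default_for(key: str, existing: dict, legacy: dict, module_yaml_defaults: dict):
    """Pick the default shown to the user: existing config wins, then legacy
    config, then the module.yaml default; None if the key is nowhere."""
    for source in (existing, legacy, module_yaml_defaults):
        if key in source:
            return source[key]
    return None
```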
**Core config** (only if no core keys exist yet): `user_name` (default: BMad), `communication_language` and `document_output_language` (default: English — ask as a single language question, both keys get the same answer), `output_folder` (default: `{project-root}/_bmad-output`). Of these, `user_name` and `communication_language` are written exclusively to `config.user.yaml`. The rest go to `config.yaml` at root and are shared across all modules.

**Module config**: Read each variable in `./assets/module.yaml` that has a `prompt` field. Ask using that prompt with its default value (or legacy value if available).

## Write Files

Write a temp JSON file with the collected answers structured as `{"core": {...}, "module": {...}}` (omit `core` if it already exists). Then run both scripts — they can run in parallel since they write to different files:
```bash
python3 ./scripts/merge-config.py --config-path "{project-root}/_bmad/config.yaml" --user-config-path "{project-root}/_bmad/config.user.yaml" --module-yaml ./assets/module.yaml --answers {temp-file} --legacy-dir "{project-root}/_bmad"
python3 ./scripts/merge-help-csv.py --target "{project-root}/_bmad/module-help.csv" --source ./assets/module-help.csv --legacy-dir "{project-root}/_bmad" --module-code {module-code}
```
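For reference, the temp answers file passed as `--answers` could be produced like this (the values shown are illustrative defaults, not required ones):

```python
import json
import tempfile

answers = {
    "core": {
        "user_name": "BMad",
        "communication_language": "English",
        "document_output_language": "English",
        "output_folder": "{project-root}/_bmad-output",
    },
    "module": {},  # answers keyed by the variable names in module.yaml
}
# Write to a temp file whose path is then passed as --answers
with tempfile.NamedTemporaryFile(
    "w", suffix=".json", delete=False, encoding="utf-8"
) as f:
    json.dump(answers, f, indent=2)
print(f.name)
```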
Both scripts output JSON to stdout with results. If either exits non-zero, surface the error and stop. The scripts automatically read legacy config values as fallback defaults, then delete the legacy files after a successful merge. Check `legacy_configs_deleted` and `legacy_csvs_deleted` in the output to confirm cleanup.

Run `./scripts/merge-config.py --help` or `./scripts/merge-help-csv.py --help` for full usage.

## Create Output Directories

After writing config, create any output directories that were configured. For filesystem operations only (such as creating directories), resolve the `{project-root}` token to the actual project root and create each path-type value from `config.yaml` that does not yet exist — this includes `output_folder` and any module variable whose value starts with `{project-root}/`. The paths stored in the config files must continue to use the literal `{project-root}` token; only the directories on disk should use the resolved paths. Use `mkdir -p` or equivalent to create the full path.
## Cleanup Legacy Directories

After both merge scripts complete successfully, remove the installer's package directories. Skills and agents in these directories are already installed at `.claude/skills/` — the `_bmad/` directory should only contain config files.

```bash
python3 ./scripts/cleanup-legacy.py --bmad-dir "{project-root}/_bmad" --module-code {module-code} --also-remove _config --skills-dir "{project-root}/.claude/skills"
```

The script verifies that every skill in the legacy directories exists at `.claude/skills/` before removing anything. Directories without skills (like `_config/`) are removed directly. If the script exits non-zero, surface the error and stop. Missing directories (already cleaned by a prior run) are not errors — the script is idempotent.

Check `directories_removed` and `files_removed_count` in the JSON output for the confirmation step. Run `./scripts/cleanup-legacy.py --help` for full usage.

## Confirm

Use the script JSON output to display what was written — config values set (written to `config.yaml` at root for core, module section for module values), user settings written to `config.user.yaml` (`user_keys` in result), help entries added, fresh install vs update. If legacy files were deleted, mention the migration. If legacy directories were removed, report the count and list (e.g. "Cleaned up 106 installer package files from bmb/, core/, _config/ — skills are installed at .claude/skills/"). Then display the `module_greeting` from `./assets/module.yaml` to the user.

## Outcome

Once the user's `user_name` and `communication_language` are known (from collected input, arguments, or existing config), use them consistently for the remainder of the session: address the user by their configured name and communicate in their configured `communication_language`.
@@ -0,0 +1 @@
module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs

@@ -0,0 +1,6 @@
code:
name: ""
description: ""
module_version: 1.0.0
default_selected: false
module_greeting: >
@@ -0,0 +1,259 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
"""Remove legacy module directories from _bmad/ after config migration.

After merge-config.py and merge-help-csv.py have migrated config data and
deleted individual legacy files, this script removes the now-redundant
directory trees. These directories contain skill files that are already
installed at .claude/skills/ (or equivalent) — only the config files at
_bmad/ root need to persist.

When --skills-dir is provided, the script verifies that every skill found
in the legacy directories exists at the installed location before removing
anything. Directories without skills (like _config/) are removed directly.

Exit codes: 0=success (including nothing to remove), 1=validation error, 2=runtime error
"""

import argparse
import json
import shutil
import sys
from pathlib import Path


def parse_args():
    parser = argparse.ArgumentParser(
        description="Remove legacy module directories from _bmad/ after config migration."
    )
    parser.add_argument(
        "--bmad-dir",
        required=True,
        help="Path to the _bmad/ directory",
    )
    parser.add_argument(
        "--module-code",
        required=True,
        help="Module code being cleaned up (e.g. 'bmb')",
    )
    parser.add_argument(
        "--also-remove",
        action="append",
        default=[],
        help="Additional directory names under _bmad/ to remove (repeatable)",
    )
    parser.add_argument(
        "--skills-dir",
        help="Path to .claude/skills/ — enables safety verification that skills "
        "are installed before removing legacy copies",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Print detailed progress to stderr",
    )
    return parser.parse_args()


def find_skill_dirs(base_path: str) -> list:
    """Find directories that contain a SKILL.md file.

    Walks the directory tree and returns the leaf directory name for each
    directory containing a SKILL.md. These are considered skill directories.

    Returns:
        List of skill directory names (e.g. ['bmad-agent-builder', 'bmad-builder-setup'])
    """
    skills = []
    root = Path(base_path)
    if not root.exists():
        return skills
    for skill_md in root.rglob("SKILL.md"):
        skills.append(skill_md.parent.name)
    return sorted(set(skills))


def verify_skills_installed(
    bmad_dir: str, dirs_to_check: list, skills_dir: str, verbose: bool = False
) -> list:
    """Verify that skills in legacy directories exist at the installed location.

    Scans each directory in dirs_to_check for skill folders (containing SKILL.md),
    then checks that a matching directory exists under skills_dir. Directories
    that contain no skills (like _config/) are silently skipped.

    Returns:
        List of verified skill names.

    Raises SystemExit(1) if any skills are missing from skills_dir.
    """
    all_verified = []
    missing = []

    for dirname in dirs_to_check:
        legacy_path = Path(bmad_dir) / dirname
        if not legacy_path.exists():
            continue

        skill_names = find_skill_dirs(str(legacy_path))
        if not skill_names:
            if verbose:
                print(
                    f"No skills found in {dirname}/ — skipping verification",
                    file=sys.stderr,
                )
            continue

        for skill_name in skill_names:
            installed_path = Path(skills_dir) / skill_name
            if installed_path.is_dir():
                all_verified.append(skill_name)
                if verbose:
                    print(
                        f"Verified: {skill_name} exists at {installed_path}",
                        file=sys.stderr,
                    )
            else:
                missing.append(skill_name)
                if verbose:
                    print(
                        f"MISSING: {skill_name} not found at {installed_path}",
                        file=sys.stderr,
                    )

    if missing:
        error_result = {
            "status": "error",
            "error": "Skills not found at installed location",
            "missing_skills": missing,
            "skills_dir": str(Path(skills_dir).resolve()),
        }
        print(json.dumps(error_result, indent=2))
        sys.exit(1)

    return sorted(set(all_verified))


def count_files(path: Path) -> int:
    """Count all files recursively in a directory."""
    count = 0
    for item in path.rglob("*"):
        if item.is_file():
            count += 1
    return count


def cleanup_directories(
    bmad_dir: str, dirs_to_remove: list, verbose: bool = False
) -> tuple:
    """Remove specified directories under bmad_dir.

    Returns:
        (removed, not_found, total_files_removed) tuple
    """
    removed = []
    not_found = []
    total_files = 0

    for dirname in dirs_to_remove:
        target = Path(bmad_dir) / dirname
        if not target.exists():
            not_found.append(dirname)
            if verbose:
                print(f"Not found (skipping): {target}", file=sys.stderr)
            continue

        if not target.is_dir():
            if verbose:
                print(f"Not a directory (skipping): {target}", file=sys.stderr)
            not_found.append(dirname)
            continue

        file_count = count_files(target)
        if verbose:
            print(
                f"Removing {target} ({file_count} files)",
                file=sys.stderr,
            )

        try:
            shutil.rmtree(target)
        except OSError as e:
            error_result = {
                "status": "error",
                "error": f"Failed to remove {target}: {e}",
                "directories_removed": removed,
                "directories_failed": dirname,
            }
            print(json.dumps(error_result, indent=2))
            sys.exit(2)

        removed.append(dirname)
        total_files += file_count

    return removed, not_found, total_files


def main():
    args = parse_args()

    bmad_dir = args.bmad_dir
    module_code = args.module_code

    # Build the list of directories to remove
    dirs_to_remove = [module_code, "core"] + args.also_remove
    # Deduplicate while preserving order
    seen = set()
    unique_dirs = []
    for d in dirs_to_remove:
        if d not in seen:
            seen.add(d)
            unique_dirs.append(d)
    dirs_to_remove = unique_dirs

    if args.verbose:
        print(f"Directories to remove: {dirs_to_remove}", file=sys.stderr)

    # Safety check: verify skills are installed before removing
    verified_skills = None
    if args.skills_dir:
        if args.verbose:
            print(
                f"Verifying skills installed at {args.skills_dir}",
                file=sys.stderr,
            )
        verified_skills = verify_skills_installed(
            bmad_dir, dirs_to_remove, args.skills_dir, args.verbose
        )

    # Remove directories
    removed, not_found, total_files = cleanup_directories(
        bmad_dir, dirs_to_remove, args.verbose
    )

    # Build result
    result = {
        "status": "success",
        "bmad_dir": str(Path(bmad_dir).resolve()),
        "directories_removed": removed,
        "directories_not_found": not_found,
        "files_removed_count": total_files,
    }

    if args.skills_dir:
        result["safety_checks"] = {
            "skills_verified": True,
            "skills_dir": str(Path(args.skills_dir).resolve()),
            "verified_skills": verified_skills,
        }
    else:
        result["safety_checks"] = None

    print(json.dumps(result, indent=2))


if __name__ == "__main__":
    main()
@@ -0,0 +1,408 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.9"
# dependencies = ["pyyaml"]
# ///
"""Merge module configuration into shared _bmad/config.yaml and config.user.yaml.

Reads a module.yaml definition and a JSON answers file, then writes or updates
the shared config.yaml (core values at root + module section) and config.user.yaml
(user_name, communication_language, plus any module variable with user_setting: true).
Uses an anti-zombie pattern for the module section in config.yaml.

Legacy migration: when --legacy-dir is provided, reads old per-module config files
from {legacy-dir}/{module-code}/config.yaml and {legacy-dir}/core/config.yaml.
Matching values serve as fallback defaults (answers override them). After a
successful merge, the legacy config.yaml files are deleted. Only the current
module and core directories are touched — other module directories are left alone.

Exit codes: 0=success, 1=validation error, 2=runtime error
"""

import argparse
import json
import sys
from pathlib import Path

try:
    import yaml
except ImportError:
    print("Error: pyyaml is required (PEP 723 dependency)", file=sys.stderr)
    sys.exit(2)


def parse_args():
    parser = argparse.ArgumentParser(
        description="Merge module config into shared _bmad/config.yaml with anti-zombie pattern."
    )
    parser.add_argument(
        "--config-path",
        required=True,
        help="Path to the target _bmad/config.yaml file",
    )
    parser.add_argument(
        "--module-yaml",
        required=True,
        help="Path to the module.yaml definition file",
    )
    parser.add_argument(
        "--answers",
        required=True,
        help="Path to JSON file with collected answers",
    )
    parser.add_argument(
        "--user-config-path",
        required=True,
        help="Path to the target _bmad/config.user.yaml file",
    )
    parser.add_argument(
        "--legacy-dir",
        help="Path to _bmad/ directory to check for legacy per-module config files. "
        "Matching values are used as fallback defaults, then legacy files are deleted.",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Print detailed progress to stderr",
    )
    return parser.parse_args()


def load_yaml_file(path: str) -> dict:
    """Load a YAML file, returning empty dict if file doesn't exist."""
    file_path = Path(path)
    if not file_path.exists():
        return {}
    with open(file_path, "r", encoding="utf-8") as f:
        content = yaml.safe_load(f)
    return content if content else {}


def load_json_file(path: str) -> dict:
    """Load a JSON file."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)


# Keys that live at config root (shared across all modules)
_CORE_KEYS = frozenset(
    {"user_name", "communication_language", "document_output_language", "output_folder"}
)

# User-only keys that live exclusively in config.user.yaml
_CORE_USER_KEYS = frozenset({"user_name", "communication_language"})


def load_legacy_values(
    legacy_dir: str, module_code: str, module_yaml: dict, verbose: bool = False
) -> tuple[dict, dict, list]:
    """Read legacy per-module config files and return core/module value dicts.

    Reads {legacy_dir}/core/config.yaml and {legacy_dir}/{module_code}/config.yaml.
    Only returns values whose keys match the current schema (core keys or module.yaml
    variable definitions). Other modules' directories are not touched.

    Returns:
        (legacy_core, legacy_module, files_found) where files_found lists paths read.
    """
    legacy_core: dict = {}
    legacy_module: dict = {}
    files_found: list = []

    # Read core legacy config
    core_path = Path(legacy_dir) / "core" / "config.yaml"
    if core_path.exists():
        core_data = load_yaml_file(str(core_path))
        files_found.append(str(core_path))
        for k, v in core_data.items():
            if k in _CORE_KEYS:
                legacy_core[k] = v
        if verbose:
            print(f"Legacy core config: {list(legacy_core.keys())}", file=sys.stderr)

    # Read module legacy config
    mod_path = Path(legacy_dir) / module_code / "config.yaml"
    if mod_path.exists():
        mod_data = load_yaml_file(str(mod_path))
        files_found.append(str(mod_path))
        for k, v in mod_data.items():
            if k in _CORE_KEYS:
                # Core keys duplicated in module config — only use if not already set
                if k not in legacy_core:
                    legacy_core[k] = v
            elif k in module_yaml and isinstance(module_yaml[k], dict):
                # Module-specific key that matches a current variable definition
                legacy_module[k] = v
        if verbose:
            print(
                f"Legacy module config: {list(legacy_module.keys())}", file=sys.stderr
            )

    return legacy_core, legacy_module, files_found


def apply_legacy_defaults(answers: dict, legacy_core: dict, legacy_module: dict) -> dict:
    """Apply legacy values as fallback defaults under the answers.

    Legacy values fill in any key not already present in answers.
    Explicit answers always win.
    """
    merged = dict(answers)

    if legacy_core:
        core = merged.get("core", {})
        filled_core = dict(legacy_core)  # legacy as base
        filled_core.update(core)  # answers override
        merged["core"] = filled_core

    if legacy_module:
        mod = merged.get("module", {})
        filled_mod = dict(legacy_module)  # legacy as base
        filled_mod.update(mod)  # answers override
        merged["module"] = filled_mod

    return merged


def cleanup_legacy_configs(
    legacy_dir: str, module_code: str, verbose: bool = False
) -> list:
    """Delete legacy config.yaml files for this module and core only.

    Returns list of deleted file paths.
    """
    deleted = []
    for subdir in (module_code, "core"):
        legacy_path = Path(legacy_dir) / subdir / "config.yaml"
        if legacy_path.exists():
            if verbose:
                print(f"Deleting legacy config: {legacy_path}", file=sys.stderr)
            legacy_path.unlink()
            deleted.append(str(legacy_path))
    return deleted


def extract_module_metadata(module_yaml: dict) -> dict:
    """Extract non-variable metadata fields from module.yaml."""
    meta = {}
    for k in ("name", "description"):
        if k in module_yaml:
            meta[k] = module_yaml[k]
    meta["version"] = module_yaml.get("module_version")  # null if absent
    if "default_selected" in module_yaml:
        meta["default_selected"] = module_yaml["default_selected"]
    return meta


def apply_result_templates(
    module_yaml: dict, module_answers: dict, verbose: bool = False
) -> dict:
    """Apply result templates from module.yaml to transform raw answer values.

    For each answer, if the corresponding variable definition in module.yaml has
    a 'result' field, replaces {value} in that template with the answer. Skips
    the template if the answer already contains '{project-root}' to prevent
    double-prefixing.
    """
    transformed = {}
    for key, value in module_answers.items():
        var_def = module_yaml.get(key)
        if (
            isinstance(var_def, dict)
            and "result" in var_def
            and "{project-root}" not in str(value)
        ):
            template = var_def["result"]
            transformed[key] = template.replace("{value}", str(value))
            if verbose:
                print(
                    f"Applied result template for '{key}': {value} → {transformed[key]}",
                    file=sys.stderr,
                )
        else:
            transformed[key] = value
    return transformed


def merge_config(
    existing_config: dict,
    module_yaml: dict,
    answers: dict,
    verbose: bool = False,
) -> dict:
    """Merge answers into config, applying anti-zombie pattern.

    Args:
        existing_config: Current config.yaml contents (may be empty)
        module_yaml: The module definition
        answers: JSON with 'core' and/or 'module' keys
        verbose: Print progress to stderr

    Returns:
        Updated config dict ready to write
    """
    config = dict(existing_config)
    module_code = module_yaml.get("code")

    if not module_code:
        print("Error: module.yaml must have a 'code' field", file=sys.stderr)
        sys.exit(1)

    # Migrate legacy core: section to root
    if "core" in config and isinstance(config["core"], dict):
        if verbose:
            print("Migrating legacy 'core' section to root", file=sys.stderr)
        config.update(config.pop("core"))

    # Strip user-only keys from config — they belong exclusively in config.user.yaml
    for key in _CORE_USER_KEYS:
        if key in config:
            if verbose:
                print(
                    f"Removing user-only key '{key}' from config (belongs in config.user.yaml)",
                    file=sys.stderr,
                )
            del config[key]

    # Write core values at root (global properties, not nested under "core")
|
||||
# Exclude user-only keys — those belong exclusively in config.user.yaml
|
||||
core_answers = answers.get("core")
|
||||
if core_answers:
|
||||
shared_core = {k: v for k, v in core_answers.items() if k not in _CORE_USER_KEYS}
|
||||
if shared_core:
|
||||
if verbose:
|
||||
print(f"Writing core config at root: {list(shared_core.keys())}", file=sys.stderr)
|
||||
config.update(shared_core)
|
||||
|
||||
# Anti-zombie: remove existing module section
|
||||
if module_code in config:
|
||||
if verbose:
|
||||
print(
|
||||
f"Removing existing '{module_code}' section (anti-zombie)",
|
||||
file=sys.stderr,
|
||||
)
|
||||
del config[module_code]
|
||||
|
||||
# Build module section: metadata + variable values
|
||||
module_section = extract_module_metadata(module_yaml)
|
||||
module_answers = apply_result_templates(
|
||||
module_yaml, answers.get("module", {}), verbose
|
||||
)
|
||||
module_section.update(module_answers)
|
||||
|
||||
if verbose:
|
||||
print(
|
||||
f"Writing '{module_code}' section with keys: {list(module_section.keys())}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
|
||||
config[module_code] = module_section
|
||||
|
||||
return config
|
||||
|
||||
|
||||
# Core keys that are always written to config.user.yaml
|
||||
_CORE_USER_KEYS = ("user_name", "communication_language")
|
||||
|
||||
|
||||
def extract_user_settings(module_yaml: dict, answers: dict) -> dict:
|
||||
"""Collect settings that belong in config.user.yaml.
|
||||
|
||||
Includes user_name and communication_language from core answers, plus any
|
||||
module variable whose definition contains user_setting: true.
|
||||
"""
|
||||
user_settings = {}
|
||||
|
||||
core_answers = answers.get("core", {})
|
||||
for key in _CORE_USER_KEYS:
|
||||
if key in core_answers:
|
||||
user_settings[key] = core_answers[key]
|
||||
|
||||
module_answers = answers.get("module", {})
|
||||
for var_name, var_def in module_yaml.items():
|
||||
if isinstance(var_def, dict) and var_def.get("user_setting") is True:
|
||||
if var_name in module_answers:
|
||||
user_settings[var_name] = module_answers[var_name]
|
||||
|
||||
return user_settings
|
||||
|
||||
|
||||
def write_config(config: dict, config_path: str, verbose: bool = False) -> None:
|
||||
"""Write config dict to YAML file, creating parent dirs as needed."""
|
||||
path = Path(config_path)
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
if verbose:
|
||||
print(f"Writing config to {path}", file=sys.stderr)
|
||||
|
||||
with open(path, "w", encoding="utf-8") as f:
|
||||
yaml.dump(
|
||||
config,
|
||||
f,
|
||||
default_flow_style=False,
|
||||
allow_unicode=True,
|
||||
sort_keys=False,
|
||||
)
|
||||
|
||||
|
||||
def main():
|
||||
args = parse_args()
|
||||
|
||||
# Load inputs
|
||||
module_yaml = load_yaml_file(args.module_yaml)
|
||||
if not module_yaml:
|
||||
print(f"Error: Could not load module.yaml from {args.module_yaml}", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
answers = load_json_file(args.answers)
|
||||
existing_config = load_yaml_file(args.config_path)
|
||||
|
||||
if args.verbose:
|
||||
exists = Path(args.config_path).exists()
|
||||
print(f"Config file exists: {exists}", file=sys.stderr)
|
||||
if exists:
|
||||
print(f"Existing sections: {list(existing_config.keys())}", file=sys.stderr)
|
||||
|
||||
# Legacy migration: read old per-module configs as fallback defaults
|
||||
legacy_files_found = []
|
||||
if args.legacy_dir:
|
||||
module_code = module_yaml.get("code", "")
|
||||
legacy_core, legacy_module, legacy_files_found = load_legacy_values(
|
||||
args.legacy_dir, module_code, module_yaml, args.verbose
|
||||
)
|
||||
if legacy_core or legacy_module:
|
||||
answers = apply_legacy_defaults(answers, legacy_core, legacy_module)
|
||||
if args.verbose:
|
||||
print("Applied legacy values as fallback defaults", file=sys.stderr)
|
||||
|
||||
# Merge and write config.yaml
|
||||
updated_config = merge_config(existing_config, module_yaml, answers, args.verbose)
|
||||
write_config(updated_config, args.config_path, args.verbose)
|
||||
|
||||
# Merge and write config.user.yaml
|
||||
user_settings = extract_user_settings(module_yaml, answers)
|
||||
existing_user_config = load_yaml_file(args.user_config_path)
|
||||
updated_user_config = dict(existing_user_config)
|
||||
updated_user_config.update(user_settings)
|
||||
if user_settings:
|
||||
write_config(updated_user_config, args.user_config_path, args.verbose)
|
||||
|
||||
# Legacy cleanup: delete old per-module config files
|
||||
legacy_deleted = []
|
||||
if args.legacy_dir:
|
||||
legacy_deleted = cleanup_legacy_configs(
|
||||
args.legacy_dir, module_yaml["code"], args.verbose
|
||||
)
|
||||
|
||||
# Output result summary as JSON
|
||||
module_code = module_yaml["code"]
|
||||
result = {
|
||||
"status": "success",
|
||||
"config_path": str(Path(args.config_path).resolve()),
|
||||
"user_config_path": str(Path(args.user_config_path).resolve()),
|
||||
"module_code": module_code,
|
||||
"core_updated": bool(answers.get("core")),
|
||||
"module_keys": list(updated_config.get(module_code, {}).keys()),
|
||||
"user_keys": list(user_settings.keys()),
|
||||
"legacy_configs_found": legacy_files_found,
|
||||
"legacy_configs_deleted": legacy_deleted,
|
||||
}
|
||||
print(json.dumps(result, indent=2))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
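The anti-zombie pattern the config installer applies to its module section can be shown in isolation. This is a minimal sketch, not part of the commit; the function name and sample data are illustrative:

```python
# Sketch of the anti-zombie merge used for the module section: the
# module's previous section is discarded wholesale before fresh values
# are written, so keys deleted from module.yaml cannot survive as
# stale "zombie" entries. Names and data are illustrative only.

def merge_module_section(config: dict, module_code: str, fresh: dict) -> dict:
    merged = dict(config)
    merged.pop(module_code, None)  # anti-zombie: drop the old section entirely
    merged[module_code] = dict(fresh)
    return merged


existing = {"user_name": "Ada", "bmb": {"stale_key": 1, "kept": "old"}}
updated = merge_module_section(existing, "bmb", {"kept": "new"})
print(updated["bmb"])  # → {'kept': 'new'} — stale_key is not carried over
```

A plain `dict.update` on the old section would instead leave `stale_key` in place, which is exactly the zombie the pattern guards against.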
@@ -0,0 +1,218 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
"""Merge module help entries into shared _bmad/module-help.csv.

Reads a source CSV with module help entries and merges them into a target CSV.
Uses an anti-zombie pattern: all existing rows matching the source module code
are removed before appending fresh rows.

Legacy cleanup: when --legacy-dir and --module-code are provided, deletes old
per-module module-help.csv files from {legacy-dir}/{module-code}/ and
{legacy-dir}/core/. Only the current module and core are touched.

Exit codes: 0=success, 1=validation error, 2=runtime error
"""

import argparse
import csv
import json
import sys
from io import StringIO
from pathlib import Path

# CSV header for module-help.csv
HEADER = [
    "module",
    "skill",
    "display-name",
    "menu-code",
    "description",
    "action",
    "args",
    "phase",
    "after",
    "before",
    "required",
    "output-location",
    "outputs",
]


def parse_args():
    parser = argparse.ArgumentParser(
        description="Merge module help entries into shared _bmad/module-help.csv with anti-zombie pattern."
    )
    parser.add_argument(
        "--target",
        required=True,
        help="Path to the target _bmad/module-help.csv file",
    )
    parser.add_argument(
        "--source",
        required=True,
        help="Path to the source module-help.csv with entries to merge",
    )
    parser.add_argument(
        "--legacy-dir",
        help="Path to _bmad/ directory to check for legacy per-module CSV files.",
    )
    parser.add_argument(
        "--module-code",
        help="Module code (required with --legacy-dir for scoping cleanup).",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Print detailed progress to stderr",
    )
    return parser.parse_args()


def read_csv_rows(path: str) -> tuple[list[str], list[list[str]]]:
    """Read CSV file returning (header, data_rows).

    Returns empty header and rows if file doesn't exist.
    """
    file_path = Path(path)
    if not file_path.exists():
        return [], []

    with open(file_path, "r", encoding="utf-8", newline="") as f:
        content = f.read()

    reader = csv.reader(StringIO(content))
    rows = list(reader)

    if not rows:
        return [], []

    return rows[0], rows[1:]


def extract_module_codes(rows: list[list[str]]) -> set[str]:
    """Extract unique module codes from data rows."""
    codes = set()
    for row in rows:
        if row and row[0].strip():
            codes.add(row[0].strip())
    return codes


def filter_rows(rows: list[list[str]], module_code: str) -> list[list[str]]:
    """Remove all rows matching the given module code."""
    return [row for row in rows if not row or row[0].strip() != module_code]


def write_csv(path: str, header: list[str], rows: list[list[str]], verbose: bool = False) -> None:
    """Write header + rows to CSV file, creating parent dirs as needed."""
    file_path = Path(path)
    file_path.parent.mkdir(parents=True, exist_ok=True)

    if verbose:
        print(f"Writing {len(rows)} data rows to {path}", file=sys.stderr)

    with open(file_path, "w", encoding="utf-8", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for row in rows:
            writer.writerow(row)


def cleanup_legacy_csvs(
    legacy_dir: str, module_code: str, verbose: bool = False
) -> list:
    """Delete legacy per-module module-help.csv files for this module and core only.

    Returns list of deleted file paths.
    """
    deleted = []
    for subdir in (module_code, "core"):
        legacy_path = Path(legacy_dir) / subdir / "module-help.csv"
        if legacy_path.exists():
            if verbose:
                print(f"Deleting legacy CSV: {legacy_path}", file=sys.stderr)
            legacy_path.unlink()
            deleted.append(str(legacy_path))
    return deleted


def main():
    args = parse_args()

    # Read source entries
    source_header, source_rows = read_csv_rows(args.source)
    if not source_rows:
        print(f"Error: No data rows found in source {args.source}", file=sys.stderr)
        sys.exit(1)

    # Determine module codes being merged
    source_codes = extract_module_codes(source_rows)
    if not source_codes:
        print("Error: Could not determine module code from source rows", file=sys.stderr)
        sys.exit(1)

    if args.verbose:
        print(f"Source module codes: {source_codes}", file=sys.stderr)
        print(f"Source rows: {len(source_rows)}", file=sys.stderr)

    # Read existing target (may not exist)
    target_header, target_rows = read_csv_rows(args.target)
    target_existed = Path(args.target).exists()

    if args.verbose:
        print(f"Target exists: {target_existed}", file=sys.stderr)
        if target_existed:
            print(f"Existing target rows: {len(target_rows)}", file=sys.stderr)

    # Use source header if target doesn't exist or has no header
    header = target_header if target_header else (source_header if source_header else HEADER)

    # Anti-zombie: remove all rows for each source module code
    filtered_rows = target_rows
    removed_count = 0
    for code in source_codes:
        before_count = len(filtered_rows)
        filtered_rows = filter_rows(filtered_rows, code)
        removed_count += before_count - len(filtered_rows)

    if args.verbose and removed_count > 0:
        print(f"Removed {removed_count} existing rows (anti-zombie)", file=sys.stderr)

    # Append source rows
    merged_rows = filtered_rows + source_rows

    # Write result
    write_csv(args.target, header, merged_rows, args.verbose)

    # Legacy cleanup: delete old per-module CSV files
    legacy_deleted = []
    if args.legacy_dir:
        if not args.module_code:
            print(
                "Error: --module-code is required when --legacy-dir is provided",
                file=sys.stderr,
            )
            sys.exit(1)
        legacy_deleted = cleanup_legacy_csvs(
            args.legacy_dir, args.module_code, args.verbose
        )

    # Output result summary as JSON
    result = {
        "status": "success",
        "target_path": str(Path(args.target).resolve()),
        "target_existed": target_existed,
        "module_codes": sorted(source_codes),
        "rows_removed": removed_count,
        "rows_added": len(source_rows),
        "total_rows": len(merged_rows),
        "legacy_csvs_deleted": legacy_deleted,
    }
    print(json.dumps(result, indent=2))


if __name__ == "__main__":
    main()
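The CSV merge applies the same anti-zombie idea at row level: drop every row for the incoming module codes, then append the fresh rows. A standalone sketch (function name and sample rows are illustrative, not part of the commit):

```python
# Standalone sketch of the module-help.csv merge strategy: rows whose
# first column matches an incoming module code are filtered out, then
# the fresh rows are appended. Sample data is illustrative only.

def merge_help_rows(existing, incoming):
    codes = {row[0].strip() for row in incoming if row and row[0].strip()}
    kept = [r for r in existing if not r or r[0].strip() not in codes]
    return kept + incoming


existing = [
    ["bmb", "old-skill", "Old Skill"],
    ["core", "help", "Help"],
]
incoming = [["bmb", "module-builder", "Module Builder"]]
print(merge_help_rows(existing, incoming))
# → [['core', 'help', 'Help'], ['bmb', 'module-builder', 'Module Builder']]
```

Rows belonging to other modules (here `core`) pass through untouched, which is what lets many modules share one CSV safely.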
|
||||
|
||||
def main():
|
||||
args = parse_args()
|
||||
|
||||
# Read source entries
|
||||
source_header, source_rows = read_csv_rows(args.source)
|
||||
if not source_rows:
|
||||
print(f"Error: No data rows found in source {args.source}", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
# Determine module codes being merged
|
||||
source_codes = extract_module_codes(source_rows)
|
||||
if not source_codes:
|
||||
print("Error: Could not determine module code from source rows", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
if args.verbose:
|
||||
print(f"Source module codes: {source_codes}", file=sys.stderr)
|
||||
print(f"Source rows: {len(source_rows)}", file=sys.stderr)
|
||||
|
||||
# Read existing target (may not exist)
|
||||
target_header, target_rows = read_csv_rows(args.target)
|
||||
target_existed = Path(args.target).exists()
|
||||
|
||||
if args.verbose:
|
||||
print(f"Target exists: {target_existed}", file=sys.stderr)
|
||||
if target_existed:
|
||||
print(f"Existing target rows: {len(target_rows)}", file=sys.stderr)
|
||||
|
||||
# Use source header if target doesn't exist or has no header
|
||||
header = target_header if target_header else (source_header if source_header else HEADER)
|
||||
|
||||
# Anti-zombie: remove all rows for each source module code
|
||||
filtered_rows = target_rows
|
||||
removed_count = 0
|
||||
for code in source_codes:
|
||||
before_count = len(filtered_rows)
|
||||
filtered_rows = filter_rows(filtered_rows, code)
|
||||
removed_count += before_count - len(filtered_rows)
|
||||
|
||||
if args.verbose and removed_count > 0:
|
||||
print(f"Removed {removed_count} existing rows (anti-zombie)", file=sys.stderr)
|
||||
|
||||
# Append source rows
|
||||
merged_rows = filtered_rows + source_rows
|
||||
|
||||
# Write result
|
||||
write_csv(args.target, header, merged_rows, args.verbose)
|
||||
|
||||
# Legacy cleanup: delete old per-module CSV files
|
||||
legacy_deleted = []
|
||||
if args.legacy_dir:
|
||||
if not args.module_code:
|
||||
print(
|
||||
"Error: --module-code is required when --legacy-dir is provided",
|
||||
file=sys.stderr,
|
||||
)
|
||||
sys.exit(1)
|
||||
legacy_deleted = cleanup_legacy_csvs(
|
||||
args.legacy_dir, args.module_code, args.verbose
|
||||
)
|
||||
|
||||
# Output result summary as JSON
|
||||
result = {
|
||||
"status": "success",
|
||||
"target_path": str(Path(args.target).resolve()),
|
||||
"target_existed": target_existed,
|
||||
"module_codes": sorted(source_codes),
|
||||
"rows_removed": removed_count,
|
||||
"rows_added": len(source_rows),
|
||||
"total_rows": len(merged_rows),
|
||||
"legacy_csvs_deleted": legacy_deleted,
|
||||
}
|
||||
print(json.dumps(result, indent=2))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -0,0 +1,81 @@
# Module Setup

Standalone module self-registration. This file is loaded when:
- The user passes `setup`, `configure`, or `install` as an argument
- The module is not yet registered in `{project-root}/_bmad/config.yaml`
- The skill's first-run init flow detects this is a fresh installation (e.g., agent memory doesn't exist yet)

## Overview

Registers this standalone module into a project. Module identity (name, code, version) comes from `./assets/module.yaml` (sibling to this file). Collects user preferences and writes them to three files:

- **`{project-root}/_bmad/config.yaml`** — shared project config: core settings at root (e.g. `output_folder`, `document_output_language`) plus a section per module with metadata and module-specific values. User-only keys (`user_name`, `communication_language`) are **never** written here.
- **`{project-root}/_bmad/config.user.yaml`** — personal settings intended to be gitignored: `user_name`, `communication_language`, and any module variable marked `user_setting: true` in `./assets/module.yaml`. These values live exclusively here.
- **`{project-root}/_bmad/module-help.csv`** — registers module capabilities for the help system.

Both config scripts use an anti-zombie pattern — existing entries for this module are removed before writing fresh ones, so stale values never persist.

`{project-root}` is a **literal token** in config values — never substitute it with an actual path. It signals to the consuming LLM that the value is relative to the project root, not the skill root.

## Check Existing Config

1. Read `./assets/module.yaml` for module metadata and variable definitions (the `code` field is the module identifier)
2. Check if `{project-root}/_bmad/config.yaml` exists — if a section matching the module's code is already present, inform the user this is an update (reconfiguration)

If the user provides arguments (e.g. `accept all defaults`, `--headless`, or inline values like `user name is BMad, I speak Swahili`), map any provided values to config keys, use defaults for the rest, and skip interactive prompting. Still display the full confirmation summary at the end.

## Collect Configuration

Ask the user for values. Show defaults in brackets. Present all values together so the user can respond once with only the values they want to change (e.g. "change language to Swahili, rest are fine"). Never tell the user to "press enter" or "leave blank" — in a chat interface they must type something to respond.

**Default priority** (highest wins): existing config values > `./assets/module.yaml` defaults.
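A minimal sketch of this priority rule, assuming config and defaults arrive as plain dicts (the helper name and sample values are illustrative, not part of the scripts):

```python
def resolve_default(key, existing_config, module_defaults):
    """Return the default to show in brackets: existing config wins over module.yaml."""
    if key in existing_config:
        return existing_config[key]
    return module_defaults.get(key)

# Hypothetical values for illustration only
existing = {"output_folder": "{project-root}/_bmad-output"}
defaults = {"output_folder": "{project-root}/out", "report_style": "concise"}

print(resolve_default("output_folder", existing, defaults))  # existing config wins
print(resolve_default("report_style", existing, defaults))   # falls back to module.yaml default
```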

### Core Config

Only collect if no core keys exist yet in `config.yaml` or `config.user.yaml`:

- `user_name` (default: BMad) — written exclusively to `config.user.yaml`
- `communication_language` and `document_output_language` (default: English — ask as a single language question, both keys get the same answer) — `communication_language` written exclusively to `config.user.yaml`
- `output_folder` (default: `{project-root}/_bmad-output`) — written to `config.yaml` at root, shared across all modules

### Module Config

Read each variable in `./assets/module.yaml` that has a `prompt` field. The module.yaml supports several question types:

- **Text input**: Has `prompt`, `default`, and optionally `result` (template), `required`, `regex`, `example` fields
- **Single-select**: Has a `single-select` array of `value`/`label` options — present as a choice list
- **Multi-select**: Has a `multi-select` array — present as checkboxes, default is an array
- **Confirm**: `default` is a boolean — present as Yes/No

Ask using the prompt with its default value. Apply `result` templates when storing (e.g. `{project-root}/{value}`). Fields with `user_setting: true` go exclusively to `config.user.yaml`.
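One way to sketch this dispatch in Python, with a hypothetical variable definition (the real module.yaml parsing may differ):

```python
def question_type(var: dict) -> str:
    """Classify a parsed module.yaml variable by the fields it carries."""
    if "single-select" in var:
        return "single-select"
    if "multi-select" in var:
        return "multi-select"
    if isinstance(var.get("default"), bool):
        return "confirm"
    return "text"

def apply_result(var: dict, value: str) -> str:
    """Apply an optional `result` template such as '{project-root}/{value}'."""
    template = var.get("result")
    return template.replace("{value}", value) if template else value

# Hypothetical text-input variable for illustration
v = {"prompt": "Reports folder?", "default": "reports", "result": "{project-root}/{value}"}
print(question_type(v))            # text
print(apply_result(v, "reports"))  # {project-root}/reports
```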

## Write Files

Write a temp JSON file with the collected answers structured as `{"core": {...}, "module": {...}}` (omit `core` if it already exists). Then run both scripts — they can run in parallel since they write to different files:

```bash
python3 ./scripts/merge-config.py --config-path "{project-root}/_bmad/config.yaml" --user-config-path "{project-root}/_bmad/config.user.yaml" --module-yaml ./assets/module.yaml --answers {temp-file}
python3 ./scripts/merge-help-csv.py --target "{project-root}/_bmad/module-help.csv" --source ./assets/module-help.csv --module-code {module-code}
```

Both scripts output JSON to stdout with results. If either exits non-zero, surface the error and stop.

Run `./scripts/merge-config.py --help` or `./scripts/merge-help-csv.py --help` for full usage.
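For illustration, a hypothetical answers file might look like this — the keys depend on the module's own variables, and `core` is omitted when core config already exists:

```json
{
  "core": {
    "user_name": "BMad",
    "communication_language": "English",
    "document_output_language": "English",
    "output_folder": "{project-root}/_bmad-output"
  },
  "module": {
    "reports_folder": "{project-root}/_bmad-output/reports"
  }
}
```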

## Create Output Directories

After writing config, create any output directories that were configured. For filesystem operations only (such as creating directories), resolve the `{project-root}` token to the actual project root and create each path-type value from `config.yaml` that does not yet exist — this includes `output_folder` and any module variable whose value starts with `{project-root}/`. The paths stored in the config files must continue to use the literal `{project-root}` token; only the directories on disk should use the resolved paths. Use `mkdir -p` or equivalent to create the full path.

If `./assets/module.yaml` contains a `directories` array, also create each listed directory (resolving any `{field_name}` variables from the collected config values).
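A sketch of the token resolution for filesystem operations only — the function name is illustrative, and the stored config value keeps the literal token:

```python
import tempfile
from pathlib import Path

def ensure_dir(config_value: str, project_root: str) -> Path:
    """Resolve the literal {project-root} token for mkdir only; the config value itself is never rewritten."""
    resolved = Path(config_value.replace("{project-root}", project_root))
    resolved.mkdir(parents=True, exist_ok=True)  # mkdir -p equivalent
    return resolved

root = tempfile.mkdtemp()
created = ensure_dir("{project-root}/_bmad-output/reports", root)
print(created.is_dir())  # True
```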

## Confirm

Use the script JSON output to display what was written — config values set (written to `config.yaml` at root for core, module section for module values), user settings written to `config.user.yaml` (`user_keys` in result), help entries added, fresh install vs update.

If `./assets/module.yaml` contains `post-install-notes`, display them (if conditional, show only the notes matching the user's selected config values).

Then display the `module_greeting` from `./assets/module.yaml` to the user.

## Return to Skill

Setup is complete. Resume the main skill's normal activation flow — load config from the freshly written files and proceed with whatever the user originally intended.
246
.agent/skills/bmad-module-builder/references/create-module.md
Normal file
@@ -0,0 +1,246 @@
# Create Module

**Language:** Use `{communication_language}` for all output. **Output format:** `{document_output_language}` for generated files unless overridden by context.

## Your Role

You are a module packaging specialist. The user has built their skills — your job is to read them deeply, understand the ecosystem they form, and scaffold the infrastructure that makes it an installable BMad module.

## Process

### 1. Discover the Skills

Ask the user for the folder path containing their built skills, or accept a path to a single skill (folder or SKILL.md file — if they provide a path ending in `SKILL.md`, resolve to the parent directory). Also ask: do they have a plan document from an Ideate Module (IM) session? If they do, this is the recommended path — a plan document lets you auto-extract module identity, capability ordering, config variables, and design rationale, dramatically improving the quality of the scaffolded module. Read it first, focusing on the structured sections (frontmatter, Skills, Configuration, Build Roadmap) — skip Ideas Captured and other freeform sections that don't inform scaffolding.

**Read every SKILL.md in the folder.** For 4 or fewer skills, read all SKILL.md files in a single parallel batch (one message, multiple Read calls). For 5+ skills, spawn parallel subagents — one per skill — each returning compact JSON: `{ name, description, capabilities: [{ name, args, outputs }], dependencies }`. This keeps the parent context lean while still understanding the full ecosystem.

For each skill, understand:

- Name, purpose, and capabilities
- Arguments and interaction model
- What it produces and where
- Dependencies on other skills or external tools

**Single skill detection:** If the folder contains exactly one skill (one directory with a SKILL.md), or the user provided a direct path to a single skill, note this as a **standalone module candidate**.

### 1.5. Confirm Approach

**If single skill detected:** Present the standalone option:

> "I found one skill: **{skill-name}**. For single-skill modules, I recommend the **standalone self-registering** approach — instead of generating a separate setup skill, the registration logic is built directly into this skill via a setup reference file. When users pass `setup` or `configure` as an argument, the skill handles its own module registration.
>
> This means:
> - No separate `-setup` skill to maintain
> - Simpler distribution (single skill folder + marketplace.json)
> - Users install by adding the skill and running it with `setup`
>
> Shall I proceed with the standalone approach, or would you prefer a separate setup skill?"

**If multiple skills detected:** Confirm with the user: "I found {N} skills: {list}. I'll generate a dedicated `-setup` skill to handle module registration for all of them. Sound good?"

If the user overrides the recommendation (e.g., wants a setup skill for a single skill, or standalone for multiple), respect their choice.

### 2. Gather Module Identity

Collect through conversation (or extract from a plan document in headless mode):

- **Module name** — Human-friendly display name (e.g., "Creative Intelligence Suite")
- **Module code** — 2-4 letter abbreviation (e.g., "cis"). Used in skill naming, config sections, and folder conventions
- **Description** — One-line summary of what the module does
- **Version** — Starting version (default: 1.0.0)
- **Module greeting** — Message shown to the user after setup completes
- **Standalone or expansion?** If expansion: which module does it extend? This affects how help CSV entries may reference capabilities from the parent module

### 3. Define Capabilities

Build the help CSV entries for each skill. A single skill can have multiple capabilities (rows). For each capability:

| Field               | Description                                                            |
| ------------------- | ---------------------------------------------------------------------- |
| **display-name**    | What the user sees in help/menus                                       |
| **menu-code**       | 2-letter shortcut, unique across the module                            |
| **description**     | What this capability does (concise)                                    |
| **action**          | The capability/action name within the skill                            |
| **args**            | Supported arguments (e.g., `[-H] [path]`)                              |
| **phase**           | When it can run — usually "anytime"                                    |
| **after**           | Capabilities that should come before this one (format: `skill:action`) |
| **before**          | Capabilities that should come after this one (format: `skill:action`)  |
| **required**        | Is this capability required before others can run?                     |
| **output-location** | Where output goes (config variable name or path)                       |
| **outputs**         | What it produces                                                       |

Ask the user about:

- How capabilities should be ordered — are there natural sequences?
- Which capabilities are prerequisites for others?
- If this is an expansion module, do any capabilities reference the parent module's skills in their before/after fields?

**Standalone modules:** All entries map to the same skill. Include a capability entry for the `setup`/`configure` action (menu-code `SU` or similar, action `configure`, phase `anytime`). Populate columns correctly for bmad-help consumption:

- `phase`: typically `anytime`, but use workflow phases (`1-analysis`, `2-planning`, etc.) if the skill fits a natural workflow sequence
- `after`/`before`: dependency chain between capabilities, format `skill-name:action`
- `required`: `true` for blocking gates, `false` for optional capabilities
- `output-location`: use config variable names (e.g., `output_folder`) not literal paths — bmad-help resolves these from config
- `outputs`: describe file patterns bmad-help should look for to detect completion (e.g., "quality report", "converted skill")
- `menu-code`: unique 1-3 letter shortcodes displayed as `[CODE] Display Name` in help
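As a hypothetical illustration of the column layout for a standalone module (the module code, skill name, and values here are invented, not a required format beyond the header):

```csv
module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
acme,acme-writer,Draft Report,DR,Drafts a report from notes,draft,[-H] [path],anytime,,,false,output_folder,draft report
acme,acme-writer,Setup,SU,Registers and configures the module,configure,,anytime,,,false,,config entries
```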

### 4. Define Configuration Variables

Does the module need custom installation questions? For each custom variable:

| Field               | Description                                                                  |
| ------------------- | ---------------------------------------------------------------------------- |
| **Key name**        | Used in config.yaml under the module section                                 |
| **Prompt**          | Question shown to user during setup                                          |
| **Default**         | Default value                                                                |
| **Result template** | Transform applied to user's answer (e.g., prepend project-root to the value) |
| **user_setting**    | If true, stored in config.user.yaml instead of config.yaml                   |

Remind the user: skills should always have sensible fallbacks if config hasn't been set. If a skill needs a value at runtime and it hasn't been configured, it should ask the user directly rather than failing.

**Full question spec:** module.yaml supports richer question types beyond simple text prompts. Use them when appropriate:

- **`single-select`** — constrained choice list with `value`/`label` options
- **`multi-select`** — checkbox list, default is an array
- **`confirm`** — boolean Yes/No (default is `true`/`false`)
- **`required`** — field must have a non-empty value
- **`regex`** — input validation pattern
- **`example`** — hint text shown below the default
- **`directories`** — array of paths to create during setup (e.g., `["{output_folder}", "{reports_folder}"]`)
- **`post-install-notes`** — message shown after setup (simple string or conditional keyed by config values)

### 5. External Dependencies and Setup Extensions

Ask the user about requirements beyond configuration:

- **CLI tools or MCP servers** — Do any skills depend on externally installed tools? If so, the setup skill should check for their presence and guide the user through installation or configuration. These checks would be custom additions to the cloned setup SKILL.md.
- **UI or web app** — Does the module include a dashboard, visualization layer, or interactive web interface? If the setup skill needs to install or configure a web app, scaffold UI files, or set up a dev server, capture those requirements.
- **Additional setup actions** — Beyond config collection: scaffolding project directories, generating starter files, configuring external services, setting up webhooks, etc.

If any of these apply, let the user know the scaffolded setup skill will need manual customization after creation to add these capabilities. Document what needs to be added so the user has a clear checklist.

**Standalone modules:** External dependency checks would need to be handled within the skill itself (in the module-setup.md reference or the main SKILL.md). Note any needed checks for the user to add manually.

### 6. Generate and Confirm

Present the complete module.yaml and module-help.csv content for the user to review. Show:

- Module identity and metadata
- All configuration variables with their prompts and defaults
- Complete help CSV entries with ordering and relationships
- Any external dependencies or setup extensions that need manual follow-up

Iterate until the user confirms everything is correct.

### 7. Scaffold

#### Multi-skill modules (setup skill approach)

Write the confirmed module.yaml and module-help.csv content to temporary files at `{bmad_builder_reports}/{module-code}-temp-module.yaml` and `{bmad_builder_reports}/{module-code}-temp-help.csv`. Run the scaffold script:

```bash
python3 ./scripts/scaffold-setup-skill.py \
  --target-dir "{skills-folder}" \
  --module-code "{code}" \
  --module-name "{name}" \
  --module-yaml "{bmad_builder_reports}/{module-code}-temp-module.yaml" \
  --module-csv "{bmad_builder_reports}/{module-code}-temp-help.csv"
```

This creates `{code}-setup/` in the user's skills folder containing:

- `./SKILL.md` — Generic setup skill with module-specific frontmatter
- `./scripts/` — merge-config.py, merge-help-csv.py, cleanup-legacy.py
- `./assets/module.yaml` — Generated module definition
- `./assets/module-help.csv` — Generated capability registry

#### Standalone modules (self-registering approach)

Write the confirmed module.yaml and module-help.csv directly to the skill's `assets/` folder (create the folder if needed). Then run the standalone scaffold script to copy the template infrastructure:

```bash
python3 ./scripts/scaffold-standalone-module.py \
  --skill-dir "{skill-folder}" \
  --module-code "{code}" \
  --module-name "{name}"
```

This adds to the existing skill:

- `./assets/module-setup.md` — Self-registration reference (alongside module.yaml and module-help.csv)
- `./scripts/merge-config.py` — Config merge script
- `./scripts/merge-help-csv.py` — Help CSV merge script
- `../.claude-plugin/marketplace.json` — Distribution manifest

After scaffolding, read the skill's SKILL.md and integrate the registration check into its **On Activation** section. How you integrate depends on whether the skill has an existing first-run init flow:

**If the skill has a first-run init** (e.g., agents with persistent memory — if the agent memory doesn't exist, the skill loads an init template for first-time onboarding): add the module registration to that existing first-run flow. The init reference should load `./assets/module-setup.md` before or as part of first-time setup, so the user gets both module registration and skill initialization in a single first-run experience. The `setup`/`configure` arg should still work independently for reconfiguration.

**If the skill has no first-run init** (e.g., simple workflows): add a standalone registration check before any config loading:

> Check if `{project-root}/_bmad/config.yaml` contains a `{module-code}` section. If not — or if user passed `setup` or `configure` — load `./assets/module-setup.md` and complete registration before proceeding.

In both cases, the `setup`/`configure` argument should always trigger `./assets/module-setup.md` regardless of whether the module is already registered (for reconfiguration).

Show the user the proposed changes and confirm before writing.

### 8. Confirm and Next Steps

#### Multi-skill modules

Show what was created — the setup skill folder structure and key file contents. Let the user know:

- To install this module in any project, run the setup skill
- The setup skill handles config collection, writing, and help CSV registration
- The module is now a complete, distributable BMad module

#### Standalone modules

Show what was added to the skill — the new files and the SKILL.md modification. Let the user know:

- The skill is now a self-registering BMad module
- Users install by adding the skill and running it with `setup` or `configure`
- On first normal run, if config is missing, it will automatically trigger registration
- Review and fill in the `marketplace.json` fields (owner, license, homepage, repository) for distribution
- The module can be validated with the Validate Module (VM) capability

## Headless Mode

When `--headless` is set, the skill requires either:

- A **plan document path** — extract all module identity, capabilities, and config from it
- A **skills folder path** or **single skill path** — read skills and infer sensible defaults for module identity

**Required inputs** (must be provided or extractable — exit with error if missing):

- Module code (cannot be safely inferred)
- Skills folder path or single skill path

**Inferrable inputs** (will use defaults if not provided — flag as inferred in output):

- Module name (inferred from folder name or skill themes)
- Description (synthesized from skills)
- Version (defaults to 1.0.0)
- Capability ordering (inferred from skill dependencies)

**Approach auto-detection:** If the path contains a single skill, use the standalone approach automatically. If it contains multiple skills, use the setup skill approach.

In headless mode: skip interactive questions, scaffold immediately, and return structured JSON:

```json
{
  "status": "success|error",
  "approach": "standalone|setup-skill",
  "module_code": "...",
  "setup_skill": "{code}-setup",
  "skill_dir": "/path/to/skill/",
  "location": "/path/to/...",
  "files_created": ["..."],
  "inferred": { "module_name": "...", "description": "..." },
  "warnings": []
}
```

For multi-skill modules: `setup_skill` and `location` point to the generated setup skill. For standalone modules: `skill_dir` points to the modified skill and `location` points to the marketplace.json parent.

The `inferred` object lists every value that was not explicitly provided, so the caller can spot wrong inferences. If critical information is missing and cannot be inferred, return `{ "status": "error", "message": "..." }`.
216
.agent/skills/bmad-module-builder/references/ideate-module.md
Normal file
@@ -0,0 +1,216 @@
|
||||
# Ideate Module
|
||||
|
||||
**Language:** Use `{communication_language}` for all conversation. Write plan document in `{document_output_language}`.
|
||||
|
||||
## Your Role
|
||||
|
||||
You are a creative collaborator and module architect — part brainstorming partner, part technical advisor. Your job is to help the user discover and articulate their vision for a BMad module. The user is the creative force. You draw out their ideas, build on them, and help them see possibilities they haven't considered yet. When the session is over, they should feel like every great idea was theirs.
|
||||
|
||||
## Session Resume
|
||||
|
||||
On activation, check `{bmad_builder_reports}` for an existing plan document matching the user's intent. If one exists with `status: ideation` or `status: in-progress`, load it and orient from its current state: identify which phase was last completed based on which sections have content, briefly summarize where things stand, and ask the user where they'd like to pick up. This prevents re-deriving state from conversation history after context compaction or a new session.
|
||||
|
||||
## Facilitation Principles
|
||||
|
||||
These are non-negotiable — they define the experience:
|
||||
|
||||
- **The user is the genius.** Build on their ideas. When you see a connection they haven't made, ask a question that leads them there — don't just state it. When they land on something great, celebrate it genuinely.
|
||||
- **"Yes, and..."** — Never dismiss. Every idea has a seed worth growing. Add to it, extend it, combine it with something else.
|
||||
- **Stay generative longer than feels comfortable.** The best ideas come after the obvious ones are exhausted. Resist the urge to organize or converge early. When the user starts structuring prematurely, gently redirect: "Love that — let's capture it. Before we organize, what else comes to mind?"
|
||||
- **Capture everything.** When the user says something in passing that's actually important, note it in the plan document and surface it at the right moment later.
|
||||
- **Soft gates at transitions.** "Anything else on this, or shall we explore...?" Users almost always remember one more thing when given a graceful exit ramp.
|
||||
- **Make it fun.** This should feel like the best brainstorming session the user has ever had — energizing, surprising, and productive. Match the user's energy. If they're excited, be excited with them. If they're thoughtful, go deep.
|
||||
|
||||
## Brainstorming Toolkit
|
||||
|
||||
Weave these into conversation naturally. Never name them or make the user feel like they're in a methodology. They're your internal playbook for keeping the conversation rich and multi-dimensional:
|
||||
|
||||
- **First Principles** — Strip away assumptions. "What problem is this actually solving at its core?" "If you could only do one thing for your users, what would it be?"
|
||||
- **What If Scenarios** — Expand possibility space. "What if this could also..." "What if we flipped that and..." "What would change if there were no technical constraints?"
|
||||
- **Reverse Brainstorming** — Find constraints through inversion. "What would make this terrible for users?" "What's the worst version of this module?" Then flip the answers.
|
||||
- **Assumption Reversal** — Challenge architecture decisions. "Do these really need to be separate?" "What if a single agent could handle all of that?" "What assumption are we making that might not be true?"
|
||||
- **Perspective Shifting** — Rotate viewpoints. Ask from the end-user angle, the developer maintaining it, someone extending it later, a complete beginner encountering it for the first time.
|
||||
- **Question Storming** — Surface unknowns. "What questions will users have when they first see this?" "What would a skeptic ask?" "What's the thing we haven't thought of yet?"
|
||||
|
||||
## Process

This is a phased process. Each phase has a clear purpose and should not be skipped, even if the user is eager to move ahead. The phases prevent critical details from being missed and avoid expensive rewrites later.

**Writing discipline:** During Phases 1-2, write only to the **Ideas Captured** section — raw, generous, unstructured. Do not write structured Architecture or Skills sections yet. Starting at Phase 3, begin writing structured sections. This avoids rewriting the entire document when the architecture shifts.

### Phase 1: Vision and Module Identity

Initialize the plan document by copying `./assets/module-plan-template.md` to `{bmad_builder_reports}` with a descriptive filename — use a `cp` command rather than reading the template into context. Set `created` and `updated` timestamps. Then immediately write "Not ready — complete in Phase 3+" as placeholder text in all structured sections (Architecture, Memory Architecture, Memory Contract, Cross-Agent Patterns, Skills, Configuration, External Dependencies, UI and Visualization, Setup Extensions, Integration, Creative Use Cases, Build Roadmap). This makes the writing discipline constraint visible in the document itself — only Ideas Captured and frontmatter should be written during Phases 1-2. This document is your cache — update it progressively as the conversation unfolds so work survives context compaction.

**First: capture the spark.** Let the user talk freely — this is where the richest context comes from:

- What's the idea? What problem space or domain?
- Who would use this and what would they get from it?
- Is there anything that inspired this — an existing tool, a frustration, a gap they've noticed?

Don't rush to structure. Just listen, ask follow-ups, and capture.

**Then: lock down module identity.** Before any skill names are written, nail these down — they affect every name and path in the document:

- **Module name** — Human-friendly display name (e.g., "Content Creators' Creativity Suite")
- **Module code** — 2-4 letter abbreviation (e.g., "cs3"). All skill names and memory paths derive from this. Changing it later means a find-and-replace across the entire plan.
- **Description** — One-line summary of what the module does

Write these to the plan document frontmatter immediately. All subsequent skill names use `{modulecode}-{skillname}` (or `{modulecode}-agent-{name}` for agents). The `bmad-` prefix is reserved for official BMad creations.

- **Standalone or expansion?** If expansion: which module does it extend? How do the new capabilities relate? Even expansion modules should provide value independently — the parent module being absent shouldn't break this one.

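A tiny sketch of the naming convention above; the helper function is hypothetical, not part of BMad tooling:

```python
# Hypothetical helper (not part of BMad) illustrating the naming rules above.
def skill_name(module_code: str, name: str, agent: bool = False) -> str:
    """Derive a skill folder name from the module code."""
    if module_code.lower().startswith("bmad"):
        raise ValueError("the 'bmad-' prefix is reserved for official BMad creations")
    prefix = f"{module_code}-agent-" if agent else f"{module_code}-"
    return prefix + name
```
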
### Phase 2: Creative Exploration

This is the heart of the session — spend real time here. Use the brainstorming toolkit to help the user explore:

- What capabilities would serve users in this domain?
- What would delight users? What would surprise them?
- What are the edge cases and hard problems?
- What would a power user want vs. a beginner?
- How might different capabilities work together in unexpected ways?
- What exists today that's close but not quite right?

Update **only the Ideas Captured section** of the plan document as ideas emerge — do not write to structured sections yet. Capture raw ideas generously — even ones that seem tangential. They're context for later.

Energy check: if the conversation plateaus, try a perspective shift or reverse brainstorming to open a new vein.

### Phase 3: Architecture

Before shifting to architecture, use a mandatory soft gate: "Anything else to capture before we shift to architecture? Once we start structuring, we'll still be creative — but this is the best moment to get any remaining raw ideas down." Only proceed when the user confirms.

This is where structured writing begins.

**Guide toward agent-with-capabilities when appropriate.** Many users default to thinking they need multiple specialized agents. But a well-designed single agent with rich internal capabilities and routing:

- Provides a more seamless user experience
- Benefits from accumulated memory and context
- Is simpler to maintain and configure
- Can still have distinct modes or capabilities that feel like separate tools

However, **multiple agents make sense when:**

- The module spans genuinely different expertise domains that benefit from distinct personas
- Users may want to interact with one agent without loading the others
- Each agent needs its own memory context — personal history, learned preferences, domain-specific notes
- Some capabilities are optional add-ons the user might not install

**Multiple workflows make sense when:**

- Capabilities serve different user journeys or require different tools
- The workflow requires sequential phases with fundamentally different processes
- No persistent persona or memory is needed between invocations

**The orchestrator pattern** is another option to present: a master agent that the user primarily talks to, which coordinates the domain agents. Think of it like a ship's commander — communications generally flow through them, but the user can still talk directly to a specialist when they want to go deep. This adds complexity but can provide a more cohesive experience for users who want a single conversational partner. Let the user decide if this fits their vision.

**Output check for multi-agent:** When defining agents, verify that each one produces tangible output. If an agent's primary role is planning or coordinating (not producing), that's usually a sign those capabilities should be distributed into the domain agents as native capabilities, with shared memory handling cross-domain coordination. The exception is an explicit orchestrator agent the user wants as a conversational hub.

Even with multiple agents, each should be self-contained with its own capabilities. Duplicating some common functionality across agents is fine — it keeps each agent coherent and independently useful. This is the user's decision, but guide them toward self-sufficiency per agent.

Present the trade-offs. Let the user decide. Document the reasoning either way — future-them will want to know why.

**Memory architecture for multi-agent modules.** If the module has multiple agents, explore how memory should work. Every agent has its own memory folder (personal memory at `{project-root}/_bmad/memory/{skillName}/`), but modules may also benefit from shared memory:

| Pattern | When It Fits | Example |
| --- | --- | --- |
| **Personal memory only** | Agents have distinct domains with little overlap | A module with a code reviewer and a test writer — each tracks different things |
| **Personal + shared module memory** | Agents have their own context but also learn shared things about the user | Agents each remember domain specifics but share knowledge about the user's style and preferences |
| **Single shared memory (recommended for tightly coupled agents)** | All agents benefit from full visibility into everything the suite has learned | A creative suite where every agent needs the user's voice, brand, and content history. Daily capture + periodic curation keeps it organized |

The **single shared memory with daily/curated memory** model works well for tightly coupled multi-agent modules:

- **Daily files** (`daily/YYYY-MM-DD.md`) — every session, the active agent appends timestamped entries tagged by agent name. Raw, chronological, append-only.
- **Curated files** (organized by topic) — distilled knowledge that agents load on activation. Updated through inline curation (obvious updates go straight to the file) and periodic deep curation.
- **Index** (`index.md`) — orientation document every agent reads first. Summarizes what curated files exist, when each was last updated, and recent activity. Agents selectively load only what's relevant.

If the memory architecture points entirely toward shared memory with no personal differentiation, gently surface whether a single agent with multiple capabilities might be the better design.

**Cross-agent interaction patterns.** If the module has multiple agents, explicitly define how they hand off work:

- Is the user the router (brings output from one agent to another)?
- Are there service-layer relationships (e.g., a visual agent other agents can describe needs for)?
- Does an orchestrator agent coordinate?
- How does shared memory enable cross-domain awareness (e.g., blog agent sees a podcast was recorded)?

Document these patterns — they're critical for builders to understand.

### Phase 4: Module Context and Configuration

**Custom configuration.** Does the module need to ask users questions during setup? For each potential config variable, capture: key name, prompt, default, result template, and whether it's a user setting.

**Even if there are no config variables, explicitly state this in the plan** — "This module requires no custom configuration beyond core BMad settings." Don't leave the section blank, or the builder won't know whether it was considered.

Skills should always have sensible fallbacks if config hasn't been set, or ask at runtime for specific values they need.

**External dependencies.** Do any planned skills rely on externally installed CLI tools or MCP servers? If so, the setup skill may need to check for these, guide the user through installation, or configure connection details. Capture what's needed and why.

**UI or visualization.** Could the module benefit from a user interface? This could be a shared progress dashboard, per-skill visualizations, an interactive view showing how skills relate and flow together, or even a cohesive module-level dashboard. Some modules might warrant a bespoke web app. Not every module needs this, but it's worth exploring — users often don't think of it until prompted.

**Setup skill extensions.** Beyond config collection, does the setup process need to do anything special? Install a web app, scaffold project directories, configure external services, generate starter files? The setup skill is extensible — it can do more than just write config.

### Phase 5: Define Skills and Capabilities

For each planned skill (whether agent or workflow), build a **self-contained brief** that could be handed directly to the Agent Builder or Workflow Builder without any conversation context. Each brief should include:

**For agents:**

- **Name** — following `{modulecode}-agent-{name}` convention (agents) or `{modulecode}-{skillname}` (workflows)
- **Persona** — who is this agent? Communication style, expertise, personality
- **Core outcome** — what does success look like?
- **The non-negotiable** — the one thing this agent must get right
- **Capabilities** — each distinct action or mode, described as outcomes (not procedures). For each capability, define at minimum:
  - What it does (outcome-driven description)
  - **Inputs** — what does the user provide? (topic, transcript, existing content, etc.)
  - **Outputs** — what does the agent produce? (draft, plan, report, code, etc.) Call out when an output would be a good candidate for an **HTML report** (validation runs, analysis results, quality checks, comparison reports)
- **Memory** — what files does it read on activation? What does it write to? What's in the daily log?
- **Init responsibility** — what happens on first run?
- **Activation modes** — interactive, headless, or both?
- **Tool dependencies** — external tools with technical specifics (what the agent outputs, how it's invoked)
- **Design notes** — non-obvious considerations, the "why" behind decisions
- **Relationships** — ordering (before/after), cross-agent handoff patterns

**For workflows:**

- **Name**, **Purpose**, **Capabilities** with inputs/outputs, **Design notes**, **Relationships**

### Phase 6: Capability Review

**Do not skip this phase.** Present the complete capability list for each skill back to the user for review. For each skill:

- Walk through the capabilities — are they complete? Missing anything?
- Are any capabilities too granular and should be consolidated?
- Are any too broad and should be split?
- Do the inputs and outputs make sense?
- Are there capabilities that would benefit from producing structured output (HTML reports, dashboards, exportable artifacts)?
- For multi-skill modules: are there capability overlaps between skills that should be resolved?

Offer to go deeper on any specific capability the user wants to explore further. Some capabilities may need more detailed planning — sub-steps, edge cases, format specifications. The user decides the depth.

Iterate until the user confirms the capability list is right. Update the plan document with any changes.

### Phase 7: Finalize the Plan

Complete all sections of the plan document. Do a final pass to ensure:

- **Module identity** (name, code, description) is in the frontmatter
- **Architecture** section documents the decision and rationale
- **Memory architecture** is explicit (which pattern, what files, what's shared)
- **Cross-agent patterns** are documented (if multi-agent)
- **Configuration** section is filled in — even if empty, state it explicitly
- **Every skill brief** is self-contained enough for a builder agent with zero context
- **Inputs and outputs** are defined for each capability
- **Build roadmap** has a recommended order with rationale
- **Ideas Captured** preserves raw brainstorming ideas that didn't make it into the structured plan

Update `status` to "complete" in the frontmatter.

**Close with next steps and active handoff:**

Point to the plan document location. Then, using the Build Roadmap's recommended order, identify the first skill to build and offer to start immediately:

- "Your plan is complete at `{path}`. The build roadmap suggests starting with **{first-skill-name}** — shall I invoke **Build an Agent (BA)** or **Build a Workflow (BW)** now to start building it? I'll pass the plan document as context so the builder understands the bigger picture."
- "When all skills are built, return to **Create Module (CM)** to scaffold the module infrastructure."

This is the moment of highest user energy — leverage it. If they decline, that's fine — they have the plan document and can return anytime.

**Session complete.** The IM session ends here. Do not continue unless the user asks a follow-up question.

@@ -0,0 +1,77 @@
# Validate Module

**Language:** Use `{communication_language}` for all output. **Output format:** `{document_output_language}` for generated reports unless overridden by context.

## Your Role

You are a module quality reviewer. Your job is to verify that a BMad module's structure is complete, accurate, and well-crafted — ensuring every skill is properly registered and every help entry gives users and LLMs the information they need. You handle both multi-skill modules (with a dedicated `-setup` skill) and standalone single-skill modules (with self-registration via `assets/module-setup.md`).

## Process

### 1. Locate the Module

Ask the user for the path to their module's skills folder (or a single skill folder for standalone modules). The validation script auto-detects the module type:

- **Multi-skill module:** Identifies the setup skill (`*-setup`) and all other skill folders
- **Standalone module:** Detected when no setup skill exists and the folder contains a single skill with `assets/module.yaml`. Validates: `assets/module-setup.md`, `assets/module.yaml`, `assets/module-help.csv`, `scripts/merge-config.py`, `scripts/merge-help-csv.py`

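The standalone required-file check above can be sketched as a simple presence test; the helper name is ours, but the file list comes straight from the bullet above:

```python
from pathlib import Path

# File list taken from the standalone-module bullet above.
REQUIRED = [
    "assets/module-setup.md",
    "assets/module.yaml",
    "assets/module-help.csv",
    "scripts/merge-config.py",
    "scripts/merge-help-csv.py",
]


def missing_standalone_files(skill_dir: Path) -> list[str]:
    """Return the required standalone-module files that are absent."""
    return [rel for rel in REQUIRED if not (skill_dir / rel).is_file()]
```
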
### 2. Run Structural Validation

Run the validation script for deterministic checks:

```bash
python3 ./scripts/validate-module.py "{module-skills-folder}"
```

This checks module structure (setup skill or standalone), module.yaml completeness, and CSV integrity (missing entries, orphans, duplicate menu codes, broken before/after references, missing required fields). For standalone modules, it also verifies the presence of module-setup.md and the merge scripts.

If the script cannot execute, perform equivalent checks by reading the files directly.

### 3. Quality Assessment

This is where LLM judgment matters. For 4 or fewer skills, read all SKILL.md files in a single parallel batch (one message, multiple Read calls). For 5+ skills, spawn parallel subagents — one per skill — each returning structured findings: `{ name, capabilities_found: [...], quality_notes: [...], issues: [...] }`. Then review each CSV entry against what you learned:

**Completeness** — Does every distinct capability of every skill have its own CSV row? A skill with multiple modes or actions should have multiple entries. Look for capabilities described in SKILL.md overviews that aren't registered.

**Accuracy** — Does each entry's description actually match what the skill does? Are the action names correct? Do the args match what the skill accepts?

**Description quality** — Each description should be:

- Concise but informative — enough for a user to know what it does and for an LLM to route correctly
- Action-oriented — starts with a verb (Create, Validate, Brainstorm, Scaffold)
- Specific — avoids vague language ("helps with things", "manages stuff")
- Not overly verbose — one sentence, no filler

**Ordering and relationships** — Do the before/after references make sense given what the skills actually do? Are required flags set appropriately?

**Menu codes** — Are they intuitive? Do they relate to the display name in a way users can remember?

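The mechanical parts of this quality pass (verb-first phrasing, vague wording, length) can be pre-screened before the LLM review. A rough sketch; the verb list and length cap are our own assumptions, not BMad rules, and real judgment stays with the reviewer:

```python
import re

# Illustrative heuristics only: the verb list and length cap are assumptions.
ACTION_VERBS = {"Create", "Validate", "Brainstorm", "Scaffold", "Plan", "Build", "Generate", "Review"}
VAGUE = re.compile(r"\b(helps with|manages stuff)\b", re.IGNORECASE)


def description_issues(desc: str) -> list[str]:
    """Flag CSV descriptions that miss the quality bar described above."""
    issues = []
    first_word = desc.split(" ", 1)[0] if desc else ""
    if first_word not in ACTION_VERBS:
        issues.append("should start with an action verb (Create, Validate, ...)")
    if VAGUE.search(desc):
        issues.append("vague language")
    if desc.count(".") > 1 or len(desc) > 160:
        issues.append("too verbose; keep to one sentence")
    return issues
```
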
### 4. Present Results

Combine script findings and quality assessment into a clear report:

- **Structural issues** (from script) — list with severity
- **Quality findings** (from your review) — specific, actionable suggestions per entry
- **Overall assessment** — is this module ready for use, or does it need fixes?

For each finding, explain what's wrong and suggest the fix. Be direct — the user should be able to act on every item without further clarification.

After presenting the report, offer to save findings to a durable file: "Save validation report to `{bmad_builder_reports}/module-validation-{module-code}-{date}.md`?" This gives the user a reference they can share, track as a checklist, and review in future sessions.

**Completion:** After presenting results, explicitly state: "Validation complete." If findings exist, offer to walk through fixes. If the module passes cleanly, confirm it's ready for use. Do not continue the conversation beyond what the user requests — the session is done once results are delivered and any follow-up questions are answered.

## Headless Mode

When `--headless` is set, run the full validation (script + quality assessment) without user interaction and return structured JSON:

```json
{
  "status": "pass|fail",
  "module_code": "...",
  "structural_issues": [{ "severity": "...", "message": "...", "file": "..." }],
  "quality_findings": [{ "severity": "...", "skill": "...", "message": "...", "suggestion": "..." }],
  "summary": "Module is ready for use.|Module has N issues requiring attention."
}
```

This enables CI pipelines to gate on module quality before release.

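A minimal sketch of how a CI step might consume the headless report, assuming only the fields in the schema above:

```python
import json


def gate(report_json: str) -> int:
    """Map a headless validation report to a process exit code for CI."""
    report = json.loads(report_json)
    if report.get("status") != "pass":
        return 1
    # Treat any structural issue as blocking, even on a nominal pass.
    return 1 if report.get("structural_issues") else 0
```
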
@@ -0,0 +1,124 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Scaffold a BMad module setup skill from template.

Copies the setup-skill-template into the target directory as {code}-setup/,
then writes the generated module.yaml and module-help.csv into the assets folder
and updates the SKILL.md frontmatter with the module's identity.
"""

import argparse
import json
import shutil
import sys
from pathlib import Path


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Scaffold a BMad module setup skill from template"
    )
    parser.add_argument(
        "--target-dir",
        required=True,
        help="Directory to create the setup skill in (the user's skills folder)",
    )
    parser.add_argument(
        "--module-code",
        required=True,
        help="Module code (2-4 letter abbreviation, e.g. 'cis')",
    )
    parser.add_argument(
        "--module-name",
        required=True,
        help="Module display name (e.g. 'Creative Intelligence Suite')",
    )
    parser.add_argument(
        "--module-yaml",
        required=True,
        help="Path to the generated module.yaml content file",
    )
    parser.add_argument(
        "--module-csv",
        required=True,
        help="Path to the generated module-help.csv content file",
    )
    parser.add_argument(
        "--verbose", action="store_true", help="Print progress to stderr"
    )
    args = parser.parse_args()

    template_dir = Path(__file__).resolve().parent.parent / "assets" / "setup-skill-template"
    setup_skill_name = f"{args.module_code}-setup"
    target = Path(args.target_dir) / setup_skill_name

    if not template_dir.is_dir():
        print(
            json.dumps({"status": "error", "message": f"Template not found: {template_dir}"}),
            file=sys.stdout,
        )
        return 2

    for source_path in [args.module_yaml, args.module_csv]:
        if not Path(source_path).is_file():
            print(
                json.dumps({"status": "error", "message": f"Source file not found: {source_path}"}),
                file=sys.stdout,
            )
            return 2

    target_dir = Path(args.target_dir)
    if not target_dir.is_dir():
        print(
            json.dumps({"status": "error", "message": f"Target directory not found: {target_dir}"}),
            file=sys.stdout,
        )
        return 2

    # Remove existing setup skill if present (anti-zombie)
    if target.exists():
        if args.verbose:
            print(f"Removing existing {setup_skill_name}/", file=sys.stderr)
        shutil.rmtree(target)

    # Copy template
    if args.verbose:
        print(f"Copying template to {target}", file=sys.stderr)
    shutil.copytree(template_dir, target)

    # Update SKILL.md frontmatter placeholders
    skill_md = target / "SKILL.md"
    content = skill_md.read_text(encoding="utf-8")
    content = content.replace("{setup-skill-name}", setup_skill_name)
    content = content.replace("{module-name}", args.module_name)
    content = content.replace("{module-code}", args.module_code)
    skill_md.write_text(content, encoding="utf-8")

    # Write generated module.yaml
    yaml_content = Path(args.module_yaml).read_text(encoding="utf-8")
    (target / "assets" / "module.yaml").write_text(yaml_content, encoding="utf-8")

    # Write generated module-help.csv
    csv_content = Path(args.module_csv).read_text(encoding="utf-8")
    (target / "assets" / "module-help.csv").write_text(csv_content, encoding="utf-8")

    # Collect file list
    files_created = sorted(
        str(p.relative_to(target)) for p in target.rglob("*") if p.is_file()
    )

    result = {
        "status": "success",
        "setup_skill": setup_skill_name,
        "location": str(target),
        "files_created": files_created,
        "files_count": len(files_created),
    }
    print(json.dumps(result, indent=2))
    return 0


if __name__ == "__main__":
    sys.exit(main())

190
.agent/skills/bmad-module-builder/scripts/scaffold-standalone-module.py
Executable file
@@ -0,0 +1,190 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Scaffold standalone module infrastructure into an existing skill.

Copies template files (module-setup.md, merge scripts) into the skill directory
and generates a .claude-plugin/marketplace.json for distribution. The LLM writes
module.yaml and module-help.csv directly to the skill's assets/ folder before
running this script.
"""

import argparse
import json
import sys
from pathlib import Path


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Scaffold standalone module infrastructure into an existing skill"
    )
    parser.add_argument(
        "--skill-dir",
        required=True,
        help="Path to the existing skill directory (must contain SKILL.md)",
    )
    parser.add_argument(
        "--module-code",
        required=True,
        help="Module code (2-4 letter abbreviation, e.g. 'exc')",
    )
    parser.add_argument(
        "--module-name",
        required=True,
        help="Module display name (e.g. 'Excalidraw Tools')",
    )
    parser.add_argument(
        "--marketplace-dir",
        default=None,
        help="Directory to create .claude-plugin/ in (defaults to skill-dir parent)",
    )
    parser.add_argument(
        "--verbose", action="store_true", help="Print progress to stderr"
    )
    args = parser.parse_args()

    template_dir = (
        Path(__file__).resolve().parent.parent
        / "assets"
        / "standalone-module-template"
    )
    skill_dir = Path(args.skill_dir).resolve()
    marketplace_dir = (
        Path(args.marketplace_dir).resolve() if args.marketplace_dir else skill_dir.parent
    )

    # --- Validation ---

    if not template_dir.is_dir():
        print(
            json.dumps({"status": "error", "message": f"Template not found: {template_dir}"}),
            file=sys.stdout,
        )
        return 2

    if not skill_dir.is_dir():
        print(
            json.dumps({"status": "error", "message": f"Skill directory not found: {skill_dir}"}),
            file=sys.stdout,
        )
        return 2

    if not (skill_dir / "SKILL.md").is_file():
        print(
            json.dumps({"status": "error", "message": f"No SKILL.md found in {skill_dir}"}),
            file=sys.stdout,
        )
        return 2

    if not (skill_dir / "assets" / "module.yaml").is_file():
        print(
            json.dumps({
                "status": "error",
                "message": f"assets/module.yaml not found in {skill_dir} — the LLM must write it before running this script",
            }),
            file=sys.stdout,
        )
        return 2

    # --- Copy template files ---

    files_created: list[str] = []
    files_skipped: list[str] = []
    warnings: list[str] = []

    # 1. Copy module-setup.md to assets/ (alongside module.yaml and module-help.csv)
    assets_dir = skill_dir / "assets"
    assets_dir.mkdir(exist_ok=True)
    src_setup = template_dir / "module-setup.md"
    dst_setup = assets_dir / "module-setup.md"
    if args.verbose:
        print(f"Copying module-setup.md to {dst_setup}", file=sys.stderr)
    dst_setup.write_bytes(src_setup.read_bytes())
    files_created.append("assets/module-setup.md")

    # 2. Copy merge scripts to scripts/
    scripts_dir = skill_dir / "scripts"
    scripts_dir.mkdir(exist_ok=True)

    for script_name in ("merge-config.py", "merge-help-csv.py"):
        src = template_dir / script_name
        dst = scripts_dir / script_name
        if dst.exists():
            msg = f"scripts/{script_name} already exists — skipped to avoid overwriting"
            files_skipped.append(f"scripts/{script_name}")
            warnings.append(msg)
            if args.verbose:
                print(f"SKIP: {msg}", file=sys.stderr)
        else:
            if args.verbose:
                print(f"Copying {script_name} to {dst}", file=sys.stderr)
            dst.write_bytes(src.read_bytes())
            dst.chmod(0o755)
            files_created.append(f"scripts/{script_name}")

    # 3. Generate marketplace.json
    plugin_dir = marketplace_dir / ".claude-plugin"
    plugin_dir.mkdir(parents=True, exist_ok=True)
    marketplace_json = plugin_dir / "marketplace.json"

    # Read module.yaml for description and version
    module_yaml_path = skill_dir / "assets" / "module.yaml"
    module_description = ""
    module_version = "1.0.0"
    try:
        yaml_text = module_yaml_path.read_text(encoding="utf-8")
        for line in yaml_text.splitlines():
            stripped = line.strip()
            if stripped.startswith("description:"):
                module_description = stripped.split(":", 1)[1].strip().strip('"').strip("'")
            elif stripped.startswith("module_version:"):
                module_version = stripped.split(":", 1)[1].strip().strip('"').strip("'")
    except Exception:
        pass

    skill_dir_name = skill_dir.name
    marketplace_data = {
        "name": args.module_code,
        "owner": {"name": ""},
        "license": "",
        "homepage": "",
        "repository": "",
        "keywords": ["bmad"],
        "plugins": [
            {
                "name": args.module_code,
                "source": "./",
                "description": module_description,
                "version": module_version,
                "author": {"name": ""},
                "skills": [f"./{skill_dir_name}"],
            }
        ],
    }

    if args.verbose:
        print(f"Writing marketplace.json to {marketplace_json}", file=sys.stderr)
    marketplace_json.write_text(
        json.dumps(marketplace_data, indent=2) + "\n", encoding="utf-8"
    )
    files_created.append(".claude-plugin/marketplace.json")

    # --- Result ---

    result = {
        "status": "success",
        "skill_dir": str(skill_dir),
        "module_code": args.module_code,
        "files_created": files_created,
        "files_skipped": files_skipped,
        "warnings": warnings,
        "marketplace_json": str(marketplace_json),
    }
    print(json.dumps(result, indent=2))
    return 0


if __name__ == "__main__":
    sys.exit(main())

@@ -0,0 +1,230 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Tests for scaffold-setup-skill.py"""

import json
import subprocess
import sys
import tempfile
from pathlib import Path

SCRIPT = Path(__file__).resolve().parent.parent / "scaffold-setup-skill.py"
TEMPLATE_DIR = Path(__file__).resolve().parent.parent.parent / "assets" / "setup-skill-template"


def run_scaffold(tmp: Path, **kwargs) -> tuple[int, dict]:
    """Run the scaffold script and return (exit_code, parsed_json)."""
    target_dir = kwargs.get("target_dir", str(tmp / "output"))
    Path(target_dir).mkdir(parents=True, exist_ok=True)

    module_code = kwargs.get("module_code", "tst")
    module_name = kwargs.get("module_name", "Test Module")

    yaml_path = tmp / "module.yaml"
    csv_path = tmp / "module-help.csv"
    yaml_path.write_text(kwargs.get("yaml_content", f'code: {module_code}\nname: "{module_name}"\n'))
    csv_path.write_text(
        kwargs.get(
            "csv_content",
            "module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs\n"
            f'{module_name},{module_code}-example,Example,EX,An example skill,do-thing,,anytime,,,false,output_folder,artifact\n',
        )
    )

    cmd = [
        sys.executable,
        str(SCRIPT),
        "--target-dir", target_dir,
        "--module-code", module_code,
        "--module-name", module_name,
        "--module-yaml", str(yaml_path),
        "--module-csv", str(csv_path),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    try:
        data = json.loads(result.stdout)
    except json.JSONDecodeError:
        data = {"raw_stdout": result.stdout, "raw_stderr": result.stderr}
    return result.returncode, data


def test_basic_scaffold():
    """Test that scaffolding creates the expected structure."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        target_dir = tmp / "output"
        target_dir.mkdir()

        code, data = run_scaffold(tmp, target_dir=str(target_dir))
        assert code == 0, f"Script failed: {data}"
        assert data["status"] == "success"
        assert data["setup_skill"] == "tst-setup"

        setup_dir = target_dir / "tst-setup"
        assert setup_dir.is_dir()
        assert (setup_dir / "SKILL.md").is_file()
        assert (setup_dir / "scripts" / "merge-config.py").is_file()
        assert (setup_dir / "scripts" / "merge-help-csv.py").is_file()
        assert (setup_dir / "scripts" / "cleanup-legacy.py").is_file()
        assert (setup_dir / "assets" / "module.yaml").is_file()
        assert (setup_dir / "assets" / "module-help.csv").is_file()


def test_skill_md_frontmatter_substitution():
    """Test that SKILL.md placeholders are replaced."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        target_dir = tmp / "output"
        target_dir.mkdir()

        code, data = run_scaffold(
            tmp,
            target_dir=str(target_dir),
            module_code="xyz",
            module_name="XYZ Studio",
        )
        assert code == 0

        skill_md = (target_dir / "xyz-setup" / "SKILL.md").read_text()
        assert "xyz-setup" in skill_md
        assert "XYZ Studio" in skill_md
        assert "{setup-skill-name}" not in skill_md
        assert "{module-name}" not in skill_md
        assert "{module-code}" not in skill_md


def test_template_frontmatter_uses_quoted_name_placeholder():
    """Test that the template frontmatter is valid before substitution."""
    template_skill_md = (TEMPLATE_DIR / "SKILL.md").read_text()
    assert 'name: "{setup-skill-name}"' in template_skill_md


def test_generated_files_written():
    """Test that module.yaml and module-help.csv contain generated content."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        target_dir = tmp / "output"
        target_dir.mkdir()

        custom_yaml = 'code: abc\nname: "ABC Module"\ndescription: "Custom desc"\n'
        custom_csv = "module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs\nABC Module,bmad-abc-thing,Do Thing,DT,Does the thing,run,,anytime,,,false,output_folder,report\n"

        code, data = run_scaffold(
            tmp,
            target_dir=str(target_dir),
            module_code="abc",
            module_name="ABC Module",
            yaml_content=custom_yaml,
            csv_content=custom_csv,
        )
        assert code == 0

        yaml_content = (target_dir / "abc-setup" / "assets" / "module.yaml").read_text()
        assert "ABC Module" in yaml_content
        assert "Custom desc" in yaml_content

        csv_content = (target_dir / "abc-setup" / "assets" / "module-help.csv").read_text()
        assert "bmad-abc-thing" in csv_content
        assert "DT" in csv_content


def test_anti_zombie_replaces_existing():
    """Test that an existing setup skill is replaced cleanly."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        target_dir = tmp / "output"
        target_dir.mkdir()

        # First scaffold
        run_scaffold(tmp, target_dir=str(target_dir))
        stale_file = target_dir / "tst-setup" / "stale-marker.txt"
        stale_file.write_text("should be removed")

        # Second scaffold should remove stale file
        code, data = run_scaffold(tmp, target_dir=str(target_dir))
        assert code == 0
        assert not stale_file.exists()


def test_missing_target_dir():
    """Test error when target directory doesn't exist."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        nonexistent = tmp / "nonexistent"

        # Write valid source files
        yaml_path = tmp / "module.yaml"
        csv_path = tmp / "module-help.csv"
        yaml_path.write_text('code: tst\nname: "Test"\n')
        csv_path.write_text("header\n")

        cmd = [
            sys.executable,
            str(SCRIPT),
            "--target-dir", str(nonexistent),
            "--module-code", "tst",
            "--module-name", "Test",
            "--module-yaml", str(yaml_path),
            "--module-csv", str(csv_path),
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        assert result.returncode == 2
        data = json.loads(result.stdout)
        assert data["status"] == "error"


def test_missing_source_file():
    """Test error when module.yaml source doesn't exist."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        target_dir = tmp / "output"
        target_dir.mkdir()

        # Create only the CSV; module.yaml is intentionally left missing
        yaml_path = tmp / "module.yaml"
        csv_path = tmp / "module-help.csv"
        csv_path.write_text("header\n")
        # Don't create yaml_path

        cmd = [
            sys.executable,
            str(SCRIPT),
            "--target-dir", str(target_dir),
            "--module-code", "tst",
            "--module-name", "Test",
            "--module-yaml", str(yaml_path),
            "--module-csv", str(csv_path),
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        assert result.returncode == 2
        data = json.loads(result.stdout)
        assert data["status"] == "error"


if __name__ == "__main__":
    tests = [
        test_basic_scaffold,
        test_skill_md_frontmatter_substitution,
        test_template_frontmatter_uses_quoted_name_placeholder,
        test_generated_files_written,
        test_anti_zombie_replaces_existing,
        test_missing_target_dir,
        test_missing_source_file,
    ]
    passed = 0
    failed = 0
    for test in tests:
        try:
            test()
            print(f"  PASS: {test.__name__}")
            passed += 1
        except AssertionError as e:
            print(f"  FAIL: {test.__name__}: {e}")
            failed += 1
        except Exception as e:
            print(f"  ERROR: {test.__name__}: {e}")
            failed += 1
    print(f"\n{passed} passed, {failed} failed")
    sys.exit(1 if failed else 0)
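The `run_scaffold` helper above parses the scaffold script's output defensively: it attempts `json.loads` on stdout and falls back to the raw streams, so a crashed subprocess still yields a debuggable dict instead of an exception. A standalone sketch of that pattern:

```python
import json

def parse_result(stdout: str, stderr: str) -> dict:
    """Parse a script's JSON stdout, keeping the raw text if decoding fails."""
    try:
        return json.loads(stdout)
    except json.JSONDecodeError:
        return {"raw_stdout": stdout, "raw_stderr": stderr}

# Well-formed output parses normally; garbage is preserved for inspection.
ok = parse_result('{"status": "success"}', "")
crashed = parse_result("Traceback (most recent call last): ...", "boom")
```

Keeping both streams in the fallback dict means the assertion message in `assert code == 0, f"Script failed: {data}"` shows the traceback directly.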
@@ -0,0 +1,266 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Tests for scaffold-standalone-module.py"""

import json
import subprocess
import sys
import tempfile
from pathlib import Path

SCRIPT = Path(__file__).resolve().parent.parent / "scaffold-standalone-module.py"


def make_skill_dir(tmp: Path, name: str = "my-skill") -> Path:
    """Create a minimal skill directory with SKILL.md and assets/module.yaml."""
    skill_dir = tmp / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    (skill_dir / "SKILL.md").write_text("---\nname: my-skill\ndescription: A test skill\n---\n# My Skill\n")
    assets = skill_dir / "assets"
    assets.mkdir(exist_ok=True)
    (assets / "module.yaml").write_text(
        'code: tst\nname: "Test Module"\ndescription: "A test module"\nmodule_version: 1.0.0\n'
    )
    (assets / "module-help.csv").write_text(
        "module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs\n"
        "Test Module,my-skill,Do Thing,DT,Does the thing,run,,anytime,,,false,output_folder,artifact\n"
    )
    return skill_dir


def run_scaffold(skill_dir: Path, **kwargs) -> tuple[int, dict]:
    """Run the standalone scaffold script and return (exit_code, parsed_json)."""
    cmd = [
        sys.executable,
        str(SCRIPT),
        "--skill-dir", str(skill_dir),
        "--module-code", kwargs.get("module_code", "tst"),
        "--module-name", kwargs.get("module_name", "Test Module"),
    ]
    if "marketplace_dir" in kwargs:
        cmd.extend(["--marketplace-dir", str(kwargs["marketplace_dir"])])
    if kwargs.get("verbose"):
        cmd.append("--verbose")

    result = subprocess.run(cmd, capture_output=True, text=True)
    try:
        data = json.loads(result.stdout)
    except json.JSONDecodeError:
        data = {"raw_stdout": result.stdout, "raw_stderr": result.stderr}
    return result.returncode, data


def test_basic_scaffold():
    """Test that scaffolding copies all expected template files."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        skill_dir = make_skill_dir(tmp)

        code, data = run_scaffold(skill_dir)
        assert code == 0, f"Script failed: {data}"
        assert data["status"] == "success"
        assert data["module_code"] == "tst"

        # module-setup.md placed alongside module.yaml in assets/
        assert (skill_dir / "assets" / "module-setup.md").is_file()
        # merge scripts placed in scripts/
        assert (skill_dir / "scripts" / "merge-config.py").is_file()
        assert (skill_dir / "scripts" / "merge-help-csv.py").is_file()
        # marketplace.json at parent level
        assert (tmp / ".claude-plugin" / "marketplace.json").is_file()


def test_marketplace_json_content():
    """Test that marketplace.json contains correct module metadata."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        skill_dir = make_skill_dir(tmp, name="bmad-exc-tools")

        code, data = run_scaffold(
            skill_dir, module_code="exc", module_name="Excalidraw Tools"
        )
        assert code == 0

        marketplace = json.loads(
            (tmp / ".claude-plugin" / "marketplace.json").read_text()
        )
        assert marketplace["name"] == "bmad-exc"
        plugin = marketplace["plugins"][0]
        assert plugin["name"] == "bmad-exc"
        assert plugin["skills"] == ["./bmad-exc-tools"]
        assert plugin["description"] == "A test module"
        assert plugin["version"] == "1.0.0"


def test_does_not_overwrite_existing_scripts():
    """Test that existing scripts are skipped with a warning."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        skill_dir = make_skill_dir(tmp)

        # Pre-create a merge-config.py with custom content
        scripts_dir = skill_dir / "scripts"
        scripts_dir.mkdir(exist_ok=True)
        existing_script = scripts_dir / "merge-config.py"
        existing_script.write_text("# my custom script\n")

        code, data = run_scaffold(skill_dir)
        assert code == 0

        # Should be skipped
        assert "scripts/merge-config.py" in data["files_skipped"]
        assert len(data["warnings"]) >= 1
        assert any("merge-config.py" in w for w in data["warnings"])

        # Content should be preserved
        assert existing_script.read_text() == "# my custom script\n"

        # merge-help-csv.py should still be created
        assert "scripts/merge-help-csv.py" in data["files_created"]


def test_creates_missing_subdirectories():
    """Test that scripts/ directory is created if it doesn't exist."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        skill_dir = make_skill_dir(tmp)

        # Verify scripts/ doesn't exist yet
        assert not (skill_dir / "scripts").exists()

        code, data = run_scaffold(skill_dir)
        assert code == 0
        assert (skill_dir / "scripts").is_dir()
        assert (skill_dir / "scripts" / "merge-config.py").is_file()


def test_preserves_existing_skill_files():
    """Test that existing skill files are not modified or deleted."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        skill_dir = make_skill_dir(tmp)

        # Add extra files
        (skill_dir / "build-process.md").write_text("# Build\n")
        refs_dir = skill_dir / "references"
        refs_dir.mkdir()
        (refs_dir / "my-ref.md").write_text("# Reference\n")

        original_skill_md = (skill_dir / "SKILL.md").read_text()

        code, data = run_scaffold(skill_dir)
        assert code == 0

        # Original files untouched
        assert (skill_dir / "SKILL.md").read_text() == original_skill_md
        assert (skill_dir / "build-process.md").read_text() == "# Build\n"
        assert (refs_dir / "my-ref.md").read_text() == "# Reference\n"


def test_missing_skill_dir():
    """Test error when skill directory doesn't exist."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        nonexistent = tmp / "nonexistent-skill"

        cmd = [
            sys.executable, str(SCRIPT),
            "--skill-dir", str(nonexistent),
            "--module-code", "tst",
            "--module-name", "Test",
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        assert result.returncode == 2
        data = json.loads(result.stdout)
        assert data["status"] == "error"


def test_missing_skill_md():
    """Test error when skill directory has no SKILL.md."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        skill_dir = tmp / "empty-skill"
        skill_dir.mkdir()
        (skill_dir / "assets").mkdir()
        (skill_dir / "assets" / "module.yaml").write_text("code: tst\n")

        cmd = [
            sys.executable, str(SCRIPT),
            "--skill-dir", str(skill_dir),
            "--module-code", "tst",
            "--module-name", "Test",
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        assert result.returncode == 2
        data = json.loads(result.stdout)
        assert data["status"] == "error"
        assert "SKILL.md" in data["message"]


def test_missing_module_yaml():
    """Test error when assets/module.yaml hasn't been written yet."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        skill_dir = tmp / "skill-no-yaml"
        skill_dir.mkdir()
        (skill_dir / "SKILL.md").write_text("---\nname: test\n---\n")

        cmd = [
            sys.executable, str(SCRIPT),
            "--skill-dir", str(skill_dir),
            "--module-code", "tst",
            "--module-name", "Test",
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        assert result.returncode == 2
        data = json.loads(result.stdout)
        assert data["status"] == "error"
        assert "module.yaml" in data["message"]


def test_custom_marketplace_dir():
    """Test that --marketplace-dir places marketplace.json in a custom location."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        skill_dir = make_skill_dir(tmp)
        custom_dir = tmp / "custom-root"
        custom_dir.mkdir()

        code, data = run_scaffold(skill_dir, marketplace_dir=custom_dir)
        assert code == 0

        # Should be at custom location, not default parent
        assert (custom_dir / ".claude-plugin" / "marketplace.json").is_file()
        assert not (tmp / ".claude-plugin" / "marketplace.json").exists()
        assert data["marketplace_json"] == str((custom_dir / ".claude-plugin" / "marketplace.json").resolve())


if __name__ == "__main__":
    tests = [
        test_basic_scaffold,
        test_marketplace_json_content,
        test_does_not_overwrite_existing_scripts,
        test_creates_missing_subdirectories,
        test_preserves_existing_skill_files,
        test_missing_skill_dir,
        test_missing_skill_md,
        test_missing_module_yaml,
        test_custom_marketplace_dir,
    ]
    passed = 0
    failed = 0
    for test in tests:
        try:
            test()
            print(f"  PASS: {test.__name__}")
            passed += 1
        except AssertionError as e:
            print(f"  FAIL: {test.__name__}: {e}")
            failed += 1
        except Exception as e:
            print(f"  ERROR: {test.__name__}: {e}")
            failed += 1
    print(f"\n{passed} passed, {failed} failed")
    sys.exit(1 if failed else 0)
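Each test file above ends with the same dependency-free runner loop in its `__main__` block. Factored into a helper for illustration (the files deliberately inline it instead, so each stays self-contained), the pattern is:

```python
def run_tests(tests) -> tuple[int, int]:
    """Run each test function, counting passes and failures.
    AssertionError and unexpected exceptions both count as failures."""
    passed = failed = 0
    for test in tests:
        try:
            test()
            print(f"  PASS: {test.__name__}")
            passed += 1
        except AssertionError as e:
            print(f"  FAIL: {test.__name__}: {e}")
            failed += 1
        except Exception as e:
            print(f"  ERROR: {test.__name__}: {e}")
            failed += 1
    return passed, failed

# Hypothetical tests demonstrating both outcomes.
def _always_passes():
    assert True

def _always_fails():
    assert False, "expected failure"

passed, failed = run_tests([_always_passes, _always_fails])
```

Exiting with `sys.exit(1 if failed else 0)`, as the files do, makes the scripts usable directly in CI without pytest.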
@@ -0,0 +1,314 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Tests for validate-module.py"""

import json
import subprocess
import sys
import tempfile
from pathlib import Path

SCRIPT = Path(__file__).resolve().parent.parent / "validate-module.py"

CSV_HEADER = "module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs\n"


def create_module(tmp: Path, skills: list[str] | None = None, csv_rows: str = "",
                  yaml_content: str = "", setup_name: str = "tst-setup") -> Path:
    """Create a minimal module structure for testing."""
    module_dir = tmp / "module"
    module_dir.mkdir()

    # Setup skill
    setup = module_dir / setup_name
    setup.mkdir()
    (setup / "SKILL.md").write_text("---\nname: " + setup_name + "\n---\n# Setup\n")
    (setup / "assets").mkdir()
    (setup / "assets" / "module.yaml").write_text(
        yaml_content or 'code: tst\nname: "Test Module"\ndescription: "A test module"\n'
    )
    (setup / "assets" / "module-help.csv").write_text(CSV_HEADER + csv_rows)

    # Other skills
    for skill in (skills or []):
        skill_dir = module_dir / skill
        skill_dir.mkdir()
        (skill_dir / "SKILL.md").write_text(f"---\nname: {skill}\n---\n# {skill}\n")

    return module_dir


def run_validate(module_dir: Path) -> tuple[int, dict]:
    """Run the validation script and return (exit_code, parsed_json)."""
    result = subprocess.run(
        [sys.executable, str(SCRIPT), str(module_dir)],
        capture_output=True, text=True,
    )
    try:
        data = json.loads(result.stdout)
    except json.JSONDecodeError:
        data = {"raw_stdout": result.stdout, "raw_stderr": result.stderr}
    return result.returncode, data


def test_valid_module():
    """A well-formed module should pass."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        csv_rows = 'Test Module,tst-foo,Do Foo,DF,Does the foo thing,run,,anytime,,,false,output_folder,report\n'
        module_dir = create_module(tmp, skills=["tst-foo"], csv_rows=csv_rows)

        code, data = run_validate(module_dir)
        assert code == 0, f"Expected pass: {data}"
        assert data["status"] == "pass"
        assert data["summary"]["total_findings"] == 0


def test_missing_setup_skill():
    """Module with no setup skill should fail critically."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        module_dir = tmp / "module"
        module_dir.mkdir()
        skill = module_dir / "tst-foo"
        skill.mkdir()
        (skill / "SKILL.md").write_text("---\nname: tst-foo\n---\n")

        code, data = run_validate(module_dir)
        assert code == 1
        assert any(f["category"] == "structure" for f in data["findings"])


def test_missing_csv_entry():
    """Skill without a CSV entry should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        module_dir = create_module(tmp, skills=["tst-foo", "tst-bar"],
                                   csv_rows='Test Module,tst-foo,Do Foo,DF,Does foo,run,,anytime,,,false,output_folder,report\n')

        code, data = run_validate(module_dir)
        assert code == 1
        missing = [f for f in data["findings"] if f["category"] == "missing-entry"]
        assert len(missing) == 1
        assert "tst-bar" in missing[0]["message"]


def test_orphan_csv_entry():
    """CSV entry for nonexistent skill should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        csv_rows = 'Test Module,tst-ghost,Ghost,GH,Does not exist,run,,anytime,,,false,output_folder,report\n'
        module_dir = create_module(tmp, skills=[], csv_rows=csv_rows)

        code, data = run_validate(module_dir)
        orphans = [f for f in data["findings"] if f["category"] == "orphan-entry"]
        assert len(orphans) == 1
        assert "tst-ghost" in orphans[0]["message"]


def test_duplicate_menu_codes():
    """Duplicate menu codes should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        csv_rows = (
            'Test Module,tst-foo,Do Foo,DF,Does foo,run,,anytime,,,false,output_folder,report\n'
            'Test Module,tst-foo,Also Foo,DF,Also does foo,other,,anytime,,,false,output_folder,report\n'
        )
        module_dir = create_module(tmp, skills=["tst-foo"], csv_rows=csv_rows)

        code, data = run_validate(module_dir)
        dupes = [f for f in data["findings"] if f["category"] == "duplicate-menu-code"]
        assert len(dupes) == 1
        assert "DF" in dupes[0]["message"]


def test_invalid_before_after_ref():
    """Before/after references to nonexistent capabilities should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        csv_rows = 'Test Module,tst-foo,Do Foo,DF,Does foo,run,,anytime,tst-ghost:phantom,,false,output_folder,report\n'
        module_dir = create_module(tmp, skills=["tst-foo"], csv_rows=csv_rows)

        code, data = run_validate(module_dir)
        refs = [f for f in data["findings"] if f["category"] == "invalid-ref"]
        assert len(refs) == 1
        assert "tst-ghost:phantom" in refs[0]["message"]


def test_missing_yaml_fields():
    """module.yaml with missing required fields should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        csv_rows = 'Test Module,tst-foo,Do Foo,DF,Does foo,run,,anytime,,,false,output_folder,report\n'
        module_dir = create_module(tmp, skills=["tst-foo"], csv_rows=csv_rows,
                                   yaml_content='code: tst\n')

        code, data = run_validate(module_dir)
        yaml_findings = [f for f in data["findings"] if f["category"] == "yaml"]
        assert len(yaml_findings) >= 1  # at least name or description missing


def test_empty_csv():
    """CSV with header but no rows should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        module_dir = create_module(tmp, skills=["tst-foo"], csv_rows="")

        code, data = run_validate(module_dir)
        assert code == 1
        empty = [f for f in data["findings"] if f["category"] == "csv-empty"]
        assert len(empty) == 1


def create_standalone_module(tmp: Path, skill_name: str = "my-skill",
                             csv_rows: str = "", yaml_content: str = "",
                             include_setup_md: bool = True,
                             include_merge_scripts: bool = True) -> Path:
    """Create a minimal standalone module structure for testing."""
    module_dir = tmp / "module"
    module_dir.mkdir()

    skill = module_dir / skill_name
    skill.mkdir()
    (skill / "SKILL.md").write_text(f"---\nname: {skill_name}\n---\n# {skill_name}\n")

    assets = skill / "assets"
    assets.mkdir()
    (assets / "module.yaml").write_text(
        yaml_content or 'code: tst\nname: "Test Module"\ndescription: "A standalone test module"\n'
    )
    if not csv_rows:
        csv_rows = f'Test Module,{skill_name},Do Thing,DT,Does the thing,run,,anytime,,,false,output_folder,artifact\n'
    (assets / "module-help.csv").write_text(CSV_HEADER + csv_rows)

    if include_setup_md:
        (assets / "module-setup.md").write_text("# Module Setup\nStandalone registration.\n")

    if include_merge_scripts:
        scripts = skill / "scripts"
        scripts.mkdir()
        (scripts / "merge-config.py").write_text("# merge-config\n")
        (scripts / "merge-help-csv.py").write_text("# merge-help-csv\n")

    return module_dir


def test_valid_standalone_module():
    """A well-formed standalone module should pass with standalone=true in info."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        module_dir = create_standalone_module(tmp)

        code, data = run_validate(module_dir)
        assert code == 0, f"Expected pass: {data}"
        assert data["status"] == "pass"
        assert data["info"].get("standalone") is True
        assert data["summary"]["total_findings"] == 0


def test_standalone_missing_module_setup_md():
    """Standalone module without assets/module-setup.md should fail."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        module_dir = create_standalone_module(tmp, include_setup_md=False)

        code, data = run_validate(module_dir)
        assert code == 1
        structure_findings = [f for f in data["findings"] if f["category"] == "structure"]
        assert any("module-setup.md" in f["message"] for f in structure_findings)


def test_standalone_missing_merge_scripts():
    """Standalone module without merge scripts should fail."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        module_dir = create_standalone_module(tmp, include_merge_scripts=False)

        code, data = run_validate(module_dir)
        assert code == 1
        structure_findings = [f for f in data["findings"] if f["category"] == "structure"]
        assert any("merge-config.py" in f["message"] for f in structure_findings)


def test_standalone_csv_validation():
    """Standalone module CSV should be validated the same as multi-skill."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        # Duplicate menu codes
        csv_rows = (
            'Test Module,my-skill,Do Thing,DT,Does thing,run,,anytime,,,false,output_folder,artifact\n'
            'Test Module,my-skill,Also Thing,DT,Also does thing,other,,anytime,,,false,output_folder,report\n'
        )
        module_dir = create_standalone_module(tmp, csv_rows=csv_rows)

        code, data = run_validate(module_dir)
        dupes = [f for f in data["findings"] if f["category"] == "duplicate-menu-code"]
        assert len(dupes) == 1
        assert "DT" in dupes[0]["message"]


def test_multi_skill_not_detected_as_standalone():
    """A folder with two skills and no setup skill should fail (not detected as standalone)."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        module_dir = tmp / "module"
        module_dir.mkdir()

        for name in ("skill-a", "skill-b"):
            skill = module_dir / name
            skill.mkdir()
            (skill / "SKILL.md").write_text(f"---\nname: {name}\n---\n")
            (skill / "assets").mkdir()
            (skill / "assets" / "module.yaml").write_text('code: tst\nname: "Test"\ndescription: "Test"\n')

        code, data = run_validate(module_dir)
        assert code == 1
        # Should fail because it's neither a setup-skill module nor a single-skill standalone
        assert any("No setup skill found" in f["message"] for f in data["findings"])


def test_nonexistent_directory():
    """Nonexistent path should return error."""
    result = subprocess.run(
        [sys.executable, str(SCRIPT), "/nonexistent/path"],
        capture_output=True, text=True,
    )
    assert result.returncode == 2
    data = json.loads(result.stdout)
    assert data["status"] == "error"


if __name__ == "__main__":
    tests = [
        test_valid_module,
        test_missing_setup_skill,
        test_missing_csv_entry,
        test_orphan_csv_entry,
        test_duplicate_menu_codes,
        test_invalid_before_after_ref,
        test_missing_yaml_fields,
        test_empty_csv,
        test_valid_standalone_module,
        test_standalone_missing_module_setup_md,
        test_standalone_missing_merge_scripts,
        test_standalone_csv_validation,
        test_multi_skill_not_detected_as_standalone,
        test_nonexistent_directory,
    ]
    passed = 0
    failed = 0
    for test in tests:
        try:
            test()
            print(f"  PASS: {test.__name__}")
            passed += 1
        except AssertionError as e:
            print(f"  FAIL: {test.__name__}: {e}")
            failed += 1
        except Exception as e:
            print(f"  ERROR: {test.__name__}: {e}")
            failed += 1
    print(f"\n{passed} passed, {failed} failed")
    sys.exit(1 if failed else 0)
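The validation tests above repeatedly filter `data["findings"]` by category. Assuming the flat finding shape these assertions imply (each entry carries `severity`, `category`, `message`, and `detail`; the example values below are illustrative), that filtering reduces to a list comprehension:

```python
# Illustrative findings, shaped like the validator's output.
findings = [
    {"severity": "critical", "category": "missing-entry",
     "message": "Skill 'tst-bar' has no entry in module-help.csv", "detail": ""},
    {"severity": "warning", "category": "yaml",
     "message": "module.yaml missing required field: name", "detail": ""},
]

missing = [f for f in findings if f["category"] == "missing-entry"]
```

Keeping findings as plain dicts rather than custom classes is what lets the validator's JSON output be asserted on directly in subprocess-based tests.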
293
.agent/skills/bmad-module-builder/scripts/validate-module.py
Normal file
@@ -0,0 +1,293 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Validate a BMad module's structure and help CSV integrity.

Supports two module types:
- Multi-skill modules with a dedicated setup skill (*-setup directory)
- Standalone single-skill modules with self-registration (assets/module-setup.md)

Performs deterministic structural checks:
- Required files exist (setup skill or standalone structure)
- All skill folders have at least one capability entry in the CSV
- No orphan CSV entries pointing to nonexistent skills
- Menu codes are unique
- Before/after references point to real capability entries
- Required module.yaml fields are present
- CSV column count is consistent
"""

import argparse
import csv
import json
import sys
from io import StringIO
from pathlib import Path

REQUIRED_YAML_FIELDS = {"code", "name", "description"}
CSV_HEADER = [
    "module", "skill", "display-name", "menu-code", "description",
    "action", "args", "phase", "after", "before", "required",
    "output-location", "outputs",
]


def find_setup_skill(module_dir: Path) -> Path | None:
    """Find the setup skill folder (*-setup)."""
    for d in module_dir.iterdir():
        if d.is_dir() and d.name.endswith("-setup"):
            return d
    return None


def find_skill_folders(module_dir: Path, exclude_name: str = "") -> list[str]:
    """Find all skill folders (directories with SKILL.md), optionally excluding one."""
    skills = []
    for d in module_dir.iterdir():
        if d.is_dir() and d.name != exclude_name and (d / "SKILL.md").is_file():
            skills.append(d.name)
    return sorted(skills)


def detect_standalone_module(module_dir: Path) -> Path | None:
    """Detect a standalone module: single skill folder with assets/module.yaml."""
    skill_dirs = [
        d for d in module_dir.iterdir()
        if d.is_dir() and (d / "SKILL.md").is_file()
    ]
    if len(skill_dirs) == 1:
        candidate = skill_dirs[0]
        if (candidate / "assets" / "module.yaml").is_file():
            return candidate
    return None


def parse_yaml_minimal(text: str) -> dict[str, str]:
    """Parse top-level YAML key-value pairs (no nested structures)."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if ":" in line and not line.startswith("#") and not line.startswith("-"):
            key, _, value = line.partition(":")
            key = key.strip()
            value = value.strip().strip('"').strip("'")
            if value and not value.startswith(">"):
                result[key] = value
    return result


def parse_csv_rows(csv_text: str) -> tuple[list[str], list[dict[str, str]]]:
    """Parse CSV text into header and list of row dicts."""
    reader = csv.DictReader(StringIO(csv_text))
    header = reader.fieldnames or []
    rows = list(reader)
    return header, rows


def validate(module_dir: Path, verbose: bool = False) -> dict:
    """Run all structural validations. Returns JSON-serializable result."""
    findings: list[dict] = []
    info: dict = {}

    def finding(severity: str, category: str, message: str, detail: str = ""):
        findings.append({
            "severity": severity,
            "category": category,
            "message": message,
            "detail": detail,
        })

    # 1. Find setup skill or detect standalone module
    setup_dir = find_setup_skill(module_dir)
    standalone_dir = None

    if not setup_dir:
        standalone_dir = detect_standalone_module(module_dir)
        if not standalone_dir:
            finding("critical", "structure",
                    "No setup skill found (*-setup directory) and no standalone module detected")
            return {"status": "fail", "findings": findings, "info": info}

    # Branch: standalone vs multi-skill
    if standalone_dir:
        info["standalone"] = True
        info["skill_dir"] = standalone_dir.name
        skill_dir = standalone_dir

        # 2s. Check required files for standalone module
        required_files = {
            "assets/module.yaml": skill_dir / "assets" / "module.yaml",
            "assets/module-help.csv": skill_dir / "assets" / "module-help.csv",
            "assets/module-setup.md": skill_dir / "assets" / "module-setup.md",
            "scripts/merge-config.py": skill_dir / "scripts" / "merge-config.py",
            "scripts/merge-help-csv.py": skill_dir / "scripts" / "merge-help-csv.py",
        }
        for label, path in required_files.items():
            if not path.is_file():
                finding("critical", "structure", f"Missing required file: {label}")

        if not all(p.is_file() for p in required_files.values()):
            return {"status": "fail", "findings": findings, "info": info}

        yaml_dir = skill_dir
        csv_dir = skill_dir
    else:
        info["setup_skill"] = setup_dir.name

        # 2. Check required files in setup skill
        required_files = {
            "SKILL.md": setup_dir / "SKILL.md",
            "assets/module.yaml": setup_dir / "assets" / "module.yaml",
            "assets/module-help.csv": setup_dir / "assets" / "module-help.csv",
        }
        for label, path in required_files.items():
            if not path.is_file():
                finding("critical", "structure", f"Missing required file: {label}")

        if not all(p.is_file() for p in required_files.values()):
            return {"status": "fail", "findings": findings, "info": info}

        yaml_dir = setup_dir
        csv_dir = setup_dir

    # 3. Validate module.yaml
    yaml_text = (yaml_dir / "assets" / "module.yaml").read_text(encoding="utf-8")
    yaml_data = parse_yaml_minimal(yaml_text)
    info["module_code"] = yaml_data.get("code", "")
    info["module_name"] = yaml_data.get("name", "")

    for field in REQUIRED_YAML_FIELDS:
        if not yaml_data.get(field):
            finding("high", "yaml", f"module.yaml missing or empty required field: {field}")

    # 4. Parse and validate CSV
    csv_text = (csv_dir / "assets" / "module-help.csv").read_text(encoding="utf-8")
    header, rows = parse_csv_rows(csv_text)

    # Check header
    if header != CSV_HEADER:
        missing = set(CSV_HEADER) - set(header)
        extra = set(header) - set(CSV_HEADER)
        detail_parts = []
        if missing:
            detail_parts.append(f"missing: {', '.join(sorted(missing))}")
        if extra:
            detail_parts.append(f"extra: {', '.join(sorted(extra))}")
        finding("high", "csv-header", f"CSV header mismatch: {'; '.join(detail_parts)}")

    if not rows:
        finding("high", "csv-empty", "module-help.csv has no capability entries")
        return {"status": "fail", "findings": findings, "info": info}

    info["csv_entries"] = len(rows)

    # 5. Check column count consistency
    expected_cols = len(CSV_HEADER)
    for i, row in enumerate(rows):
        if len(row) != expected_cols:
            finding("medium", "csv-columns",
                    f"Row {i + 2} has {len(row)} columns, expected {expected_cols}",
                    f"skill={row.get('skill', '?')}")

    # 6. Collect skills from CSV and filesystem
    csv_skills = {row.get("skill", "") for row in rows}
    exclude_name = setup_dir.name if setup_dir else ""
    skill_folders = find_skill_folders(module_dir, exclude_name)
    info["skill_folders"] = skill_folders
    info["csv_skills"] = sorted(csv_skills)

    # 7. Skills without CSV entries
    for skill in skill_folders:
        if skill not in csv_skills:
            finding("high", "missing-entry", f"Skill '{skill}' has no capability entries in the CSV")

    # 8. Orphan CSV entries
    setup_name = setup_dir.name if setup_dir else ""
    for skill in csv_skills:
        if skill not in skill_folders and skill != setup_name:
            # Check if it's the setup skill itself (valid)
            if not (module_dir / skill / "SKILL.md").is_file():
                finding("high", "orphan-entry",
                        f"CSV references skill '{skill}' which does not exist in the module folder")

    # 9. Unique menu codes
    menu_codes: dict[str, list[str]] = {}
    for row in rows:
        code = row.get("menu-code", "").strip()
        if code:
            menu_codes.setdefault(code, []).append(row.get("display-name", "?"))

    for code, names in menu_codes.items():
        if len(names) > 1:
            finding("high", "duplicate-menu-code",
                    f"Menu code '{code}' used by multiple entries: {', '.join(names)}")

    # 10. Before/after reference validation
    # Build set of valid capability references (skill:action)
    valid_refs = set()
    for row in rows:
        skill = row.get("skill", "").strip()
        action = row.get("action", "").strip()
        if skill and action:
            valid_refs.add(f"{skill}:{action}")

    for row in rows:
        display = row.get("display-name", "?")
        for field in ("after", "before"):
            value = row.get(field, "").strip()
            if not value:
                continue
            # Can be comma-separated
            for ref in value.split(","):
                ref = ref.strip()
                if ref and ref not in valid_refs:
                    finding("medium", "invalid-ref",
                            f"'{display}' {field} references '{ref}' which is not a valid capability",
                            "Expected format: skill-name:action-name")

    # 11. Required fields in each row
    for row in rows:
        display = row.get("display-name", "?")
        for field in ("skill", "display-name", "menu-code", "description"):
            if not row.get(field, "").strip():
                finding("high", "missing-field", f"Entry '{display}' is missing required field: {field}")

    # Summary
    severity_counts = {"critical": 0, "high": 0, "medium": 0, "low": 0}
    for f in findings:
        severity_counts[f["severity"]] = severity_counts.get(f["severity"], 0) + 1

    status = "pass" if severity_counts["critical"] == 0 and severity_counts["high"] == 0 else "fail"

    return {
        "status": status,
        "info": info,
        "findings": findings,
        "summary": {
            "total_findings": len(findings),
            "by_severity": severity_counts,
        },
    }


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Validate a BMad module's setup skill structure and help CSV integrity"
    )
    parser.add_argument(
        "module_dir",
        help="Path to the module's skills folder (containing the setup skill and other skills)",
    )
    parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
    args = parser.parse_args()

    module_path = Path(args.module_dir)
    if not module_path.is_dir():
        print(json.dumps({"status": "error", "message": f"Not a directory: {module_path}"}))
        return 2

    result = validate(module_path, verbose=args.verbose)
    print(json.dumps(result, indent=2))
    return 0 if result["status"] == "pass" else 1


if __name__ == "__main__":
    sys.exit(main())
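To illustrate the duplicate menu-code check (step 9) in isolation, here is a minimal self-contained sketch against a fabricated two-row CSV (the `demo` module and its entries are made-up sample data, not part of any real module):

```python
import csv
from io import StringIO

# Header mirrored from the validator's CSV_HEADER (13 columns)
CSV_HEADER = [
    "module", "skill", "display-name", "menu-code", "description",
    "action", "args", "phase", "after", "before", "required",
    "output-location", "outputs",
]

# Two hypothetical rows sharing menu-code "D1" -- the collision the check flags
sample = "\n".join([
    ",".join(CSV_HEADER),
    "demo,demo-skill,Demo,D1,Does a thing,run,,,,,,,",
    "demo,demo-skill,Demo Again,D1,Does it again,rerun,,,,,,,",
])

rows = list(csv.DictReader(StringIO(sample)))

# Same grouping logic as the validator: display names keyed by menu-code
menu_codes: dict[str, list[str]] = {}
for row in rows:
    code = row["menu-code"].strip()
    if code:
        menu_codes.setdefault(code, []).append(row["display-name"])

duplicates = {c: names for c, names in menu_codes.items() if len(names) > 1}
print(duplicates)  # {'D1': ['Demo', 'Demo Again']}
```

Running the full script against a module whose CSV contained these rows would produce a `high`-severity `duplicate-menu-code` finding and a `fail` status.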