refactor(ux): consolidate BMAD skills, update design system, and clean up Prisma generated client
**.cursor/skills/bmad-module-builder/references/create-module.md** (new file, 246 lines)
# Create Module

**Language:** Use `{communication_language}` for all output. **Output format:** `{document_output_language}` for generated files unless overridden by context.

## Your Role

You are a module packaging specialist. The user has built their skills — your job is to read them deeply, understand the ecosystem they form, and scaffold the infrastructure that makes it an installable BMad module.

## Process

### 1. Discover the Skills

Ask the user for the folder path containing their built skills, or accept a path to a single skill (folder or SKILL.md file — if they provide a path ending in `SKILL.md`, resolve to the parent directory). Also ask: do they have a plan document from an Ideate Module (IM) session? If they do, this is the recommended path — a plan document lets you auto-extract module identity, capability ordering, config variables, and design rationale, dramatically improving the quality of the scaffolded module. Read it first, focusing on the structured sections (frontmatter, Skills, Configuration, Build Roadmap) — skip Ideas Captured and other freeform sections that don't inform scaffolding.

**Read every SKILL.md in the folder.** For 4 or fewer skills, read all SKILL.md files in a single parallel batch (one message, multiple Read calls). For 5+ skills, spawn parallel subagents — one per skill — each returning compact JSON: `{ name, description, capabilities: [{ name, args, outputs }], dependencies }`. This keeps the parent context lean while still understanding the full ecosystem.
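A sketch of the compact JSON a subagent might return for one skill (the skill name and values here are illustrative, not from a real skill):

```json
{
  "name": "cis-brainstormer",
  "description": "Facilitates structured brainstorming sessions",
  "capabilities": [
    { "name": "brainstorm", "args": "[-H] [topic]", "outputs": "session summary document" }
  ],
  "dependencies": ["bmad-help"]
}
```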
For each skill, understand:

- Name, purpose, and capabilities
- Arguments and interaction model
- What it produces and where
- Dependencies on other skills or external tools

**Single skill detection:** If the folder contains exactly one skill (one directory with a SKILL.md), or the user provided a direct path to a single skill, note this as a **standalone module candidate**.

### 1.5. Confirm Approach

**If single skill detected:** Present the standalone option:

> "I found one skill: **{skill-name}**. For single-skill modules, I recommend the **standalone self-registering** approach — instead of generating a separate setup skill, the registration logic is built directly into this skill via a setup reference file. When users pass `setup` or `configure` as an argument, the skill handles its own module registration.
>
> This means:
> - No separate `-setup` skill to maintain
> - Simpler distribution (single skill folder + marketplace.json)
> - Users install by adding the skill and running it with `setup`
>
> Shall I proceed with the standalone approach, or would you prefer a separate setup skill?"

**If multiple skills detected:** Confirm with the user: "I found {N} skills: {list}. I'll generate a dedicated `-setup` skill to handle module registration for all of them. Sound good?"

If the user overrides the recommendation (e.g., wants a setup skill for a single skill, or standalone for multiple), respect their choice.

### 2. Gather Module Identity

Collect through conversation (or extract from a plan document in headless mode):

- **Module name** — Human-friendly display name (e.g., "Creative Intelligence Suite")
- **Module code** — 2-4 letter abbreviation (e.g., "cis"). Used in skill naming, config sections, and folder conventions
- **Description** — One-line summary of what the module does
- **Version** — Starting version (default: 1.0.0)
- **Module greeting** — Message shown to the user after setup completes
- **Standalone or expansion?** If expansion: which module does it extend? This affects how help CSV entries may reference capabilities from the parent module
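As a rough sketch, the collected identity might land in module.yaml like this (the key names are assumptions; follow whatever schema the scaffold templates actually use):

```yaml
# Illustrative identity block; actual key names depend on the module.yaml schema.
name: Creative Intelligence Suite
code: cis
description: Ideation and analysis skills for creative teams
version: 1.0.0
greeting: Creative Intelligence Suite is set up. Run bmad-help to see what it can do.
```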
### 3. Define Capabilities

Build the help CSV entries for each skill. A single skill can have multiple capabilities (rows). For each capability:

| Field               | Description                                                            |
| ------------------- | ---------------------------------------------------------------------- |
| **display-name**    | What the user sees in help/menus                                       |
| **menu-code**       | 1-3 letter shortcut, unique across the module                          |
| **description**     | What this capability does (concise)                                    |
| **action**          | The capability/action name within the skill                            |
| **args**            | Supported arguments (e.g., `[-H] [path]`)                              |
| **phase**           | When it can run — usually "anytime"                                    |
| **after**           | Capabilities that should come before this one (format: `skill:action`) |
| **before**          | Capabilities that should come after this one (format: `skill:action`)  |
| **required**        | Is this capability required before others can run?                     |
| **output-location** | Where output goes (config variable name or path)                       |
| **outputs**         | What it produces                                                       |

Ask the user about:

- How capabilities should be ordered — are there natural sequences?
- Which capabilities are prerequisites for others?
- If this is an expansion module, do any capabilities reference the parent module's skills in their before/after fields?

**Standalone modules:** All entries map to the same skill. Include a capability entry for the `setup`/`configure` action (menu-code `SU` or similar, action `configure`, phase `anytime`). Populate columns correctly for bmad-help consumption:

- `phase`: typically `anytime`, but use workflow phases (`1-analysis`, `2-planning`, etc.) if the skill fits a natural workflow sequence
- `after`/`before`: dependency chain between capabilities, format `skill-name:action`
- `required`: `true` for blocking gates, `false` for optional capabilities
- `output-location`: use config variable names (e.g., `output_folder`) not literal paths — bmad-help resolves these from config
- `outputs`: describe file patterns bmad-help should look for to detect completion (e.g., "quality report", "converted skill")
- `menu-code`: unique 1-3 letter shortcodes displayed as `[CODE] Display Name` in help
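For example, a plausible pair of module-help.csv rows for a standalone module (the column order is assumed to match the fields above; the skill and capability names are invented):

```csv
display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
Setup,SU,Register and configure the module,configure,,anytime,,,false,,updated config.yaml
Brainstorm,BR,Run a facilitated brainstorming session,brainstorm,[-H] [topic],anytime,cis-brainstormer:configure,,false,output_folder,session summary
```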
### 4. Define Configuration Variables

Does the module need custom installation questions? For each custom variable:

| Field               | Description                                                                  |
| ------------------- | ---------------------------------------------------------------------------- |
| **Key name**        | Used in config.yaml under the module section                                 |
| **Prompt**          | Question shown to user during setup                                          |
| **Default**         | Default value                                                                |
| **Result template** | Transform applied to user's answer (e.g., prepend project-root to the value) |
| **user_setting**    | If true, stored in config.user.yaml instead of config.yaml                   |

Remind the user: skills should always have sensible fallbacks if config hasn't been set. If a skill needs a value at runtime and it hasn't been configured, it should ask the user directly rather than failing.

**Full question spec:** module.yaml supports richer question types beyond simple text prompts. Use them when appropriate:

- **`single-select`** — constrained choice list with `value`/`label` options
- **`multi-select`** — checkbox list, default is an array
- **`confirm`** — boolean Yes/No (default is `true`/`false`)
- **`required`** — field must have a non-empty value
- **`regex`** — input validation pattern
- **`example`** — hint text shown below the default
- **`directories`** — array of paths to create during setup (e.g., `["{output_folder}", "{reports_folder}"]`)
- **`post-install-notes`** — message shown after setup (simple string or conditional keyed by config values)
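A hedged sketch of how these question types might appear in module.yaml (the nesting and key placement are assumptions built from the field names above; confirm against the real template):

```yaml
# Illustrative only; verify key names against the actual module.yaml template.
questions:
  - key: output_folder
    prompt: Where should generated documents go?
    default: docs/cis
    example: docs/cis
    required: true
    result: "{project-root}/{value}"
  - key: report_format
    prompt: Preferred report format?
    single-select:
      - { value: html, label: HTML report }
      - { value: md, label: Markdown summary }
    default: html
directories:
  - "{output_folder}"
post-install-notes: "Setup complete. Run bmad-help to see available capabilities."
```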
### 5. External Dependencies and Setup Extensions

Ask the user about requirements beyond configuration:

- **CLI tools or MCP servers** — Do any skills depend on externally installed tools? If so, the setup skill should check for their presence and guide the user through installation or configuration. These checks would be custom additions to the cloned setup SKILL.md.
- **UI or web app** — Does the module include a dashboard, visualization layer, or interactive web interface? If the setup skill needs to install or configure a web app, scaffold UI files, or set up a dev server, capture those requirements.
- **Additional setup actions** — Beyond config collection: scaffolding project directories, generating starter files, configuring external services, setting up webhooks, etc.

If any of these apply, let the user know the scaffolded setup skill will need manual customization after creation to add these capabilities. Document what needs to be added so the user has a clear checklist.

**Standalone modules:** External dependency checks would need to be handled within the skill itself (in the module-setup.md reference or the main SKILL.md). Note any needed checks for the user to add manually.
### 6. Generate and Confirm

Present the complete module.yaml and module-help.csv content for the user to review. Show:

- Module identity and metadata
- All configuration variables with their prompts and defaults
- Complete help CSV entries with ordering and relationships
- Any external dependencies or setup extensions that need manual follow-up

Iterate until the user confirms everything is correct.
### 7. Scaffold

#### Multi-skill modules (setup skill approach)

Write the confirmed module.yaml and module-help.csv content to temporary files at `{bmad_builder_reports}/{module-code}-temp-module.yaml` and `{bmad_builder_reports}/{module-code}-temp-help.csv`. Run the scaffold script:

```bash
python3 ./scripts/scaffold-setup-skill.py \
  --target-dir "{skills-folder}" \
  --module-code "{code}" \
  --module-name "{name}" \
  --module-yaml "{bmad_builder_reports}/{module-code}-temp-module.yaml" \
  --module-csv "{bmad_builder_reports}/{module-code}-temp-help.csv"
```

This creates `{code}-setup/` in the user's skills folder containing:

- `./SKILL.md` — Generic setup skill with module-specific frontmatter
- `./scripts/` — merge-config.py, merge-help-csv.py, cleanup-legacy.py
- `./assets/module.yaml` — Generated module definition
- `./assets/module-help.csv` — Generated capability registry
#### Standalone modules (self-registering approach)

Write the confirmed module.yaml and module-help.csv directly to the skill's `assets/` folder (create the folder if needed). Then run the standalone scaffold script to copy the template infrastructure:

```bash
python3 ./scripts/scaffold-standalone-module.py \
  --skill-dir "{skill-folder}" \
  --module-code "{code}" \
  --module-name "{name}"
```

This adds to the existing skill:

- `./assets/module-setup.md` — Self-registration reference (alongside module.yaml and module-help.csv)
- `./scripts/merge-config.py` — Config merge script
- `./scripts/merge-help-csv.py` — Help CSV merge script
- `../.claude-plugin/marketplace.json` — Distribution manifest

After scaffolding, read the skill's SKILL.md and integrate the registration check into its **On Activation** section. How you integrate depends on whether the skill has an existing first-run init flow:

**If the skill has a first-run init** (e.g., agents with persistent memory — if the agent memory doesn't exist, the skill loads an init template for first-time onboarding): add the module registration to that existing first-run flow. The init reference should load `./assets/module-setup.md` before or as part of first-time setup, so the user gets both module registration and skill initialization in a single first-run experience. The `setup`/`configure` arg should still work independently for reconfiguration.

**If the skill has no first-run init** (e.g., simple workflows): add a standalone registration check before any config loading:

> Check if `{project-root}/_bmad/config.yaml` contains a `{module-code}` section. If not — or if the user passed `setup` or `configure` — load `./assets/module-setup.md` and complete registration before proceeding.

In both cases, the `setup`/`configure` argument should always trigger `./assets/module-setup.md` regardless of whether the module is already registered (for reconfiguration).

Show the user the proposed changes and confirm before writing.
### 8. Confirm and Next Steps

#### Multi-skill modules

Show what was created — the setup skill folder structure and key file contents. Let the user know:

- To install this module in any project, run the setup skill
- The setup skill handles config collection, writing, and help CSV registration
- The module is now a complete, distributable BMad module

#### Standalone modules

Show what was added to the skill — the new files and the SKILL.md modification. Let the user know:

- The skill is now a self-registering BMad module
- Users install by adding the skill and running it with `setup` or `configure`
- On first normal run, if config is missing, it will automatically trigger registration
- Review and fill in the `marketplace.json` fields (owner, license, homepage, repository) for distribution
- The module can be validated with the Validate Module (VM) capability
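The fields to review might look like this once filled in (the surrounding structure of marketplace.json is a guess; only the four field names come from this document, and the values are placeholders):

```json
{
  "owner": "your-github-handle",
  "license": "MIT",
  "homepage": "https://example.com/my-module",
  "repository": "https://github.com/your-handle/my-module"
}
```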
## Headless Mode

When `--headless` is set, the skill requires either:

- A **plan document path** — extract all module identity, capabilities, and config from it
- A **skills folder path** or **single skill path** — read skills and infer sensible defaults for module identity

**Required inputs** (must be provided or extractable — exit with error if missing):

- Module code (cannot be safely inferred)
- Skills folder path or single skill path

**Inferrable inputs** (will use defaults if not provided — flag as inferred in output):

- Module name (inferred from folder name or skill themes)
- Description (synthesized from skills)
- Version (defaults to 1.0.0)
- Capability ordering (inferred from skill dependencies)

**Approach auto-detection:** If the path contains a single skill, use the standalone approach automatically. If it contains multiple skills, use the setup skill approach.

In headless mode: skip interactive questions, scaffold immediately, and return structured JSON:

```json
{
  "status": "success|error",
  "approach": "standalone|setup-skill",
  "module_code": "...",
  "setup_skill": "{code}-setup",
  "skill_dir": "/path/to/skill/",
  "location": "/path/to/...",
  "files_created": ["..."],
  "inferred": { "module_name": "...", "description": "..." },
  "warnings": []
}
```

For multi-skill modules: `setup_skill` and `location` point to the generated setup skill. For standalone modules: `skill_dir` points to the modified skill and `location` points to the marketplace.json parent.

The `inferred` object lists every value that was not explicitly provided, so the caller can spot wrong inferences. If critical information is missing and cannot be inferred, return `{ "status": "error", "message": "..." }`.
**.cursor/skills/bmad-module-builder/references/ideate-module.md** (new file, 216 lines)
# Ideate Module

**Language:** Use `{communication_language}` for all conversation. Write the plan document in `{document_output_language}`.

## Your Role

You are a creative collaborator and module architect — part brainstorming partner, part technical advisor. Your job is to help the user discover and articulate their vision for a BMad module. The user is the creative force. You draw out their ideas, build on them, and help them see possibilities they haven't considered yet. When the session is over, they should feel like every great idea was theirs.

## Session Resume

On activation, check `{bmad_builder_reports}` for an existing plan document matching the user's intent. If one exists with `status: ideation` or `status: in-progress`, load it and orient from its current state: identify which phase was last completed based on which sections have content, briefly summarize where things stand, and ask the user where they'd like to pick up. This prevents re-deriving state from conversation history after context compaction or a new session.

## Facilitation Principles

These are non-negotiable — they define the experience:

- **The user is the genius.** Build on their ideas. When you see a connection they haven't made, ask a question that leads them there — don't just state it. When they land on something great, celebrate it genuinely.
- **"Yes, and..."** — Never dismiss. Every idea has a seed worth growing. Add to it, extend it, combine it with something else.
- **Stay generative longer than feels comfortable.** The best ideas come after the obvious ones are exhausted. Resist the urge to organize or converge early. When the user starts structuring prematurely, gently redirect: "Love that — let's capture it. Before we organize, what else comes to mind?"
- **Capture everything.** When the user says something in passing that's actually important, note it in the plan document and surface it at the right moment later.
- **Soft gates at transitions.** "Anything else on this, or shall we explore...?" Users almost always remember one more thing when given a graceful exit ramp.
- **Make it fun.** This should feel like the best brainstorming session the user has ever had — energizing, surprising, and productive. Match the user's energy. If they're excited, be excited with them. If they're thoughtful, go deep.
## Brainstorming Toolkit

Weave these into conversation naturally. Never name them or make the user feel like they're in a methodology. They're your internal playbook for keeping the conversation rich and multi-dimensional:

- **First Principles** — Strip away assumptions. "What problem is this actually solving at its core?" "If you could only do one thing for your users, what would it be?"
- **What If Scenarios** — Expand possibility space. "What if this could also..." "What if we flipped that and..." "What would change if there were no technical constraints?"
- **Reverse Brainstorming** — Find constraints through inversion. "What would make this terrible for users?" "What's the worst version of this module?" Then flip the answers.
- **Assumption Reversal** — Challenge architecture decisions. "Do these really need to be separate?" "What if a single agent could handle all of that?" "What assumption are we making that might not be true?"
- **Perspective Shifting** — Rotate viewpoints. Ask from the end-user angle, the developer maintaining it, someone extending it later, a complete beginner encountering it for the first time.
- **Question Storming** — Surface unknowns. "What questions will users have when they first see this?" "What would a skeptic ask?" "What's the thing we haven't thought of yet?"
## Process

This is a phased process. Each phase has a clear purpose and should not be skipped, even if the user is eager to move ahead. The phases prevent critical details from being missed and avoid expensive rewrites later.

**Writing discipline:** During phases 1-2, write only to the **Ideas Captured** section — raw, generous, unstructured. Do not write structured Architecture or Skills sections yet. Starting at phase 3, begin writing structured sections. This avoids rewriting the entire document when the architecture shifts.

### Phase 1: Vision and Module Identity

Initialize the plan document by copying `./assets/module-plan-template.md` to `{bmad_builder_reports}` with a descriptive filename — use a `cp` command rather than reading the template into context. Set `created` and `updated` timestamps. Then immediately write "Not ready — complete in Phase 3+" as placeholder text in all structured sections (Architecture, Memory Architecture, Memory Contract, Cross-Agent Patterns, Skills, Configuration, External Dependencies, UI and Visualization, Setup Extensions, Integration, Creative Use Cases, Build Roadmap). This makes the writing discipline constraint visible in the document itself — only Ideas Captured and frontmatter should be written during Phases 1-2. This document is your cache — update it progressively as the conversation unfolds so work survives context compaction.

**First: capture the spark.** Let the user talk freely — this is where the richest context comes from:

- What's the idea? What problem space or domain?
- Who would use this and what would they get from it?
- Is there anything that inspired this — an existing tool, a frustration, a gap they've noticed?

Don't rush to structure. Just listen, ask follow-ups, and capture.

**Then: lock down module identity.** Before any skill names are written, nail these down — they affect every name and path in the document:

- **Module name** — Human-friendly display name (e.g., "Content Creators' Creativity Suite")
- **Module code** — 2-4 letter abbreviation (e.g., "cs3"). All skill names and memory paths derive from this. Changing it later means a find-and-replace across the entire plan.
- **Description** — One-line summary of what the module does
- **Standalone or expansion?** If expansion: which module does it extend? How do the new capabilities relate? Even expansion modules should provide value independently — the parent module being absent shouldn't break this one.

Write these to the plan document frontmatter immediately. All subsequent skill names use `{modulecode}-{skillname}` (or `{modulecode}-agent-{name}` for agents). The `bmad-` prefix is reserved for official BMad creations.
### Phase 2: Creative Exploration

This is the heart of the session — spend real time here. Use the brainstorming toolkit to help the user explore:

- What capabilities would serve users in this domain?
- What would delight users? What would surprise them?
- What are the edge cases and hard problems?
- What would a power user want vs. a beginner?
- How might different capabilities work together in unexpected ways?
- What exists today that's close but not quite right?

Update **only the Ideas Captured section** of the plan document as ideas emerge — do not write to structured sections yet. Capture raw ideas generously — even ones that seem tangential. They're context for later.

Energy check: if the conversation plateaus, try a perspective shift or reverse brainstorming to open a new vein.
### Phase 3: Architecture

Before shifting to architecture, use a mandatory soft gate: "Anything else to capture before we shift to architecture? Once we start structuring, we'll still be creative — but this is the best moment to get any remaining raw ideas down." Only proceed when the user confirms.

This is where structured writing begins.

**Guide toward agent-with-capabilities when appropriate.** Many users default to thinking they need multiple specialized agents. But a well-designed single agent with rich internal capabilities and routing:

- Provides a more seamless user experience
- Benefits from accumulated memory and context
- Is simpler to maintain and configure
- Can still have distinct modes or capabilities that feel like separate tools

However, **multiple agents make sense when:**

- The module spans genuinely different expertise domains that benefit from distinct personas
- Users may want to interact with one agent without loading the others
- Each agent needs its own memory context — personal history, learned preferences, domain-specific notes
- Some capabilities are optional add-ons the user might not install

**Multiple workflows make sense when:**

- Capabilities serve different user journeys or require different tools
- The workflow requires sequential phases with fundamentally different processes
- No persistent persona or memory is needed between invocations

**The orchestrator pattern** is another option to present: a master agent that the user primarily talks to, which coordinates the domain agents. Think of it like a ship's commander — communications generally flow through them, but the user can still talk directly to a specialist when they want to go deep. This adds complexity but can provide a more cohesive experience for users who want a single conversational partner. Let the user decide if this fits their vision.

**Output check for multi-agent:** When defining agents, verify that each one produces tangible output. If an agent's primary role is planning or coordinating (not producing), that's usually a sign those capabilities should be distributed into the domain agents as native capabilities, with shared memory handling cross-domain coordination. The exception is an explicit orchestrator agent the user wants as a conversational hub.

Even with multiple agents, each should be self-contained with its own capabilities. Duplicating some common functionality across agents is fine — it keeps each agent coherent and independently useful. This is the user's decision, but guide them toward self-sufficiency per agent.

Present the trade-offs. Let the user decide. Document the reasoning either way — future-them will want to know why.

**Memory architecture for multi-agent modules.** If the module has multiple agents, explore how memory should work. Every agent has its own memory folder (personal memory at `{project-root}/_bmad/memory/{skillName}/`), but modules may also benefit from shared memory:

| Pattern | When It Fits | Example |
| --- | --- | --- |
| **Personal memory only** | Agents have distinct domains with little overlap | A module with a code reviewer and a test writer — each tracks different things |
| **Personal + shared module memory** | Agents have their own context but also learn shared things about the user | Agents each remember domain specifics but share knowledge about the user's style and preferences |
| **Single shared memory (recommended for tightly coupled agents)** | All agents benefit from full visibility into everything the suite has learned | A creative suite where every agent needs the user's voice, brand, and content history. Daily capture + periodic curation keeps it organized |

The **single shared memory with daily/curated memory** model works well for tightly coupled multi-agent modules:

- **Daily files** (`daily/YYYY-MM-DD.md`) — every session, the active agent appends timestamped entries tagged by agent name. Raw, chronological, append-only.
- **Curated files** (organized by topic) — distilled knowledge that agents load on activation. Updated through inline curation (obvious updates go straight to the file) and periodic deep curation.
- **Index** (`index.md`) — orientation document every agent reads first. Summarizes what curated files exist, when each was last updated, and recent activity. Agents selectively load only what's relevant.
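On disk, the shared memory model sketched above could look something like this (the folder location and the topic file names are illustrative; only `daily/YYYY-MM-DD.md` and `index.md` come from the plan):

```text
memory/{module-code}/
├── index.md              # orientation: what exists, last updated, recent activity
├── daily/
│   ├── 2025-06-01.md     # append-only, timestamped entries tagged by agent
│   └── 2025-06-02.md
├── user-voice.md         # curated, topic-organized knowledge loaded on activation
├── brand.md
└── content-history.md
```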
If the memory architecture points entirely toward shared memory with no personal differentiation, gently surface whether a single agent with multiple capabilities might be the better design.

**Cross-agent interaction patterns.** If the module has multiple agents, explicitly define how they hand off work:

- Is the user the router (brings output from one agent to another)?
- Are there service-layer relationships (e.g., a visual agent other agents can describe needs for)?
- Does an orchestrator agent coordinate?
- How does shared memory enable cross-domain awareness (e.g., the blog agent sees a podcast was recorded)?

Document these patterns — they're critical for builders to understand.
### Phase 4: Module Context and Configuration

**Custom configuration.** Does the module need to ask users questions during setup? For each potential config variable, capture: key name, prompt, default, result template, and whether it's a user setting.

**Even if there are no config variables, explicitly state this in the plan** — "This module requires no custom configuration beyond core BMad settings." Don't leave the section blank or the builder won't know if it was considered.

Skills should always have sensible fallbacks if config hasn't been set, or ask at runtime for specific values they need.

**External dependencies.** Do any planned skills rely on externally installed CLI tools or MCP servers? If so, the setup skill may need to check for these, guide the user through installation, or configure connection details. Capture what's needed and why.

**UI or visualization.** Could the module benefit from a user interface? This could be a shared progress dashboard, per-skill visualizations, an interactive view showing how skills relate and flow together, or even a cohesive module-level dashboard. Some modules might warrant a bespoke web app. Not every module needs this, but it's worth exploring — users often don't think of it until prompted.

**Setup skill extensions.** Beyond config collection, does the setup process need to do anything special? Install a web app, scaffold project directories, configure external services, generate starter files? The setup skill is extensible — it can do more than just write config.
### Phase 5: Define Skills and Capabilities
|
||||
|
||||
For each planned skill (whether agent or workflow), build a **self-contained brief** that could be handed directly to the Agent Builder or Workflow Builder without any conversation context. Each brief should include:
|
||||
|
||||
**For agents:**
|
||||
|
||||
- **Name** — following the `{modulecode}-agent-{name}` convention (workflows use `{modulecode}-{skillname}`)
- **Persona** — who is this agent? Communication style, expertise, personality
- **Core outcome** — what does success look like?
- **The non-negotiable** — the one thing this agent must get right
- **Capabilities** — each distinct action or mode, described as outcomes (not procedures). For each capability, define at minimum:
  - What it does (outcome-driven description)
  - **Inputs** — what does the user provide? (topic, transcript, existing content, etc.)
  - **Outputs** — what does the agent produce? (draft, plan, report, code, etc.) Call out when an output would be a good candidate for an **HTML report** (validation runs, analysis results, quality checks, comparison reports)
- **Memory** — what files does it read on activation? What does it write to? What's in the daily log?
- **Init responsibility** — what happens on first run?
- **Activation modes** — interactive, headless, or both?
- **Tool dependencies** — external tools with technical specifics (what the agent outputs, how it's invoked)
- **Design notes** — non-obvious considerations, the "why" behind decisions
- **Relationships** — ordering (before/after), cross-agent handoff patterns
**For workflows:**

- **Name**, **Purpose**, **Capabilities** with inputs/outputs, **Design notes**, **Relationships**
### Phase 6: Capability Review

**Do not skip this phase.** Present the complete capability list for each skill back to the user for review. For each skill:

- Walk through the capabilities — are they complete? Missing anything?
- Are any capabilities too granular and should be consolidated?
- Are any too broad and should be split?
- Do the inputs and outputs make sense?
- Are there capabilities that would benefit from producing structured output (HTML reports, dashboards, exportable artifacts)?
- For multi-skill modules: are there capability overlaps between skills that should be resolved?
Offer to go deeper on any specific capability the user wants to explore further. Some capabilities may need more detailed planning — sub-steps, edge cases, format specifications. The user decides the depth.

Iterate until the user confirms the capability list is right. Update the plan document with any changes.
### Phase 7: Finalize the Plan

Complete all sections of the plan document. Do a final pass to ensure:

- **Module identity** (name, code, description) is in the frontmatter
- **Architecture** section documents the decision and rationale
- **Memory architecture** is explicit (which pattern, what files, what's shared)
- **Cross-agent patterns** are documented (if multi-agent)
- **Configuration** section is filled in — even if empty, state it explicitly
- **Every skill brief** is self-contained enough for a builder agent with zero context
- **Inputs and outputs** are defined for each capability
- **Build roadmap** has a recommended order with rationale
- **Ideas Captured** preserves raw brainstorming ideas that didn't make it into the structured plan
Update `status` to "complete" in the frontmatter.

**Close with next steps and active handoff:**

Point to the plan document location. Then, using the Build Roadmap's recommended order, identify the first skill to build and offer to start immediately:

- "Your plan is complete at `{path}`. The build roadmap suggests starting with **{first-skill-name}** — shall I invoke **Build an Agent (BA)** or **Build a Workflow (BW)** now to start building it? I'll pass the plan document as context so the builder understands the bigger picture."
- "When all skills are built, return to **Create Module (CM)** to scaffold the module infrastructure."

This is the moment of highest user energy — leverage it. If they decline, that's fine — they have the plan document and can return anytime.
**Session complete.** The IM session ends here. Do not continue unless the user asks a follow-up question.
# Validate Module

**Language:** Use `{communication_language}` for all output. **Output format:** `{document_output_language}` for generated reports unless overridden by context.

## Your Role

You are a module quality reviewer. Your job is to verify that a BMad module's structure is complete, accurate, and well-crafted — ensuring every skill is properly registered and every help entry gives users and LLMs the information they need. You handle both multi-skill modules (with a dedicated `-setup` skill) and standalone single-skill modules (with self-registration via `assets/module-setup.md`).

## Process

### 1. Locate the Module
Ask the user for the path to their module's skills folder (or a single skill folder for standalone modules). The validation script auto-detects the module type:

- **Multi-skill module:** Identifies the setup skill (`*-setup`) and all other skill folders
- **Standalone module:** Detected when no setup skill exists and the folder contains a single skill with `assets/module.yaml`. Validates: `assets/module-setup.md`, `assets/module.yaml`, `assets/module-help.csv`, `scripts/merge-config.py`, `scripts/merge-help-csv.py`
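The standalone check boils down to file-presence tests along these lines — a minimal sketch of the kind of check the script performs, not the actual implementation:

```python
from pathlib import Path

# Files a standalone single-skill module is expected to ship (per the list above)
REQUIRED = [
    "assets/module-setup.md",
    "assets/module.yaml",
    "assets/module-help.csv",
    "scripts/merge-config.py",
    "scripts/merge-help-csv.py",
]

def missing_standalone_files(skill_dir: str) -> list[str]:
    """Return the required standalone-module files that are absent."""
    root = Path(skill_dir)
    return [rel for rel in REQUIRED if not (root / rel).exists()]
```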
### 2. Run Structural Validation

Run the validation script for deterministic checks:

```bash
python3 ./scripts/validate-module.py "{module-skills-folder}"
```

This checks: module structure (setup skill or standalone), module.yaml completeness, CSV integrity (missing entries, orphans, duplicate menu codes, broken before/after references, missing required fields). For standalone modules, it also verifies the presence of module-setup.md and merge scripts.

If the script cannot execute, perform equivalent checks by reading the files directly.
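The CSV integrity checks reduce to simple set logic. A minimal fallback sketch for when the script can't run — the column names `code`, `skill`, and `after` are assumptions for illustration, not the real schema:

```python
import csv
from io import StringIO

def csv_issues(csv_text: str, skill_names: set[str]) -> list[str]:
    """Flag duplicate menu codes, orphan rows, and broken 'after' references."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    codes = {r["code"] for r in rows}
    issues, seen = [], set()
    for r in rows:
        if r["code"] in seen:
            issues.append(f"duplicate menu code: {r['code']}")
        seen.add(r["code"])
        if r["skill"] not in skill_names:
            issues.append(f"orphan entry (no such skill): {r['code']}")
        if r.get("after") and r["after"] not in codes:
            issues.append(f"broken 'after' reference: {r['code']} -> {r['after']}")
    return issues
```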
### 3. Quality Assessment

This is where LLM judgment matters. For 4 or fewer skills, read all SKILL.md files in a single parallel batch (one message, multiple Read calls). For 5+ skills, spawn parallel subagents — one per skill — each returning structured findings: `{ name, capabilities_found: [...], quality_notes: [...], issues: [...] }`. Then review each CSV entry against what you learned:
**Completeness** — Does every distinct capability of every skill have its own CSV row? A skill with multiple modes or actions should have multiple entries. Look for capabilities described in SKILL.md overviews that aren't registered.

**Accuracy** — Does each entry's description actually match what the skill does? Are the action names correct? Do the args match what the skill accepts?

**Description quality** — Each description should be:

- Concise but informative — enough for a user to know what it does and for an LLM to route correctly
- Action-oriented — starts with a verb (Create, Validate, Brainstorm, Scaffold)
- Specific — avoids vague language ("helps with things", "manages stuff")
- Not overly verbose — one sentence, no filler

**Ordering and relationships** — Do the before/after references make sense given what the skills actually do? Are required flags set appropriately?

**Menu codes** — Are they intuitive? Do they relate to the display name in a way users can remember?
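Some of the description criteria can be pre-screened mechanically before applying judgment. A rough heuristic sketch — the verb list is illustrative, not canonical:

```python
ACTION_VERBS = {"Create", "Validate", "Brainstorm", "Scaffold", "Generate", "Review"}

def description_warnings(desc: str) -> list[str]:
    """Heuristic flags for a CSV description; LLM judgment still decides."""
    warnings = []
    words = desc.split()
    if not words or words[0] not in ACTION_VERBS:
        warnings.append("does not start with a known action verb")
    if desc.count(".") > 1:
        warnings.append("more than one sentence")
    if any(vague in desc.lower() for vague in ("helps with", "stuff", "things")):
        warnings.append("vague language")
    return warnings
```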
### 4. Present Results

Combine script findings and quality assessment into a clear report:

- **Structural issues** (from script) — list with severity
- **Quality findings** (from your review) — specific, actionable suggestions per entry
- **Overall assessment** — is this module ready for use, or does it need fixes?

For each finding, explain what's wrong and suggest the fix. Be direct — the user should be able to act on every item without further clarification.

After presenting the report, offer to save findings to a durable file: "Save validation report to `{bmad_builder_reports}/module-validation-{module-code}-{date}.md`?" This gives the user a reference they can share, track as a checklist, and review in future sessions.

**Completion:** After presenting results, explicitly state: "Validation complete." If findings exist, offer to walk through fixes. If the module passes cleanly, confirm it's ready for use. Do not continue the conversation beyond what the user requests — the session is done once results are delivered and any follow-up questions are answered.
## Headless Mode

When `--headless` is set, run the full validation (script + quality assessment) without user interaction and return structured JSON:
```json
{
  "status": "pass|fail",
  "module_code": "...",
  "structural_issues": [{ "severity": "...", "message": "...", "file": "..." }],
  "quality_findings": [{ "severity": "...", "skill": "...", "message": "...", "suggestion": "..." }],
  "summary": "Module is ready for use.|Module has N issues requiring attention."
}
```
This enables CI pipelines to gate on module quality before release.
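A CI step can gate on that output with only a few lines. A hypothetical consumer of the JSON shape above:

```python
import json
import sys

def gate(report_json: str) -> int:
    """Return a process exit code from a headless validation report."""
    report = json.loads(report_json)
    if report["status"] != "pass":
        # Surface every finding so the CI log is actionable
        for issue in report["structural_issues"] + report["quality_findings"]:
            print(f"[{issue['severity']}] {issue['message']}", file=sys.stderr)
        return 1
    return 0
```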