Dhwani AI Playground

How To Use These

Each prompt below is designed to be pasted directly into Claude (UI or CLI). They guide AI step-by-step so you don't have to figure out the right words every time. Think of them as SOPs for your AI.

1. Copy the prompt using the copy button on each block
2. Paste it into Claude (chat or CLI) and hit enter
3. Follow along as AI guides you through each step

These are starting points, not scripts. Modify them for your context. Add your name, your domain, your specific needs. The more specific you are, the better AI performs.

When To Use CLI vs UI

Claude Code (CLI) and Claude Chat (UI) are the same AI, different modes. Here's when to use each.

Use Claude Code (CLI) for

1. Reading and writing code in your repos
2. Creating commits, branches, and PRs
3. Running tests and build commands
4. Research within codebases
5. File operations and automation
6. Hooks and safety guardrails
7. Connecting tools via MCPs

Use Claude Chat (UI) for

1. Brainstorming and ideation
2. Writing documents and reports
3. Image generation and visual work
4. Conversations and Q&A
5. Artifacts (interactive previews)
6. Research on the web
7. Quick one-off tasks

The simple rule: If you're working with files and code, use CLI. If you're thinking and writing, use UI. When in doubt, start in UI and move to CLI when you need to take action.

First Things To Try in CLI

Once you have Claude Code installed, try these to see what it can do. Each one is a single prompt you can paste.

Read your own repo

Read this project and give me a summary of what it does, what files are here, and what the current state looks like. Keep it short.

Check your setup

Check my CLAUDE.md and Security.md files. Are they complete? What's missing? Suggest improvements but don't make changes yet.

Create a PR from a change

Update the README.md with a better project description based on what you see in the codebase. Create a new branch, commit the change, and raise a PR. Keep the description concise.

Research your codebase

Search this codebase for any hardcoded secrets, API keys, or passwords. Check every file. Report what you find — even if it's nothing, confirm that explicitly.
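You can cross-check the AI's secret audit with a local script. A minimal sketch — the patterns below are illustrative, not exhaustive, and a real audit should pair the AI pass with a dedicated scanner:

```python
import re
from pathlib import Path

# Illustrative patterns only — tune for your stack. This is a rough
# cross-check, not a replacement for a dedicated secret scanner.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match a secret pattern."""
    hits = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return hits
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

def scan_repo(root: str = ".") -> dict[str, list[tuple[int, str]]]:
    """Scan every file under root, skipping the .git directory."""
    findings = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and ".git" not in path.parts:
            hits = scan_file(path)
            if hits:
                findings[str(path)] = hits
    return findings
```

Run it from the repo root and compare its findings against what Claude reports — if the two disagree, investigate both.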

CLAUDE.md Template

Your AI's onboarding document. Copy this, fill it in, commit it to the root of every repo. Your AI starts learning about you the moment you do.

# CLAUDE.md — [Your Name]

## Who I Am
- Role: [Your title at Dhwani RIS]
- Domain: [What you work on — grants, CSR, dashboards, MIS, etc.]
- Technical level: [Be honest — "non-technical PM", "comfortable with Excel"]

## What I Work On
- Products: [mGrant, Frappe LMS, dashboards, etc.]
- Clients: [Who you serve, what they care about]
- Current focus: [What you're working on this month]

## How I Like to Work
- Be concise. Lead with the answer.
- Don't ask me to write or edit code.
- When in doubt, ask me — don't guess.
- [Add your own preferences]

## Domain Knowledge I Have
- [What do you know that the AI doesn't?]
- [CSR Section 135 rules? Grant lifecycle? MIS report formats?]
- [This is your superpower. Write it down.]

Start small. Even 5 bullet points is a massive upgrade over starting from zero. You can always add more.
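If you want a quick sanity check that a CLAUDE.md covers the template, a small script can flag missing sections. A hypothetical checker — the section names mirror the template above, so adjust the list if yours differ:

```python
# Hypothetical section list — mirror whatever headings your own
# CLAUDE.md template actually uses.
REQUIRED_SECTIONS = [
    "Who I Am",
    "What I Work On",
    "How I Like to Work",
    "Domain Knowledge I Have",
]

def missing_sections(claude_md: str) -> list[str]:
    """Return template sections absent from a CLAUDE.md body."""
    headings = {
        line.lstrip("#").strip()
        for line in claude_md.splitlines()
        if line.startswith("##")
    }
    return [s for s in REQUIRED_SECTIONS if s not in headings]
```

An empty result means every template section is present; anything returned is a gap worth filling before you commit the file.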

Security.md Template

The non-negotiable safety standard. This goes in every repository — personal and organisation. It tells AI what it must never do.

# SECURITY.md — AI-Generated Code Standards

## 1. Secrets & Credentials
- NEVER hardcode secrets, API keys, tokens, or passwords in source code.
- Use environment variables or .env files for all sensitive values.
- Never commit .env files — ensure .gitignore excludes them.
- If a secret leaks: revoke immediately, rotate the credential, audit the commit history.

## 2. What AI Must Never Generate
- Code that bypasses authentication or authorisation
- Direct database queries without parameterisation (no raw SQL)
- Code that disables security headers or CSRF protection
- Hardcoded credentials in any file, including test files
- Code that exposes internal APIs to unauthenticated users
- Anything that modifies production data without explicit approval

## 3. Permissions & Access Control
- Every DocType must have role-based permissions defined.
- Every API endpoint must check user permissions before executing.
- Every whitelisted method must validate that the caller has the right role.
- Default deny: if no permission is defined, access is blocked.

## 4. Database & Query Safety
- Parameterised queries only — never concatenate user input into SQL.
- Use Frappe ORM (frappe.get_doc, frappe.get_list) instead of raw SQL.
- Validate all inputs before using them in queries or logic.
- Never trust client-side data — always re-validate on the server.

## 5. Quick Reference
- Secrets in code? NEVER. Use .env or environment variables.
- Raw SQL? NEVER. Use parameterised queries or ORM.
- Push to main/develop? NEVER. Always use a feature branch + PR.
- Skip PR review? NEVER. All AI code requires developer review.
- DocType without permissions? NEVER. Every DocType gets role-based access.
- Disable CSRF? NEVER. Security headers stay on.
- Trust client input? NEVER. Validate on the server, always.

## 6. Required Checks Before Merge
- [ ] No secrets, tokens, or API keys in any file
- [ ] All database queries use parameterised inputs
- [ ] Every new DocType has permissions defined
- [ ] Every whitelisted method checks user roles
- [ ] No code that bypasses authentication
- [ ] .gitignore updated if new sensitive file types are introduced
- [ ] Feature branch used (not main or develop)
- [ ] PR raised and assigned to a developer for review
- [ ] CLAUDE.md updated with what changed and why
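The ".env never committed" rule is easy to script as a pre-merge check. A rough sketch — it only handles literal .gitignore patterns, not full gitignore semantics (negation, nested files); `git check-ignore .env` is the authoritative answer:

```python
from pathlib import Path

def env_files_ignored(repo_root: str = ".") -> bool:
    """Rough pre-merge check: does .gitignore cover .env files?

    Sketch only — matches a few literal patterns rather than
    implementing real gitignore matching.
    """
    gitignore = Path(repo_root) / ".gitignore"
    if not gitignore.exists():
        return False
    patterns = set()
    for line in gitignore.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):  # skip blanks and comments
            patterns.add(line)
    return bool(patterns & {".env", "*.env", ".env*", "/.env"})
```

A `False` result means the repo either has no .gitignore or does not exclude .env — fix that before the first commit, not after.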

Builder Protocol — FRD-First Project Setup — Session 5 pattern

Kashish’s ATMA build in Session 5 proved one rule: the FRD is the build. If your functional requirements are crisp in markdown, Claude Code can scaffold a Frappe project end-to-end — DocTypes, workflows, Web Forms, roles, permissions — straight to staging. This is the prompt skeleton she started from.

You are helping me set up a new Frappe project end-to-end on staging, following the Builder Protocol (Theme → Form → Dashboard). I am a PM, not a developer. Go one phase at a time. Do NOT move forward until I confirm the current phase works.

# My Setup
- Project: [Client / product name, e.g. ATMA Education Foundation]
- Frappe site: [staging URL, e.g. stg.atma.dhwaniris.in]
- Source of truth: ./Requirements/FRD.md (you will read this, not guess)
- Builder Protocol doc: https://prody-dris.github.io/ai-playground-sessions/frameworks/builder-protocol.html

# Inputs you will read before building
1. ./Requirements/FRD.md — functional requirements, module-by-module
2. ./Requirements/BRD.md — business rules, roles, personas
3. ./Requirements/*.png / *.pdf — reference screens and flowcharts
4. ./HEARTBEAT.md — where we are in the build
5. ./memory/ — persistent notes across sessions

# What I need you to do (in this exact order)

## Phase 1 — Read and summarise
1. Read the FRD and BRD in full. Summarise in your own words: entities, relationships, workflows, roles.
2. List every DocType you think we need, with fields, options, and relationships.
3. Stop. Wait for my approval on the DocType list before creating anything.

## Phase 2 — Theme
4. Apply the mGrant design system skill if we’re on an mGrant surface.
5. Confirm the sidebar, workspace, and branding align with the existing product.

## Phase 3 — Form (DocTypes, workflows, permissions)
6. Create DocTypes one module at a time. After each: show me the Frappe doctype list URL to verify.
7. Define roles and permissions per DocType. Default deny; only assign what’s explicitly in the BRD.
8. Set up workflows with explicit states and transitions. Match the FRD exactly.
9. If the FRD calls for external NGO / vendor / partner forms — build Web Forms with the right guest-user access.

## Phase 4 — Dashboard
10. Build admin / MIS views using the existing product’s dashboard patterns. Role-gate where specified.

## Phase 5 — Verification
11. For each module: give me a checklist of URLs to click and data to enter so I can verify live on staging.
12. Update HEARTBEAT.md with what was built, what’s pending, and any FRD gaps.

# Rules
- Never guess what’s in the FRD. If it’s ambiguous, ask me.
- Reference the mGrant design system skill on any UI-facing work.
- Small commits, labelled by module. Never a single “set up project” commit.
- If you find a gap in the FRD, flag it in HEARTBEAT.md — don’t silently decide.

Start with Phase 1, Step 1. Read the FRD and summarise.

What this unlocks: the same thing Kashish unlocked — a full Frappe project, built by a PM, deployed direct to staging, with the FRD as the spec. Customise the phases for your project. The discipline (stop-and-verify between phases) is the part that keeps it from running away.

Skills Template — SOPs for AI

A skill is a cheat sheet you give your AI. Write it once, use it forever. It's your domain knowledge in a format AI can understand and follow every time.

A properly structured skill has three parts: frontmatter (Tier 1 — always loaded), instructions (Tier 2 — loads when relevant), and optionally a resources folder with scripts or reference docs (Tier 3 — zero token cost until accessed). The frontmatter trigger is the most important line — it decides when the skill activates.

---
name: [Short skill name, e.g. mgrant-csr-reporting]
description: Use when [specific trigger: "creating MCA Section 135 CSR reports or quarterly utilisation statements"]. Be precise — this decides when the skill activates.
type: [organisation / team / partner]
---

## Context
[Why does this skill exist? What problem does it solve? What domain knowledge underpins it?]

Example: "Dhwani RIS operates under MCA Section 135 CSR compliance. All reporting must meet specific indicator, attribution, and apportionment rules that generic AI does not know."

## Procedure
Step-by-step. Include decision points explicitly. Don’t leave judgment gaps.
1. [First step]
2. [Second step]
   - If [condition A]: do [X]
   - If [condition B]: ask the user before proceeding
3. [Continue...]

## Constraints
- [What must AI never do in this skill’s context?]
- Example: "Never submit a report without human review."
- Example: "Never use generic indicator templates — all indicators must map to the client’s approved M&E framework."

## Example Output
[Paste one real example of the correct output. AI calibrates quality against examples, not abstract descriptions.]

## Eval (Test This Skill)
Input: [A real task from your domain]
Expected output: [What correct output looks like]
Pass if: [The specific criterion that tells you the skill worked]

The one rule: If you correct AI for the same mistake twice on the same task type — that correction belongs in a skill, not in your next prompt. Write it once. Save it forever.
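Because the frontmatter trigger decides when a skill activates, it is worth checking before you commit one. A small, hypothetical validator — the "Use when" heuristic is an assumption drawn from the template above, not an official requirement:

```python
def parse_frontmatter(skill_text: str) -> dict[str, str]:
    """Parse simple key: value frontmatter between --- fences."""
    lines = skill_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing fence ends the frontmatter
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def trigger_is_precise(meta: dict[str, str]) -> bool:
    """Heuristic: a usable trigger names a condition, not just a topic."""
    desc = meta.get("description", "")
    return desc.lower().startswith("use when") and len(desc) > 30
```

If `trigger_is_precise` fails, the description is probably a topic label ("CSR helper") rather than an activation condition — rewrite it until it says exactly when the skill should fire.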

Quick References

Cheatsheet: Claude Code Cheatsheet — commands, shortcuts, and tips for daily CLI use
How We Play — what the Thursday sessions become after S6: open build, bring a problem
Session 6: Of Chaos and Craft — four-layer AI pipeline, Skills Portal, Design Library MCP, the pivot
Session 5: Open Build & Problem-Solving — Skills vs. MCP vs. Plugin vs. Agent explained, three live builder spotlights, skill anatomy, mGrant design system coaching moment
Session 4: CLI & Hooks — detailed session guide for setup and hooks
Session 3: Hands On — original templates with walkthroughs and screenshots
Readings — curated learning resources on Claude Code, GitHub, and AI