Dhwani AI Playground

Watch the Session

Watch the Full Session Recording

Dhwani RIS members only · ~75 minutes

What We Covered

Session 3 was all doing, not just talking. We opened laptops, set up GitHub repositories, generated tokens, connected Claude to our repos, and watched AI read our project context for the first time. We also heard from Shweta and Samarth about their IDH dashboard — a real project built with AI.

What We Accomplished

  • Everyone set up a personal GitHub repository
  • CLAUDE.md files created with personal profiles
  • Security.md shared and deployed to repos
  • GitHub tokens generated and connected to Claude UI
  • First domain skills written
  • Live demo of Claude reading project context
  • Case study: IDH Dashboard by Shweta & Samarth

Our Journey So Far

Session 0: Why AI matters. Tools overview. Open build format.

Session 1: Vibe coding mechanics. Tokens, CLAUDE.md, Security.md, tools.

Session 2: GitHub as a knowledge base. Repos, branches, PRs, CLI intro.


Session 3: We stopped talking and started building. Hands on keyboards.

The Problem: Context Bloat

"Context bloat is to AI what burnout is to humans." Said in Session 3

Every new Claude conversation starts from scratch — like a brilliant stranger walking into your office. They know nothing about your project, your preferences, or your history. You have to explain everything from the beginning.

But here's the trap: the longer a conversation runs, the higher the token cost, the more hallucinations creep in, and the worse the output gets. It's like giving someone work at 9 AM vs 6 PM — same task, completely different quality.

The solution: close sessions, start fresh, carry forward only what matters via CLAUDE.md.

Think of it this way

You wouldn't ask a colleague to work a 16-hour shift and expect the same quality at hour 15. AI works the same way. Fresh context = fresh thinking.

Rule of thumb: When a conversation gets long, close it. Start a new one. Update your CLAUDE.md with what you learned. 2 minutes of updates save 20 minutes of re-explaining.

The Knowledge Architecture

During the session, we walked through a real project repository (mGrant V3) to show how knowledge files are structured. Here's the architecture that makes AI remember everything.

mGrant V3 context folder on GitHub showing MEMORY.md and supporting files
The .context folder in mGrant V3 — MEMORY.md indexing persistent learnings
my-project/
├── .context/
│   ├── CLAUDE.md      ← Identity & instructions
│   ├── heartbeat.md   ← What's in progress right now
│   ├── MEMORY.md      ← Index of all persistent learnings
│   ├── memories/      ← Individual memory files
│   ├── decisions.md   ← Why we chose X over Y
│   └── glossary.md    ← Domain-specific terms
└── SECURITY.md        ← Security guardrails (every repo)

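If you want to set this structure up in one go, a minimal Python sketch (the `scaffold_context` helper and its file list are assumptions based on the layout above, not a tool from the session) could look like:

```python
from pathlib import Path

def scaffold_context(root):
    """Create the .context knowledge structure under `root`.
    A sketch of the layout shown above; adjust names to taste."""
    root = Path(root)
    ctx = root / ".context"
    # Creates .context/ and .context/memories/ in one call
    (ctx / "memories").mkdir(parents=True, exist_ok=True)
    for name in ["CLAUDE.md", "heartbeat.md", "MEMORY.md",
                 "decisions.md", "glossary.md"]:
        (ctx / name).touch()
    # SECURITY.md lives at the repo root, in every repo
    (root / "SECURITY.md").touch()
```

Run it once per repo; committing the empty files early makes the structure visible to anyone (human or AI) browsing the project.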
1. CLAUDE.md — Your AI's identity card. Who you are, what you work on, how you like to work. This is what turns a generic AI into YOUR AI.

2

heartbeat.md — What's happening right now. Active deployments, in-progress tasks, current blockers. Updated every session.

3. MEMORY.md — The index of everything AI has learned. Past decisions, completed work, patterns. Append-only — never delete, mark superseded.
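The append-only rule is easy to enforce mechanically. A hypothetical helper (`append_memory` is illustrative, not something shown in the session) that only ever appends dated lines, and records supersession instead of deleting, might look like:

```python
from datetime import date

def append_memory(memory_path, entry, superseded=None):
    """Append-only update to MEMORY.md: add a dated bullet,
    never rewrite history. Optionally note what the entry supersedes."""
    line = f"- [{date.today().isoformat()}] {entry}"
    if superseded:
        line += f" (supersedes: {superseded})"
    # Mode "a" creates the file if missing and never truncates it
    with open(memory_path, "a") as f:
        f.write(line + "\n")
```

The design choice mirrors the rule in the text: old entries stay in the file so the AI (and you) can see why something changed, not just what it changed to.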

4. decisions.md — The "why" behind choices. Why we picked approach A over B. Future-you (and future-AI) will thank you.

5. SECURITY.md — Non-negotiable guardrails. What AI must never do. Every repo gets this file.

"Think of CLAUDE.md as the brain. Heartbeat is the pulse. Memory is the journal. Decisions are the reasoning. Security is the conscience." Session 3

Hands-On: What We Built

1. Created a Personal Repository

Created a private GitHub repo with README. This is your AI's home base — everything about you, your projects, and your domain knowledge lives here.

Always start with Private. You can make it public later if needed. Never the other way around.

2. Generated a Personal Access Token

Settings → Developer Settings → Personal Access Tokens → Tokens (classic). Named it, gave it repo scope, copied it immediately.
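Once you have the token, you can sanity-check it from a script. A small sketch, assuming the token is read from an environment variable (the `github_auth_headers` helper is illustrative; the `Bearer` scheme and `application/vnd.github+json` media type are GitHub's documented REST API conventions):

```python
def github_auth_headers(token):
    """Request headers for the GitHub REST API
    using a personal access token."""
    return {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }

# Sketch of a live check (needs network, so shown commented out):
# import os, json, urllib.request
# token = os.environ["GITHUB_TOKEN"]
# req = urllib.request.Request("https://api.github.com/user",
#                              headers=github_auth_headers(token))
# print(json.load(urllib.request.urlopen(req))["login"])
```

If the commented request returns your GitHub username, the token works and has been scoped correctly.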

GitHub token generation page showing personal access token creation with repo scope selected
Namandeep's screen — generating a classic personal access token

Security note: Never store tokens on Teams, WhatsApp, or any internet-connected app. Store it in a local .env file or a secure notes app. If it leaks, revoke immediately from GitHub Settings.
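Storing the token in a local .env file means reading it back at run time. A minimal reader, as a sketch (real projects often use the python-dotenv package instead; this `load_env` helper is an assumption for illustration):

```python
def load_env(path=".env"):
    """Minimal .env reader: skip blanks and comments,
    split each line on the first '='."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, value = line.split("=", 1)
            env[key.strip()] = value.strip()
    return env
```

Pair this with a .gitignore entry for `.env` so the file never reaches GitHub, which is exactly the rule Security.md enforces below.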

3. Connected Claude to GitHub

Opened Claude UI, pasted the token and GitHub profile URL, and asked Claude to confirm access. Two things needed: your token + your repo URL.

Claude UI with CLAUDE.md template being pasted and committed to a GitHub repository
Creating a CLAUDE.md file through Claude UI — pasting the template and asking Claude to update the repo

4. Watched Claude Read Our Context

The moment that makes it click. We asked Claude: "Tell me what you know about me and this project." And it did — role, client, tech stack, current blockers, staging site, everything. No re-explaining. No wasted time.

Claude reading project context from GitHub, displaying knowledge of PM role, client Bajaj Auto, 4-layer tech stack, custom DocTypes, staging site, and current blockers
Live demo — Claude reads the mGrant project context and knows: PM role, client (Bajaj Auto), 4-layer stack, custom DocTypes, staging site, and current blockers
"Instead of wasting 20-30 minutes doing whatever the hell I want, I start from where I was last." Nihaan, during the live demo

5. Deployed Security.md Everywhere

The Security.md file was shared and committed to every repository. This file sets the guardrails — what AI must never do, what checks must pass before any code is merged.

Security.md goes in EVERY repo. No exceptions. It's the safety net for AI-generated code.

CLAUDE.md Template

This is the template we used in the session. Copy it, fill it in, commit it to your repo. Your AI starts learning about you the moment you do.

# CLAUDE.md — [Your Name]

## Who I Am
- Role: [Your title at Dhwani RIS]
- Domain: [What you work on — grants, CSR, dashboards, MIS, etc.]
- Technical level: [Be honest — "non-technical PM", "comfortable with Excel"]

## What I Work On
- Products: [mGrant, Frappe LMS, dashboards, etc.]
- Clients: [Who you serve, what they care about]
- Current focus: [What you're working on this month]

## How I Like to Work
- Be concise. Lead with the answer.
- Don't ask me to write or edit code.
- When in doubt, ask me — don't guess.
- [Add your own preferences]

## Domain Knowledge I Have
- [What do you know that the AI doesn't?]
- [CSR Section 135 rules? Grant lifecycle? MIS report formats?]
- [This is your superpower. Write it down.]

Start small. Even 5 bullet points is a massive upgrade over starting from zero. You can always add more. The goal is to exist, not to be perfect.

Security.md

This is the security standard shared during the session. It goes in every repository — personal and organisation. Copy the full text below.

# SECURITY.md — AI-Generated Code Standards

## 1. Secrets & Credentials
NEVER hardcode secrets, API keys, tokens, or passwords in source code.
Use environment variables or .env files for all sensitive values.
Never commit .env files — ensure .gitignore excludes them.
If a secret leaks: revoke immediately, rotate the credential, audit the commit history.

## 2. What AI Must Never Generate
- Code that bypasses authentication or authorisation
- Direct database queries without parameterisation (no raw SQL)
- Code that disables security headers or CSRF protection
- Hardcoded credentials in any file, including test files
- Code that exposes internal APIs to unauthenticated users
- Anything that modifies production data without explicit approval

## 3. Permissions & Access Control
Every DocType must have role-based permissions defined.
Every API endpoint must check user permissions before executing.
Every whitelisted method must validate that the caller has the right role.
Default deny: if no permission is defined, access is blocked.

## 4. Database & Query Safety
Parameterised queries only — never concatenate user input into SQL.
Use Frappe ORM (frappe.get_doc, frappe.get_list) instead of raw SQL.
Validate all inputs before using them in queries or logic.
Never trust client-side data — always re-validate on the server.

## 5. Quick Reference
- Secrets in code? NEVER. Use .env or environment variables.
- Raw SQL? NEVER. Use parameterised queries or ORM.
- Push to main/develop? NEVER. Always use a feature branch + PR.
- Skip PR review? NEVER. All AI code requires developer review.
- DocType without permissions? NEVER. Every DocType gets role-based access.
- Disable CSRF? NEVER. Security headers stay on.
- Trust client input? NEVER. Validate on the server, always.

## 6. Required Checks Before Merge
[ ] No secrets, tokens, or API keys in any file
[ ] All database queries use parameterised inputs
[ ] Every new DocType has permissions defined
[ ] Every whitelisted method checks user roles
[ ] No code that bypasses authentication
[ ] .gitignore updated if new sensitive file types are introduced
[ ] Feature branch used (not main or develop)
[ ] PR raised and assigned to a developer for review
[ ] CLAUDE.md updated with what changed and why

This is a living document. It will evolve as we learn. But the core rules are non-negotiable: no secrets in code, parameterised queries only, permissions on every DocType.
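The "parameterised queries only" rule is worth seeing in action. A sketch using Python's built-in sqlite3 to illustrate the principle (in a Frappe project you would pass %s placeholders to frappe.db.sql or use the ORM instead; the table and function here are invented for the demo):

```python
import sqlite3

def find_grants_by_status(conn, status):
    """Parameterised lookup: user input is bound as a parameter,
    never concatenated into the SQL string."""
    cur = conn.execute("SELECT name FROM grants WHERE status = ?", (status,))
    return [row[0] for row in cur.fetchall()]

# Demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grants (name TEXT, status TEXT)")
conn.executemany("INSERT INTO grants VALUES (?, ?)",
                 [("G1", "active"), ("G2", "closed")])

# An injection attempt arrives as plain data, not as SQL:
malicious = "active' OR '1'='1"
```

With string concatenation, `malicious` would match every row; with a bound parameter it matches nothing, because the whole string is treated as a literal status value.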

Skills = SOPs for AI

A skill is a cheat sheet you give your AI. Write it once, use it forever. It's an SOP in markdown format.

Why this matters

Imagine explaining CSR rules to a new analyst every single morning. That's what happens without skills — your AI starts from zero every conversation. A skill file makes day one the only day you explain.

# Skill: [Your Area]

## What This Covers
[One paragraph: what area does this skill address?]

## Key Concepts
- [Concept 1]: [Brief explanation]
- [Concept 2]: [Brief explanation]

## Rules and Constraints
- [What must always be true?]
- [Compliance requirements?]
- [Common mistakes to avoid?]

## Common Tasks
- [Task 1]: [How it should be done]
- [Task 2]: [How it should be done]

## Vocabulary
- [Term] = [What it means in your context]

"If you can't explain it simply, you don't understand it well enough." Richard Feynman

Claude UI vs CLI

One of the key discoveries in this session: Claude UI is great for reading, but limited for writing.

Claude UI can access your GitHub repo via token and understand your context — it reads your CLAUDE.md, your skills, your project history. But it struggles to push changes back: creating PRs, committing files, and updating repos consistently.

Claude Code (CLI) can both read AND write. It's the full-power version — it creates branches, commits code, raises PRs, runs tests, and connects to external tools. That's what we'll set up in Session 4.

For now: use Claude UI for conversations with context. CLI comes next.

Claude UI attempting to create a pull request but showing limitations compared to CLI
Claude UI can read your repo but struggles with write operations — CLI bridges this gap

Why not just use CLI? Because learning to manage context, write CLAUDE.md files, and think in terms of knowledge architecture is more important than the tool. The tool will change. The discipline won't.

From the Field: IDH Dashboard Case Study

Shweta Singh & Samarth Rana

Built and deployed a complete live monitoring dashboard for IDH (Initiative for Sustainable Trade) — from scratch, using AI. Here's their story.

IDH Dashboard showing Regen Agricultural Practices with live data, state and district filters, and multiple sub-dashboard tabs
The IDH dashboard — live data, 8 sub-dashboards, filters by state/district/block/village, snapshot and export features
69 indicators · 8 sub-dashboards · 30-40 hours for the first build · 8-12 hours estimated for a similar build now

The dashboard started on Lovable (an AI coding tool) because it was free on March 8th. From Lovable, it moved to GitHub, then to Claude for refinements. The stack: Lovable for UI scaffolding, GitHub for version control, Claude for logic and debugging.

The Challenge: When AI Can't Read Your Logic

After deployment, 30+ of the 69 indicators showed no data. The team had given exact form numbers, schemas, and logic — but the language was "too human" for AI to map correctly.

IDH revised logic spreadsheet with structured categories, form references, and descriptive examples that AI could parse
The revised logic sheet — structured examples that AI could actually parse and map to database tables
"Instead of putting prompting effort for two weeks, we can put manual effort for two days to clarify and build that context that Claude would understand better. Use your human skill — improve the documentation, improve the context, improve the logic sheet as much as you can, and then give it to AI." Shweta Singh

The fix took 1.5 days — creating a revised logic sheet with structured categories, form references, and descriptive examples. After that, every indicator worked. The lesson: invest in documentation, not more prompts.

A second challenge: a forbidden keyword in the backend was silently blocking database queries. After six attempts, Sonnet couldn't find it. Switching to Opus with extended thinking finally cracked it.

"Don't be lazy with your prompts. The more structured your input, the better AI performs on the first attempt." From the session discussion

Shweta and Samarth showed that AI doesn't replace expertise — it amplifies it. The 1.5 days spent writing clear logic saved weeks of prompt debugging. The dashboard is now a standalone product for the client.

Key Takeaways

1. Build, don't just learn. The best way to understand AI is to use it on real work. Today proved that.

2. CLAUDE.md is your AI's onboarding doc. Write it once, update after every session. 2 minutes saves 20.

3. Security.md is non-negotiable. Every repo, every project. AI-generated code needs guardrails.

4. Invest in documentation, not prompts. Shweta's story: 1.5 days of clear logic > 2 weeks of prompt trial-and-error.

5. Start fresh, carry forward. Close bloated conversations. Start new ones. Let your files carry the context.

Further Reading

Guide
Hello World — GitHub Docs

GitHub's official beginner tutorial. 10 minutes to your first repo.

Guide
Editing Files on GitHub

How to edit files directly in the browser — no terminal needed.

Docs
Managing Personal Access Tokens

GitHub's official guide to creating, scoping, and revoking tokens safely.

Article
Practical AI Tips — Ethan Mollick

Common questions about AI answered with practical, experience-based advice.