LLM Coding Guide Series:
15. PRD Task Master: The PM for your AI agent
Structured planning for consistent LLM results
Walking LLMs through planning and task management is an important skill, but doing it by hand is inconsistent. You explain the same architectural decisions over and over, lose track of what you've already covered, and watch the AI forget context halfway through complex refactors. This tool promises to structure that chaos.
Why This Has Promise
We know planning with LLMs is crucial: it's the difference between getting a working solution and getting garbage. But doing it manually every time is brutal. Think of PRD Task Master as a more advanced version of creating refactor-checklist.md files to maintain state during big changes.
16. QA Agent Reviewer: AI-Driven Code Quality
Context-Aware Code Review
Linters such as ESLint catch syntax errors, enforce style consistency, and prevent basic mistakes. But they can't tell you "your product domain should be split and here's why" or "this pattern will cause performance issues at scale." A QA agent reviewer could fill that gap by understanding code intent and project context.
Beyond Traditional Linting
Traditional Linters vs. AI QA Agents

| | Traditional Linters | AI QA Agents |
|---|---|---|
| What they catch / analyze | Syntax errors, style violations, predefined anti-patterns | Architectural problems, code intent, issues that only surface at scale |
| How they work | Rule-based checking; no context awareness; limited to predefined patterns | Context-aware analysis; understands code intent; learns project patterns |
How Could You Run These Agents?
You've got two main approaches for running QA agents, and they solve completely different problems. One fits into your existing workflow; the other changes how you think about coding entirely.
| | GitHub PR Integration | Local Parallel Agent |
|---|---|---|
| Best for | Team-wide consistency; catching issues before merge | A personal development assistant; preventing issues before they happen |
| How it works | Automated review on every pull request; comments directly on code changes; integrates with your existing workflow | Real-time feedback as you code; watches file changes continuously; immediate architectural guidance |
Which Should You Start With?
If you want the lower-friction option, start with GitHub PR integration: it slots into the review process your team already runs. Add a local parallel agent later, once you want continuous feedback while you code.
QA Agent Setup Prompt:
"Act as a senior code reviewer. Analyze this code for performance, security, and maintainability issues. Provide specific, actionable feedback with examples."
17. Parallel-Agent Experimentation: Testing Multiple Approaches
Git Worktree for Prompt Variation Testing
Ever wonder if a different prompt would have given you a better solution? Instead of guessing, you can test multiple approaches simultaneously using git worktree. Create separate working directories for each experiment, run different AI agents with different prompts, then compare the results. It's like A/B testing for AI development, but way more practical than containers.
Git Worktree Setup for Parallel Testing
In a real parallel-agent workflow you'd automate this entire process (there's a sketch of that automation after the cleanup commands below), but here are the manual commands to show how it works:
# Create a separate worktree for each experiment. Each one needs its own
# branch, because git refuses to check out the same branch in two worktrees.
git worktree add -b dashboard-corporate ../dashboard-corporate main
git worktree add -b dashboard-modern ../dashboard-modern main
git worktree add -b dashboard-minimal ../dashboard-minimal main
# Now you have three identical codebases to experiment with
cd ../dashboard-corporate # Terminal 1: Corporate approach
cd ../dashboard-modern # Terminal 2: Modern approach
cd ../dashboard-minimal # Terminal 3: Minimal approach
Live UI Previews
Run a live preview in each worktree and compare the results side by side. Maybe you pick the modern glassmorphism style, but with the corporate layout.
Why Git Worktree
You get separate working directories that share the same git history, so you can easily compare changes, cherry-pick the best parts, and merge results back.
# After testing, pick the winner and merge back
cd ../dashboard-modern # This one won
git add .
git commit -m "Modern dashboard with glassmorphism"
cd ../original-project
git merge dashboard-modern # Bring the winning approach back
# Clean up the worktrees and their branches
git worktree remove ../dashboard-corporate
git worktree remove ../dashboard-modern
git worktree remove ../dashboard-minimal
git branch -d dashboard-modern # already merged
git branch -D dashboard-corporate dashboard-minimal # unmerged, so force-delete
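To automate the whole flow, a small script can create the worktrees and launch one agent per variant. This is a sketch under a stated assumption: `run_agent.sh` is a hypothetical wrapper around whatever coding-agent CLI you use.

```python
# Sketch: one worktree per prompt variant, one agent per worktree.
import subprocess

variants = {
    "dashboard-corporate": "Build the dashboard in a conservative corporate style.",
    "dashboard-modern": "Build the dashboard in a modern glassmorphism style.",
    "dashboard-minimal": "Build the dashboard in a minimal, content-first style.",
}

for branch, prompt in variants.items():
    path = f"../{branch}"
    # Each experiment gets its own branch so the checkouts don't conflict.
    subprocess.run(["git", "worktree", "add", "-b", branch, path, "main"], check=True)
    # run_agent.sh is a hypothetical entry point for your agent of choice.
    subprocess.Popen(["./run_agent.sh", prompt], cwd=path)
```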
18. Role of Memory in AI-Driven Development
Learning from Past Interactions
Memory-enabled AI can learn from your coding sessions and improve over time. Instead of starting fresh every conversation, it remembers what worked, what didn't, and your preferences. This means better suggestions and fewer repeated mistakes.
How Memory Improves LLM Coding
| What It Remembers | Coding Improvement | Example |
|---|---|---|
| Your coding style | Generates code that matches your patterns | Remembers you prefer functional components over classes |
| Project architecture | Suggests consistent file structure | Knows your folder naming conventions and imports |
| Common errors you make | Warns before you repeat mistakes | Reminds you to handle async errors in your typical pattern |
| Successful solutions | Suggests proven approaches first | Recommends the state management pattern that worked last time |
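There's no standard implementation here yet, but the mechanics can be as simple as a JSON file of remembered facts that gets folded into every session's system prompt. A minimal sketch; `memory.json` and its structure are invented for illustration:

```python
# Sketch of a tiny memory store: remembered facts become part of the system prompt.
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # illustrative, not a standard location

def load_memory() -> list[str]:
    """Return the list of remembered facts, or an empty list on first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(fact: str) -> None:
    """Persist a fact so future sessions can use it."""
    facts = load_memory()
    if fact not in facts:
        facts.append(fact)
        MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def build_system_prompt(base: str) -> str:
    """Prepend remembered preferences to the base system prompt."""
    facts = load_memory()
    if not facts:
        return base
    bullets = "\n".join(f"- {fact}" for fact in facts)
    return f"{base}\n\nKnown preferences from past sessions:\n{bullets}"

remember("Prefers functional React components over classes")
print(build_system_prompt("You are a coding assistant."))
```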
19. Prompt Generator and Improver: Better AI Communication
Crafting Better Prompts
Writing good prompts is harder than it looks. You end up tweaking the same prompt over and over, wondering why the AI keeps missing the mark. Anthropic's prompt tools solve this by taking your rough idea and turning it into something that actually works consistently.
Prompt Generator Example
Asking for a Custom Prompt
Your Goal Request
"I need a prompt to help me review code better"
You tell the Prompt Generator what you want to accomplish.
- State your goal
- Describe the task
- Mention any constraints
- Ask for specific format
Generated Custom Prompt
"Review this code for security vulnerabilities, performance issues, and maintainability concerns. For each issue found, provide: 1) The specific problem, 2) Why it's problematic, 3) A concrete fix with code example, 4) Priority level (High/Medium/Low). Focus on actionable feedback that can be implemented immediately."
The tool creates a structured prompt you can reuse for consistent code reviews.
- Specific review criteria
- Structured output format
- Clear deliverables
- Reusable template
Prompt Improver Example
Refining Existing Prompts
Original Prompt
"Write a Python function"
Vague and likely to produce generic, unhelpful results.
- No specific requirements
- Missing context
- Unclear expectations
- Generic output likely
Improved Prompt
"Write a Python function that validates email addresses, handles edge cases like international domains, follows PEP 8 style guidelines, and includes comprehensive docstrings with examples."
Specific, actionable, and likely to produce high-quality results.
- Clear requirements
- Specific constraints
- Quality expectations
- Actionable guidance
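For comparison, here's roughly the kind of function the improved prompt should produce. This version is a sketch: the name is mine, and the regex deliberately stays permissive about internationalized domains rather than attempting full RFC 5322 validation.

```python
import re

def is_valid_email(address: str) -> bool:
    """Check whether a string looks like a valid email address.

    Accepts internationalized domain labels (non-ASCII characters) while
    still requiring the basic local@domain.tld shape.

    Examples:
        >>> is_valid_email("user@example.com")
        True
        >>> is_valid_email("user@münchen.de")
        True
        >>> is_valid_email("not-an-email")
        False
    """
    pattern = r"^[^@\s]+@[^@\s.]+(\.[^@\s.]+)+$"
    return re.fullmatch(pattern, address) is not None
```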
20. System Prompts with Claude Code
Direct Control Over AI Behavior
Cursor and other AI IDEs are constantly modifying your prompts behind the scenes. They add their own instructions, context, and formatting that you never see. With Claude Code, you have more control over the system prompt.
The Hidden Prompt Problem
AI IDE vs. Direct API Control

| | AI IDEs (Cursor, etc.) | Claude Code / Direct API |
|---|---|---|
| What happens behind the scenes | Black-box prompt modification; automatic context management; hidden prompt engineering | Complete prompt transparency; manual context management; explicit prompt engineering |
| Control level | Low: no control over system behavior | High: direct control of the system prompt |
Why This Control Matters
When you can't see what's actually being sent to the AI, you can't debug why it's behaving strangely. With direct system prompt control, you know exactly what instructions the AI is following.
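One concrete way to get that control is to skip the IDE layer and call the Messages API directly, where the system prompt is a parameter you write verbatim. A minimal sketch with the Anthropic Python SDK (the model alias is a placeholder):

```python
import anthropic

client = anthropic.Anthropic()

# The system prompt is exactly what you wrote; nothing is injected around it.
response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=512,
    system="You are a terse code reviewer. Reply only with numbered findings.",
    messages=[{"role": "user", "content": "def add(a, b): return a+b"}],
)
print(response.content[0].text)
```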
Prefilling: Force Specific Output Formats
With direct prompt control, you can also use prefilling: literally starting the AI's response for it. Want JSON? Start the response with "{" and the LLM will complete it.
# Simple Prefilling Examples
Prompt: "Return user data as JSON: {"
AI completes: "name": "John", "email": "john@example.com"}
Prompt: "Create a Python function: def validate_email("
AI completes: email): return "@" in email and "." in email
Prompt: "SQL query for active users: SELECT"
AI completes: * FROM users WHERE status = 'active'
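In API terms, prefilling means supplying the opening of the assistant turn yourself; the model continues from exactly that text. A minimal sketch, again with the Anthropic SDK:

```python
import anthropic

client = anthropic.Anthropic()

# The trailing assistant message is the prefill; the model continues from "{".
response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Return the user Jane Doe (jane@example.com) as JSON."},
        {"role": "assistant", "content": "{"},
    ],
)
print("{" + response.content[0].text)  # re-attach the prefilled brace
```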