LLM Coding Guide Series:
9. Commit Often to Enable Experimentation
Safe Experimentation
Git commit, stash, or stage often—or use something like Cursor's "Restore to checkpoint." That way you feel free to experiment, try different approaches, and never feel like you just wasted hours.
Git Workflow for AI Development
| Git Command | Use Case | Benefit |
|---|---|---|
| `git commit -m "message"` | Save completed features or fixes | Permanent checkpoint you can return to |
| `git stash` | Temporarily save incomplete changes | Quick save when switching tasks |
| `git add .` | Stage changes you want to keep | Prepare changes for commit |
Cursor Checkpoints
Cursor automatically creates checkpoints of your codebase at each request you make, as well as every time the AI makes changes to your codebase.
To revert to a previous state, click the Restore Checkpoint button that appears in the input box of a previous request.
My Workflow
Progress Saves
I run `git add .` when I feel like I've made some progress but things aren't quite stable yet. Then I make a `git commit -m "msg"` once I'm stable.
Quick Rollbacks
If I know it was the last 1-2 prompts that caused the issue, I don't use git—I'll just use Cursor's checkpoint to go back.
Full Resets
If I feel like I went down the wrong path and need a full reset, I'll `git reset`.
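Here's that rhythm as shell commands. A minimal sketch; note that `git reset --hard` is one way to do the full reset, and it permanently discards uncommitted work:

```bash
# Progress save: stage everything once the direction looks promising
git add .

# Stable point: create a permanent checkpoint
git commit -m "Add user endpoint with validation"

# Wrong path: discard all uncommitted changes and return to the last commit
# (--hard throws work away for good, so stash anything you might want first)
git reset --hard HEAD
```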
10. Include Docs for Specialized Libraries and APIs
Bridge Knowledge Cutoffs
LLMs are stuck in time—they're trained on data up to a certain cutoff date, so they don't know about newer library versions, recent API changes, or updated syntax. Sometimes you just have to search for the latest version and tell it explicitly what version to use. Other times you need to paste in examples from the current docs. Instead of fighting with outdated suggestions for 20 minutes, just give it the current info. Takes 30 seconds and suddenly the AI knows exactly what you want.
Documentation Tools
Cursor @Docs
Cursor has @Docs built in—just type @Docs and it'll pull in documentation for your project's dependencies automatically.
LLM Documentation Hub
Access and copy documentation for popular development libraries, optimized for LLM context windows. Choose from full, minified, or core versions to fit your needs.
llm-docs.com
Documentation Strategy
Sometimes you have to search for the latest version yourself and tell the AI explicitly which one to use. Other times you have to paste in examples from the docs for that specific version. You can also tell it to search the web; it has that tool available if you let it use it.
If you already have the right version of the package, you can also give it your package.json or requirements.txt file so that it knows the right stuff to reference.
Documentation Sources
Use official docs, GitHub READMEs, or API specifications. Include version numbers when relevant to ensure the AI uses the correct syntax.
Package File Prompt:
"Here's my package.json file: [paste file]. Use the exact versions listed to implement this feature."
Documentation Prompt:
"Here's the latest NumPy array_split docs: [paste docs]. Split this array into three equal parts using this method."
Web Search Prompt:
"Search the web for the current FastAPI authentication patterns and implement JWT auth using the latest approach."
11. Use 'Magic Words' That Consistently Steer AI Behavior
Magic Words That Actually Work
Certain phrases consistently produce better results from LLMs. These "magic words" act as steering mechanisms that guide AI behavior in predictable ways.
Steering Phrases
| Category | Effective Phrases | What You Get |
|---|---|---|
| Simplification | "Simplify it!", "Keep it simple" | Prevents over-engineering, cleaner solutions |
| Frontend Enhancement | "Make it more interesting", "Add thoughtful details like hover states", "Make it better", "Don't hold back. Give it your all." | More engaging, polished UI with micro-interactions |
| Architecture | "Use domain-driven design", "Follow SOLID principles" | Better structured, maintainable code |
| Focus | "Be specific", "Focus on the details" | More precise, detailed responses |
12. Start Frontend Development with a Strong Visual Base
Get the Layout Right, Then Add the Logic
Why Visual Foundation Matters
When you have a solid visual base, the AI can focus on functionality instead of layout decisions. It's like having a blueprint—you're not debating where the buttons go, you're making them work.
My v0.dev to Cursor Workflow
I typically iterate in v0.dev first, building only the UI with mock data until I'm happy with the design. Once I have something solid, I export it and continue building in Cursor. This lets me focus purely on the visual design without getting distracted by backend integration.
Design in v0.dev
Build the UI with mock data. Focus purely on layout, styling, and visual hierarchy. No backend concerns.
Export to Cursor
Export it via the v0.dev npx CLI. Now you can focus on making it functional.
Iterate with AI
Use "Make it more interesting and modern" prompts to refine the design. The AI has a solid foundation to work from.
Image Input for Design Inspiration
Image input is incredibly powerful for design work. You can take screenshots from other sites with styles you like or want to match, then ask the AI to adapt those patterns to your component. It's like having a designer who can instantly replicate visual styles.
Design Iteration Prompt:
"Make this component more interesting and modern. Add thoughtful details like hover states, better spacing, and subtle animations"
Style Matching Prompt:
"Look at this screenshot and adapt the visual style to my component. Keep the same layout but match the color scheme, typography, and spacing patterns"
Enhancement Prompt:
"Don't hold back. Give it your all. Make this interface more engaging with micro-interactions and polished details"
13. Keep CursorRules Dynamic and Refer to Them Often
Train the AI to Avoid Its Repeated Mistakes
CursorRules are like having a conversation with the AI about your preferences before every session. I update mine whenever the AI makes a mistake I don't want to see again: "Stop adding fallback paths—keep the code simple." "Use specific types and models, not Any."
Getting Started with CursorRules
Start simple, then add rules for mistakes the AI keeps repeating. This is way more effective than trying to write comprehensive rules upfront.
CursorRules Generator
AI-powered generator for creating project-specific rules. Great starting point for common frameworks.
cursorrules.org
Official Cursor Rules Docs
Complete documentation including auto-attach patterns and project-specific rules that activate based on file patterns.
Cursor Rules Documentation
My Essential CursorRules
These rules prevent the most common issues I've encountered. Every time the AI does something that annoys me, I add a rule to prevent it from happening again.
```
# CursorRules - AI Development Guidelines

## Code Quality - NO EXCEPTIONS
- DO NOT CREATE FALLBACKS OR ALTERNATE PATHS. SIMPLIFY.
- ALWAYS USE SPECIFIC TYPES—NEVER any, dict, or generic types
- LIMIT COMMENTS to explain 'WHY,' not 'WHAT'
- NO CACHING or RETRIES unless explicitly requested

## Error Handling - FAIL FAST
- FAIL FAST—don't mask problems with fallbacks
- LOG errors clearly with context

## Architecture - KEEP IT SIMPLE
- USE DOMAIN-DRIVEN DESIGN principles
- SEPARATE concerns: controllers, services, repositories
- NO business logic in controllers
- NO premature optimization

## Testing - PRACTICAL OVER PERFECT
- TEST real scenarios, not implementation details
```
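To make the type-safety and fail-fast rules concrete, here's the kind of code they steer the AI toward. A minimal sketch with a hypothetical User model (any Pydantic model works the same way):

```python
from pydantic import BaseModel

class User(BaseModel):
    # Specific fields instead of a loose dict or Any
    name: str
    email: str
    age: int

# Steered away from: def parse_user(data: dict) -> Any
# Steered toward: an explicit model that fails fast on bad input
def parse_user(data: dict) -> User:
    # Pydantic raises ValidationError on malformed input instead of
    # silently passing bad data downstream
    return User(**data)

user = parse_user({"name": "Ada", "email": "ada@example.com", "age": 36})
print(user.name)
```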
How to Use CursorRules Effectively
Cursor now has auto-attach rules that activate based on file patterns, but you can also reference rules explicitly in prompts when you want to context-prime the AI.
Reference Rules Prompt:
"Refer to our rules: no fallbacks, no retries. Rewrite this code to handle errors simply"
Enforce Standards Prompt:
"Follow our CursorRules for type safety. Replace any generic types with specific ones"
Rules Don't Always Work
I'll be the first to admit CursorRules don't always seem to work, especially once the context window gets big. But I do notice improvement when using them, and I sometimes specifically say "Refer to our rules" to context-prime the AI.
14. Use the Right Kind of Tests
Test What Actually Breaks
Testing Strategy That Actually Works
Start with integration tests through API routes because they're more resilient to refactors as the code evolves. With LLMs, files change more often, so you don't want to test implementation details until you have your foundation. Then you can add unit tests for specific business logic.
| Test Type | Priority | When to Use | Example |
|---|---|---|---|
| Integration Tests | High | Test full API endpoints and workflows | POST /users creates user in database |
| Snapshot Tests | Medium | Catch unintended changes in output | API response format consistency |
| Unit Tests | Low | Complex business logic with edge cases | Pricing calculator with multiple rules |
| E2E Tests | Selective | Critical user journeys only | Complete checkout process |
Integration Test Example
```python
# Integration test - survives refactoring because it tests through the API.
# Assumes a FastAPI TestClient fixture `client` and the app's own
# CreateUserRequest Pydantic model, imported from the project under test.
def test_create_user_endpoint():
    user_data = CreateUserRequest(
        name="John Doe",
        email="john@example.com",
        age=30,
    )
    response = client.post("/users", json=user_data.model_dump())
    assert response.status_code == 201
    assert response.json()["name"] == "John Doe"
```
Snapshot Test Example
```python
import json

# Snapshot test - catches unintended API response changes
def test_user_response_format():
    response = client.get("/users/123")
    # Load the expected response from a golden test fixture
    with open("tests/fixtures/user_response.json") as f:
        expected = json.load(f)
    assert response.json() == expected
```
Unit Test for Business Logic
```python
# Unit test - only for complex business logic with real edge cases.
# PricingCalculator is the app's own class, imported from the project.
def test_pricing_calculator_with_discounts():
    calculator = PricingCalculator()
    # This calculation layers multiple business rules worth testing:
    #   10 * $100 = $1000
    #   20% bulk discount    -> $800
    #   10% premium discount -> $720
    price = calculator.calculate(
        quantity=10,
        unit_price=100,
        customer_type="premium",
        bulk_discount=True,
    )
    assert price == 720
```
Integration Test Prompt:
"Write integration tests for the users API that test the full flow from HTTP request to database. Do not mock anything. Use helpers where it makes sense. Use models not generics or raw json."
I prefer to start with no mocking
Since I usually start with integration tests and snapshot tests, I like to limit or avoid mocking unless it's a third-party dependency I have no control over. That way I know the system actually works all the way through; then I can break parts away and start mocking dependencies. If you start the other way around and mock everything with tons of unit tests, you can have 99 passing tests while the implementation no longer works, because a connection point you missed got mocked away.
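Here's what that boundary looks like in practice. A minimal sketch where only a hypothetical third-party payment client is patched; the `app.payments` path and `/checkout` route are illustrative, and `client` is the same assumed TestClient fixture as in the tests above:

```python
from unittest.mock import patch

def test_checkout_uses_real_internals():
    # Real HTTP request, real service layer, real database; only the
    # third-party payment call is patched, so every internal connection
    # point is still exercised.
    with patch("app.payments.stripe_client.charge") as mock_charge:
        mock_charge.return_value = {"status": "succeeded"}
        response = client.post("/checkout", json={"cart_id": "cart-123"})
    assert response.status_code == 200
    mock_charge.assert_called_once()
```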