Using Large Language Models (LLMs), especially in Agent mode, can be an incredible experience and a total headache all at once. When things are clicking, you can whip up a prototype in minutes—something that might've eaten up a full day's worth of effort. But when it's not working, it's very frustrating. You convince yourself "one more prompt" will fix it, only to watch yourself sink deeper into the mess. To help you out, I've put together some techniques that have made a huge difference for me.
1. Treat LLMs Like a Team of Talented Interns
The Right Mindset for LLM Development
I put this technique first because it is the lens through which you should view all the other techniques as well. Don't treat LLMs like perfect robots—treat them like a team of talented interns who are great once you put them on the right path, but they must be steered and reset from time to time.
Just like with real interns, you need to act like a Tech Lead. Set the proper vision, establish guardrails, and give them the schemas and contracts they need. They can iterate on these with you, but as you progress, you must carve certain things in stone to maintain consistency and quality.
What This Means in Practice
Like a Good Tech Lead, You Should:
- Provide clear, specific requirements and acceptance criteria
- Establish coding standards and architectural patterns upfront
- Review their work and provide constructive feedback
- Break down complex tasks into manageable pieces
- Course-correct when they go off track (which they will)
2. Always Start with New Context
Understanding Context Windows and Fresh Starts
Every single message you send includes ALL your previous messages in that conversation. When you say "no, that's wrong" or "try again differently," the AI sees that entire history every time. It's like trying to give directions while someone keeps shouting conflicting instructions in your ear.
Messy Context vs Clean Start
Messy Context (Don't Do This)
You: "Create a user authentication system"
AI: "Here's a basic auth system with JWT..."
You: "No, that's wrong. Use sessions instead."
You: "Actually, go back to JWT but make it secure."
You: "This isn't working. Try a different approach."
💥 Result: AI is confused by all the conflicting messages and produces poor results.
Clean Start (Always Do This)
✨ Start a new conversation with a clear, specific request:
You: "Create a secure JWT-based authentication system for a Node.js API. Include token refresh, proper error handling, and rate limiting."
AI: "I'll create a comprehensive JWT auth system with all the security features you requested..."
✅ Result: Clean, focused implementation that meets your exact requirements
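Why the messy version fails becomes obvious at the API level: chat endpoints are stateless, so every request resends the full transcript. Here's a minimal sketch of the payload shape (`send_to_llm` is a hypothetical stand-in for whatever chat client you use):

```
# Chat APIs are stateless: each request carries the entire conversation.
# send_to_llm() is a hypothetical stand-in for any chat-completion client.
messages = [
    {"role": "user", "content": "Create a user authentication system"},
    {"role": "assistant", "content": "Here's a basic auth system with JWT..."},
    {"role": "user", "content": "No, that's wrong. Use sessions instead."},
    {"role": "assistant", "content": "Okay, here's a session-based version..."},
    {"role": "user", "content": "Actually, go back to JWT but make it secure."},
]

# Every follow-up appends to the same list, so the model re-reads ALL of it,
# including the instructions you've since abandoned:
messages.append({"role": "user", "content": "This isn't working. Try a different approach."})
# response = send_to_llm(messages)  # hypothetical call; the contradictions ride along every turn
```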
Large vs. Effective Context Windows
A large context window doesn't guarantee a large effective context window. Many LLM interfaces quietly summarize or compact your context, losing details. Your instructions get blurred with previous attempts, and the model starts working with degraded information without you realizing it.
Anthropic's Context Window Documentation - Official docs on context limitations
How Context Gets Compressed and Details Are Lost
Your Original Message:
"Create a user authentication system with JWT tokens, include password hashing with bcrypt, add rate limiting for login attempts, implement refresh tokens with 7-day expiry, and make sure to handle edge cases like concurrent logins and token revocation."
How AI Sees It After Context Compression:
"User wants auth system with JWT..." (missing: bcrypt requirement, rate limiting, refresh token expiry, edge cases)
Critical details lost during summarization → incomplete implementation
3. Communicate Clearly: Context, Goals, and Concise Instructions
Strategic Communication with AI
Effective AI communication has three pillars: precise context (what the AI needs to know), clear goals (what you're trying to achieve), and concise instructions (how to get there). Master these and you'll get dramatically better results.
1. Give Precise Context, Not Everything
Give the AI precisely what it needs to succeed—no more, no less. Too much context overwhelms the AI, diluting its focus. Too little leaves it guessing, leading to poor results.
Context Strategy: Quality Over Quantity
Too Much Context
Your Prompt:
"Help me fix this authentication bug"
📎 Attached Files (47 files):
• Entire src/ folder (32 files)
• All test files (8 files)
• package.json, webpack.config.js, .env.example
• README.md, CONTRIBUTING.md, docs/
• Git history and commit messages
• ... and more unrelated files
❌ Result: AI gets lost in irrelevant files and gives generic advice about "checking your auth middleware" instead of fixing the actual bug.
Precise Context
Your Prompt:
"Fix the error where users can't log in - getting 401 errors"
📎 Attached Files:
• auth-middleware.js
• routes/login.js
• models/user.js
• error-logs.txt (last failed attempt)
✅ Result: AI immediately spots the JWT secret mismatch and provides the exact fix needed.
2. Share Your Goal and 'Why' for Better Solutions
Sometimes you know exactly what you want and can tell the AI step-by-step what to do. Other times, you should share your goal because the AI might think of a better approach. The way you phrase it determines whether the AI goes straight into implementation mode or plans first.
Communication Strategy Impact
Direct HOW to Solve
"Create a UserService class with getUserById, createUser, and updateUser methods. Use dependency injection for the database connection."
AI implements exactly the structure you specified. Fast execution when you know the right approach.
- Fast implementation
- Predictable outcome
- Good when you know the solution
- Skips planning phase
State Desired Outcome
"I need clean user management that's easy to test and maintain. Follow domain-driven design principles and keep business logic separate from data access."
AI considers architecture patterns, suggests repository pattern, service layer, and proper separation of concerns.
- AI plans before implementing
- Considers alternatives
- May find better solutions
- Good for complex problems
3. Write Prompts Concisely
Be concise and direct. Pasting in 5,000 words of LLM-generated instructions hasn't worked well for me—just tell it specifically what you want. The AI gets overwhelmed by rambling prompts and gives you generic advice instead of solving your actual problem.
Effective vs. Ineffective Prompts
Rambling Prompt
"So I have this function here and I'm not really sure what's wrong with it but it seems like it might be slow or something and I was thinking maybe you could take a look at it and see if there's anything that could be optimized or improved because I read somewhere that loops can be slow and this has a loop in it and also I'm not sure if I'm handling the edge cases correctly and maybe there are some best practices I should be following... *insert 500 word essay from ChatGPT*"
Direct Prompt
"Optimize this loop for speed—it's too slow with large arrays."
Clear, specific, actionable. The AI knows exactly what to focus on.
Essential Metadata
Context isn't just code—it includes metadata that helps the AI grasp the big picture without file-hunting.
# Project Overview
Brief description of what this project does and its main purpose.
## File Hierarchy
```
src/
  users/          # User management domain
    routes.ts     # API endpoints
    service.ts    # Business logic
    models.ts     # Data models
  payments/       # Payment processing domain
    routes.ts
    service.ts
    models.ts
```
Pro Tip
If you're using Cursor, enable "Full folder contents" and "Include Project Structure" in Cursor Settings → Features to automatically include file hierarchy in every prompt.
4. Use Domain/Vertical Folder Structure, Not Organized by Layer
Domain-Driven Organization
Stop scattering related code across controllers, services, and models folders. Group everything by domain instead—users/, payments/, orders/—so you can grab one complete folder when working on a feature, not hunt through 3+ directories every time.
Vertical vs. Horizontal Organization
Folder Structure Comparison
Traditional (Horizontal)
```
src/
  controllers/
    users.js
    payments.js
    orders.js
  services/
    users.js
    payments.js
    orders.js
  models/
    user.js
    payment.js
    order.js
```
- Related code scattered across folders
- Hard to include all relevant files
- Difficult to maintain domain boundaries
Domain-Driven (Vertical)
```
src/
  users/
    routes.js
    service.js
    models.js
    tests.js
  payments/
    routes.js
    service.js
    models.js
    tests.js
  orders/
    routes.js
    service.js
    models.js
    tests.js
```
- All related code in one place
- Easy to grab entire domain for context
- Clear domain boundaries
5. Use Tooling to Guide AI with Instant Feedback
Use automated tools to keep code functional and guide the AI with instant feedback. These tools catch errors early and give the AI clear constraints to work within.
Essential Tool Options
Category | Tool Options | Why It Helps AI |
---|---|---|
Linting | Flake8, Pylint, Ruff (Python); ESLint (JS/TS) | Catches syntax issues immediately |
Testing | pytest, unittest (Python); Jest, Vitest (JS) | Verifies AI-generated code works (sketch below) |
Type Checking | mypy, pyright (Python); TypeScript | Gives AI clear constraints |
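To make the Testing row concrete, here's a minimal pytest sketch. `slugify` is a stand-in for whatever function you asked the AI to write, inlined here so the file runs on its own. Running `pytest` after each AI edit gives the model an unambiguous pass/fail signal to iterate against:

```
# test_slugify.py -- minimal pytest sketch. slugify() stands in for a
# function the AI just generated; it's inlined so this file is runnable
# on its own with `pytest test_slugify.py`.
import re


def slugify(text: str) -> str:
    text = re.sub(r"[^a-zA-Z0-9\s-]", "", text).strip().lower()
    return re.sub(r"\s+", "-", text)


def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("What's up?") == "whats-up"


def test_empty_string_returns_empty():
    assert slugify("") == ""
```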
Python Type Hints
Use type hints to give the AI clear constraints. Avoid vague types like `Any`, `object`, or untyped dicts.
Type Specificity in Python
Vague Types
```
# Avoid these vague types
from typing import Any


def process_data(data: Any) -> Any:
    return [item.value for item in data]


class UserData:
    def __init__(self, info: Any, settings: dict):
        self.info = info
        self.settings = settings
```
- No type safety
- AI can't infer constraints
- Runtime errors likely
Specific Types
```
# Clear, specific types
from typing import List, Literal
from dataclasses import dataclass


@dataclass
class User:
    id: str
    name: str
    email: str
    age: int


def process_users(users: List[User]) -> List[str]:
    return [user.name for user in users]


@dataclass
class UserSettings:
    theme: Literal['light', 'dark']
    notifications: bool
    language: str
```
- Type safety guaranteed
- Clear AI constraints
- Runtime error prevention
Cursor's Iterate on Linting Feature
Cursor can automatically fix linting errors as you code, giving the AI immediate feedback and keeping your codebase clean. Enable this in Cursor Settings → Features → "Iterate on lints".
Immediate Feedback Loop
When the AI generates code with linting errors, Cursor automatically shows fixes. This creates a tight feedback loop that trains the AI to write cleaner code in subsequent responses.
Formatters: Pattern Machines Need Clean Patterns
Formatters like Black (Python) or Prettier (JS) do two critical things: they shrink context window size by removing formatting noise, and they give LLMs clean patterns to follow. LLMs are pattern machines—if you feed them messy code, they'll generate messy code.
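For instance, here's roughly what Black does to a cramped one-liner (the "after" is approximated from Black's documented style: double quotes, normalized spacing, one statement per line):

```
# Before formatting: valid Python, but noisy, and a bad pattern for a
# pattern-matching model to imitate.
# def build_user(name,age):return {'name':name,'age':age,'active':True}


# After Black (approximate output): consistent spacing, quoting, and layout.
def build_user(name, age):
    return {"name": name, "age": age, "active": True}
```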
6. Choose the Right Models, Languages, and Tools
Strategic Tool Selection
Your choice of model, programming language, and development environment dramatically affects AI performance. Pick tools that work well together and have strong AI support—the difference is huge.
1. Choose Models That Follow Instructions, Not Just Beat Benchmarks
Pick a model that excels at both coding and following your instructions. Some models ace benchmarks but completely ignore your project context and coding patterns. You want a model that executes intent, not just spits out code.
Model | Strengths | Weaknesses | Best For |
---|---|---|---|
Claude Sonnet 4 ⭐ | Good instruction following, context awareness, consistent patterns | Can be verbose, slower | Daily driver for most coding tasks |
OpenAI O3 | Strong reasoning, good at complex algorithms | Expensive, slower | Complex planning and architecture decisions |
Gemini 2.5 Pro | Excellent one-shot code generation, fast | Poor instruction following (personal experience) | Quick prototypes, standalone functions |
2. Stick to Popular Languages for Better AI Training Data
Use popular languages like TypeScript and Python. They have the most training data, best library support, and most AI tools were tested on these. The difference is huge—you'll get way better suggestions and fewer hallucinations.
Language | Use Case | Popular Libraries |
---|---|---|
Python | Backend APIs, Data Science, AI/ML | FastAPI, Flask, Django, Pandas, NumPy |
TypeScript | Full-stack Web Development | React, Next.js, Express, Node.js, Prisma |
3. Use AI-Integrated IDEs, Not Copy-Paste ChatGPT
Use an IDE like Cursor or Claude Code instead of copy-pasting your code into ChatGPT and expecting it to have proper context of your codebase. The time you waste shuttling files back and forth could be spent actually building.
Development Environment Impact
ChatGPT Copy-Paste
No project context, manual copy-pasting, generic solutions that don't fit your codebase.
- No project context
- Manual copy-pasting
- Generic solutions
- Integration issues
AI-Integrated IDE
Full codebase awareness, auto-apply changes, and project-specific solutions that actually work.
- Full project context
- Instant application
- Project-specific solutions
The Stack Effect
When you combine the right model, popular language, and AI-integrated IDE, the results are better than using any one piece alone. The AI understands your code better, makes fewer mistakes, and integrates more easily into your workflow.
Recommended Stack
Cursor + Claude Sonnet 4 + TypeScript/Python/React = The sweet spot for most development tasks when you are first learning.
7. Perfect One Pattern, Then Clone It
Pattern-Based Development
Spend extra time getting one file, module, or function exactly how you want it, then use it as a template for similar tasks. Once you have a solid example, the AI can clone it across other domains fast—often in one shot.
Why This Works
For CRUD apps, once you nail the layers, folder structure, and error handling in one domain, copying that pattern is way faster than starting from scratch each time. The AI has a concrete example to follow instead of guessing what you want.
```
src/
  users/          # ✅ Perfect this domain first
    routes.ts     # All CRUD endpoints
    service.ts    # Business logic
    models.ts     # Data types
    schemas.ts    # Validation schemas
    tests.ts      # Unit tests
  products/       # 🎯 Clone the users pattern here
    routes.ts     # Copy users structure
    service.ts    # Same layers and error handling
    models.ts     # Same naming conventions
    schemas.ts    # Same validation approach
    tests.ts      # Same test patterns
```
Get the users domain perfect first—nail the error handling, validation, and file structure. Then when scaffolding the new domain, drag the perfect domain into context the first time so it has a pattern to copy. Tell the AI: "Use the users domain as a template to build the products domain with the same structure."
Refactor Early Before Bad Patterns Spread
LLMs will look at your messy 2,000-line controller with database calls scattered everywhere, think "oh, this is how we do things here," and replicate that exact mess in every new feature.
The Snowball Effect
Bad patterns compound exponentially when AI replicates them. A messy controller becomes ten messy controllers. Fix the root cause before it spreads—trust me, cleaning up later is brutal.
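To make the cleanup concrete, here's a minimal before/after sketch of the kind of refactor the prompts below ask for (route, table, and class names are hypothetical):

```
# Before: the route handler talks to the database directly -- exactly the
# pattern the AI will copy into every new feature. (Names are hypothetical.)
#
# @app.get("/users/{user_id}")
# def get_user(user_id):
#     row = db.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchone()
#     return dict(row)


# After: database access lives in a repository class, so controllers stay
# thin and the AI has a clean layering pattern to replicate instead.
class UserRepository:
    def __init__(self, db):
        self.db = db

    def get_by_id(self, user_id: int):
        row = self.db.execute(
            "SELECT * FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return dict(row) if row else None
```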
Quick Refactor Prompt:
"This file is doing too many things. Please refactor, but don't over-engineer it. Plan first."
Extract Service Prompt:
"Move all the database calls from this controller into the repository layer like we do in <insert example file>."
Codebase Analysis Prompt:
"Analyze my codebase and identify potential issues like: large files, mixed concerns, duplicated code, or architectural problems. Give me a prioritized list of what to refactor first."
When You Don't Have an Example Yet
If you haven't built it yet, search GitHub for similar examples. For UI projects with existing backends, save the OpenAPI/Swagger spec in your project so the AI knows exactly what endpoints and data structures to work with.
API Integration Tip
If I'm building a UI and already have the backend built, I'll temporarily save the OpenAPI/Swagger spec, or at minimum the cURL commands and data models, into a file in the UI project. This gives the AI the exact contract and lets it run cURL requests to investigate when it hits issues. Way better than the AI guessing what endpoints exist or what the data looks like.
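With a FastAPI backend, which serves its spec at /openapi.json by default, a one-off script like this sketch can pull the contract down (the URL and output filename are assumptions; adjust for your framework):

```
# Sketch: pull the backend's OpenAPI spec into the UI project so the AI
# has the exact API contract. The URL assumes a local FastAPI backend
# (which serves /openapi.json by default); adjust for your framework.
import json
import urllib.request

SPEC_URL = "http://localhost:8000/openapi.json"  # assumed local backend

with urllib.request.urlopen(SPEC_URL) as resp:
    spec = json.load(resp)

# Saved at the project root; delete it once the integration work is done.
with open("openapi-spec.json", "w") as f:
    json.dump(spec, f, indent=2)

print(f"Saved {len(spec.get('paths', {}))} endpoint paths")
```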
8. Use Development Tools and Scripts to Work Smarter
Optimize Your Development Workflow
1. How to Get Unstuck
When you get stuck, stop asking the AI to fix the problem directly. Instead, shift to debugging mode with prompts that build investigation tools:
Investigation Prompts:
- "Don't try and fix it, just add detailed logs, then run a test."
- "Let's stop and build a debug view that can help us troubleshoot this."
- "Help me write a script to inspect the current state (e.g., cURL requests, Docker inspection scripts, viewing Kafka topics, database queries)."
Switch Models When Really Stuck
Sometimes the context you're providing is digging your hole deeper without you realizing it. When this happens, go to a different LLM (e.g., ChatGPT) and paste your question with a minimal set of context.
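Returning to the investigation prompts above: "add detailed logs, then run a test" typically yields something like this sketch (the auth flow and fake user store are hypothetical placeholders), and the resulting trace usually pinpoints the failure faster than blind fix attempts:

```
# Sketch of the "add detailed logs, then run a test" approach. The auth
# functions and FAKE_DB are hypothetical stand-ins; the point is the log
# lines that turn a silent failure into a readable trace.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger(__name__)

FAKE_DB = {"alice": "tok_123456789"}  # placeholder user store


def authenticate(username: str, token: str) -> bool:
    log.debug("auth attempt user=%s token_prefix=%s", username, token[:6])
    stored = FAKE_DB.get(username)
    log.debug("user found=%s", stored is not None)
    valid = stored == token
    log.debug("token valid=%s", valid)
    return valid


if __name__ == "__main__":
    authenticate("alice", "tok_wrong")  # run it and read the trace
```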
2. Build Custom Tools When Debugging Gets Stuck
When you're stuck debugging the same issue for hours, stop. Build a different tool to attack the problem from a new angle. I've solved more bugs with quick scripts that bypass the normal flow than with traditional debugging.
Tool Type | When Stuck On | Prompt to Use |
---|---|---|
cURL Scripts | API behaving differently than expected | "Create cURL commands to test this API endpoint with different payloads" |
Database Bypass | API returning wrong data | "Create a script that queries the database directly and compares to API response" (sketch below) |
Integration Tests | Hard-to-reproduce bugs | "Build an integration test that reproduces this exact scenario" |
Debug Interface | Complex state management | "Build a simple admin page to inspect this component's state" |
Environment Reset | Inconsistent test results | "Create a script that resets the test environment / database to a clean state" |
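For example, the "Database Bypass" row might produce a script like this sketch (the SQLite file, table name, endpoint, and user id are all placeholder assumptions for your own setup):

```
# Sketch of a "database bypass" debug script: read a row straight from the
# database and diff it against the API's response. The SQLite file, table,
# endpoint, and user id are placeholder assumptions.
import json
import sqlite3
import urllib.request

USER_ID = 42  # hypothetical record to inspect

conn = sqlite3.connect("app.db")  # assumed local SQLite database
conn.row_factory = sqlite3.Row
row = conn.execute(
    "SELECT id, name, email FROM users WHERE id = ?", (USER_ID,)
).fetchone()
db_user = dict(row) if row else None

with urllib.request.urlopen(f"http://localhost:8000/users/{USER_ID}") as resp:
    api_user = json.load(resp)  # assumed JSON endpoint

print("DB :", db_user)
print("API:", api_user)
for key in ("id", "name", "email"):
    if db_user and db_user.get(key) != api_user.get(key):
        print(f"MISMATCH on {key!r}: db={db_user[key]!r} api={api_user[key]!r}")
```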
3. Explicitly Direct Tool Use
If you want the AI to invoke a particular tool, tell it explicitly. The AI has access to tools but won't always use them unless you ask. I've wasted time watching the AI guess about what imports it needs to update when I could have just said "grep for that import pattern."
Tool | When to Request | Example Request |
---|---|---|
Read File | Need to check config or code not in context | "Read config.json to find the API key" |
Edit File | Need to create or modify files | "Create a new component file UserProfile.tsx" |
Web Search | Need current information or trends | "Search the web for latest React 18 features" |
Grep/Search | Find patterns across codebase | "Grep my codebase for all TODO comments" |
Bash | Run tests, check docker logs, system commands | "Run npm test to check if tests pass" |
cURL | Test API endpoints when hooking up UI to backend | "Use cURL to test the POST /users endpoint" |
Pro Tip: Enable Hot Reload
Use `uvicorn main:app --reload` or similar hot reload commands to avoid the start-stop-restart cycle. The AI makes changes and you see them instantly—no more killing zombie processes or cluttering context with startup logs.