Setup & Gotchas
Install, first run, and the Windows thing
Connect it to everything
MCPs, skills, integrations. Stop copy-pasting between tools.
The human is the kernel
You are the privileged environment. Make the AI use its own privileges.
Document, document, document
Documentation is the persistence layer. AI is great at writing it.
Use GitHub. Use repos.
Everything in version control. Even your notes.
Make it write less
Default output is too long. You are the editorial check.
Vocabulary & how to talk to AI
Harness, MCP, subagent, hook, plugin. Plus prompt habits.
1 Setup & Common Gotchas
Three install paths
Claude Code ships in three flavors. All three run the same underlying engine and share the same CLAUDE.md, skills, permissions, MCPs, and memory. They differ only in interface.
| Path | How to get it | Best for |
|---|---|---|
| VSCode extension | Open the Extensions panel in VSCode, search "Claude Code", install. Sign in from the Claude side panel. The extension prompts you for any missing prerequisites on first run. | Most users. You get a side panel beside your editor. |
| CLI | Install Node.js 18+ from nodejs.org, then npm install -g @anthropic-ai/claude-code. Run claude from inside any project folder. | Terminal-first workflows. Works with any editor. Best for scripting, automation, and CI. |
| JetBrains plugin | Install "Claude Code" from the JetBrains marketplace inside IntelliJ, PyCharm, WebStorm, etc. | JetBrains shops. |
Pick the one that matches the editor you already live in. The rest of this page applies regardless of which path you choose.
Sign in
First run prompts you to authenticate. Three options:
- Claude.ai account (Pro or Max plan). OAuth flow in your browser. Easiest for individuals.
- Anthropic API key from console.anthropic.com. Pay-as-you-go pricing. Useful if your workplace VPN breaks the OAuth flow.
- Enterprise: AWS Bedrock, Google Vertex, or your company's Claude gateway. Set the relevant environment variables per your org's docs.
The Windows thing
Claude Code is built around POSIX shell semantics, so on Windows you need one of two environments:
- Git Bash (easiest): install Git for Windows from git-scm.com/download/win. It bundles Git Bash. Whether you use the CLI, the VSCode extension, or the JetBrains plugin, the engine will find Git Bash on PATH.
- WSL2 (heavier, more capable): Microsoft's full Linux subsystem. Better long-term if you do real dev work on Windows. Run wsl --install in PowerShell as admin, restart, then install Node and Claude Code inside the Ubuntu shell.
Drop in your first CLAUDE.md
Before you do anything else in a new repo, create a CLAUDE.md file at the root. Even a 10-line version pays off immediately:
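A hedged example of what those 10 lines might look like. The project details and rules here are placeholders, not prescriptions; describe your own repo:

```markdown
# CLAUDE.md

## Project
Acme dashboard. React + TypeScript front end, Postgres behind a REST API.

## Conventions
- TypeScript strict mode. No `any`.
- Tests live next to the file they test.

## Hard rules
- Never commit directly to main. Branch + PR.
- Ask before touching anything under migrations/.
```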
Or run the built-in /init command inside Claude and it'll generate a starter CLAUDE.md for you based on the repo it sees.
Common gotchas in the first week
| Gotcha | What's actually happening | Fix |
|---|---|---|
| "Requires Git Bash" error on Windows | You launched from PowerShell or CMD. Claude Code is built around POSIX shell semantics. | Install Git for Windows, launch from Git Bash. Or use WSL2. |
| Constant permission prompts | By default Claude asks before running shell commands, editing files, or hitting the web. The first day feels like a firewall. | Approve common safe commands ("always allow") as they come up. Within a few sessions the prompts get rare. |
| Claude doesn't see your project context | You launched claude from your home directory or some random folder, not from inside the actual project. | cd into the project root first. Claude reads the working directory's files and CLAUDE.md. |
| It gets slower / dumber as the session goes on | You're filling the context window. Long research, lots of file reads, big diffs all eat space. | Use /compact to summarize, or /clear for a fresh start. Use subagents for exploration (Section 6). |
| Auth keeps expiring or asking again | Usually a corporate VPN or proxy interfering with the auth callback. | Try the API key flow instead of OAuth login. Get a key from console.anthropic.com. |
| "It can't find git" or other tools | The shell Claude launched in doesn't have those tools on PATH. Git Bash on Windows includes most basics; PowerShell shells often don't. | Verify in your terminal first (git --version, node --version). If they work there, Claude will see them. |
| Claude edits the wrong file or wrong line | Almost always means it didn't read the file fresh before editing. Or you have the file open elsewhere with unsaved changes. | Save your editor's changes first. If it persists, ask Claude to re-read the file before editing. |
Skip the prompt storm with permissions
By default Claude Code asks before running shell commands, editing files, or hitting the web. The first day feels like a firewall. Two ways to reduce the prompts:
- Approve as you go. When prompted, choose "Always allow" for safe commands. After a few sessions the prompts get rare.
- Pre-populate settings.json. Add a permissions.allow array with the commands you trust:
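A minimal sketch of what that array can look like. Treat the entries as illustrative; the exact tool-pattern syntax is in Claude Code's settings documentation:

```json
{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(npm run test:*)"
    ]
  }
}
```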
There are also tools that automate this. The fewer-permission-prompts skill (and others like it) scans your past sessions for commands you keep approving and writes them to settings.json for you. Conservative defaults are good. The point is to skip the prompts you would always click yes to anyway, not to disable safety.
Your first session
cd into a real project → run claude → type /init to generate a starter CLAUDE.md → ask it to do one small task you'd normally do yourself. Don't try anything ambitious on day one. The whole point is to feel the loop, not to ship something.
2 Connect it to everything
Claude Code becomes a coworker the moment it can read your calendar, post to Slack, query your database, browse the web, edit your repos, transcribe your meetings, and trigger your deploys.
Most of that ships free as MCP servers (Model Context Protocol). The rest is a 20-line skill away.
If you find yourself copy-pasting from one tool into Claude and then copy-pasting Claude's output into another tool, you have already failed. Wire it directly.
What you can wire in
Anything with an API or CLI is reachable. Common categories:
- Calendar and email (Google Workspace, Outlook)
- Chat (Slack, Teams, Discord)
- Code hosts (GitHub, GitLab, Bitbucket)
- Databases (Supabase, Postgres, SQL Server)
- Browser automation (Playwright, Puppeteer, Chrome DevTools)
- Cloud platforms (Cloudflare, Netlify, Azure, AWS)
- Audio recording and transcription (Plaud, Otter)
- Secret managers (1Password, Vault, environment files)
You are connecting a powerful automation tool to admin tasks. Treat the access boundary the way you would for a junior employee: give it the systems it needs to do the work, withhold the ones where the blast radius is too high, and audit what it does.
The fix when an MCP is too heavy: write a script. A 200-line Python file using the same OAuth credentials as the MCP can give you the same functionality with zero background processes. Reach for the script when the integration is for one-off tasks. Reach for the MCP when you need it persistently in conversation.
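A sketch of that script, assuming a calendar service. The endpoint URL and the CAL_TOKEN environment variable are hypothetical placeholders; substitute your service's real API and credentials:

```python
# Sketch: replacing a heavy calendar MCP server with a one-off script.
# API URL and CAL_TOKEN below are HYPOTHETICAL; swap in your real service.
import json
import os
import urllib.request

API = "https://calendar.example.com/v1/events"  # hypothetical endpoint


def fetch_events(day: str) -> list:
    """One HTTP call, using the same OAuth token the MCP server would use."""
    req = urllib.request.Request(
        f"{API}?date={day}",
        headers={"Authorization": f"Bearer {os.environ['CAL_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["events"]


def summarize(events: list) -> str:
    """Collapse raw events into the few lines you'd actually hand Claude."""
    return "\n".join(f"{e['start']} {e['title']}" for e in events)
```

Run it from a skill or a cron job; no background server process to babysit.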
Skills turn multi-tool workflows into one-line invocations
Skills (slash commands) are reusable workflows. Each is a markdown file with a prompt and a few rules. Examples of what people define:
| Command | What it does |
|---|---|
| /closeout | End-of-session: updates state file, commits, deploys, posts to Slack. |
| /deploy | Pre-flight checks (right repo, right protocol, right platform), then ships. |
| /standup | Generates a standup summary from yesterday's commits. |
| /triage | Reads new emails, classifies them, drafts replies for review. |
| /release | Drafts release notes from a git range. |
They exist because typing the long version was annoying enough that defining the skill paid back inside a week.
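A skill is just a markdown file under .claude/commands/. A hypothetical /standup could look like this (the steps are illustrative):

```markdown
<!-- .claude/commands/standup.md -->
Generate my standup summary.

1. Run git log --since=yesterday --oneline for this repo.
2. Group the commits by project area.
3. Output three sections: Done, In progress, Blocked.
4. Keep it under 10 lines. No filler.
```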
Plugins bundle skills, hooks, and commands
Plugins are how power-user setups travel. A plugin is a bundle of skills, hooks, and commands shipped as one installable unit. The Claude ecosystem has plugins for code review, git operations, Supabase work, browser automation, and more. Install one and you inherit a small workflow library someone else built.
You can also publish your own. Once you have ~5 skills you reuse across projects, packaging them as a plugin is the difference between you having a personal stash and your team having a shared toolkit.
3 The human is the kernel
Your runtime is the privileged environment. The AI is a process running inside it.
This inverts the usual framing. AI is not replacing you. AI is a userspace program that calls your kernel only when it needs to touch a system it cannot reach: production credentials, your physical signature, a judgment call about a client relationship. The AI is compute. The human is the kernel.
That makes one habit non-negotiable: when the AI keeps asking you to do things it could do itself, tell it to knock it off.
| What it asks | The right answer |
|---|---|
| "Could you run git status and paste the output?" | No. It has Bash. Use it. |
| "Can you tell me what's in that file?" | No. It has Read. Use it. |
| "Would you mind opening the dashboard?" | No. It has a browser MCP. Use it. |
| "What was in the last commit?" | git log -1. It can run that. |
Every time you do work the AI could have done, you train it to keep deferring. Every time you push back, you train it to use the privileges it has.
The same principle applies in reverse. When something genuinely needs the kernel (your credentials, your signature, your judgment), the AI delegates and waits. That handoff is the system working correctly. The handoff for things the AI could do itself is the system being lazy.
Sub-processes: the same idea, one layer down
The AI can spawn its own sub-processes (subagents) for the same reason you delegate to it: a sub-process protects the parent context.
Why subagents matter:
- Save tokens. A research task that reads 30 files in a subagent returns 200 words to the main thread. The same task in the main thread eats your whole session.
- Parallelize work. Three independent questions can run as three subagents at once. The main thread waits for all three answers, then proceeds.
- Stay specialized. Some subagents are tuned for one job (code search, security review, planning). They beat general-purpose for their specialty.
- Quarantine bad context. If a research thread goes down a rabbit hole, the subagent absorbs the mess and you only see the final answer (or the failure).
When to send one: any time you catch yourself thinking "let me check..." Stop and consider whether you can hand it off and keep your main thread clean.
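The handoff is just phrasing in your prompt. Something like:

```
Spawn a subagent to find where rate limiting is implemented.
Have it report back file paths plus a two-sentence summary only.
Don't pull the files themselves into this conversation.
```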
4 Document, document, document
Documentation is not overhead. It is the persistence layer for AI collaboration.
Every session starts cold. The model has no memory of what you decided last week, why you rejected the obvious approach, who the stakeholders are, or what the constraints really are. The documentation IS that memory. Without it, you re-explain the project on every session and the AI guesses at the parts you skipped.
The good news: AI is excellent at writing the documentation it needs. Ask it to summarize your decisions, draft the ADR, write the runbook, capture the architecture. Read what it wrote, fix what is wrong, commit. Docs are rocket fuel for AI, and AI is rocket fuel for docs.
The minimum viable doc layer
- CLAUDE.md / AGENTS.md at the repo root: who you are, what the project is, your style, your hard rules. The AI reads this on every session start. The filename depends on the tool: Claude Code reads CLAUDE.md; other agents (Cursor, Cline, Aider) read AGENTS.md or .cursorrules. Same idea, different filename. Some tools read both. Mine is ~400 lines. Yours can start at 20.
- STATE.md at the root: current blockers, in-flight decisions, what changed last session, what needs to happen next.
- ADRs in docs/adr/: why we chose X over Y. Dated. Immutable. Supersede, never delete.
- Wiki / notes: people, projects, domain concepts, anything that took time to learn the first time.
CLAUDE.md vs memory: not the same thing
These get conflated. They are different mechanisms with different purposes.
- CLAUDE.md is the repo handshake. Lives in the project, gets committed, applies to anyone working in that repo. Use it for project context, conventions, and shared rules.
- Memory is personal and persistent. Lives outside the repo (Claude Code stores it under ~/.claude/projects/.../memory/), auto-loads at session start, and follows you across every session in that project. Use it for things about you (your preferences, lessons learned, things to never do again) rather than about the project.
A correction like "Medallus runs on Azure" goes in CLAUDE.md (project fact, applies to anyone). A correction like "Bert prefers tables over prose" goes in memory (personal preference, follows you everywhere).
Sample CLAUDE.md fragment:
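A plausible stand-in, reusing the project-level rules this page mentions elsewhere (your entries will differ):

```markdown
## Rules learned the hard way
- Medallus runs on Azure. Default to Azure Static Web Apps.
- Wiki first. Check wiki/ before grepping the codebase.
- Never commit directly to main. Branch + PR.
```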
The rule that matters most
Every time you correct the AI, write down the correction. Whatever made you frustrated this session is what will make you frustrated next session unless you capture it.
Real entries from mine:
| The friction | The rule it became |
|---|---|
| Pivoted to Netlify when I asked about Medallus | "Medallus runs on Azure. Default to Azure Static Web Apps." |
| Searched the codebase from scratch instead of reading the wiki | "Wiki first. Check wiki/ before grepping." |
| Replied "Continue." to multi-part pasted content | "Wait silently when I'm pasting. Don't reply with 'Continue.'" |
Each is one line. Each kills a recurring frustration.
5 Use GitHub. Use repos.
Everything in version control. Including the notes about how to use the AI.
This sounds obvious. Most people do not do it. They run the AI inside one giant uncommitted folder. They lose work to a bad refactor and have no way to recover. They cannot answer "what did this look like last week?" They cannot share context with another collaborator (human or AI) without manual export.
What "use repos" means in practice
- Every project, even small ones, gets a git repo. git init on day one, not day thirty.
- Every meaningful change gets a commit. Atomic. Reviewable. Revertable.
- Push to GitHub. Local-only repos protect against nothing.
- Even your notes are a repo. Mine has hundreds of commits and a real history.
- Branches for experiments. Merge what works, delete what does not.
- Pull requests even when you are working solo. The PR view is a sanity check the AI cannot give you.
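The day-one version of that list, as commands. The repo and branch names are placeholders, and gh (GitHub's official CLI) is one way to push; creating the repo in the web UI and adding it as a remote works too:

```shell
# Day one: version control before anything else
git init
git add -A
git commit -m "Initial commit"

# Push it somewhere. Local-only repos protect against nothing.
gh repo create my-notes --private --source=. --push

# Experiments live on branches; merge what works, delete what doesn't
git switch -c experiment/new-layout
```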
AI works dramatically better with version control. It can see the diff. It can read the commit history. It can trace why a file looks the way it does today. It can diff against any prior state to answer "what did this used to do?" Without git, you and the AI are both flying blind in a single mutable folder.
Combined with documentation (Section 4), the repo becomes the project's persistent brain. Code, decisions, and history all live in one place the AI can read on demand.
6 Make it write less
Default AI output is too long. Your job is to cut.
The model was trained on academic writing, marketing copy, and corporate documentation. All three reward verbosity. Without intervention you will get five sentences where one would do, three adjectives where none belong, and a thoughtful closing paragraph that restates the thing you just read.
You are the editorial check. Teach it explicitly. A line for your CLAUDE.md:
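One possible wording, built from the tells listed below; phrase it however matches your voice:

```markdown
## Output style
Default to short. One-line answers where one line suffices.
No setup paragraphs, no triplet rhythms, no closing aphorisms.
If a table fits, use a table. Cut adjectives.
```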
Reinforce the rule every time the AI reverts. Reversion happens often, especially in long-form pages, proposals, and anything emailed to executives. Those formats pull the model toward thought-leadership voice.
Tells to kill on sight
- Setup paragraphs ("Most people..." / "In today's world...")
- Triplet rhythms ("no clever prompt, no secret setting, no hidden flag")
- Pull quotes that summarize what you just said
- Closing aphorisms ("That's the whole game.")
- "Your future self will thank you"
- Em dashes used as a rhetorical device
- Any sentence that adds nothing to the previous sentence
If you tolerate verbose output, you ship verbose output. If you want Claude to work in your style and write in your voice, you have to configure it to do so.
7 Vocabulary & how to talk to AI
The page above uses words that mean specific things. Here is the short version, plus the prompt habits that go with them.
Terms
| Term | Definition |
|---|---|
| IDE (Integrated Development Environment) | A code editor with built-in tools (file tree, debugger, terminal, extensions). VSCode, the JetBrains family (IntelliJ, PyCharm, WebStorm), and Cursor are IDEs. Claude Code can run as an extension inside one. |
| CLI (Command-Line Interface) | A text-based way to drive software from a terminal. Claude Code's CLI is the claude command. Useful for terminal-first workflows, scripting, and CI. |
| SDK (Software Development Kit) | A library you import into your own code to use a service programmatically. The Claude Agent SDK lets you build your own harness on the same engine that powers Claude Code. |
| Harness | The runtime that runs the model and exposes tools to it. Claude Code is a harness whether it's running as a CLI, an IDE extension, or via the SDK. Cursor, Cline, and Aider are other harnesses. |
| Model | The actual AI (Opus, Sonnet, or Haiku for Claude). Runs inside the harness and uses the tools the harness provides. The harness is the body. The model is the brain. |
| Tool | A function the harness exposes to the model: read a file, run a bash command, search the web, edit code. Tools are how the model actually does things in your environment. |
| Token | The unit the model thinks in. Roughly 0.75 of an English word. Both your input and the model's output cost tokens. Long sessions, big files, and verbose responses burn through them. |
| Context window | The model's working memory for the current session, measured in tokens. Finite. Once you fill it, performance degrades. Subagents and /compact are how you protect it. |
| MCP (Model Context Protocol) | A standard for connecting external services to the harness as tools. Slack, calendar, databases, browsers can all be MCPs. |
| Subagent | A sub-process the main agent spawns for one task. Has its own context window. Returns just the answer to the parent. |
| Skill (slash command) | A reusable workflow. Markdown file with a prompt and rules. Invoked with /name. |
| Hook | A shell command the harness runs automatically at lifecycle events (before a tool runs, after a tool runs, on session start, on stop). Enforces behavior at the harness level instead of relying on the model to remember. |
| Plugin | A bundle of skills, hooks, and commands shipped together as one installable unit. The way power-user setups travel. |
| CLAUDE.md / AGENTS.md | A markdown file at the repo root the harness loads at session start. Tells the model who you are, project context, your hard rules. Different harnesses use different filenames (CLAUDE.md, AGENTS.md, .cursorrules) but the idea is the same. |
| Memory | A persistent personal store outside any single session. Auto-loads at session start. Different from CLAUDE.md (which is repo-level and committed); memory is personal and follows you across every session in a project. |
| Plan mode | A mode where the model writes a plan before it can edit anything. Forces planning instead of one-shot editing. Toggle with shift+tab in Claude Code. |
| Permissions | A settings.json structure that pre-approves (or denies) tools and commands so you stop getting prompted on every routine action. |
How to talk to AI
- Direct, declarative. "Add a button to the header" beats "could you maybe add a button?" Politeness wastes tokens and signals you don't expect compliance.
- Be specific. Name files, paste error messages verbatim, give exact strings when wording matters. Vague input gets generic output.
- Tell it what NOT to do. Negative space is information. "Don't change the styles" or "don't add tests for this" prevents whole categories of unwanted output.
- Quote yourself. If exact phrasing matters, hand it the phrasing. "Use this verbatim: ..." is faster than describing what you want.
- Push back. When the output is wrong, lazy, or off-voice, say so plainly. The model adjusts. Tolerating mediocre output trains it that mediocre is acceptable.
- Plan before big changes. "List the files you plan to edit and what changes go in each. Wait for my OK." Catches divergence before code is written.
- Ask for summaries. Long sessions drift. "Summarize what we've decided in this session" is both a memory check and a context compactor.