The AI Corner
How to Use Claude Like the Top 1% of Users
Ruben Dominguez
Apr 27, 2026
Source: The AI Corner · Author: Ruben Dominguez · Date: April 12, 2026
Most people use Claude like a vending machine: open it, type something, get an answer, close the tab. Every session starts from zero — Claude has no idea who you are, what you're building, or how you think — so the output comes out generic, and the user walks away mildly disappointed.
The people getting genuinely different results don't treat Claude as a chatbot. They treat it as a system: they onboard it once, build structure around it, and let the returns compound every day after. This article is that system — files, prompts, Cowork hacks, context tricks, and the workflows that actually move the needle.
Part 1 — The File System (set it up once, benefit forever)
The core idea is simple: Claude reads a fixed set of files at the start of every session. Those files tell it who you are, how you work, and what good output looks like for you. You stop re-explaining yourself. Claude stops starting from zero.
The model itself is improving fast, but context is still the biggest lever a user has in 2026. Five files do most of the heavy lifting.
about-me.md — who you are, before every task
A short profile Claude reads first. What to put in it:
- Your role and industry.
- What you're focused on this quarter.
- Decisions you've already made that Claude should build on, not relitigate.
- Your single biggest current goal.
This stops Claude from suggesting things you've already ruled out and frames every answer around your actual situation.
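A minimal sketch of what this file might contain. Every detail below is a placeholder for illustration, not from the article:

```markdown
# About Me

- **Role:** Head of Growth at a 12-person B2B SaaS startup.
- **This quarter:** launching a self-serve onboarding flow.
- **Decisions already made:** we write our own content (no agencies);
  we are not raising funding this year.
- **Biggest current goal:** reach 100 paying teams by Q3.
```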
voice-profile.md — the file most people skip
This is the file that makes the difference between Claude writing in your voice versus producing the corporate-email tone that every AI defaults to. Your voice is your beliefs, your contrarian takes, the rhythm of your sentences, the things you find cringe. A generic prompt will never capture that.
The author's recommended way to build it: ask Claude to interview you in the role of a sharp journalist. Tell it to ask hard questions about how you think, what you believe, and what you would never say. The interview surfaces things about your own voice you didn't know were there. Save the resulting profile, and every piece of content Claude writes for you afterward starts sounding like you.
anti-ai-writing-style.md — taste defined by what you reject
A list of everything Claude should never sound like when writing as you. Words, structures, tones, formatting habits.
A starter banned list from the article: utilize, synergy, leverage, foster, delve, tapestry, testament, showcase, pivotal, underscore. Plus structural offenders: long throat-clearing intros, summary paragraphs at the end, rule-of-three adjective chains ("bold, agile, and innovative"), excessive bolding.
The intuition: telling Claude what good looks like is hard, but telling it what to never produce is concrete and easy to enforce.
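A starter version of the file, seeded with the article's banned list (the structural rules mirror the offenders above; the exact wording is illustrative):

```markdown
# Anti-AI Writing Style

## Banned words
utilize, synergy, leverage, foster, delve, tapestry,
testament, showcase, pivotal, underscore

## Banned structures
- No long throat-clearing intros.
- No summary paragraph at the end.
- No rule-of-three adjective chains ("bold, agile, and innovative").
- No excessive bolding.
```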
The Cowork folder — a four-folder structure on your machine
Claude Cowork (Anthropic's desktop agent) reads files directly from your computer. The recommended layout:
- ABOUT ME/ — your identity and writing rules (the files above live here).
- PROJECTS/ — one subfolder per project, each containing a brief, drafts, and references.
- TEMPLATES/ — finished work you reuse as patterns (e.g. a newsletter issue you liked).
- CLAUDE OUTPUTS/ — the only place Claude is allowed to deliver new work.
Point Cowork at this root folder once. From then on, when you ask it to do something, it reads your full context before touching anything: it knows what project it's on, what your templates look like, and where the output goes.
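Creating the layout is a one-time step. A minimal sketch in Python; the root folder name is a placeholder, so point Cowork at whatever root you actually choose:

```python
from pathlib import Path

# Root folder name is a placeholder.
root = Path("claude-workspace")

# The four folders from the recommended layout.
for name in ["ABOUT ME", "PROJECTS", "TEMPLATES", "CLAUDE OUTPUTS"]:
    (root / name).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in root.iterdir()))
```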
Global Instructions — rules Claude follows before every task
Set once in Settings → Cowork → Edit Global Instructions. Useful defaults:
- Always read ABOUT ME/ before starting.
- Always read the matching PROJECTS/ subfolder.
- Only deliver work in CLAUDE OUTPUTS/.
- Use this naming convention: project_content-type_v1.ext.
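The naming convention is mechanical enough to express as a one-line helper; a sketch with made-up example values:

```python
def output_filename(project: str, content_type: str, version: int, ext: str) -> str:
    """Build a filename following the project_content-type_v1.ext convention."""
    return f"{project}_{content_type}_v{version}.{ext}"

# Example values are made up for illustration.
print(output_filename("newsletter", "draft", 1, "md"))   # newsletter_draft_v1.md
print(output_filename("acme", "blog-post", 2, "docx"))   # acme_blog-post_v2.docx
```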
Roughly two hours to build all of this properly. After that, every session starts from full context, and the compounding starts immediately.
Part 2 — Prompting Shifts That Actually Change Outputs
Most people think prompting is about finding the magic phrase. It isn't. It's about giving Claude the right structure to reason inside.
Golden rule: show your prompt to a colleague who knows nothing about the task. If they'd be confused, Claude will be too.
Stop giving orders. Start asking questions. (Socratic prompting)
The single highest-leverage shift in the article. Instead of telling Claude what to produce, ask it what it would need to know to do the task well.
The template the author recommends:
I want to [TASK] so that [SUCCESS CRITERIA].
First, read my folder. Then ask me questions.
Refine the approach with me before you execute.
What happens: Claude generates a clickable form of clarifying questions, you answer them, Claude shows you a plan, you approve or redirect, and then it executes. If something is off mid-task, you redirect again and it recalibrates.
Why it works: it forces unstated assumptions to the surface — yours and Claude's — before any output is committed to. The author calls this the single pattern responsible for more output-quality improvement than anything else in the piece.
Use XML tags for complex prompts
Power users structure prompts the way they'd structure a spec, using XML-like tags so Claude can process each section cleanly:
<context>
[background on the situation]
</context>
<task>
[what you want produced]
</task>
<constraints>
[format, length, tone, what to avoid]
</constraints>
<examples>
[one or two samples of what good looks like]
</examples>
Output consistency goes up significantly because Claude no longer has to guess which sentence in your wall of text is the actual instruction versus background versus a constraint.
Give Claude a role and a reason
Don't just assign a role — explain why the role matters. Compare:
- ❌ "Summarize this spreadsheet."
- ✅ "You are a senior financial analyst. I need this to help a non-technical founder understand their burn rate. Prioritize clarity over precision."
The second framing produces a completely different output, because Claude can generalize from the motivation (a non-technical founder needs to understand burn) far more than from the bare instruction.
Build skills for repeatable workflows
Every time you re-explain your preferences in a new session, you're paying a tax. Claude Skills are saved, reusable workflows that Claude triggers automatically when the context matches. You teach it once (refine the output until it's right, then save the process); every future session applies it without you re-explaining your newsletter format, code-review preferences, or output structure.
This is the practical face of context engineering, which the author argues has replaced prompt engineering as the real leverage point in 2026: the model is rarely the bottleneck, context almost always is.
Interview Claude before asking it to produce
For larger or ambiguous tasks, flip the script: send a minimal prompt and explicitly ask Claude to interview you first. It will ask about implementation details, edge cases, and tradeoffs you wouldn't have surfaced on your own. This consistently beats jumping straight to production — for articles, strategy docs, system designs, and agent builds alike.
Make Claude take the work seriously
There's a specific technique for tasks where you want Claude to reason carefully rather than produce fast: tell it the stakes. Who will see this. What failure looks like. Why it matters. The model responds to framing — give it a reason to care, and the depth of reasoning visibly increases.
Part 3 — Cowork Hacks Most People Miss
Cowork is the desktop agent that reads local files, runs sub-agents in parallel, and connects to outside apps. The gap between people using it well and people using it as a fancy chatbot is large and growing.
Use outcome-based descriptions, not step-by-step
Cowork plans better when you describe the destination, not the route.
- ❌ "Open this file, copy column B, then paste into a new doc, then…"
- ✅ "Analyze this spreadsheet and produce a Word report summarizing spending by category, with an executive summary and a table of the top 5 expenses."
Claude plans the work, you review the plan, you approve or redirect, it executes.
Process files in parallel
A concrete example from the article: processing 10 files one-by-one takes ~30 minutes; processing them in parallel via sub-agents takes ~4 minutes. The trick is to phrase the task so the parallelism is obvious: "Process each of these 10 files and produce a separate one-page summary for each one." Cowork's sub-agents handle the fan-out automatically.
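Cowork's sub-agents do the fan-out for you, but the underlying pattern is the familiar map-over-a-worker-pool. A rough Python analogy, where `summarize` is a stand-in for whatever a sub-agent does per file:

```python
from concurrent.futures import ThreadPoolExecutor

def summarize(path: str) -> str:
    # Stand-in for the real per-file work a sub-agent would do.
    return f"One-page summary of {path}"

files = [f"file_{i:02d}.txt" for i in range(10)]

# Fan out: all 10 files are processed concurrently instead of one-by-one.
with ThreadPoolExecutor(max_workers=10) as pool:
    summaries = list(pool.map(summarize, files))

print(len(summaries))  # 10
```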
Set recurring tasks and walk away
Cowork's scheduled tasks let you describe a recurring job once and have it run on a cadence (as long as your machine is awake). Real examples: Friday file cleanup, weekly expense report, morning inbox triage, meeting prep the night before. Set once, run forever.
Stack connectors for cross-app workflows
The real power shows up when multiple connectors compound. Example from the article: your team takes meeting notes in Notion but auto-generates transcripts in Google Drive. You can tell Cowork to check the transcript against the Notion notes and surface commitments that didn't make it in. That's a workflow no single app can do.
Useful connectors to stack: Slack, Google Drive, Notion, Gmail, Calendar, Microsoft 365. Each connection multiplies the value of every other one.
Use Projects to stop context bleeding
Every Cowork workstream deserves its own Project — its own instructions, files, and memory. Without Projects, Claude carries assumptions from your marketing work into your financial analysis. With them, each context is clean and contained. The author calls this the single most underused Cowork feature among long-time users.
Use Dispatch to control Cowork from your phone
Dispatch creates a persistent link between Claude mobile and your desktop (paired once via QR code). You queue tasks from anywhere, and your computer does the work. Queue from bed in the morning, walk into finished deliverables.
Part 4 — Context Tricks That Change How Claude Reasons
A Claude session running near 90% context usage isn't just slow — it's actively producing worse outputs. Important instructions get buried, and the model starts making mistakes it wouldn't make with a clean window. People blame the model; it's almost always a context problem.
Start fresh for every new topic
Counterintuitive but worth internalizing: a new conversation performs better than a long one, because there's no stale context muddying the model's focus. Start a new chat for every new topic, or whenever you notice quality dropping.
Write a handoff document before starting fresh
If you're mid-task and need to swap to a clean session without losing your place, ask Claude:
Summarize what we have done, what worked, what the next
step is, and any decisions made. Write it so that a fresh
Claude with no prior context can pick up exactly where
we left off and finish the task.
Save the handoff to PROJECTS/. Load it at the start of the next session. Clean context, full continuity.
Tell Claude what context it's operating in
Claude behaves differently depending on whether it's in a chat, inside a Cowork session with file access, or running as a Code agent. Tell it explicitly at the top of complex tasks, e.g.:
"You are working inside a Cowork session with access to the PROJECTS/ folder. The session has persistent file access. Save all outputs to CLAUDE OUTPUTS/."
That single paragraph eliminates a whole class of errors most users hit (Claude refusing to use file tools, or saving in the wrong place, or hallucinating that it can't do something it actually can).
Context engineering > prompting
The techniques that worked in 2024 actively hurt results today in some cases. The leverage point has shifted from what you say to what you load: system prompts, files, memory, examples. The structure around the task now matters more than the wording of the task.
Part 5 — Four Use Cases Worth Going Deeper On
The article points to four places where the gap between casual and pro users is widest:
- Investing. A four-level progression from search-engine-style questions (level 1) up to running institutional-grade research workflows with structured prompts, multi-source synthesis, and hedge-fund-style equity analysis (level 4). Most retail users sit at level 1, and the gap is mostly knowing the framework, not the model.
- Content creation. A structured loop — voice profile + anti-AI file + output templates + Socratic prompting — produces content that sounds like a person wrote it, and scales from single LinkedIn posts to full newsletter issues without quality collapse.
- Building agents. Most people who try this fail silently. The architecture matters more than the model: the author cites a Karpathy case study where an agent autonomously tuned its own code for two days as an example of what's possible at the frontier.
- Revenue. Case study referenced: an agency owner used Claude Code to build a pipeline that fully replaced proposal-writing. The delta between a $500/month consultant and a $50,000/month one is mostly workflow architecture, not skill.
The One Shift That Ties It All Together
Most people are still using Claude the way they used ChatGPT in 2023 — one-off prompts, no structure, no memory, no context, starting from zero every session.
The people pulling ahead are building systems: files that load automatically, skills that trigger when needed, workflows that run while they sleep, agents that handle the repeatable parts so they can focus on the irreplaceable ones.
The gap between users who treat Claude as infrastructure and users who treat it as a chatbot is widening every month.
The setup in this article takes a weekend to build properly. The compounding starts immediately after.
FAQ Highlights
A few of the more useful Q&As from the article's FAQ section:
What's the single most impactful thing you can do? Build a persistent context system before you prompt for anything: an identity file, a voice profile, and an anti-AI-writing file, loaded automatically via Global Instructions. The improvement from this alone is bigger than switching models.
What is context engineering? Structuring everything Claude receives — system prompts, files, memory, examples, role framing, constraints — rather than obsessing over the wording of any one prompt. In 2026, the structure around the task matters more than its phrasing.
When should you build a Skill? For any task you do more than once a week with consistent output requirements: content formats, code-review patterns, research synthesis, slide templates, outreach writing.
Chat vs. Cowork vs. Code?
- Claude Chat — quick questions, feedback, short-form tasks.
- Claude Cowork — desktop agent with local file access, parallel sub-agents, connectors, scheduled tasks. Best for most knowledge workers.
- Claude Code — terminal-based agentic coding environment for developers who want maximum control. Many serious users run both Cowork and Code depending on the task.
How do you stop Claude from sounding like AI? Three things together: (1) a voice profile built via a Claude interview, (2) an anti-AI-writing file banning specific words/structures/tones, and (3) Socratic prompting so Claude pulls your thinking out before it writes.
When should you start a new session? For every new topic, whenever quality drops, or whenever the task has no real dependency on earlier conversation. Use a handoff document to preserve continuity without dragging stale context along.
Which connectors are worth adding? Slack (message search), Google Drive (docs), Notion (pages), Gmail (inbox), Calendar (meeting context). Microsoft 365 is the single most powerful connector for enterprise users (Outlook + SharePoint + OneDrive + the rest).
Why do Projects matter? They isolate workspaces inside Cowork so assumptions from one workstream don't bleed into another. One of the highest-leverage configuration changes available, and one of the most underused.