The AI Corner

The AI Agent That Thinks Like Jensen Huang, Elon Musk, and Dario Amodei

Ruben Dominguez

Apr 27, 2026

6 min read

Source: The AI Corner · Author: Ruben Dominguez · Date: Apr 19, 2026 · Original post

Header image — six founders' mental models

⚠️ Note on completeness: Most of this post sits behind a paywall. The full playbook (system prompts, IDENTITY.md, MEMORY.md, the six copy‑paste prompts) is not accessible. This summary captures the public preview — the thesis, the framing, and the table of contents of what the paid section delivers. Treat it as a map of the territory, not the territory itself.


The core idea: great founders don't think better, they think differently

Dominguez opens with a claim that is easy to skim past but worth stopping on: the difference between elite founders and everyone else is not raw IQ or speed — it is that each of them has internalized a repeatable mental procedure they run on every important decision. The procedure is the moat, not the person.

He gives four quick illustrations, each of which is meant to feel like a move you could imitate, not a personality trait you'd have to be born with:

  • Jensen Huang (NVIDIA) — separates the stated constraint from the actual constraint. When a team says "we can't ship because of X," Jensen's habit is to ask what would still block them if X disappeared overnight. Often X is a symptom; the real bottleneck is something nobody named. Naming it correctly is most of the work.
  • Elon Musk — deletes requirements before he builds. His well-known "first, question every requirement" step from the SpaceX/Tesla algorithm: assume every spec on the list is wrong until proven otherwise, and rip out as many as possible before any engineering begins. Building the wrong thing efficiently is still failure.
  • Dario Amodei (Anthropic) — red-teams every major bet before committing. Before saying yes to a big move, he runs an adversarial pass: what is the strongest case against this decision, and what would have to be true for that case to win? It's a pre-mortem baked into the decision itself.
  • Sam Altman — runs every opportunity through one filter that, per Dominguez, kills 90% of bad options on contact. (The specific filter sits behind the paywall, but the structural lesson is: a single sharp question can do the work of a long evaluation framework.)

The takeaway: each of these is a framework you can write down, not a vibe. And once you can write a framework down, you can hand it to an LLM.

The pitch: encode the frameworks into an AI "thinking partner"

The post's thesis is that you don't use these models more often by trying to remember them — you use them more often by building an agent that always applies them for you. Dominguez frames the agent not as a chatbot that answers questions but as something that:

  1. Challenges your reasoning rather than confirming it.
  2. Names the mental model it is applying out loud, so you can learn it by watching it work.
  3. Tells you what a specific operator would do with your specific problem (e.g. "Here's how Huang would reframe this constraint…").

He sums up the philosophical shift with a line worth quoting:

"Most people use AI to get answers faster. The founders building real companies use it to think harder."

That single sentence is the whole pitch. The agent is not a productivity tool — it's a Socratic partner designed to slow you down at the moments when slowing down is worth more than speeding up.

What the full (paywalled) playbook contains

The author lists six deliverables behind the paywall. Even without access, the table of contents itself is informative because it shows the shape of a serious "agent personality" build — useful as a checklist if you want to construct your own:

  1. Six mental models, fully unpacked. The roster: Jensen Huang, Elon Musk, Dario Amodei, Sam Altman, Brian Chesky (Airbnb — known for going deep on a single customer journey end-to-end), Paul Graham (YC — known for "make something people want" and the formal/informal essay-as-thinking-tool). Each model is described in operational terms — how it actually runs in practice, not the famous quote version.
  2. The Claude setup. A Skill file (a reusable instruction module Claude loads on demand), a brand context file (so the model speaks to your business, not a generic founder), and Project instructions that turn Opus 4.7 into what he calls a "founder-grade reasoning engine."
  3. The ChatGPT setup. Custom Instructions templates, Memory seeds you paste in by hand so ChatGPT remembers the frameworks across sessions, and a Founder Council Custom GPT — a single GPT that runs all six founders in parallel and lets them argue with each other before answering.
  4. The OpenClaw setup. An IDENTITY.md file (who the agent is), a MEMORY.md file (what it remembers about you), trigger phrases that automatically route a question to the right framework (e.g. saying "what's the real constraint here" might invoke the Jensen mode), and the specific Skills to install.
  5. The combined workflow. Which of the three tools (Claude, ChatGPT, OpenClaw) to reach for depending on the type of decision — and how to chain them so the output of one becomes the input of the next.
  6. Six copy‑paste prompts, one per founder, each engineered to apply that founder's framework to whatever strategic question you bring it.
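The paywalled files themselves aren't visible, but the public description (behaviors in the pitch section, plus the "what's the real constraint here" trigger example) is enough to guess at the shape of an IDENTITY.md. The skeleton below is entirely my reconstruction of that shape, not the paid content:

```markdown
# IDENTITY.md — hypothetical skeleton, not the paywalled file

You are a founder-grade thinking partner, not an assistant.

## Behavior
- Challenge my reasoning; never just confirm it.
- Name the mental model you are applying before answering.
- Answer as a specific operator would ("Here's how Huang would reframe this constraint").

## Trigger phrases (route to a framework)
- "what's the real constraint here" → constraint-finder mode
- "do we actually need this" → requirement-deleter mode
- "steelman the case against" → red-team mode
```

The point of the skeleton is the structure, not the wording: identity first, standing behaviors second, routing rules last.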

The structural lesson here, even without the prompts themselves: a serious "thinking agent" is identity + memory + triggers + skills + a routing layer across tools. Most people stop at "a clever prompt." Dominguez is arguing that the prompt is the smallest piece.
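That "triggers + routing layer" idea can be made concrete with a small sketch: map trigger phrases to named framework blocks, then prepend the matched block to the question before it reaches the model. All phrase and framework names here are illustrative placeholders, not the paywalled versions.

```python
# Minimal trigger-phrase router: match a phrase in the question,
# prepend the corresponding framework instructions.

FRAMEWORKS = {
    "constraint_finder": (
        "Separate the stated constraint from the actual constraint. "
        "Ask: if the named blocker vanished overnight, what would still "
        "stop us? Name the real bottleneck before proposing fixes."
    ),
    "requirement_deleter": (
        "Question every requirement. Assume each spec is wrong until "
        "proven otherwise; delete as many as possible before building."
    ),
    "red_teamer": (
        "Run an adversarial pass: state the strongest case against this "
        "decision and what would have to be true for that case to win."
    ),
}

TRIGGERS = {
    "what's the real constraint": "constraint_finder",
    "do we need all of this": "requirement_deleter",
    "steelman the case against": "red_teamer",
}

def route(question: str) -> str:
    """Return the framework key whose trigger phrase appears in the question."""
    q = question.lower()
    for phrase, framework in TRIGGERS.items():
        if phrase in q:
            return framework
    return "red_teamer"  # default: always challenge the reasoning

def build_prompt(question: str) -> str:
    """Prepend the routed framework block, and require the model to name it."""
    key = route(question)
    return (
        f"[Framework: {key}]\n{FRAMEWORKS[key]}\n\n"
        f"Question: {question}\n"
        "Name the framework you are applying before you answer."
    )
```

The resulting string would be sent as (or prepended to) the system prompt of whichever tool you use; the routing logic itself is tool-agnostic.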

What you can actually do with just the preview

Even without buying the playbook, the post hands you a usable scaffold. If you wanted to build a stripped-down version this afternoon, the public material implies a recipe:

  • Pick 3–6 founders whose decision style you admire and can describe in one sentence each (constraint-finder, requirement-deleter, red-teamer, single-filter, customer-immersion, essay-thinker).
  • Write each as a short instruction block: "When I bring you a decision, do X. Always name which model you're using."
  • Put them in a single system prompt and require the model to pick one (or run several) before answering.
  • Force it to disagree with you at least once per response — the "challenge your reasoning" rule is the part that turns a chatbot into a thinking partner.
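The four steps above can be folded into one system prompt mechanically. This sketch does exactly that, under the assumption that each founder style fits in one sentence; the blocks and wording are my placeholders, not the paid prompts.

```python
# Assemble the stripped-down "afternoon version": one-sentence
# instruction blocks, forced model selection, forced disagreement.

BLOCKS = {
    "constraint-finder": "Find the real bottleneck behind the stated one.",
    "requirement-deleter": "Delete requirements first; justify each one that survives.",
    "red-teamer": "Argue the strongest case against my plan before endorsing anything.",
    "single-filter": "Run the opportunity through one sharp disqualifying question first.",
}

def build_system_prompt(blocks: dict[str, str]) -> str:
    """Compose the instruction blocks into a single system prompt."""
    rules = "\n".join(f"- {name}: {text}" for name, text in blocks.items())
    return (
        "You are a thinking partner, not an answer machine.\n"
        "Available mental models:\n"
        f"{rules}\n"
        "For every question: (1) pick one or more models and name them, "
        "(2) apply them explicitly, (3) disagree with at least one of my "
        "assumptions in every response."
    )
```

Paste the output into Claude's Project instructions or ChatGPT's Custom Instructions and you have the skeleton of the agent the post describes, minus the polish of the paid versions.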

That is the public-side gist. The paid post then provides the polished, tested versions of all of this for the three big tools.


Summary based on the publicly visible portion of the article; the bulk of the playbook (system prompts, memory files, copy-paste prompts) is gated behind a paid Substack subscription and was not accessible.

#AI #AI_AGENTS #ENGINEERING #AUTOMATION #CONTENT #STARTUPS
