
Another Wednesday, another selection of AI news and resources to help you become more AI native. This week:
AI beat physicians in two separate peer-reviewed studies in one week
Subquadratic launches a 12M-token model at a fifth the cost of the frontier
GPT-5.5 Instant takes over as ChatGPT's default with 52% fewer hallucinations
The 4-line prompt that turns any pile of notes into a free, backlinked second brain
Next Gen
Your best customer is no longer a person.
What's happening: Aaron Levie said the quiet part out loud last week: "As agents become the biggest users of software, all software has to be available in a headless fashion." Cloudflare and Stripe shipped a joint protocol that lets agents create their own Cloudflare accounts, register domains, and deploy apps, with Stripe capping spend at $100 per agent per month. Salesforce launched Headless 360, exposing the entire platform so agents like Codex and Claude Code can operate it without a browser. Anthropic just crossed $44 billion in ARR, up $14 billion in a single month.
Why it matters: Every B2B product was built for a human clicking through a UI. That assumption is now load-bearing debt. If the fastest-growing buyer of your category is an agent, and your product cannot be operated without a screen, you are about to lose distribution to whoever exposes a clean API surface first. The evaluation question is shifting from "is the UI good" to "is the API good," and customers will not announce the switch. They will just route around you.
In the wild: Coinbase cut 14% of its workforce this week and CEO Brian Armstrong's open letter said the company is flattening every layer, making managers contributors, and hiring exclusively for AI-native talent who can ship using agents. Vercel is hiring "design engineers" who sketch and ship alone, collapsing two roles into one. The pattern is consistent across both buyer and builder: fewer humans, more agent leverage per employee, smaller orgs running bigger workloads.
Looking ahead: Audit your product this quarter. Pick the three workflows your customers run most often and ask whether an agent could complete them without ever opening your app. If the answer is no, that is your roadmap. The first version does not need to be elegant. It needs to exist before a competitor's does. Companies that ship a workable agent surface in 2026 will own the next distribution layer. The ones that wait will spend 2027 explaining why their UI is still beautiful.
AI First
The Atomic Notes Prompt
A four-line prompt turns any pile of raw notes into a free, fully backlinked second brain in under 60 seconds.
The trick is called atomicizing. It splits a text blob into many small, single-concept files with [[wikilinks]] between them. No vector database, no RAG, no $20 per month app. You end up with a browsable knowledge graph that lives inside Obsidian or any markdown editor.
Copy any raw text: a meeting transcript, a research dump, a voice memo.
Paste it into Claude, GPT-5.5, or Gemini.
Run this prompt:
Dissect this raw note into atomic Obsidian markdown files. Each file = one concept. Use [[wikilinks]] between any concept that references another. Output as separate code blocks with filenames.
Save each output block as a .md file in a folder.
Drop the folder into your Obsidian vault.
Click around. The graph builds itself.
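If you run this often, the save step is worth scripting. The sketch below splits a pasted model response into one .md file per fenced code block, ready to drop into a vault. It is a minimal example, not part of the workflow above: the filename pattern (a bold or plain `name.md` line before each block) is an assumption about how your model formats its output, so adjust the regex to match what you actually get back.

```python
import re
from pathlib import Path

def save_atomic_notes(model_output: str, vault_dir: str = "atomic_notes") -> list[str]:
    """Write each fenced code block in a model response to its own .md file.

    Assumes each block is preceded by a filename line such as
    `**concept-name.md**` or `concept-name.md` -- a guess at the output
    format; tweak the pattern for your model.
    """
    out = Path(vault_dir)
    out.mkdir(exist_ok=True)
    # Optional filename line, then a ```markdown (or bare ```) fenced block.
    pattern = re.compile(
        r"(?:\*{0,2}([\w\- ]+\.md)\*{0,2}\s*\n)?```(?:markdown|md)?\n(.*?)```",
        re.DOTALL,
    )
    written = []
    for i, (name, body) in enumerate(pattern.findall(model_output), start=1):
        # Fall back to a numbered name if no filename line was found.
        filename = name.strip() if name else f"note-{i}.md"
        (out / filename).write_text(body.strip() + "\n", encoding="utf-8")
        written.append(filename)
    return written
```

Point `vault_dir` at a folder inside your Obsidian vault and the backlinks resolve as soon as the files land.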
Pro tip: Run the same notes through a second model and ask a third to merge the outputs. You get sharper concept names and tighter backlinks for free.
This is the cleanest personal knowledge management workflow we have seen this year. It is for anyone whose notes app is currently a graveyard of ideas they will never find again.
AI News
OpenAI made GPT-5.5 Instant the new default in ChatGPT. The model is now live for every ChatGPT user, free tier included. OpenAI says it produces 52% fewer hallucinated claims than GPT-5.3 Instant in high-stakes domains like medicine, law, and finance, and answers about 30% more concisely. This is the first time the company has shipped a default upgrade aimed squarely at accuracy under pressure rather than raw capability or speed.
Subquadratic came out of stealth with SubQ, a 12M-token model at a fraction of frontier costs. The lab raised $25M in seed funding behind a fully sub-quadratic architecture that scales linearly with input length. SubQ scored 97% on RULER 128K (Opus 4.6 hit 94%) at $8 per run versus roughly $2,600 on frontier models. Its CLI agent loads an entire repo in one pass, no chunking required. Anthropic still leads SWE-Bench at 87.6% to SubQ's 81.8%, but the cost-per-context math just shifted.
AI beat human doctors in two peer-reviewed studies the same week. Mayo Clinic's REDMOD model flagged pancreatic cancer on routine CT scans an average of 16 months before diagnosis, catching 73% of pre-diagnostic cancers versus 39% for specialist radiologists. Days later, a Harvard study in Science showed OpenAI's o1 correctly triaged 67% of real ER cases compared to 50% and 55% for two attending physicians. Neither team recommended clinical deployment yet. The second-opinion use case is now harder to argue with.
That's it for this week. See you next Wednesday.
- Cam
