Tom's Guide published a piece this week called "I tried the top 1% way of using AI," and its thesis is dead-on: most people use these tools like a better search engine when they should be using them like a chief of staff.
I have been saying a version of this for months. The article puts numbers on it — the author found that 30% of her "thinking time" was actually pattern-following she could have automated. That tracks. When I did my own five-hour audit of Hawaii-Guide.com, the bottleneck was never the AI. It was always my willingness to sit down and direct it with specificity.
What the Article Gets Right
Three concepts from the piece deserve attention.
Cognitive drudgery is the real target. Not emails. Not calendar management. The tasks that feel like thinking but follow a pattern — the classification work, the status checks, the "let me review this and decide" loops that you run the same way every time. That is where the time goes. The article calls it the "time-collapse prompt" — feed AI your 20 most frequent tasks and let it identify which ones are automatable. Simple idea, but most people never do it because the work feels too important to hand off.
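To make the idea concrete, here is a minimal sketch of the "feed AI your 20 most frequent tasks" step as a prompt builder. The wording is my own paraphrase of the concept, not the article's exact prompt, and `build_drudgery_audit_prompt` is a hypothetical helper name:

```python
def build_drudgery_audit_prompt(tasks: list[str]) -> str:
    """Ask a model to sort frequent tasks into automatable vs. judgment work.

    Illustration only: the prompt text is a paraphrase of the article's
    "time-collapse" idea, not its actual wording.
    """
    task_list = "\n".join(f"- {t}" for t in tasks)
    return (
        "Here are my most frequent recurring tasks:\n"
        f"{task_list}\n\n"
        "For each task, decide whether it follows a repeatable pattern an AI "
        "could execute, or requires real judgment. Return two lists: "
        "AUTOMATE (with a one-line SOP for each) and KEEP."
    )
```

The output of a prompt like this is the raw material for the SOPs described later, so the audit only has to be run once per batch of tasks.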
Cross-model auditing solves the slop problem. Single-model output is where every "AI sounds like AI" complaint originates. The fix is not better prompting. The fix is a second model reviewing the first model's work as a critical auditor. Different training data, different blind spots, different defaults. The article frames this as "never trust a single model" and that is the right instinct.
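The audit loop is simple to wire up. This is a sketch of the orchestration only: `ask_drafter` and `ask_auditor` are placeholders for any two different model APIs (say, one OpenAI call and one Anthropic call), and the audit prompt wording is my own, not from the article:

```python
# Hypothetical audit instruction; swap in your own reviewer persona.
AUDIT_PROMPT = (
    "You are a critical auditor. Review the draft below for factual errors, "
    "logical gaps, and generic AI-sounding filler. List concrete problems, "
    "then suggest fixes for only the sentences that need them.\n\n"
    "DRAFT:\n{draft}"
)

def cross_model_audit(task: str, ask_drafter, ask_auditor) -> dict:
    """Draft with one model, then audit with a second, different model.

    ask_drafter / ask_auditor: callables taking a prompt string and
    returning a response string (stand-ins for real API clients).
    """
    draft = ask_drafter(task)
    review = ask_auditor(AUDIT_PROMPT.format(draft=draft))
    return {"draft": draft, "review": review}
```

The design point is that the two callables should be backed by models with different training data; a second pass through the same model catches far less.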
Synthesis beats generation. The 1% are not using AI to write more. They are using it to read more — to compress a pile of raw inputs into the one thing that matters. What you focus on is worth more than what you produce. AI handles production. You handle direction.
What It Misses
The article treats this as a workflow discovery. It is actually a worldview shift.
The gap is not between people who know the right prompts and people who do not. The gap is between people who see AI as a tool they use and people who see it as a layer they operate through. The first group opens ChatGPT when they have a task. The second group has AI running continuously as infrastructure — monitoring, synthesizing, auditing — whether they are sitting at the keyboard or not.
That is the actual 1% behavior. Not better prompts. Persistent delegation.
I wrote about this in The $20 Employee — the idea that everyone now has access to a team-scale workforce for the cost of a subscription. The Tom's Guide piece circles the same idea without fully landing on it. The prompts are a starting point. The architecture is the thing.
Three New Prompts
I added three new entries to the prompt library based on these ideas:
- Cognitive Drudgery Audit — Feed it your 20 most frequent tasks. It identifies the ones that feel like thinking but follow a pattern, then drafts SOPs an AI can execute at 95% accuracy.
- Second-Opinion Review — Take one model's output and run it through a second model as a critical reviewer. Catches factual errors and logical gaps that single-model workflows miss entirely.
- Synthesis Brief — Dump raw inputs (articles, notes, threads) and get a 500-word decision-ready brief. Findings ranked by importance, contradictions surfaced, missing context flagged.
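For anyone who wants to script the third prompt rather than paste it, the assembly step looks roughly like this. The function name and prompt wording are my own illustration of the Synthesis Brief described above, and `ask_model` is a placeholder for whatever model API you use:

```python
def build_synthesis_prompt(inputs: list[str], word_limit: int = 500) -> str:
    """Pack raw inputs into one decision-ready-brief request.

    Sketch only: numbering the sources lets the model cite which input a
    contradiction came from.
    """
    numbered = "\n\n".join(
        f"SOURCE {i}:\n{text}" for i, text in enumerate(inputs, 1)
    )
    return (
        f"Compress the sources below into a {word_limit}-word decision-ready "
        "brief. Rank findings by importance, surface contradictions between "
        "sources, and flag missing context.\n\n" + numbered
    )

def synthesis_brief(inputs, ask_model, word_limit=500):
    return ask_model(build_synthesis_prompt(inputs, word_limit))
```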
All three are copy-paste ready. No APIs, no setup, no paywall.
The Protocol: The 1% are not people who found a secret prompt. They are people who stopped treating AI as a tool they pick up and started treating it as a team they direct. The prompts are free. The shift is the hard part.