Worst AI Principle
One of Ethan Mollick’s four core principles (from his book *Co-intelligence*¹) for working with AI is: “Assume This Is the Worst AI You Will Ever Use.” This reframes how we should think about AI’s current limitations. When an AI fails at a task today (generates mediocre ideas, makes errors, or struggles with complexity), we shouldn’t dismiss AI’s potential for that use case. Instead, we should design our workflows assuming that these specific weaknesses will be solved in the next iteration.
The insight cuts both ways:
- Don’t over-index on current failures. That clunky AI tutor or uninspiring brainstorming session represents the floor, not the ceiling. Build systems with “human in the loop” checkpoints that can progressively hand off more responsibility as models improve (see the sketch after this list).
- Don’t under-invest in fundamentals. If AI capabilities will only grow, then the irreplaceable human skills (judgment, taste, domain expertise, critical filtering) become more valuable, not less. Knowledge acquisition remains essential even if information retrieval gets automated.
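One way to make that progressive hand-off concrete is a confidence-gated checkpoint: the model’s output is auto-approved only when it clears a threshold, and the threshold is lowered as models earn more trust. The Python sketch below is a minimal illustration of this pattern, not anything from Mollick’s book; the `Draft` type, the confidence score, and the `AUTO_APPROVE_THRESHOLD` value are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # hypothetical 0-1 score from the model or an evaluator

# Hypothetical knob: lower this as models improve, handing off more work.
AUTO_APPROVE_THRESHOLD = 0.9

def human_review(draft: Draft) -> str:
    # Stand-in for a real review queue or UI where a person edits/approves.
    print(f"[needs review] {draft.text!r} (confidence={draft.confidence:.2f})")
    return draft.text

def checkpoint(draft: Draft) -> str:
    """Route model output through a human unless it clears the confidence bar."""
    if draft.confidence >= AUTO_APPROVE_THRESHOLD:
        return draft.text  # auto-approved: the AI handles this step alone
    return human_review(draft)  # below the bar: a human stays in the loop

checkpoint(Draft("Quarterly summary v1", confidence=0.95))  # auto-approved
checkpoint(Draft("Novel product pitch", confidence=0.55))   # routed to a person
```

As models improve, the only thing that changes is the threshold (or the policy behind it), which is exactly the point of designing for the trajectory rather than the snapshot.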
This principle forces you to think in terms of trajectories rather than snapshots. The question isn’t “Can AI do this well today?” but “If AI keeps improving at this task, how should I be positioning myself and my work now?”
Footnotes

1. Mollick, E. (2024). *Co-intelligence: Living and Working with AI*. Penguin Random House.