The word "prompting" is hiding four different disciplines. Most people are practising one of them. Nate B Jones maps the full progression — and names the primitives replacing the old toolkit.
Each discipline tells the model something different.
1. Prompt engineering — telling AI what to do. A single request, a single session. The original skill. Now table stakes.
2. Context engineering — telling AI what to know. Structuring background, data, and situational information so the model can reason properly. Tobi Lütke's public insight was that context engineering principles made him a better leader, not just a better AI user. Most of the industry sits here.
3. Intent engineering — telling AI what to want. Jones's own term. Making goals, values, trade-offs, and decision boundaries machine-readable. Not just giving the agent good context — giving it a clear definition of success.
4. Specification engineering — telling AI what to build. Precisely describing what should exist so autonomous agents can execute without supervision. The ceiling skill.
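The four layers can be made concrete as nested, machine-readable structure. This is a minimal sketch, not Jones's notation: every class and field name below is an illustrative assumption, chosen only to show how each discipline adds a layer the previous one lacks.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the four disciplines as layers of one
# machine-readable request. All names are illustrative.

@dataclass
class Prompt:            # Discipline 1: what to do
    task: str

@dataclass
class Context:           # Discipline 2: what to know
    background: str
    data_sources: list = field(default_factory=list)

@dataclass
class Intent:            # Discipline 3: what to want
    success_metric: str                               # what "done well" means
    tradeoffs: dict = field(default_factory=dict)     # weighted priorities
    hard_boundaries: list = field(default_factory=list)  # never-do rules

@dataclass
class Specification:     # Discipline 4: what to build
    prompt: Prompt
    context: Context
    intent: Intent
    acceptance_criteria: list = field(default_factory=list)

spec = Specification(
    prompt=Prompt(task="Resolve customer support tickets"),
    context=Context(
        background="Fintech customer base; refunds are high-stakes",
        data_sources=["order_history", "refund_policy"],
    ),
    intent=Intent(
        success_metric="customer retained and satisfied",
        tradeoffs={"satisfaction": 0.7, "speed": 0.2, "cost": 0.1},
        hard_boundaries=["never close a ticket the customer reopened twice"],
    ),
    acceptance_criteria=["CSAT >= 4/5 on post-resolution survey"],
)
```

Note what each layer buys: a bare `Prompt` is a chat message; adding `Context` lets the model reason; adding `Intent` defines success; the full `Specification` is something an unsupervised agent can execute against.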
The progression is sequential. What to do → what to know → what to want → what to build. Skip a layer and you get the Klarna problem.
Klarna's AI agent handled 2.3 million conversations in its first month. Cut resolution times from eleven minutes to two. Did the work of 700 full-time agents. Projected forty million dollars in profit improvement.
Then the CEO went on Bloomberg to explain why the strategy had backfired — and started rehiring humans.
The AI optimised for what it could measure: speed, cost, resolution count. It missed what the company actually needed: customer satisfaction, retention, relationship value. Context was excellent. Intent was missing.
This is the gap between discipline two and discipline three. The system succeeded at the wrong thing.
Jones replaces the old prompt engineering toolkit with five primitives:
Specification engineer — writing structured specs that autonomous agents execute against defined quality bars. The core output of discipline four.
Intent framework builder — making organisational purpose, goals, and trade-offs explicit and machine-actionable. The core output of discipline three.
Eval harness — assessing whether agent outputs serve the defined intent, not just whether the task completed. Without this, you can't tell if you have a Klarna problem.
Constraint architecture designer — defining boundaries, guardrails, and decision constraints for agent operation. The difference between an agent that can act and an agent that knows when to stop.
Problem statement rewriter — what Jones calls "the Lütke Primitive." Reframing problems into precise, context-rich, intent-clear specifications before touching AI. The meta-skill that feeds all four disciplines.
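The eval-harness primitive is the one that would have caught the Klarna problem, and it reduces to a small check: did the run serve the defined intent, or did it merely complete? A minimal sketch, with metric names and thresholds that are illustrative assumptions, not real Klarna data:

```python
# Hypothetical eval-harness sketch: a run "passes" only if it serves
# the defined intent, not merely because the task completed.
# Metric names and thresholds are illustrative assumptions.

def task_completed(run: dict) -> bool:
    """The naive check: the ticket was closed."""
    return run["resolved"]

def serves_intent(run: dict, intent: dict) -> bool:
    """The intent check: every metric the business actually
    cares about clears its floor."""
    return all(run[metric] >= floor for metric, floor in intent.items())

# What the business actually wants, made machine-readable.
intent = {"csat": 4.0, "retention_prob": 0.8}

# The Klarna failure mode: fast and cheap, but the customer is lost.
fast_but_bad = {"resolved": True, "minutes": 2.0,
                "csat": 2.5, "retention_prob": 0.4}
slow_but_good = {"resolved": True, "minutes": 9.0,
                 "csat": 4.6, "retention_prob": 0.9}

assert task_completed(fast_but_bad)                  # looks like a win
assert not serves_intent(fast_but_bad, intent)       # is actually a loss
assert serves_intent(slow_but_good, intent)
```

The design point is that `serves_intent` needs an explicit `intent` object to exist at all: without discipline three's output, the harness has nothing to evaluate against, and "resolved quickly" is the only score left.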
Jones references Anthropic data showing a threshold where the 2025 prompting playbook collapses. Once agents run autonomously beyond roughly 35 minutes, you can't supervise and course-correct in real time. The upfront specification must stand on its own.
This is the inflection point where disciplines three and four become non-optional. Chat-based prompting doesn't scale past it. Structured specification does.
Jones argues the gap between people practising one discipline and people practising all four is already tenfold and compounding. Three model releases in February 2026 shipped with autonomous agent capabilities that make chat-based prompting obsolete.
The models stopped being conversation partners and started being workers. The skill of directing a worker you can't supervise in real time is a fundamentally different discipline from the skill of having a productive conversation.