AI Infrastructure

Prompt engineering

Prompt engineering is the practice of phrasing requests to a language model to get better outputs.

Prompt engineering dominated the 2022 to 2024 period. Teams treated the prompt itself as the main lever on model quality. Techniques like role prompts, few-shot examples, chain of thought, and careful phrasing produced real gains when models were weaker and tooling was thinner.
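The techniques named above can be sketched as plain string assembly. This is a minimal illustration, not a recommended library: the sentiment task, the example reviews, and all wording here are hypothetical, chosen only to show a role prompt, few-shot examples, and a chain-of-thought cue composed into one prompt.

```python
# Hypothetical illustration of classic prompt-engineering patterns:
# a role instruction, few-shot examples, and a chain-of-thought cue.

ROLE = "You are a careful sentiment classifier."

# Few-shot examples: (input, label) pairs shown to the model before the query.
FEW_SHOT = [
    ("The battery died within a week.", "negative"),
    ("Setup took two minutes and it just works.", "positive"),
]

def build_prompt(query: str) -> str:
    """Compose role prompt + few-shot examples + chain-of-thought cue."""
    lines = [ROLE, ""]
    for text, label in FEW_SHOT:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    # Chain-of-thought cue: ask the model to reason before answering.
    lines.append("Think step by step, then answer with Sentiment: positive or negative.")
    return "\n".join(lines)

prompt = build_prompt("Shipping was slow but the product is great.")
```

The point of the sketch is how thin this layer is: everything quality-critical lives in the string itself, which is why later system-level patterns displaced it.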

The practice flattened as models improved and as better patterns emerged. Tool calling, structured output, retrieval, and agent loops all moved quality work out of the prompt and into the system around it. Prompts still matter, but they stopped being the main differentiator.

What remains of prompt engineering today is mostly craft. Clear instructions, good examples, and careful framing still help. The discipline itself has folded into broader fields: context engineering, eval design, and agent architecture. The job title is disappearing even as the skill stays useful.

The Amdahl view

Prompt engineering is mostly obsolete as a standalone job. The practitioners who were good at it moved into context engineering, evals, or agent design. Teams still optimizing prompts in 2026 are usually distracting themselves from a context problem they should be solving instead. If your agent is wrong, the prompt is almost never where the fix lives. Look at what the agent can see.
