Prompt engineering
The practice of designing inputs to language models so they reliably produce the desired output; covers wording, structure, examples, and system prompts.
Prompt engineering is the discipline of crafting the text — and increasingly images, audio, or structured data — that you send to a large language model so it produces the output you actually want. Because today’s models are sensitive to wording, ordering, and context, small changes in the prompt can dramatically shift quality, tone, accuracy, and reliability.
A practical prompt typically includes some of the following: a system prompt defining the role and rules, a clearly stated task, relevant context or documents, formatting instructions, examples (few-shot), and constraints on what to avoid. More advanced techniques — chain-of-thought, self-consistency, role-play, planning, decomposition into sub-tasks, retrieval augmentation — improve reasoning on hard tasks.
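The components above can be sketched as a small prompt-assembly function. This is a minimal illustration, assuming the common chat-completions message format (role/content dicts); the sentiment-classification task and few-shot examples are hypothetical.

```python
# Illustrative sketch: assembling a prompt from a system prompt, few-shot
# examples, and the user's task. The message format follows the widely used
# chat-completions convention; the task and examples are made up.

SYSTEM = (
    "You are a sentiment classifier. "
    "Answer with exactly one word: positive, negative, or neutral."
)

# Few-shot examples demonstrating the expected input -> output mapping.
FEW_SHOT = [
    ("The battery died after two hours.", "negative"),
    ("Does exactly what the listing promised.", "positive"),
]

def build_messages(user_text: str) -> list[dict]:
    """Combine system rules, few-shot examples, and the task into one prompt."""
    messages = [{"role": "system", "content": SYSTEM}]
    for example_input, example_output in FEW_SHOT:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": user_text})
    return messages

msgs = build_messages("Shipping was fast but the box was crushed.")
```

Keeping the role rules in the system prompt and the examples as alternating user/assistant turns lets each piece be varied independently when tuning the prompt.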
Prompt engineering matters most where the model is exposed directly: chat assistants, AI agents, code copilots, customer-support bots, and content pipelines. It overlaps with security: poorly designed prompts are vulnerable to prompt injection when they include user-supplied or web-retrieved text without clear instruction boundaries.
As models become more capable, raw prompt tricks matter less and prompt design — clear specifications, evaluation suites, and structured outputs — matters more.
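An evaluation suite in this sense can be as simple as a set of labeled cases and an exact-match accuracy score per prompt variant. A minimal sketch; `call_model` is a hypothetical stand-in for any real LLM client, and the cases are made up:

```python
# Minimal sketch of a prompt evaluation suite: labeled cases plus an
# accuracy metric, so prompt variants can be compared systematically.

CASES = [
    ("Loved it, five stars.", "positive"),
    ("Broke on day one.", "negative"),
]

def call_model(prompt: str) -> str:
    # Hypothetical stub standing in for a real LLM API call.
    return "positive" if "Loved" in prompt else "negative"

def evaluate(cases: list[tuple[str, str]]) -> float:
    """Score a prompt variant by exact-match accuracy over labeled cases."""
    correct = sum(call_model(text) == label for text, label in cases)
    return correct / len(cases)

score = evaluate(CASES)  # 1.0 with this stub
```

Re-running the same suite after every prompt change turns prompt design into a measurable regression test rather than trial and error.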