AI tools are increasingly used across organisations to support tasks such as writing, research, planning, and analysis. While these tools offer significant efficiency gains, their outputs are highly dependent on how they are instructed. A key principle has emerged: the quality of output is shaped by the quality of input.
Prompt engineering refers to the structured design of inputs that guide AI systems to produce relevant and accurate responses. Research into large language models demonstrates that even small variations in phrasing and structure can significantly influence outcomes (Brown et al., 2020). This highlights that effective use of AI is not solely a technical capability, but a reflection of how clearly users define problems, provide context, and communicate expectations.
As AI becomes more embedded in everyday work, prompt engineering is increasingly recognised as a core skill that supports accuracy, consistency, and informed decision-making.
Prompt engineering can be understood as a form of structured communication with AI systems, where the user provides instructions that shape how the model interprets and responds to a task. Unlike traditional programming, where logic is explicitly defined, prompt engineering relies on language, context, and examples to guide behaviour. This has led to it being described as a form of “programming with language”, where the clarity and structure of instructions directly influence performance (Reynolds and McDonell, 2021).
The importance of prompt design is grounded in how large language models operate. These models identify patterns in the input they receive and generate outputs based on those patterns. Research shows that providing structured prompts or examples, known as few-shot prompting, can significantly improve accuracy and consistency (Brown et al., 2020). This reinforces that AI systems do not inherently “understand” tasks in the way humans do but instead rely on the quality of guidance they are given.
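For illustration, the sketch below contrasts a zero-shot request with a few-shot prompt for a simple classification task. The example comments, the prompts, and the `ask_model` placeholder are assumptions for demonstration only; they are not tied to any particular AI tool or API.

```python
# Minimal sketch of few-shot prompting. `ask_model` is a hypothetical
# placeholder for whichever AI tool or API your organisation uses.

def ask_model(prompt: str) -> str:
    """Placeholder: send the prompt to an AI tool and return its reply."""
    raise NotImplementedError("Connect this to your organisation's AI tool.")

# Zero-shot: the model receives only the instruction.
zero_shot = "Classify the sentiment of this customer comment: 'Delivery was late again.'"

# Few-shot: the same instruction, preceded by worked examples that make the
# expected pattern and output format explicit.
few_shot = """Classify the sentiment of each customer comment as Positive, Negative, or Neutral.

Comment: "The support team resolved my issue within an hour."
Sentiment: Positive

Comment: "The invoice arrived, nothing more to report."
Sentiment: Neutral

Comment: "Delivery was late again."
Sentiment:"""
```

In practice, the few-shot version tends to produce more consistent and correctly formatted answers, because the examples demonstrate the pattern rather than leaving the model to infer it.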
The effectiveness of prompt engineering is particularly evident in everyday workplace applications. AI tools are commonly used to draft communications, summarise information, generate ideas, and support decision-making. In each of these contexts, poorly designed prompts can result in outputs that are vague, misleading, or incomplete. Conversely, well-structured prompts that clearly define the task, provide context, and specify expected outcomes are more likely to produce relevant and reliable results.
Research into prompting methods highlights several characteristics associated with effective prompts. Clear and specific instructions reduce ambiguity and help the model focus on the intended task. Providing context helps align outputs with their intended purpose rather than producing generic responses. Structuring prompts, for example by breaking tasks into steps or specifying formats, has been shown to improve reasoning and output quality (Wei et al., 2022). Furthermore, prompt engineering is inherently iterative, requiring users to refine and adapt inputs based on the outputs generated (Liu et al., 2022).
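As a sketch of how these characteristics might be combined, the example below assembles a prompt from a task, context, explicit steps, and an expected output format. The scenario, wording, and template are illustrative assumptions, not a prescribed standard.

```python
# Illustrative sketch: building a structured prompt from task, context,
# steps, and an expected output format. The content is an example only.

task = "Summarise the attached project status report for the leadership team."
context = ("The audience has limited time and needs to know progress, "
           "risks, and decisions required.")
steps = [
    "Identify the three most significant updates since the last report.",
    "List any risks or blockers, each with a one-line impact statement.",
    "State any decisions or approvals needed from leadership.",
]
output_format = ("Use three headed sections: Progress, Risks, Decisions. "
                 "Keep each section under 80 words.")

structured_prompt = "\n\n".join([
    f"Task: {task}",
    f"Context: {context}",
    "Steps:\n" + "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1)),
    f"Output format: {output_format}",
])

print(structured_prompt)
```

Separating the task, context, steps, and format in this way also makes the prompt easier to review and refine later, which supports the iterative nature of prompting noted above.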
Despite its benefits, prompt engineering is often applied inconsistently. A common limitation is the assumption that AI systems will infer meaning or intent without sufficient detail. This can lead to outputs that appear plausible but contain inaccuracies or unsupported assumptions. Guidance on human-AI interaction emphasises the importance of clarity, feedback, and evaluation when working with AI systems, highlighting that users remain responsible for interpreting and validating outputs (Amershi et al., 2019).
There are also broader risks associated with the use of AI that prompt engineering alone cannot resolve. Large language models can reflect biases present in training data, generate incorrect or fabricated information, and lack transparency in how outputs are produced. Research into foundation models identifies these risks as significant considerations, particularly in contexts where outputs inform decisions or are shared with stakeholders (Bommasani et al., 2021). As such, prompt engineering should be understood as a tool for improving outputs, rather than a substitute for critical thinking or validation.
Ultimately, prompt engineering represents a shift in how individuals interact with technology. Rather than simply using tools, users are required to think more deliberately about how they define tasks, structure information, and communicate expectations. This positions prompt engineering not as a technical skill in isolation, but as an extension of core capabilities such as problem-solving, communication, and analytical thinking.
Select a task you regularly complete using AI, such as writing or summarising information. Create two versions of a prompt: one basic and one structured with clear instructions, context, and expected output. Compare the results and identify how changes in the prompt influenced accuracy and usefulness. Use this to refine your approach and improve future outputs.
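One possible shape for that comparison is sketched below; the task and wording are hypothetical and should be replaced with a task from your own work.

```python
# Illustrative pair of prompts for the same summarisation task.
# The specifics are examples only; adapt them to your own context.

basic_prompt = "Summarise this meeting transcript."

structured_prompt = (
    "Summarise this meeting transcript for colleagues who did not attend.\n"
    "Context: the meeting covered quarterly budget planning.\n"
    "Include: key decisions, the owner of each action, and deadlines.\n"
    "Exclude: small talk and scheduling discussion.\n"
    "Format: a bulleted list of no more than eight points."
)

# Run both against the same transcript, compare the outputs for accuracy,
# completeness, and usefulness, then refine the structured version based
# on what is missing or wrong.
```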
Prompt Engineering Checklist
Effective prompt engineering requires clarity, structure, and critical evaluation. This checklist supports the development of prompts that generate accurate, relevant, and reliable outputs when working with AI tools.
Before using AI for any task, review your prompt against this checklist and refine it to ensure clarity, context, and structure. After generating an output, critically evaluate its accuracy, relevance, and potential risks before use. Capture what worked well and refine your prompt over time to build a consistent and reliable approach.
| Checklist item | Guidance |
| --- | --- |
| Clarity | Define the task precisely, avoiding vague or open-ended language that could lead to inconsistent outputs. |
| Context | Provide sufficient background, purpose, and audience to ensure the response is relevant and appropriately framed. |
| Structure | Organise the prompt logically, breaking complex tasks into clear steps or instructions where needed. |
| Output | Specify the expected format, level of detail, and style to guide consistency in responses. |
| Assumptions | Identify and minimise assumptions by explicitly stating constraints, definitions, or boundaries. |
| Validation | Critically assess outputs for accuracy, completeness, and alignment with the original task before use. |
| Iteration | Refine prompts based on results, recognising that effective prompting is an iterative process. |
| Risk Awareness | Consider potential bias, inaccuracies, or ethical concerns within the output and how they may impact decisions. |
| Ownership | Take responsibility for how outputs are interpreted, used, and communicated within your context. |
| Reflection | Capture effective prompts and lessons learned to improve consistency and efficiency over time. |