Prompting
Prompting is how you turn vague intent into repeatable output. This lesson shows you how to brief the model with enough context, constraints, and structure to get usable first drafts instead of lucky guesses.
You will leave with one reusable prompt scaffold, one iteration checklist, and one prompt rewrite you can reuse later.
Why this matters
Prompting is not about sounding clever. It is about giving the model enough direction that the output becomes controllable instead of random.
What to do
- Treat every prompt like a brief with a goal, audience, constraints, and expected output shape.
- Decide what good looks like before you ask the model to generate anything.
Why it matters
- Weak prompts create vague output, which forces you to rewrite from scratch instead of improving a useful first draft.
- Strong prompts make iteration cheaper because the model already understands the job it is being asked to do.
What good looks like
- You can explain the task in one sentence, the context in a few lines, and the output format in clear terms.
Checklist
- Goal is explicit
- Audience is explicit
- Output format is explicit
The better the brief, the less time you waste fixing preventable ambiguity.
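If you work with prompts in code, a minimal sketch of the brief idea might look like the following. This is illustrative only: the `Brief` dataclass, its field names, and the `build_prompt` helper are assumptions for this example, not a required structure.

```python
from dataclasses import dataclass


@dataclass
class Brief:
    """A prompt treated as a brief: every field is explicit, none inferred."""
    goal: str
    audience: str
    constraints: list[str]
    output_shape: str


def build_prompt(brief: Brief) -> str:
    """Render the brief into a prompt the model can follow line by line."""
    constraint_lines = "\n".join(f"- {c}" for c in brief.constraints)
    return (
        f"Goal: {brief.goal}\n"
        f"Audience: {brief.audience}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Output format: {brief.output_shape}"
    )


prompt = build_prompt(Brief(
    goal="Write three hooks for a short-form video about cold brew",
    audience="Coffee-curious viewers aged 20 to 35",
    constraints=["No health claims", "Under 15 words per hook"],
    output_shape="A numbered list of 3 hooks",
))
print(prompt)
```

The point is not the dataclass; it is that every field must be filled in before the model sees anything.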
Step 1: Define the task
Start with the job itself. Tell the model what you need, what the output is for, and who the result should serve.
What to do
- Write one line that names the task clearly, such as writing a hook set, rewriting a product script, or summarizing research.
- Add the purpose of the output so the model knows what success should optimize for.
Why it matters
- If the task is fuzzy, the model starts improvising on your behalf and fills gaps with generic assumptions.
- A clear task line gives you a stable starting point for every future iteration.
What good looks like
- The model could repeat your task back in one sentence without losing the goal.
Checklist
- Task is named
- Purpose is named
- Audience is named
If the model cannot identify the job quickly, it will fall back on a generic answer.
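To make the difference concrete, here is a small sketch contrasting a fuzzy task with a clear one. The product details and the `names_the_job` self-check are invented for illustration.

```python
# A fuzzy task invites the model to improvise on your behalf.
fuzzy = "Write something about our new product."

# A clear task names the job, the purpose, and the audience up front.
clear = (
    "Task: Write a 30-second product script for our new espresso machine.\n"
    "Purpose: The script opens a launch video aimed at driving preorders.\n"
    "Audience: Home baristas upgrading from entry-level machines."
)


def names_the_job(task_block: str) -> bool:
    """Quick self-check: does the brief name task, purpose, and audience?"""
    return all(label in task_block for label in ("Task:", "Purpose:", "Audience:"))


assert names_the_job(clear)
assert not names_the_job(fuzzy)
```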
Step 2: Add context and constraints
Once the task is clear, give the model the operating conditions: tone, boundaries, source material, and what it must avoid.
What to do
- Add the relevant context the model should respect, including product details, tone, banned claims, and non-negotiable constraints.
- Separate hard constraints from soft preferences so the model can prioritize correctly.
Why it matters
- Most bad outputs happen because the model was asked to invent missing information or because it never knew which constraints were mandatory.
- Constraints are what turn a general answer into a usable answer for your workflow.
What good looks like
- The prompt gives enough context that the model does not need to guess your tone, risk limits, or source of truth.
Checklist
- Relevant background included
- Hard constraints separated from preferences
- Known failure modes explicitly blocked
Context makes the answer relevant; constraints make it usable.
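One way to keep hard constraints visibly separate from soft preferences is to render them under different labels, as in this sketch. The section headings and example constraints are assumptions, not fixed wording.

```python
def constraints_block(hard: list[str], soft: list[str]) -> str:
    """Separate must-follow rules from nice-to-haves so the model can prioritize."""
    hard_lines = "\n".join(f"- {c}" for c in hard)
    soft_lines = "\n".join(f"- {c}" for c in soft)
    return (
        "Hard constraints (never break these):\n"
        f"{hard_lines}\n\n"
        "Preferences (follow when they do not conflict with the rules above):\n"
        f"{soft_lines}"
    )


print(constraints_block(
    hard=["No medical claims", "Mention the 30-day return policy"],
    soft=["Playful tone", "Open with a question"],
))
```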
Step 3: Specify the output shape
Tell the model how to package the answer, not just what to think about. Output shape is where prompting starts saving real editing time.
What to do
- Ask for the answer in the exact structure you want to review: bullets, table, numbered hooks, rewrite options, or JSON.
- Name the number of variations, the length, and any formatting rules before generation starts.
Why it matters
- If the output arrives in the wrong shape, you end up spending time reformatting instead of judging quality.
- A defined output shape also makes comparison easier across multiple model runs.
What good looks like
- The answer can be pasted directly into your next workflow step with minimal cleanup.
Checklist
- Format is named
- Length is named
- Number of options is named
Formatting the output is part of prompting, not something to fix later.
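When the requested shape is machine-readable, you can check it before anything enters your workflow. A minimal sketch, assuming a JSON output spec; the field names and the sample response are invented for illustration.

```python
import json

format_spec = (
    "Return exactly 3 hook options as a JSON array of objects, "
    'each with "hook" (max 15 words) and "angle" (one word).'
)


def validate_shape(raw: str) -> list[dict]:
    """Fail fast if the model ignored the requested output shape."""
    options = json.loads(raw)
    assert isinstance(options, list) and len(options) == 3
    for option in options:
        assert set(option) == {"hook", "angle"}
    return options


# Example of a response that matches the spec:
sample = (
    '[{"hook": "Cold brew in 90 seconds?", "angle": "speed"},'
    ' {"hook": "Your fridge is a coffee shop now.", "angle": "convenience"},'
    ' {"hook": "Stop burning your beans.", "angle": "quality"}]'
)
print(validate_shape(sample))
```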
Step 4: Ask for versions, not one-shot perfection
Prompting becomes more reliable when you ask for controlled options instead of hoping the first answer is the final answer.
What to do
- Ask for a few distinct versions that solve the same task from different angles.
- Keep the task stable while requesting variation in tone, hook style, or structure.
Why it matters
- Multiple controlled versions help you compare quality without rewriting the full prompt every time.
- Versioning also exposes whether the prompt is strong enough to stay on-task across variations.
What good looks like
- Different outputs still feel aligned to the same brief and constraints.
Checklist
- Task stays stable
- Variation category is explicit
- Outputs are easy to compare side by side
Good prompting gives you options with discipline, not randomness with extra text.
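A sketch of controlled versioning: the task stays fixed while one named variation axis changes per run. The task text and axes here are placeholders, not recommendations.

```python
task = (
    "Write one hook for a short video about cold brew. "
    "Audience: new coffee drinkers."
)

# One variation axis per run; everything else in the brief stays stable.
variation_axes = [
    "curiosity-driven tone",
    "bold-claim tone",
    "question-led structure",
]

prompts = [
    f"{task}\nVariation: use a {axis}. Keep everything else identical to the brief."
    for axis in variation_axes
]

for p in prompts:
    print(p, end="\n---\n")
```

Because only the variation line differs, any drift between outputs points at the brief, not at the variation request.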
Step 5: Evaluate and iterate
After the first pass, improve the prompt by diagnosing the miss. Do not rewrite everything just because one part failed.
What to do
- Review the output against the task, constraints, and output format you requested.
- Change one variable at a time, such as tone, specificity, or structure, so you know what improved the result.
Why it matters
- Prompting gets stronger when iteration is diagnostic instead of emotional.
- If you change everything at once, you never learn which instruction actually fixed the problem.
What good looks like
- Each revision makes the answer more aligned without changing the core intent of the task.
Checklist
- Mismatch identified
- One change made at a time
- Improved version saved
Iteration should teach you how the brief works, not just produce another draft.
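A simple way to keep iteration diagnostic is to log one change and one diagnosis per revision. This `Revision` log is a sketch under that assumption; the entries are invented examples.

```python
from dataclasses import dataclass


@dataclass
class Revision:
    """One row per iteration: exactly one change, plus the miss it targets."""
    change: str      # the single variable you adjusted
    diagnosis: str   # what the previous output got wrong


log = [
    Revision(change="Named the audience explicitly",
             diagnosis="v1 read as generic marketing copy"),
    Revision(change="Tightened length to 15 words per hook",
             diagnosis="v2 hooks ran long and buried the point"),
]

for version, rev in enumerate(log, start=2):
    print(f"v{version}: changed [{rev.change}] because [{rev.diagnosis}]")
```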
Common mistakes
Most prompting problems come from assuming the model understands unstated context, hidden constraints, or the preferred output shape.
What to do
- Cut filler language and replace it with actual instructions the model can follow.
- Name the audience, constraints, and format instead of hoping the model infers them.
Why it matters
- Unstated assumptions are the fastest way to get generic or off-brand output.
- Overwriting the whole prompt after every miss hides which instruction actually mattered.
Checklist
- Do not ask for 'good' without defining good
- Do not bury the task inside long context
- Do not skip the output format
- Do not rewrite everything after one bad run
Specificity is what makes prompting efficient, not length by itself.
Starter prompt scaffold
Use this whenever the task is still rough. It forces you to define the job, the constraints, and the output shape before you ask the model to solve it.
Task: Write [what you need]
Purpose: The output will be used for [where it goes]
Audience: Speak to [who it is for]
Context: Use these facts, inputs, or source notes [paste context here]
Constraints: Must include [x], avoid [y], stay within [z]
Output format: Return [number] options in [format] with [length or structure]
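Here is the same scaffold rendered as a Python template string and filled in once. The product details are invented for illustration; only the scaffold's field structure comes from this lesson.

```python
scaffold = """\
Task: Write {task}
Purpose: The output will be used for {purpose}
Audience: Speak to {audience}
Context: Use these facts, inputs, or source notes: {context}
Constraints: Must include {must}, avoid {avoid}, stay within {limit}
Output format: Return {count} options in {fmt} with {length}"""

print(scaffold.format(
    task="a hook set for a cold brew launch video",
    purpose="the opening three seconds of a paid social ad",
    audience="coffee-curious viewers who have never bought cold brew",
    context="home-brewing kit, steeps overnight, no equipment needed",
    must="the product name",
    avoid="health claims",
    limit="brand tone guidelines",
    count=3,
    fmt="a numbered list",
    length="one sentence per hook",
))
```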
What you should finish with
This topic is complete when these outputs exist and are saved for the next stage of the workflow.
- One reusable prompt scaffold you can copy into new tasks.
- One prompt rewrite that improves a weak request into a usable brief.
- One short iteration checklist for diagnosing weak outputs.
- One saved example of a prompt that already works for your workflow.
Placeholders for uploads
These are the assets we will plug in later. Keeping the slots visible now makes the workflow feel complete and shows exactly what still needs to be collected.
Before / after prompt rewrite
Upload one weak prompt and the improved version side by side.
Approved prompt library entry
Upload the first prompt template that is strong enough to reuse later.
Prompt iteration checklist
Upload the checklist used to review and improve weak outputs.
Once your prompts are structured, Models is where you decide which model should handle which type of work without wasting quality, speed, or budget.
Continue to Models