Linguist · Vuelo Labs
How to work with machines without losing your mind or your standards.
This is the third layer. The manifesto told you why — why this costs you something, why the mode distinction matters, why your brain isn't broken for struggling with it. The booklet gave you the eight things: the principles, one sentence each, with a pattern to take away. This goes into the how, in enough detail that you can build real working patterns from it, not just understand the logic in the abstract.
You don't need to have read the other two. But if you have, this goes further. The goal isn't to give you more to remember. It's to give you enough depth that the things you do remember actually stick — because they're rooted in something, not floating.
Part 1 · Think
This is the most important step and the most skipped one. Not because people are lazy. Because it doesn't feel like work. Sitting with your thoughts, jotting things down, talking to yourself — none of it produces the visible forward motion that opening a chat window and typing does. But this is the step that determines everything that comes after it.
When you skip Think and go straight to the machine, you're asking it to do two jobs at once: help you figure out what you want, and deliver what you want. That's a reasonable ask — but it's slower than it looks, because the machine can't distinguish between your half-formed idea and your actual intent. It'll execute on whatever you gave it. So if you gave it a vague mess, it'll produce a clean version of a vague mess. That's usually not what you needed.
Thinking first doesn't mean having a fully formed brief. It means knowing enough to have a useful first exchange. What is the actual problem? What would the output be used for? Who's it for? What would a bad version look like — not in quality terms, but in the wrong-direction sense? You don't need answers to all of these before you start. You need to have asked them.
The simplest version of this is a brain dump — getting the mess out of your head and somewhere visible before you try to shape it into an ask. It doesn't have to be structured. A voice memo while you walk. A paragraph of stream-of-consciousness in a notes app. The back of an envelope. The format doesn't matter. The act of externalising your thoughts does, because it separates the generating from the organising, and it gives you something concrete to work with instead of trying to think and prompt simultaneously.
Voice memo. Talk through the problem out loud like you're explaining it to a colleague. Say what you're trying to do, what you already know, what you're not sure about. Don't edit. Don't correct yourself. Three minutes of rough talking out loud almost always reveals something you didn't know you knew — a constraint you'd forgotten, a thing you actually care about, a version of the problem that's more specific than the one you started with. Transcribe it later or just use it as a reference. The act of speaking is the work.
Brain dump in the chat itself. There's a version of the Think step that happens in the machine — you write everything you know about the situation in no particular order, label it "this is a brain dump, not a brief," and ask the machine to tell you what it heard and what's missing. This is a valid approach. It puts the organising work on the machine rather than you, and it surfaces gaps immediately. The key is labelling it correctly — the machine needs to know you're not dispatching, you're clearing your head.
Rubber duck variation. Write the problem out as if you're explaining it to someone who knows nothing about it. No acronyms, no assumed context, no "you know the thing I mean." This forced clarity almost always reveals where the genuine ambiguity is. Often just writing the explanation is enough — you find the answer, or you find the right question, before you've even got to the machine.
You're ready to move from Think to Collab when you can answer: "What am I actually trying to produce, and why does it matter?" Not perfectly. Not in a clean sentence. But well enough that you can say it out loud without trailing off. That's the signal. If you still can't say what you're making and why, you haven't thought long enough.
Part 2 · Collab
Collab is the step between your messy thinking and your clean dispatch. Its job is to build the brief — not execute it. Most people skip it entirely and go from "vague idea" to "full instruction," which is why the output so often misses. A few minutes in Collab mode produces a dispatch that's far more useful. It's not a detour. It's the fastest route.
The mindset is different here. You're not telling the machine what to do. You're in a back-and-forth. You're exploring. It's allowed to be messy. The machine's job in Collab isn't to produce the thing — it's to help you get clear enough to instruct it properly when you're ready.
This is the most underrated single technique in this entire handbook. When you're not sure what you want — when you have a rough idea but can't quite articulate it — say so, and ask the machine to interview you. Not "ask me questions." One question at a time. This matters. If you ask for a list of questions, you get a list to fill out. If you ask for one question, you get a conversation. Conversations produce better briefs than questionnaires.
Example conversation fragment
You: I need to write something to my team about a project delay. I'm not sure how to frame it. Ask me one question at a time to help me figure out what I actually want to say.
Machine: How delayed are we talking — a few days, a couple of weeks, or is this indefinite?
You: About three weeks. We know the new date.
Machine: And do you know why it happened — something you could have anticipated, or genuinely external?
You: Mostly external. A supplier issue. But I should have flagged it a week earlier than I did.
Machine: That's useful context. Do you want to address that directly in the message, or keep the focus on what's happening now and next steps?
Three questions in and there's a brief. The machine now knows the delay duration, the cause, the honest complication, and what the writer wants to emphasise. A Dispatch built from this exchange will be considerably better than one built from "write me an email about a project delay."
The highest-leverage investment you can make in your day-to-day use of AI tools is a context document — a short, always-available reference that tells the machine who you are, how you communicate, and what you care about. You write it once. You paste it at the top of every new conversation. It replaces ten minutes of re-establishing context with thirty seconds of copy-paste.
What goes in it: your role and what you actually do day to day. Your industry and the audience you typically write for or speak to. Your tone preferences — formal, direct, conversational, technical. Things you always want in your outputs. Things you never want. Any standing constraints that apply to most of your work. Keep it under 200 words. If it's longer, it's become a document about you rather than a useful briefing tool.
Example context document
role: Head of Product at a B2B SaaS company, ~60 person team
audience: mostly engineers, PMs, and senior stakeholders
tone: direct, no fluff, plain language — assume technical literacy
always: give me options when you're not sure which direction I want
never: bullet-point everything — use prose unless I ask for lists
avoid: corporate buzzwords, passive voice, "it's important to note that"
format default: short paragraphs, no headers unless doc is long
Once this exists, you stop repeating yourself across sessions. The machine knows your defaults. You only need to specify what's different about this particular task.
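If you reach these tools through a script or an API rather than a chat window, the same discipline can be mechanised. The sketch below is illustrative only — `CONTEXT_DOC` and `build_prompt` are hypothetical names, not part of any tool — but it shows the shape of the habit: standing context first, task-specific context second, task last.

```python
# A minimal sketch of the context-document habit in code.
# CONTEXT_DOC and build_prompt are illustrative names, not any tool's API.

CONTEXT_DOC = """\
role: Head of Product at a B2B SaaS company, ~60 person team
tone: direct, no fluff, plain language
never: bullet-point everything -- use prose unless asked
"""

def build_prompt(task: str, extra_context: str = "") -> str:
    """Open every conversation the same way: defaults, then specifics, then the task."""
    parts = [CONTEXT_DOC.strip()]
    if extra_context:
        parts.append(extra_context.strip())
    parts.append(task.strip())
    return "\n\n".join(parts)

prompt = build_prompt("Draft a two-paragraph update on the Q3 roadmap.")
```

The point isn't the code — it's that the standing context lives in exactly one place, so changing your defaults once changes every future conversation.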
A brief worth dispatching answers four questions clearly: What is the output? Who is it for? What should it include or avoid? What does done look like? You don't need an elaborate document. You need those four things. If you can't answer all four, you haven't finished Collab yet. Keep going — one more question, one more exchange — until you can.
Part 3 · Dispatch
Dispatch is the moment you stop exploring and start executing. You've thought. You've collaborated. Now you write the cleanest version of what you want and you send it. This is the step most people spend all their time on — but if Think and Collab went well, Dispatch is actually fast. The work was in the preparation.
A good dispatch has four elements. None of them are complicated. All of them pull their weight.
Objective. What are you making? Be specific. "A follow-up email" is not specific enough. "A follow-up email to a senior stakeholder confirming next steps after a go/no-go meeting" is. The objective tells the machine what kind of output to produce, not just the category of thing.
Context. What does it need to know to do this well? Apply the load-bearing test: if removing this piece of context wouldn't change the output, cut it. You want the minimum set of information that produces the maximum quality output. Background that's interesting but not directly relevant to this task is noise. Noise degrades output.
Role. Who is it speaking as, or speaking to? This is covered in Principle 02, but it belongs in every dispatch: a one-sentence role calibration before the task. It changes the register, the depth, the assumed audience, and the standards the machine holds itself to. It is the highest-return single addition to any prompt.
Constraints. What are the hard edges? Format, length, tone, things to include, things to avoid. The constraint that matters most: "done when X." Define the finish line explicitly. Without it, the machine has no basis for stopping, and you have no basis for knowing you've arrived.
Read your dispatch before you send it. For every piece of context, ask: if I removed this, would the output be meaningfully worse? If the honest answer is no — cut it. This sounds obvious. It isn't. Most people over-brief because context feels safe, like more information equals more insurance. It doesn't. The machine weights everything you give it. Too much context means it's trying to incorporate things that don't matter, and the output reflects that.
The right amount of context is the minimum that makes the task possible at the quality you want. That's it.
Before — first instinct, unsharpened
Write me a one-pager about our new onboarding process for the team.
After — dispatched properly
role: You are a clear, practical internal communications writer.
task: Write a one-page internal doc introducing a new employee onboarding process.
context:
— audience: current employees, manager level and above
— change: moving from ad hoc onboarding to a structured 30-day programme
— tone: direct and practical, not corporate — we're a 45-person company
— key change that needs emphasis: new buddy system in week one
format: short intro (2–3 sentences), then three sections with plain-language headers
length: under 400 words
done when: someone can read it in two minutes and know exactly what's changed and why
Both prompts ask for the same thing. The second one gets a usable first draft. The first one gets something that needs three rounds of correction before it's close to right.
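The four elements of a dispatch can also be treated as a reusable structure. This is a sketch under the handbook's own framing — the `Dispatch` class and its field names are hypothetical, chosen to mirror the example above — but it makes the checklist concrete: if a field is empty, you know which part of the brief you haven't done.

```python
# Illustrative sketch: the four dispatch elements as a simple structure.
# The Dispatch class and field names are hypothetical, not tool-specific.
from dataclasses import dataclass, field

@dataclass
class Dispatch:
    role: str                 # who it speaks as, or to
    task: str                 # the objective, stated specifically
    context: list = field(default_factory=list)      # load-bearing facts only
    constraints: list = field(default_factory=list)  # format, length, tone
    done_when: str = ""       # the explicit finish line

    def render(self) -> str:
        """Lay the brief out in the same plain key: value style as the example above."""
        lines = [f"role: {self.role}", f"task: {self.task}"]
        if self.context:
            lines.append("context:")
            lines += [f"- {c}" for c in self.context]
        lines += [f"constraint: {c}" for c in self.constraints]
        if self.done_when:
            lines.append(f"done when: {self.done_when}")
        return "\n".join(lines)

d = Dispatch(
    role="clear, practical internal communications writer",
    task="one-page doc introducing the new onboarding process",
    context=["audience: managers and above", "emphasise the new buddy system"],
    constraints=["under 400 words", "plain-language headers"],
    done_when="readable in two minutes; the change and the why are obvious",
)
```

An empty `done_when` is the tell: the brief isn't finished, whatever the other fields say.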
The machine's default register is slightly formal, slightly hedged, and slightly verbose. That's the middle of the road calibrated to offend nobody. It is almost never the right register for your specific work. So specify it. Not "professional" — everyone thinks their work is professional. "Direct, no hedging, plain language, two-syllable words where possible." That kind of specific. The more precisely you describe the tone, the less editing you do after.
Format instructions are equally underused. If you want paragraphs, say so. If you want bullet points, say so. If you want a specific number of sections, name them. The machine will invent structure if you don't give it one — and its invented structure is rarely yours.
Part 4 · Iterate
Iteration is where most quality gets lost. Not because people iterate badly — but because they iterate too much. Each round takes the output one step further from the original dispatch, one step further from your actual intent, and one step closer to the averaged middle of everything you've asked for so far. The machine doesn't hold your original intent in reserve. It learns from every correction. Enough corrections and it can't find its way back to what was good about the first version.
Good iteration is targeted and specific. Not "make it better" — there's no useful information in that instruction. Not "more engaging" — engaging to whom, in what context, by what measure? Good feedback names the specific thing that isn't working and what it should do instead. "The second paragraph buries the main point — move it to the top and cut the setup." "The tone is too formal for this audience — write it like a colleague, not a consultant." Specific problem, specific fix. One thing at a time.
When you give targeted feedback, you get a targeted change. When you give vague feedback, you get a vague interpretation of a vague direction, applied to an output you already had mixed feelings about. That's how you end up at round five still not happy with it.
The drift goes like this. The first output is pretty good — maybe 85% of the way there. You ask for a tweak. The second output is 87%. You ask for another tweak. The third output is 82% — slightly worse, but in a different way. You ask it to go back to something more like the first version. The machine produces a blend of the first and third. You're now at 80% — below where you started — and you've spent four rounds getting there. The time cost is real. The quality cost is real. And you can't actually go back.
The antidote is the rule: max two rounds of iteration, then ship or start fresh. If after two targeted rounds the output still doesn't pass your done criteria, the brief wasn't right — not the iteration. Start a new conversation with what you've learned.
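The two-round rule is simple enough to express as a loop. This is a toy sketch, not a real workflow — `iterate`, `passes`, and `fix` are hypothetical stand-ins (in practice, `passes` is your done criteria and `fix` is one specific correction sent to the machine) — but it captures the shape: two targeted attempts, then a hard stop.

```python
# A toy sketch of the two-round rule. iterate, passes, and fix are
# hypothetical stand-ins: passes = your done criteria, fix = one targeted correction.
from typing import Callable, Tuple

MAX_ROUNDS = 2  # after this: ship it, or start a fresh conversation

def iterate(draft: str,
            passes: Callable[[str], bool],
            fix: Callable[[str], str]) -> Tuple[str, bool]:
    """Apply at most MAX_ROUNDS targeted fixes.

    Returns the final draft and whether it passed. A False result means
    the brief was wrong -- write a new one, don't attempt round three."""
    for _ in range(MAX_ROUNDS):
        if passes(draft):
            return draft, True
        draft = fix(draft)  # in real use: one specific correction, one thing at a time
    return draft, passes(draft)

# Toy stand-ins: "done" means five words or fewer; the fix trims.
done = lambda d: len(d.split()) <= 5
trim = lambda d: " ".join(d.split()[:5])
final, shipped = iterate("one two three four five six seven", done, trim)
```

The useful part is the return value on failure: it doesn't loop harder, it tells you to stop — which is exactly what the broken thread never does.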
The first good version reads as if a competent person wrote it for the right audience with a clear purpose. It passes your done criteria. It doesn't have an obvious flaw that would make you uncomfortable sending it. It might not be perfect — but the things you'd want to change are preferences, not problems. That's the signal. Preferences don't justify another round. Problems do, once, specifically.
Stop when it's good. Not when it's perfect.
Start fresh if you've made the same correction twice and it isn't sticking. Start fresh if the output has drifted so far from your intent that you'd have to explain the entire task again to get back. Start fresh if you realise the brief was wrong in a fundamental way — not just a detail, but the framing. In all of these cases, the existing thread is a liability. The clean brief you write for the new conversation, informed by what you just learned, will produce a better result in one round than the existing thread would in five.
The machine doesn't need breaks. You do. None of this — Think, Collab, Dispatch, Iterate — works well if you're running on empty. The best thinking happens on a walk, not at the screen. The sharpest dispatches get written after a night's sleep. Iteration decisions made when you're tired are how you end up at round six still not happy with it.
Take breaks. Eat. Sleep. Talk to people who aren't using AI to talk to you. When you're done for the day, try to actually be done — not "one more round." The machine will be there in the morning. The only thing that won't keep is your attention.
The machine is a supporting player in a much larger human conversation. Keep it that way.
Part 5 · The eight principles in practice
The booklet gave you the principles in compressed form. This section gives each of them a scenario — the moment it matters most — and what specifically to do when you're in it. No pattern blocks. Just the practice.
01
Where it matters most: high-frequency tasks. If you're using these tools many times a day, the energy saved by removing social preamble compounds over thousands of interactions. It doesn't feel like much on any given day — until you notice how much lighter the whole practice feels.
What to do specifically: before you hit send, read the first sentence of your prompt. If it contains "Hi," "Hope," "Sorry," "Just wanted to," or "I know you're probably busy" — delete it and start from the second sentence. The second sentence is almost always where the actual task begins. The first sentence almost never adds information.
The harder version of this isn't the words — it's the feeling. Some people feel rude going straight to the task. That feeling is real, but it's misplaced. The machine isn't receiving your tone. It's receiving your instructions. Saving your warmth for the people who can actually feel it is the point.
02
Where it matters most: any task where perspective is as important as accuracy. Writing, feedback, strategy, communications, analysis where you want an opinion rather than just facts. The role does something subtle: it gives the machine a reason to have a viewpoint, and a basis for telling you when something isn't working.
What to do specifically: add one sentence before your task. Not a paragraph — one sentence. "You are a no-nonsense B2B copywriter who has written for technical audiences for fifteen years." That's enough. The more specific the role, the more calibrated the output. Generic roles like "helpful assistant" are redundant — that's already the default. Specific roles like "a principal engineer reviewing a proposal" change the register immediately.
The version of this that unlocks the most value: pairing the role with an explicit permission to be direct. "You are [role]. Be honest with me — I can handle criticism." Without that, many outputs default to reassurance regardless of how good or bad the work is. The role sets the expertise; the permission removes the social hedging.
03
Where it matters most: complex workflows that you're tempted to compress into one big prompt. The instinct makes sense — why not ask for all of it at once? The problem is that you lose reviewability at every step. If the summary is slightly off, and you use it to build the action list, and you use that to draft the email — you've stacked three tasks on a flawed foundation you didn't catch because you never reviewed the first step.
What to do specifically: look at your prompt and count the tasks. If there's more than one, split them. Dispatch the first. Review it properly — not a skim, an actual read — before you use it as input for the second. Each output is a decision point. If you'd send the first thing to a colleague without reading it, you're moving too fast.
The counterintuitive thing about working sequentially is that it's usually faster overall. A five-step workflow with review at each stage catches a bad direction at step one, before it's baked into everything downstream. The single-prompt version catches it at the end, when you've already committed five outputs to a wrong premise.
04
Where it matters most: longer, more complex tasks where you have a lot of relevant background. The temptation is to include everything because "it might be relevant." Resist this. The machine has no way to know what's load-bearing and what's background noise, so it tries to incorporate everything — and the output reflects that with hedges, qualifications, and tangents you didn't ask for.
What to do specifically: before you paste context into a prompt, go through it line by line and ask the load-bearing question for each piece: would the output be meaningfully worse without this? If the answer is no, delete it. You want the minimum set that makes the task possible at the quality you need. Not comprehensive — minimum sufficient.
A related discipline: separate the context you always need (your context document) from the context that's specific to this task. The context document goes at the top of every new conversation. Task-specific context goes in the dispatch. Mixing the two produces prompts that are long and muddled. Keep them distinct.
05
Where it matters most: iterative tasks — anything you might want to refine, like writing, design thinking, analysis. Without a done definition, "better" is infinite. You can always make something better by some measure. The question is whether that improvement is worth the round, and you can't answer that without a standard to measure against.
What to do specifically: write your done criteria into the dispatch itself. Not as an afterthought — as a required element. "Done when: under 300 words, active voice throughout, no jargon, ends with a clear next step." Now both you and the machine have a finish line. You use it to evaluate the first output. If it passes, you stop. If one thing fails, you name that thing and fix only that thing. That's the whole iteration process.
The mental shift this requires: stop thinking of done as "when it feels right" and start thinking of it as a checklist you write before you start. The checklist doesn't need to be long. It needs to be specific. Four criteria are enough for most tasks. The act of writing them forces you to be clear about what you actually want — which is good thinking, not just good prompting.
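Some done criteria are judgment calls, but others are mechanical — and the mechanical ones can be checked before you even re-read the draft. A small sketch, with hypothetical names (`check_done`, and a stand-in jargon list), mirroring the example criteria above:

```python
# Hedged sketch: checking the mechanical subset of a done list.
# check_done and the banned-word list are illustrative, not a real tool.

def check_done(text: str) -> dict:
    """Return pass/fail for the criteria that can be checked without judgment."""
    words = text.split()
    banned = ("synergy", "leverage", "utilize")  # stand-in jargon list
    return {
        "under_300_words": len(words) < 300,
        "no_jargon": not any(b in text.lower() for b in banned),
        "names_next_step": "next step" in text.lower(),
    }

draft = "We ship Friday. Your next step: review the doc by Thursday."
report = check_done(draft)
```

The criteria that can't be mechanised — "active voice throughout," "right register for the audience" — still need your read. But clearing the mechanical ones first means your read is spent on judgment, not counting words.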
06
Where it matters most: creative and writing tasks, where there's no objective measure of quality and the temptation to improve indefinitely is highest. The machine will produce infinite versions. That doesn't mean infinite versions exist. It means the machine is trying to meet a moving target you haven't defined — and each version is an average of everything you've asked for so far, smoothed toward the middle.
What to do specifically: evaluate the first output against your done criteria before you decide to iterate. Not against a vague sense of "could be better" — against the list you wrote before you started. If it passes, ship it. If it doesn't, identify the specific failing and address only that. Resist the pull to improve things that aren't failing. Improvement is not free. It costs a round of iteration and moves the output further from what was originally good.
The useful framing: the first good version is not the ceiling. It's the exit point. Your done criteria were your definition of good enough — good enough for the actual purpose, in the actual context. Once you're there, you're done. Pushing past done doesn't produce better. It produces different. And different, at that point, is usually not better.
07
Where it matters most: any task that's taken more than two correction rounds without converging. This is the signal. Two corrections and still not close means the thread has accumulated enough contradictory context that the machine is trying to satisfy too many competing instructions at once. That problem compounds with each additional message. The thread is not salvageable. The brief is.
What to do specifically: close the tab. Open a new conversation. Take the best brief you can construct from what you've learned — what the task actually requires, what went wrong in the last attempt, what you'd specify differently — and dispatch fresh. This almost always produces a better first output than the fifth attempt in the broken thread would have. The learning from the failed thread is the asset. The thread itself is not.
The emotional component is real: there's a sunk cost pull to the existing conversation. You put effort into it. You don't want to throw that away. But the effort produced information — you now know more about what the brief needs to say. Take that. Leave everything else. The new conversation will be faster because of it.
08
Where it matters most: when you're stuck, when the output isn't working but you can't identify why, and when you're starting something new that you expect to do repeatedly. The machine is remarkably good at telling you what it needs to help you better — but only if you ask directly. Most people don't ask. They iterate instead, which is slower and produces worse information.
What to do specifically: when an output isn't working and you're not sure why, ask: "What information would help you give me a significantly better answer on this?" The machine will usually identify the gap — missing context, unclear objective, ambiguous constraints — faster than you'd find it by adjusting the prompt through trial and error. This one question can save a full round of iteration.
The version of this that builds long-term value: at the end of any task you'll repeat regularly, ask "what would make this type of task consistently better if I provided it from the start?" Save what it says. Incorporate it into your context document or your template for that task type. The machine is essentially telling you how to brief it better. That knowledge applies to every future version of the same kind of work. One conversation, lasting improvement.
You won't use all of this at once. You shouldn't try to. Start with the one or two things that match whatever's frustrating you right now. The principle about starting fresh if it's going wrong. The technique of one question at a time. The habit of defining done before you dispatch. Any one of these, applied consistently, will change how the work feels.
The others will make sense when they make sense — when the situation arrives and you recognise it. That's how patterns actually settle into practice. Not by memorising them. By experiencing the scenario they describe, and having something ready when you do.
The goal was never to make you better at prompting. It was to make this less exhausting and more yours. Something you do deliberately, in a way that costs you less than it did before — and leaves more of you for the things that actually need you.
The machine will wait. Go be human.