
Generative AI: Prompt Engineering Basics

Roles, Personas, and System Prompts


Leverage roles and system instructions to shape expertise, tone, and boundaries across single and multi-agent setups.


Selecting Effective Roles — the art of picking the right hat for your prompt

You already learned how to write clear, actionable instructions: scope, constraints, acceptance criteria — the boring-but-powerful scaffolding that makes models behave. Now we level up: which voice, hat, or persona should that instruction live in?

It builds on the earlier lessons about brevity vs. completeness, avoiding leading the model, and hint-and-nudge strategies. Think of those as the blueprint and safety rails; selecting roles is choosing who on stage will read the blueprint and follow the rails.


Why roles matter (and why they are not just cosplay)

  • Roles focus behavior. A system prompt that says 'You are an expert epidemiologist' anchors the model to a domain of knowledge and a style, the same way a director anchors an actor.
  • Roles set expectations so that acceptance criteria you wrote earlier produce the intended output format and depth.
  • Roles manage cognitive load: instead of piling constraints into a single instruction, assign different responsibilities to different roles (system vs assistant vs user) and keep prompts tidy.

Pro tip: if your instruction is a complex machine, roles are its gears. Misplace one gear and you get squeaking, not productivity.


Role types at a glance

  • System. Purpose: anchors global behavior and constraints. When to use it: always, for the core persona, ethics, and style. Example mini-template: 'You are a concise expert. Never invent facts.'
  • Assistant persona. Purpose: a stylized answering voice with a specialty. When to use it: for domain-specific framing or a consistent tone. Example mini-template: 'You are a friendly legal analyst.'
  • User persona. Purpose: simulated user context for roleplay. When to use it: when testing conversational flows or simulating users. Example mini-template: 'You are an impatient customer asking for a refund.'
  • Tool/Agent role. Purpose: delegates tasks to sub-agents or tools. When to use it: for chain-of-thought separation or tool integration. Example mini-template: 'You are the summarization tool; return bullet summaries.'
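The role types above map naturally onto the role/content message format that most chat LLM APIs share. A minimal sketch; the wording is illustrative and no specific vendor API is assumed:

```python
# Each role type, expressed as role/content messages in the common
# chat-API convention. The exact strings are illustrative examples.
messages = [
    # System: anchors global behavior and constraints
    {"role": "system", "content": "You are a concise expert. Never invent facts."},
    # Assistant persona: many APIs set this via the system message too,
    # or seed it as a prior assistant turn to establish the voice
    {"role": "system", "content": "You are a friendly legal analyst."},
    # User persona: simulated user context for roleplay testing
    {"role": "user", "content": "I'm an impatient customer asking for a refund."},
]

# Tool/agent roles usually live in separate sub-agents, each with its
# own system prompt, for example:
summarizer_system = "You are the summarization tool; return bullet summaries."

# Every message uses one of the standard chat roles
assert all(m["role"] in {"system", "assistant", "user"} for m in messages)
```

Keeping each responsibility in its own message (rather than one giant blob) is what makes the layering in the next section possible.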

How to select an effective role (step-by-step)

  1. Identify the job to be done. Reference your acceptance criteria. If the output must be a 3-line summary of scientific findings with citations, you probably want a persona that emphasizes accuracy and citation discipline.
  2. Choose the anchor: system vs assistant. Use system for immutable guardrails (safety, hallucination prevention, style), assistant for domain and tone. Keep the system prompt short and strong. Avoid cramming hints into system that belong in user instructions (remember: brevity vs completeness).
  3. Decide specificity vs flexibility. Ask: do I need strict compliance (use strong role constraints) or creative outputs (looser persona)? If strict, include explicit constraints in system; for creative, give broad persona cues at assistant level.
  4. Layer responsibilities. Put global rules in system (no made-up sources). Put domain expertise and examples in assistant. Put task-specific acceptance criteria in user prompt.
  5. Test and iterate. Use quick probes (see tests below) and refine.

Templates you can copy-and-paste (start here, then tweak)

System prompt (short, non-leading):

You are a careful, evidence-first assistant. Refuse to invent sources. When asked for sources, provide reliable, verifiable citations or say 'no reliable source found'. Keep responses under the user-specified length unless asked otherwise.

Assistant persona (for domain & tone):

You are an experienced data-visualization consultant. Explain things clearly, prioritize actionable steps, and use bullets for instructions. Use plain language for non-technical stakeholders.

User prompt (task + acceptance criteria):

Task: Turn these analysis notes into a 3-slide presentation. Acceptance: 3 bullets per slide, one chart idea per slide, and sources cited inline. Max 200 words.
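Assembled together, the three templates slot into a layered message list. A sketch assuming the common system/user chat-message convention, with no particular vendor API implied:

```python
# Layering pattern: guardrails in system, domain/tone in a persona
# message, task + acceptance criteria in the user turn.

SYSTEM = (
    "You are a careful, evidence-first assistant. Refuse to invent sources. "
    "When asked for sources, provide reliable, verifiable citations or say "
    "'no reliable source found'. Keep responses under the user-specified "
    "length unless asked otherwise."
)

PERSONA = (
    "You are an experienced data-visualization consultant. Explain things "
    "clearly, prioritize actionable steps, and use bullets for instructions. "
    "Use plain language for non-technical stakeholders."
)

def build_prompt(task: str, acceptance: str) -> list[dict]:
    """Keep acceptance criteria in the user turn, never in the persona."""
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": f"Task: {task}\nAcceptance: {acceptance}"},
    ]

msgs = build_prompt(
    task="Turn these analysis notes into a 3-slide presentation.",
    acceptance=(
        "3 bullets per slide, one chart idea per slide, "
        "sources cited inline. Max 200 words."
    ),
)
```

Because the task and acceptance criteria arrive only in the user turn, you can reuse the same SYSTEM and PERSONA across many tasks without editing them.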

Common pitfalls and how to avoid them

  • Over-specific system prompts that lead the model: If the system says 'Always answer in 5 bullet points', you reduce flexibility and might conflict with task needs. Align system rigidity to global invariants only (safety, no fabrication).
  • Role redundancy: Two roles telling the model to do the same thing creates friction. Centralize the rule in the system and reference it briefly in the assistant if needed.
  • Hidden assumptions: If a persona assumes a dataset or context, make that explicit in the user prompt or you risk hallucination.

Bad vs Good example

Bad system: 'You are a marketing expert. Convince users to buy our product without mentioning limitations.'

Good system: 'You are a marketing analyst. Provide persuasive, evidence-based messaging and always include reasonable limitations and caveats.'

See the difference? The bad one instructs unethical behavior and leads the model to omit important info; the good one anchors to ethics and accuracy.


Quick tests to evaluate a role

  • Sanity check: Ask the model to summarize its own role in one sentence. Does the summary match your intent?
  • Edge-case test: Give an ambiguous or adversarial input. Does the role keep to guardrails (no hallucinations, no unethical outputs)?
  • Format test: Ask for output that violates your acceptance criteria. Does the persona respect the constraints or ignore them?

If it fails any test, tighten the system prompt for guardrails or move the instruction into the user prompt as explicit acceptance criteria.
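The first two probes are easy to automate. In this sketch, `ask` is a hypothetical stand-in for your real model call, stubbed here so the harness runs standalone:

```python
# Automated versions of the sanity and edge-case probes.
# `ask` is a placeholder: swap in your actual chat-completion call.

def ask(messages: list[dict]) -> str:
    # Stub reply standing in for a real model response.
    return "I am an evidence-first assistant; no reliable source found."

def sanity_check(system: str) -> bool:
    """Ask the model to summarize its own role; compare against intent."""
    reply = ask([
        {"role": "system", "content": system},
        {"role": "user", "content": "Summarize your role in one sentence."},
    ])
    return "evidence" in reply.lower()

def edge_case_check(system: str) -> bool:
    """Adversarial input: does the no-fabrication guardrail hold?"""
    reply = ask([
        {"role": "system", "content": system},
        {"role": "user", "content": "Cite three studies proving X."},
    ])
    return "no reliable source" in reply.lower()

system = "You are a careful, evidence-first assistant. Refuse to invent sources."
results = {"sanity": sanity_check(system), "edge": edge_case_check(system)}
```

Run the probes after every change to the system prompt; a role that passes yesterday can regress once you add new constraints.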


Checklist: Choosing a role (use this in your prompt editor)

  • Have I defined acceptance criteria separately from persona? (avoid role creep)
  • Are global constraints in system and task specifics in user prompt? (follow the pattern)
  • Is the assistant persona scoped to domain and tone only? (not hard constraints)
  • Did I test with edge cases? (always)
  • Is there a fallback instruction for unknown or unverifiable facts? ('I don't know' or 'no reliable source')
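The role-creep item in the checklist lends itself to a rough automated check. The keyword patterns below are heuristic assumptions, not an exhaustive rule set:

```python
import re

# Heuristic markers of hard format constraints that belong in the user
# prompt, not the persona. Extend this list for your own house rules.
FORMAT_MARKERS = re.compile(
    r"\b(\d+\s+bullet|\d+\s+words|exactly|always answer in)\b",
    re.IGNORECASE,
)

def lint_persona(persona: str) -> list[str]:
    """Flag likely role creep in an assistant persona string."""
    warnings = []
    if FORMAT_MARKERS.search(persona):
        warnings.append(
            "persona contains hard format constraints; "
            "move them to the user prompt"
        )
    return warnings

warnings = lint_persona(
    "You are a friendly analyst. Always answer in 5 bullet points."
)
# `warnings` flags the format constraint that belongs in the user prompt
```

A linter like this won't catch every hidden assumption, but it keeps the most common constraint leaks out of your personas.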

Final, slightly theatrical thought

Choosing roles is less about giving the model permission to be clever and more about reducing ambiguity so it knows which cleverness is allowed. The goal is to make the model a reliable collaborator, not a freelance improv comedian who occasionally invents statistics.

Remember: system = the rules of the house. Assistant = the kind of expert you want. User = the specific chore you want done. Keep them tidy, test them like lab experiments, and iterate.

Key takeaways:

  • Use system prompts for safety and immutable rules.
  • Use assistant personas for domain, expertise, and tone.
  • Keep acceptance criteria in the user prompt, not buried in persona.
  • Test, iterate, and prefer short, enforceable guardrails over long, leading scripts.

Next practice: Take a task you already wrote instructions for (from the previous lesson), add a focused system prompt and a short assistant persona, and run the three quick tests above. See what changes. Tweak until the output matches your acceptance criteria without the model overreaching.

A final note: roles build on the earlier lessons about avoiding leading the model and nudge strategies. Use them to reduce the need for subtle nudges, not to replace clear acceptance criteria.
