
Generative AI and Agentic AI
Introduction to AI and its Evolution


An overview of artificial intelligence's historical context, development phases, and its significance in today's digital landscape.


Generative vs. Discriminative Models: The Artist and the Judge

Previously, we framed AI vs traditional computing as learned behavior vs hard-coded rules, and peeked at current AI trends where giant multimodal models are eating the world. Today, we are going to turn that energy into a crisp mental model: who in the lab party is making stuff up (creatively and statistically), and who is judging it like the world’s pettiest debate captain.


What Are We Comparing?

  • Discriminative models learn the boundary between classes. They answer: given input x, what is y? Formally: learn p(y|x).
  • Generative models learn how the data itself is formed. They answer: how could x have been generated? Formally: learn p(x) or p(x|y) or even p(x, y).
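A toy joint distribution makes the two definitions concrete. The numbers and the tiny "email" world below are invented for illustration; the point is that from one joint p(x, y) you can read off both the discriminative p(y|x) and the generative p(x|y):

```python
# Toy joint distribution p(x, y) over a made-up email world (illustrative numbers).
# x is a single observable feature; y is the class label.
joint = {
    ("free_money", "spam"): 0.30,
    ("free_money", "ham"):  0.10,
    ("meeting",    "spam"): 0.05,
    ("meeting",    "ham"):  0.55,
}

def p_y_given_x(y, x):
    """Discriminative view: p(y | x) = p(x, y) / p(x)."""
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)
    return joint[(x, y)] / p_x

def p_x_given_y(x, y):
    """Generative view: p(x | y) = p(x, y) / p(y)."""
    p_y = sum(p for (_, yi), p in joint.items() if yi == y)
    return joint[(x, y)] / p_y

print(p_y_given_x("spam", "free_money"))  # 0.75: judge the label
print(p_x_given_y("free_money", "spam"))  # ~0.857: describe how spam looks
```

Same table, two different questions: the judge conditions on x, the witness conditions on y.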

If AI is a courtroom:

  • The generative model is the witness who can recreate the entire scene from memory, sound effects included.
  • The discriminative model is the judge who says guilty or not, with a calibrated eyebrow.

Why this matters (especially for agentic AI): agents need to both invent and evaluate. Creation without judgment is chaos. Judgment without creation is... a very quiet afternoon.


The Bayesian Bridge (aka Why These Two Are Secretly Related)

You may have encountered this under your friendly neighborhood Bayes rule:

  • p(y|x) ∝ p(x|y) p(y)

So you can:

  • learn p(y|x) directly (discriminative), or
  • learn p(x|y) and p(y) and then infer p(y|x) (generative flavor).

This is the plot twist: both families can help classify, but generative models also give you sampling superpowers.
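The bridge fits in a few lines of code. Below is a minimal 1-D Gaussian class-conditional model (all data values and class names are invented for the demo): fit p(x|y) and the prior p(y), classify via Bayes rule, and, as the generative bonus, sample fresh x.

```python
import math
import random

# Toy training data: one feature per example, two classes (illustrative values).
data = {"cat": [4.8, 5.1, 5.3, 4.9], "dog": [7.9, 8.2, 8.0, 8.4]}

# Fit a Gaussian p(x | y) per class, plus the class prior p(y).
n_total = sum(len(xs) for xs in data.values())
params = {}
for y, xs in data.items():
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    params[y] = (mu, var, len(xs) / n_total)  # (mean, variance, prior)

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(x):
    """Infer p(y | x) ∝ p(x | y) p(y) and return the argmax class."""
    scores = {y: gaussian_pdf(x, mu, var) * prior
              for y, (mu, var, prior) in params.items()}
    return max(scores, key=scores.get)

def sample(y):
    """The sampling superpower: draw a new x from the learned p(x | y)."""
    mu, var, _ = params[y]
    return random.gauss(mu, math.sqrt(var))

print(classify(5.0))  # "cat"
print(classify(8.1))  # "dog"
```

A purely discriminative model (say, logistic regression on the same data) could classify just as well, but it would have no `sample` function to offer.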


Examples You Already Know

  • Discriminative: logistic regression, SVMs, ResNet classifiers, BERT fine-tunes, XGBoost, reward models, toxicity detectors, spam filters.
  • Generative: GPT-style LLMs, diffusion models (image/audio), VAEs, HMMs, autoregressive transformers for code and music, Naive Bayes (yes, class-conditional generative!), GANs (generator + discriminator duo).

Meme-adjacent fact: GANs literally ship with a built-in hater (the discriminator). Healthy relationships include feedback loops.


Training Objectives (Translated Out of Jargon)

  • Discriminative training: optimize the probability of correct labels given inputs.

    Given data (x, y):
      minimize  -log p_theta(y | x)   # cross-entropy loss, logistic loss, etc.

  • Generative training: fit the data distribution so samples look like the real thing.

    Unconditional generation:     minimize  -log p_theta(x)
    Class-conditional generation: minimize  -log p_theta(x | y)
    Autoencoding (VAE-ish):       reconstruct x from latent z and regularize z
    Diffusion:                    denoise x_t to x_{t-1}; match Gaussian noise schedule
    Autoregressive LLM:           minimize  -Σ_t log p_theta(x_t | x_<t)

TL;DR: discriminative models learn boundaries; generative models learn the world (or at least a convincing fanfic).
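The autoregressive objective above is easy to compute by hand. Here is a hypothetical character-level bigram model (transition probabilities invented for the demo) scoring a sequence as -Σ_t log p(x_t | x_{t-1}):

```python
import math

# Hypothetical bigram probabilities p(next_char | prev_char); values are made up.
# "<s>" is the start-of-sequence token.
bigram = {
    ("<s>", "h"): 0.9, ("<s>", "i"): 0.1,
    ("h", "i"): 0.8,   ("h", "h"): 0.2,
    ("i", "!"): 0.7,   ("i", "i"): 0.3,
}

def autoregressive_nll(seq):
    """-sum_t log p(x_t | x_<t): the LLM training loss in miniature."""
    nll, prev = 0.0, "<s>"
    for ch in seq:
        nll -= math.log(bigram[(prev, ch)])
        prev = ch
    return nll

print(autoregressive_nll("hi!"))  # lower NLL = the model finds the sequence more plausible
print(autoregressive_nll("hhi"))  # higher NLL for the less likely sequence
```

Swap the bigram table for a transformer over tokens and this is, conceptually, the GPT training loop.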


Quick Visual (ASCII Edition)

x  --->  Discriminative f(x)  --->  y (label)

z ~ prior ---> Generative g(z[, y]) ---> x' (sample that looks like data)

The Comparison Table You Screenshot for Later

| Axis           | Generative                                    | Discriminative             | Typical Use               |
| -------------- | --------------------------------------------- | -------------------------- | ------------------------- |
| What it learns | p(x), p(x given y), or p(x, y)                | p(y given x)               | —                         |
| Label needs    | Often fewer labels (can be self-supervised)   | Requires labels            | Supervision budget        |
| Output         | New data, completions, simulations            | Class, score, probability  | Creation vs classification |
| OOD behavior   | Can hallucinate but also estimate likelihood  | Often overconfident OOD    | Safety considerations     |
| Calibration    | Can be wonky; needs post-hoc tricks           | Often better calibrated    | Risk-sensitive tasks      |
| Evaluation     | Likelihood, FID, BLEU, human eval             | Accuracy, F1, AUC, NLL     | Metrics toolbox           |
| Latency        | Often heavier at inference                    | Often faster               | Real-time needs           |

Real-World Anchors

  • Email world:

    • Discriminative: spam vs not spam classifier.
    • Generative: write a polite email to your landlord that sounds legally literate but also kind.
  • Vision:

    • Discriminative: dog vs cat vs bread.
    • Generative: synthesize a photorealistic corgi loaf on a marble countertop at golden hour.
  • Speech:

    • Discriminative: speech-to-text (often CTC/attention models trained with discriminative objectives).
    • Generative: text-to-speech; music generation; voice cloning.
  • Agent stacks (tying to current trends):

    • Planner and code-writer: generative.
    • Tool chooser and safety filter: discriminative (ranking, routing, refusal checks).
    • Reward model for RLHF/RLAIF: discriminative model shaping a generator.

The present trend: foundation models are largely generative; the guardrails and scoring layers are discriminative. It is vibes + rubrics.


Why People Keep Mixing Them Up

  • Generative models can do classification by prompting: 'Given x, which label fits?' They implicitly estimate p(y|x).
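That trick can be made literal with tiny per-class language models. In this sketch (all probabilities and class names invented), each label y gets its own character model p(x|y); asking "which label fits?" just compares p(x|y)p(y), so the generator doubles as a classifier:

```python
import math

# Hypothetical per-class character models p(char | y); probabilities are made up.
class_models = {
    "spam": {"$": 0.4, "!": 0.3, "a": 0.2, "b": 0.1},
    "ham":  {"$": 0.05, "!": 0.05, "a": 0.5, "b": 0.4},
}
prior = {"spam": 0.5, "ham": 0.5}

def log_score(text, y):
    """log p(x | y) + log p(y) under an independent-character model."""
    return sum(math.log(class_models[y][ch]) for ch in text) + math.log(prior[y])

def prompt_classify(text):
    """'Given x, which label fits?' — argmax over classes, Bayes-style."""
    return max(prior, key=lambda y: log_score(text, y))

print(prompt_classify("$$!"))  # "spam": dollar signs are spam-flavored here
print(prompt_classify("ab"))   # "ham": ordinary letters score higher under ham
```

Prompting an LLM for a label is this same comparison with a vastly better p(x|y) under the hood.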