Introduction to Artificial Intelligence
An overview of AI, its significance, and foundational concepts.
Types of AI — the taxonomy party you've been invited to (bring snacks)
"AI" is one word in our syllabus, many different personalities in real life.
If you remember from the earlier lessons, we covered what AI is and how it arrived here (yes — we emotionally visited Turing, neural nets, winters, and renaissances). Now it’s time to stop treating AI like a single mystical creature and meet the cast of characters. Different AIs behave differently, have different capabilities, and deserve different safety checks, budgets, and design docs.
Two useful ways to classify AI (aka, the maps that help you stop calling everything 'AI')
We’ll look at two taxonomies that are both widely used and complementary:
- By capability — how powerful/versatile the system is (broadly: Narrow → General → Super).
- By functionality — how the system processes information and interacts with the world (reactive, memory-based, social, self-aware).
Both matter. For example, a very capable AI can still be functionally reactive, and a limited-capability AI might be excellent at social reasoning in a specific domain.
1) By capability: ANI, AGI, ASI (aka the 'size' measure of intelligence)
Artificial Narrow Intelligence (ANI) — skill-focused specialists.
- What it is: Systems trained for one task or a narrow set of tasks.
- Examples: Image classifiers, chatbots tuned for customer support, recommendation engines, self-driving lane-following modules.
- Real-world vibe: A world-class sushi chef who refuses to make anything but sushi — spectacular at one thing.
Artificial General Intelligence (AGI) — generalist problem-solver.
- What it is: Hypothetical systems that can understand, learn, and apply intelligence across domains at or above human levels.
- Examples: None in production today. Systems like GPT-4 are broad but still have clear limitations; debate persists over whether that counts as progress toward AGI.
- Real-world vibe: The polymath colleague who can code, negotiate, compose music, and fix your Wi-Fi — across contexts.
Artificial Superintelligence (ASI) — superhuman minds.
- What it is: Systems that surpass human intelligence across the board (creativity, problem-solving, social skills).
- Examples: Purely speculative; often appears in science fiction and high-stakes policy discussions.
- Real-world vibe: The AI that writes novels, computes new physics, and makes investments that make billionaires uncomfortable.
Table — quick comparison:
| Class | Scope | Today’s reality | Risk profile |
|---|---|---|---|
| ANI | Narrow tasks | Widespread (product search, image recognition) | Low-to-moderate (bias, misuse) |
| AGI | Human-level generality | Not achieved | Moderate-to-high (governance, job displacement) |
| ASI | Beyond human | Speculative | High-to-extreme (existential concerns) |
2) By functionality: How AI thinks (the mode of operation)
This taxonomy is practical: it tells you how an AI system will behave under changing conditions.
Reactive Machines
- Definition: No memory, no learning from past experiences — they react to current inputs only.
- Example: Classic chess program that evaluates board states at each move but doesn’t learn from games beyond the search algorithm.
- Analogy: A smoke detector — senses, decides, acts. No feelings.
Limited Memory
- Definition: Uses historical data (recent observations) to inform decisions; most modern ML systems fall here.
- Example: Self-driving cars that use short-term sensor history and models to predict pedestrian movement.
- Analogy: A good barista who remembers your regular order for the day and adjusts if you add oat milk.
Theory of Mind (research stage for AI)
- Definition: Systems that can model beliefs, intentions, and emotions of others.
- Example: Human-level social AI would fall here; current systems can approximate aspects but lack full theory-of-mind.
- Analogy: A negotiator who understands not just words, but hidden agendas.
Self-aware AI (hypothetical)
- Definition: Systems with self-representation, emotions, or consciousness.
- Example: Purely speculative and philosophically fraught.
- Analogy: An employee who not only works but knows they’re working and starts bargaining for vacation.
Quick practical examples so this isn’t just theory
- Siri/Alexa/Google Assistant: ANI + Limited Memory (they track context in a session but aren’t truly general).
- Autonomous vehicle stack: ANI + Limited Memory (perception and prediction modules); occasional reactive submodules.
- AlphaGo: ANI using deep reinforcement learning and tree search; not general.
- GPT-like LLMs: Broadly ANI, showing AGI-like behavior on some tasks, but still limited-memory systems with brittle reasoning.
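The examples above can be written down as data. Here's a minimal sketch that maps each system to its (capability, functionality) pair from the two taxonomies; the system names and the `classify` helper are illustrative, not a real API:

```python
# Hypothetical two-axis classification, using only the pairings
# stated in the examples above.
SYSTEMS = {
    "voice_assistant": ("ANI", "Limited Memory"),     # Siri/Alexa-style
    "autonomous_vehicle": ("ANI", "Limited Memory"),  # perception + prediction
    "classic_chess_engine": ("ANI", "Reactive"),      # evaluates current board only
}

def classify(name):
    """Return the (capability, functionality) pair for a known system."""
    return SYSTEMS.get(name, ("unknown", "unknown"))

print(classify("classic_chess_engine"))  # ('ANI', 'Reactive')
```

The point of the exercise: every deployed system gets *two* labels, one per axis, and "unknown" is an honest answer when you haven't assessed it yet.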
Two other slices you should know as a professional
- By technique: Symbolic (rule-based) vs connectionist (neural networks) vs hybrid. This tells you how explainable or brittle the system might be.
- By learning paradigm: Supervised, unsupervised, reinforcement learning — which tells you about data needs and failure modes.
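To make the supervised paradigm concrete, here's a toy sketch with made-up data: labeled examples in, a predictive rule out. A 1-nearest-neighbor classifier is about the smallest supervised learner there is, since "training" is just storing the labeled data:

```python
# Supervised learning in miniature: predict the label of the
# closest labeled example (1-nearest-neighbor on one feature).
def nearest_neighbor(train, query):
    """train: list of (feature, label) pairs; query: a feature value."""
    feature, label = min(train, key=lambda pair: abs(pair[0] - query))
    return label

# Hypothetical toy data: one numeric feature, two classes.
labeled = [(1.0, "cat"), (2.0, "cat"), (8.0, "dog"), (9.0, "dog")]
print(nearest_neighbor(labeled, 7.5))  # "dog" — closest labeled point wins
```

Unsupervised learning would get the same features without the labels and have to find the two clusters itself; reinforcement learning would get neither, only a reward signal after acting. Same data needs question, three very different answers.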
A runnable Python sketch to contrast reactive vs limited-memory (`brake` and `model` are placeholder stand-ins):

```python
# Reactive: acts on the current input only — no history, no learning.
def reactive_step(sensor_input, brake):
    if sensor_input == "obstacle":
        brake()

# Limited memory: recent observations feed a model's prediction.
def limited_memory_step(sensor_input, history, model, brake):
    history.append(sensor_input)
    prediction = model.predict(history[-10:])  # last 10 observations
    if prediction == "obstacle_ahead":
        brake()
```
Why do people get confused? (and how to avoid bamboozlement)
- Buzzwords: “AI” becomes shorthand for any automated behavior. Stop using AI as a synonym for 'software'.
- Anthropomorphism: It’s tempting to call chatbots 'smart' in human terms. They’re pattern-matchers with style, not intentionality.
- Hype vs capability: A system being 'good' at many tasks doesn’t mean it’s general or safe. Always ask: what does it fail at?
Questions to ask when assessing an AI product:
- What taxonomy does it fit into (capability + functionality)?
- What are its training data, update procedures, and memory limits?
- What human oversight, auditing, and red-teaming were done?
Closing — key takeaways (read these and feel smarter)
- Types of AI are not just semantics; they shape design, testing, and governance. Knowing the difference between ANI and AGI changes your risk model.
- Functionality matters: reactive vs memory-based systems have very different failure modes and operational needs.
- Most deployed AI today is ANI + Limited Memory — powerful, narrow, and sometimes surprising. Treat it with respect, not fear.
Final thought: Treat AI like tools in a workshop. Some are fine chisels (ANI: focused, precise), some are Swiss Army knives (broad but limited), and some are imagined super-tools (AGI/ASI). You’re not wrong to be excited about new tools — just don’t use a chainsaw for woodcarving.
Now go look at an AI system you use every day and classify it. First person to tell me it’s an ASI gets a virtual gold star and a strongly worded correction.