Capabilities and Limits of Machine Learning
Develop realistic expectations of what ML can and cannot do.
When to Prefer Rules (Yes, Rules Still Matter — a Lot)
"Just because you can throw ML at something doesn't mean you should. Sometimes a hammer is a screwdriver in disguise." — Probably me, very wise and slightly caffeinated.
You're coming fresh off the sections "What ML can do well" and "What ML cannot do yet." Great — you already know ML's superpowers and kryptonites. Now we build a map for when to stop swiping right on ML and go steady with good old-fashioned rules.
Why this matters (quick reminder): In "What Makes an AI-Driven Organization" we talked about strategy, governance, and the capability to operationalize AI. Choosing rules over ML isn't just a technical choice — it's a product + org + policy choice. Pick the wrong tool and your org pays in maintainability, compliance headaches, or outright disasters.
TL;DR (Because you love bullet lists and I do too)
- Prefer rules when you need predictable, auditable, low-data, low-latency decisions.
- Prefer ML when patterns are fuzzy, you have lots of labeled data, and approximate correctness is fine.
- Most real systems do both: ML recommends, rules guard, humans audit.
The Rules-First Checklist: Ask these questions
- Is the decision legally regulated, or does it require audit trails? If yes → rules.
- Is the logic simple, stable, and expressible as explicit conditions? If yes → rules.
- Do you lack high-quality labeled data? If yes → rules.
- Does misclassification carry a catastrophic cost (health, safety, serious financial loss)? If yes → rules or hybrid with strict guardrails.
- Do you need absolute determinism and reproducibility? If yes → rules.
If you answered yes to two or more, strongly consider rules first.
Why rules win (and I mean really win)
- Determinism & Explainability: Rules produce the same output for the same input every time and are obvious to humans. Regulators love this. So do auditors, lawyers, and nervous product managers.
- Low Data Requirements: No training set? No problem. Rules need domain expertise, not terabytes.
- Faster to Implement & Cheaper to Run: A few if/else statements beat training GPUs and complex pipelines for straightforward logic.
- Easy to Test and Version: Unit tests for rules are simple: inputs -> expected outputs. Rollbacks are trivial.
- Safety & Fail-Safe: In safety-critical systems, simple explicit constraints prevent weird ML hallucinations.
Real-world examples: tax calculations, regulatory compliance checks, access-control policies, firewall rules, simple input validation (e.g., "no negative age"), deterministic business rules (discount tiers), and many line-of-business logic tasks.
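To make the "simple input validation" case concrete, here is a minimal sketch of a deterministic rules check. The field names and thresholds are illustrative, not from any real system:

```python
def validate_applicant(record):
    """Deterministic validation: the same record always yields the same errors."""
    errors = []
    if record.get("age") is None or record["age"] < 0:
        errors.append("age must be a non-negative number")
    elif record["age"] > 150:
        errors.append("age is implausibly high")
    if not record.get("country"):
        errors.append("country is required")
    return errors  # empty list means the record passes

# Unit tests are trivial: input -> expected output.
assert validate_applicant({"age": 30, "country": "DE"}) == []
assert validate_applicant({"age": -1, "country": "DE"}) == ["age must be a non-negative number"]
```

No training data, no drift, and a rollback is just a `git revert`.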
Why ML is sometimes tempting — and what it costs
ML's allure: adaptivity, pattern-finding power, and the ability to handle messy, high-dimensional data. But it demands:
- Large, representative labeled datasets
- Continuous monitoring for drift
- Explainability tooling if used in regulated contexts
- Infrastructure for deployment, retraining, and auditing
Don't pick ML because it sounds cool. Pick it because the problem needs it.
Quick comparison (Rules vs ML)
| Criteria | Rules | Machine Learning |
|---|---|---|
| Data needed | Very little | Lots (and clean) |
| Explainability | High | Variable (low → higher with effort) |
| Determinism | Yes | No (probabilistic) |
| Cost to run | Low | Variable → can be high |
| Handling fuzzy patterns | Poor | Excellent |
| Legal/audit friendliness | Excellent | Challenging |
Hybrid plays (aka: be pragmatic and slightly mischievous)
You don't have to choose only one. Here are productive patterns:
- ML recommendations + rules guardrails: ML proposes an answer; rules check it. E.g., an ML content classifier flags items for review, but publishing cannot happen if a rules engine detects a banned keyword.
- Rules for edge cases, ML for bulk: If ML is weak in rare but critical cases, route those through rules or human review.
- Rules to validate ML outputs: Sanity-check predictions against business constraints (e.g., predicted interest rate must be within allowed bounds).
- Rules to generate synthetic labels or filters: Use expert rules to bootstrap datasets for ML.
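The "ML recommendations + rules guardrails" pattern above can be sketched in a few lines. Everything here is hypothetical: `ml_classifier` stands in for any real model, and the banned-keyword set for any real rules engine:

```python
BANNED_KEYWORDS = {"banned-term-1", "banned-term-2"}  # hypothetical rules config

def ml_classifier(text):
    # Stand-in for a real model: returns (label, confidence).
    return ("ok", 0.92)

def publish_decision(text):
    label, confidence = ml_classifier(text)
    # Rules guardrail: the model's opinion never overrides an explicit ban.
    if any(word in text.lower() for word in BANNED_KEYWORDS):
        return "blocked by rules"
    if label == "ok" and confidence >= 0.9:
        return "publish"
    return "route to human review"

assert publish_decision("hello world") == "publish"
assert publish_decision("contains banned-term-1 here") == "blocked by rules"
```

Note the ordering: the rules check runs after the model but has the final say, so a confident misclassification cannot publish banned content.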
A tiny decision function (because we love rituals)

```python
def choose_approach(problem):
    if problem.is_regulated or problem.requires_audit:
        return "rules or hybrid with strict logging"
    if not problem.has_sufficient_data:
        return "rules"
    if problem.patterns_are_fuzzy and problem.cost_of_error_acceptable:
        return "ml (with monitoring)"
    return "hybrid"
```
Ask this before building anything heavier than a scripted cron job.
Implementation & lifecycle tips (to not embarrass yourself later)
- Spec the rules like you're writing law — clear, testable, versioned.
- Automated tests: cover both happy-path and rare exceptions.
- Observability: log decisions, inputs, and rule hits so you know what changed.
- Change management: treat rule changes as feature releases with approvals and rollback plans.
- Governance: a single source of truth (rules repository) and change logs tied to why a rule exists.
- Unit-by-unit migration: if migrating to ML later, let ML shadow the rules for observation before switching.
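The shadow-migration tip can be sketched like this: the rules engine keeps making the real decision while the candidate model's prediction is only logged for comparison. The functions and the `amount` threshold are illustrative:

```python
import logging

logger = logging.getLogger("shadow")

def rules_decision(request):
    # Production rules engine (illustrative): deterministic and authoritative.
    return "approve" if request["amount"] <= 1000 else "manual_review"

def ml_shadow_decision(request):
    # Candidate model being evaluated; its output is observed, never acted on.
    return "approve" if request["amount"] <= 1200 else "manual_review"

def decide(request):
    decision = rules_decision(request)    # rules still drive the outcome
    shadow = ml_shadow_decision(request)  # model runs in shadow mode
    if shadow != decision:
        logger.info("shadow disagreement: rules=%s ml=%s request=%s",
                    decision, shadow, request)
    return decision

assert decide({"amount": 500}) == "approve"
assert decide({"amount": 1100}) == "manual_review"  # rules win despite the model
```

Once the disagreement log shows the model matching (or beating) the rules on the cases you care about, you have evidence for the switch rather than a hunch.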
Pitfalls & gotchas (because life is a lesson plan)
- Complex rule sets can become a spaghetti mess. If you find yourself with hundreds of overlapping rules, maybe extract patterns and consider ML.
- Rules are brittle to novel inputs. They can miss emergent behaviors ML might catch.
- Overconfidence in rules can create blind spots. Log everything.
Closing — Key takeaways
- Rules = clarity, ML = discovery. Pick based on need, not novelty.
- Start simple: if rules suffice, ship rules. If patterns grow messy and data accumulates, evolve to hybrid/ML.
- Match tech to org maturity: your AI-driven organization (remember that earlier lesson?) needs the governance, people, and processes before swallowing ML wholesale.
"Treat ML like sushi: amazing when fresh and expertly prepared. Rules are your reliable comfort food — predictable, nourishing, and unlikely to poison your stakeholders."
Go forth: pick tools that solve the problem, not the one that looks cooler in the demo. And if you ever feel tempted to replace a perfectly working rules engine with a neural net 'because science', ask your future self about maintainability — your future self will send a strongly worded Slack message.