
AI For Everyone

Choosing and Scoping AI Projects


Select high-impact, feasible AI projects and define success clearly.


Opportunity Discovery Methods — Finding AI Problems Worth Solving

Imagine you walk into a room full of data, stakeholders, and bad coffee. Somebody says: 'Can we do AI for this?' Your job: not to say yes immediately. Your job: to find the right yes.

This piece builds on our earlier notes about aligning AI projects to business goals and the workflows for ML and data science (yes, the nice maps and checkpoints we already love). Think of this as the field guide for the moment before you design a model: the messy, glorious hunt for opportunities.


Why discovery methods matter (and why your spreadsheet of ideas is not enough)

If aligning to business goals is the compass, and the data science workflow map is your GPS, then opportunity discovery methods are the binoculars and the detective hat. They help you:

  • See high-value problems, not just flashy technical toys
  • Find problems that have data and organizational will behind them
  • Avoid wasted months building models no one uses

So: we want signals that a problem is worth solving, and evidence that it can be solved.


A menu of practical discovery methods (what they are, when to use them, and caveats)

  1. Stakeholder interviews

    • What: Talk to product owners, ops managers, sales leads, support teams.
    • Best for: Understanding pain, constraints, and business priorities.
    • Outputs: Candidate problem statements, KPIs, success criteria.
    • Pitfall: People promise the moon; get concrete examples and measures.
  2. Process mapping & workshops

    • What: Walk through current processes step-by-step, in a room with the folks who do the work.
    • Best for: Operational efficiency opportunities (automation, routing, anomaly detection).
    • Outputs: Bottlenecks, manual handoffs, data capture gaps.
    • Pitfall: Workshops with only managers miss reality. Invite frontline workers.
  3. Data triage / quick analytics

    • What: Run lightweight queries and dashboards to measure volumes, error rates, latencies.
    • Best for: Quantifying impact and feasibility quickly.
    • Outputs: Baseline metrics, rough ROI estimates, data readiness flags.
    • Pitfall: Data without context can mislead you — pair with interviews.
  4. Ethnography / shadowing

    • What: Physically (or virtually) observe users doing their work.
    • Best for: UX-heavy domains or where tacit knowledge matters.
    • Outputs: Observed workarounds and unmet needs.
    • Pitfall: Time-consuming; do targeted short observations.
  5. Ticket / query / chat-log analysis

    • What: Mine support tickets, returns logs, or internal tickets for patterns.
    • Best for: Repetitive problems, classification automation, escalation triggers.
    • Outputs: Frequent issues, candidate prediction/automation tasks.
    • Pitfall: Noise — not all frequent problems have high business value.
  6. Jobs-to-be-Done and value-chain analysis

    • What: Frame what customers or internal users are trying to get done, and where AI could remove friction.
    • Best for: Product-market fit and customer-facing solutions.
    • Outputs: Job stories, desired outcomes.
    • Pitfall: Too high-level without data or process context.
  7. Competitor & market scan

    • What: See what rivals are doing and where the industry is headed.
    • Best for: Strategic AI features and defensive moves.
    • Outputs: Differentiation map, investment priorities.
    • Pitfall: Copying features without business fit is expensive.
  8. Rapid prototyping / experiments

    • What: Build a minimal demo or rules-based surrogate model to test assumptions.
    • Best for: Validating whether a solution would be used.
    • Outputs: Real user feedback, quick metrics.
    • Pitfall: Prototype bias — early prototypes can ossify poor ideas if you don’t iterate.
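Method 3 (data triage) can start as simply as tallying volume and handling time from an exported log. Here is a minimal stdlib sketch; the ticket categories and minutes are hypothetical illustration data, not figures from this article:

```python
from collections import defaultdict

# Hypothetical support-ticket export: (category, minutes_to_resolve).
tickets = [
    ("refund", 30), ("refund", 45), ("shipping", 10),
    ("login", 5), ("refund", 60), ("shipping", 12),
]

# Tally volume and total handling time per category: a first signal of
# where automation or classification would pay off.
totals = defaultdict(lambda: [0, 0])  # category -> [count, total_minutes]
for category, minutes in tickets:
    totals[category][0] += 1
    totals[category][1] += minutes

# Rank by total handling time, a rough proxy for the cost of each problem area.
ranked = sorted(totals.items(), key=lambda kv: kv[1][1], reverse=True)
```

Even a toy tally like this surfaces the baseline metrics (volumes, cost per category) the triage method asks for — but remember the pitfall above: pair the numbers with interviews before trusting them.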

Quick decision table: which method when

| Method            | Best fit                  | Time to insight | Typical ROI signal                  |
|-------------------|---------------------------|-----------------|-------------------------------------|
| Interviews        | Strategic alignment       | 1-2 weeks       | Stakeholder commitment + clear KPIs |
| Data triage       | Feasibility checks        | 1-3 days        | Data completeness & volume          |
| Process workshop  | Ops automation            | 1-2 weeks       | Time saved per transaction          |
| Shadowing         | UX/complex work           | 1-2 weeks       | Observed workarounds                |
| Ticket analysis   | Repetition/classification | 2-4 days        | Proportion of workload              |
| Rapid prototyping | Usage validation          | 1-4 weeks       | Adoption rate in test               |

A pragmatic discovery sequence (repeatable, aligns with workflows and checkpoints)

  1. Start with short stakeholder interviews to align to strategic goals (remember our 'align to business goals' checkpoint).
  2. Run a fast data triage to validate that data exists and identify red flags.
  3. Map the process or shadow the work to see edge cases and hidden costs.
  4. Score candidates with a simple rubric (below).
  5. Prototype the top 1-2 ideas and route them through a collaboration checkpoint with product, legal, and ops.

This dovetails with the data science workflow map: discovery -> validation -> scoped spec -> iterative development -> deployment checkpoint.


Simple scoring rubric (use this to prioritize)

Score = (Business Value * 0.4) + (Feasibility * 0.3) + (Data Readiness * 0.2) + (Strategic Alignment * 0.1)

Where each component is rated 1-5. Since the weights sum to 1, the final score also falls between 1 and 5; higher wins.

  • Business Value: potential $$ or time saved
  • Feasibility: engineering complexity, latency constraints
  • Data Readiness: volume, quality, label availability
  • Strategic Alignment: fits company priorities

Use this to force trade-offs. If a project's raw components sum to 18/20 but it requires exotic sensors, maybe deprioritize it anyway.
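The rubric above translates directly into a few lines of Python. The candidate projects and their ratings below are made-up examples for illustration:

```python
def score(business_value, feasibility, data_readiness, alignment):
    """Weighted priority score from the rubric; each component is rated 1-5."""
    for v in (business_value, feasibility, data_readiness, alignment):
        if not 1 <= v <= 5:
            raise ValueError("each component must be between 1 and 5")
    return (business_value * 0.4 + feasibility * 0.3
            + data_readiness * 0.2 + alignment * 0.1)

# Hypothetical candidates: the ratings are illustrative, not from the text.
candidates = {
    "returns triage": score(5, 4, 3, 4),          # valuable, workable data
    "exotic-sensor project": score(5, 1, 2, 3),   # valuable but barely feasible
}
best = max(candidates, key=candidates.get)
```

Running it, the feasible project outranks the moonshot even though both promise the same business value — exactly the trade-off the rubric is meant to force.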


Mini worked example: Returns prediction for an e-commerce retailer

  • Interviews reveal customer service spends huge time on returns and fraud investigation.
  • Data triage: returns logs exist, with timestamps, SKUs, and text reasons. Labeling for fraudulent returns is partial.
  • Process mapping shows manual flagging after 3 returns in 30 days.
  • Ticket analysis shows 12% of orders lead to a return; 60% of return reviews are repetitive.

Decision: high business value (reduce manual review), medium data readiness (need labels), feasible to prototype with rules + model. Prototype a rules-based triage and a simple classifier, measure reduction in manual review time in a short pilot.
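The rules-based half of that prototype is almost trivial to sketch. Here is one possible reading of the manual policy found during process mapping (flag after 3 returns in 30 days); the function name and date handling are assumptions for illustration:

```python
from datetime import datetime, timedelta

def flag_for_review(return_dates, as_of, window_days=30, threshold=3):
    """Rules-based surrogate for the manual policy: flag a customer whose
    returns within the last `window_days` reach `threshold`."""
    cutoff = as_of - timedelta(days=window_days)
    recent = [d for d in return_dates if cutoff <= d <= as_of]
    return len(recent) >= threshold

# Three returns inside the 30-day window -> flagged for manual review.
now = datetime(2026, 1, 31)
history = [datetime(2026, 1, 5), datetime(2026, 1, 18), datetime(2026, 1, 29)]
flagged = flag_for_review(history, as_of=now)
```

A baseline this cheap gives the pilot something to beat: the classifier only earns its keep if it reduces manual review time beyond what the existing rule already catches.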


Two contrasting perspectives (for nuance)

  • The hype-chasers say: 'If you have data, build a model.' Reality: that gets you a pretty model no one uses.
  • The purists say: 'Never build without perfect labels.' Reality: iterative prototypes + human-in-the-loop labeling often win.

Both extremes fail; the middle path mixes stakeholder evidence, quick data checks, and low-cost experiments.


Closing — takeaways and a tiny pep talk

  • Discovery is a practice, not a workshop. Do a little every sprint.
  • Mix qualitative and quantitative signals. Interviews + data = magic.
  • Use a simple rubric to force trade-offs. Prestige models don’t pay the rent.
  • Prototype early and invite feedback. Models must be useful and used.

'A good AI project solves a real problem you can measure, not the fanciest math you can show off.'

Go find the problem worth solving. Then build the model that earns a seat at the table.


