
CS50 - Introduction to Computer Science
Capstone: Designing, Building, and Presenting


Plan, implement, test, deploy, and present a polished final project.


Defining Success Criteria


Defining Success Criteria — CS50 Capstone

"Your app runs. The UI is cute. But did you actually solve the problem you promised to solve?"

You already picked a problem (Ideation and Problem Selection) and learned how to keep your team sane with Git and governance (Version Control and Collaboration). Now we do the part almost every team skips until demo day: deciding how you'll know you succeeded.


Why success criteria matter (and why they're your project's north star)

Success criteria are the explicit, testable statements that say when a feature, sprint, or whole project is done. They're not vague hopes like "it's fast" or "users like it" — they're measurable checkpoints that keep design, code, and presentations honest.

Why bother? Because without them:

  • Teams argue about whether something is "good enough" at 11:48 PM.
  • Demos become smoke-and-mirrors and instructors guess at intent.
  • You can't automate verification or write confident acceptance tests.

When you define success criteria early, you're turning feelings into facts — and feelings are terrible unit tests.


What counts as success criteria? Types and examples

1) Functional (what the app must do)

  • Example: Users can create an account and save 5 flashcards in under 90 seconds.
  • These are often captured as acceptance tests or user stories with clear pass/fail steps.

2) Non-functional (how the app behaves)

  • Performance: API responds <300ms for 95% of requests under typical load.
  • Reliability: App uptime >= 99% during a 2-week demo window.
  • Security & Compliance: All third-party code is compatible with MIT/Apache-2.0 license. (Yes — link back to Licensing and Compliance.)
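An uptime percentage becomes much easier to reason about once you translate it into a downtime budget. A quick sketch of that arithmetic, using the numbers from the reliability example above:

```python
# Translate an uptime target into a concrete downtime budget.
# Numbers match the example criterion above: 99% uptime over a 2-week window.

def downtime_budget_hours(window_days, uptime_target):
    """Maximum allowed downtime, in hours, for a given window and target."""
    total_hours = window_days * 24
    return total_hours * (1 - uptime_target)

budget = downtime_budget_hours(window_days=14, uptime_target=0.99)
print(f"99% uptime over 14 days allows {budget:.2f} hours of downtime")
# 14 days * 24 h * 1% = 3.36 hours
```

In other words, a single 4-hour outage during the demo window already blows the budget, which is exactly the kind of fact a criterion should make visible.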

3) UX / Accessibility

  • All interactive elements reachable with keyboard.
  • WCAG 2.1 AA for color contrast on main screens.

4) Educational or Business Outcomes (especially for CS50 projects)

  • A user who completes the tutorial improves their quiz score by 20% within 3 attempts.
  • At least 10 students can complete core flow without instructor help.

5) Presentation & Deliverables

  • Demo covers problem, solution, architecture, and live feature — in 5 minutes.
  • Repository includes README, license, contribution guide, and a runnable demo script.

How to write success criteria that work — the SMART-ish checklist

Use this quick formula to avoid fuzzy criteria:

  • Specific — what exactly? who exactly?
  • Measurable — a number, range, or explicit pass/fail steps
  • Achievable — realistic for a capstone timeline
  • Relevant — actually tied to the problem from your Ideation phase
  • Time-bound — when will you measure it?

Micro-example:

  • Bad: "The app is fast."
  • Good: "Under a sample of 500 requests during a 1-hour test, the 95th percentile latency is < 300ms."
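The "good" criterion can be checked mechanically. A minimal sketch using a nearest-rank percentile over a fabricated latency sample (a real check would read latencies from your load-test output):

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: smallest value covering pct% of the sample."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Fabricated latency sample in milliseconds; note the one slow outlier.
latencies_ms = [120, 180, 210, 250, 260, 270, 280, 290, 310, 900]

p95 = percentile(latencies_ms, 95)
print(f"p95 latency: {p95}ms -> {'PASS' if p95 < 300 else 'FAIL'}")
# The 900ms outlier lands in the top 5%, so this sample FAILs the criterion.
```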

From success criteria to Git workflows (because you already use Git)

Make success criteria first-class citizens in your repo — they should live where everyone sees them and link to code.

  • Create a SUCCESS_CRITERIA.md in the repo root with numbered items.
  • Turn each criterion into an issue with an acceptance-test checklist. That way you can track progress, assign an owner, and have PRs reference the issue.
  • Add a CI job that runs automated checks related to criteria (unit tests, lint, accessibility tests).
  • Use PR templates to require a checklist showing which criteria the PR addresses.

Example issue checklist (GitHub flavor):

## Acceptance Tests for: User Signup
- [ ] Create account with email and password
- [ ] Receive confirmation email within 60s
- [ ] Logged-in user can create one resource
- [ ] End-to-end test passes in CI
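A checklist like this is also easy to track programmatically. A small sketch that counts checked vs. unchecked boxes in an issue body (the regex assumes top-level `- [ ]` items, as in the example above):

```python
import re

def checklist_progress(markdown):
    """Return (done, total) checkbox items in a GitHub-flavored checklist."""
    boxes = re.findall(r"^- \[([ xX])\]", markdown, flags=re.MULTILINE)
    done = sum(1 for box in boxes if box.lower() == "x")
    return done, len(boxes)

issue_body = """## Acceptance Tests for: User Signup
- [x] Create account with email and password
- [ ] Receive confirmation email within 60s
- [x] Logged-in user can create one resource
- [ ] End-to-end test passes in CI
"""

done, total = checklist_progress(issue_body)
print(f"{done}/{total} acceptance tests passing")  # prints "2/4 acceptance tests passing"
```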

Example SUCCESS_CRITERIA.md (JSON-like quick snippet):

{
  "criteria": [
    {"id": 1, "description": "User can create account in < 90s", "metric": "time_to_signup_seconds <= 90"},
    {"id": 2, "description": "API p95 latency < 300ms", "metric": "p95_latency_ms < 300"}
  ]
}
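If you keep criteria machine-readable like this, a script can evaluate them against measured values. A hedged sketch — the `metric` expression format and the measurement names here are illustrative, not a standard convention:

```python
import json

# Criteria in the same shape as the snippet above.
criteria_json = """
{"criteria": [
  {"id": 1, "description": "User can create account in < 90s",
   "metric": "time_to_signup_seconds <= 90"},
  {"id": 2, "description": "API p95 latency < 300ms",
   "metric": "p95_latency_ms < 300"}
]}
"""

def check(criteria_doc, values):
    """Evaluate each 'name op threshold' metric against measured values."""
    results = {}
    for c in json.loads(criteria_doc)["criteria"]:
        name, op, threshold = c["metric"].split()
        actual, limit = values[name], float(threshold)
        results[c["id"]] = actual <= limit if op == "<=" else actual < limit
    return results

# Hypothetical measurements, e.g. collected by a load-test script.
measurements = {"time_to_signup_seconds": 75, "p95_latency_ms": 240}
print(check(criteria_json, measurements))  # prints {1: True, 2: True}
```

A check like this is a natural candidate for the CI job described above: it turns the criteria file itself into the source of truth for pass/fail.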

Who signs off? Governance + Roles

Tie success criteria to your governance model (we covered project governance earlier). Someone needs sign-off.

  • Product Owner / Team Lead: final decision on scope and priorities.
  • Devs: responsible for automated and manual verification.
  • QA / Peer reviewers: run acceptance tests and confirm results.
  • Instructor / TA: final demo acceptance (if required for grading).

Make this explicit in your governance doc: for each criterion, list the owner and verification method.


Automate what you can — tests, CI, and demo scripts

Every success criterion that can be automated should be. Examples:

  • Unit/integration tests for functional criteria.
  • Lighthouse or pa11y runs for performance/accessibility.
  • A demo script (e.g., demo/run_demo.sh) that sets up the DB, seeds test data, and launches the demo with predictable state.

CI snippet example (GitHub Actions):

name: Verify Success Criteria
on: [pull_request]
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./scripts/run_unit_tests.sh
      - run: ./scripts/check_accessibility.sh

Presentation & grading rubrics — don't leave it to vibes

For your final presentation, create a simple rubric tied to success criteria. Example categories:

  1. Problem clarity (10 pts)
  2. Demonstrated success against criteria (30 pts)
  3. Technical depth & architecture (20 pts)
  4. Code quality & reproducibility (20 pts)
  5. Reflection & next steps (20 pts)

Attach this rubric to your README and rehearse the demo to show criterion evidence quickly (screenshots, metrics, CI badges).


Mini case study: "Study Buddy" flashcard app

Success Criteria (selected):

  • Functional: Create, edit, delete flashcard — acceptance test passes in CI.
  • Performance: Main study page loads in < 400ms for 95% of requests.
  • UX: Keyboard navigation works for 100% of main controls.
  • Educational: Users who follow the spaced-repetition flow increase correct recall by 15% after 3 sessions.
  • Compliance: All libraries MIT/Apache compatible; LICENSE present.

Mapped actions:

  • Each criterion → GitHub issue + CI test + assigned owner.
  • Demo script seeds 20 cards and runs a script that simulates 3 study sessions; outputs recall-improvement metric for the demo.
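The recall-improvement metric could be computed along these lines. The session numbers are fabricated for illustration (a real demo script would read them from the app's database), and this sketch interprets "15%" as percentage points:

```python
# Toy sketch of the demo metric: recall improvement across 3 study sessions.
sessions = [
    {"correct": 10, "total": 20},  # session 1: 50% recall
    {"correct": 12, "total": 20},  # session 2: 60% recall
    {"correct": 13, "total": 20},  # session 3: 65% recall
]

def recall_improvement(sessions):
    """Improvement in percentage points from first to last session."""
    first = sessions[0]["correct"] / sessions[0]["total"] * 100
    last = sessions[-1]["correct"] / sessions[-1]["total"] * 100
    return last - first

gain = recall_improvement(sessions)
print(f"Recall improved by {gain:.0f} percentage points "
      f"({'meets' if gain >= 15 else 'misses'} the 15% criterion)")
```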

Quick checklist before demo day

  • SUCCESS_CRITERIA.md present and linked in README
  • Each criterion has an issue with acceptance steps
  • CI runs and indicates pass for automated criteria
  • Demo script reproduces success evidence
  • One team member assigned to present each criterion
  • License & compliance checks complete
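The file-presence items on this checklist are trivially automatable. A small sketch, assuming the file names used in this article:

```python
from pathlib import Path

# Top-level files this article expects in the repo; adjust for your project.
REQUIRED = ["SUCCESS_CRITERIA.md", "README.md", "LICENSE"]

def missing_files(repo_root):
    """List required top-level files missing from the repo."""
    root = Path(repo_root)
    return [name for name in REQUIRED if not (root / name).exists()]

missing = missing_files(".")
print("Missing:", ", ".join(missing) if missing else "none - ready for demo day")
```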

Key takeaways

  • Write success criteria early. They turn opinions into tests.
  • Make them measurable and owned. Tie each one to an issue, CI job, and person.
  • Include functional, non-functional, UX, and compliance items. Your project isn't shipped until all are accounted for.

"Defining success is less romantic than coding, but it's the only way your demo becomes irrefutable." — your future, calmer self.

Now go update that repo: create SUCCESS_CRITERIA.md, open issues, and make your demo honest, repeatable, and impressive.
