Capstone: Designing, Building, and Presenting
Plan, implement, test, deploy, and present a polished final project.
Defining Success Criteria
Defining Success Criteria — CS50 Capstone
"Your app runs. The UI is cute. But did you actually solve the problem you promised to solve?"
You already picked a problem (Ideation and Problem Selection) and learned how to keep your team sane with Git and governance (Version Control and Collaboration). Now we do the part almost every team skips until demo day: deciding how you'll know you succeeded.
Why success criteria matter (and why they're your project's north star)
Success criteria are the explicit, testable statements that say when a feature, sprint, or whole project is done. They're not vague hopes like "it's fast" or "users like it" — they're measurable checkpoints that keep design, code, and presentations honest.
Why bother? Because without them:
- Teams argue about whether something is "good enough" at 11:48 PM.
- Demos become smoke-and-mirrors and instructors guess at intent.
- You can't automate verification or write confident acceptance tests.
When you define success criteria early, you're turning feelings into facts — and feelings are terrible unit tests.
What counts as success criteria? Types and examples
1) Functional (what the app must do)
- Example: Users can create an account and save 5 flashcards in under 90 seconds.
- These are often captured as acceptance tests or user stories with clear pass/fail steps.
2) Non-functional (how the app behaves)
- Performance: API responds <300ms for 95% of requests under typical load.
- Reliability: App uptime >= 99% during a 2-week demo window.
- Security & Compliance: All third-party code is compatible with MIT/Apache-2.0 license. (Yes — link back to Licensing and Compliance.)
3) UX / Accessibility
- All interactive elements reachable with keyboard.
- WCAG 2.1 AA for color contrast on main screens.
4) Educational or Business Outcomes (especially for CS50 projects)
- A user who completes the tutorial improves quiz score by 20% in 3 attempts.
- At least 10 students can complete core flow without instructor help.
5) Presentation & Deliverables
- Demo covers problem, solution, architecture, and live feature — in 5 minutes.
- Repository includes README, license, contribution guide, and a runnable demo script.
How to write success criteria that work — the SMART-ish checklist
Use this quick formula to avoid fuzzy criteria:
- Specific — what exactly? who exactly?
- Measurable — a number, range, or explicit pass/fail steps
- Achievable — realistic for a capstone timeline
- Relevant — actually tied to the problem from your Ideation phase
- Time-bound — when will you measure it?
Micro-example:
- Bad: "The app is fast."
- Good: "Under a sample of 500 requests during a 1-hour test, the 95th percentile latency is < 300ms."
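A criterion written this way can be checked by a few lines of code. Here is a minimal sketch of verifying the "good" criterion above from raw latency samples, using the nearest-rank percentile method; the sample values are fabricated for illustration:

```python
import math

def p95(samples_ms):
    """95th percentile by the nearest-rank method on sorted samples."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# 500 fake samples: 25 copies each of 20 latency values (ms)
latencies = [110, 120, 130, 140, 150, 160, 170, 180, 190, 200,
             210, 220, 230, 240, 250, 260, 270, 280, 290, 310] * 25

assert p95(latencies) < 300, "criterion failed: p95 latency >= 300 ms"
print("p95 =", p95(latencies), "ms")  # p95 = 290 ms
```

The point is not the percentile math but that a pass/fail answer falls out mechanically, with no debate at 11:48 PM.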
From success criteria to Git workflows (because you already use Git)
Make success criteria first-class citizens in your repo — they should live where everyone sees them and link to code.
- Create a `SUCCESS_CRITERIA.md` in the repo root with numbered items.
- Turn each criterion into an issue with an acceptance-test checklist. That way you can track progress, assign an owner, and have PRs reference the issue.
- Add a CI job that runs automated checks related to criteria (unit tests, lint, accessibility tests).
- Use PR templates to require a checklist showing which criteria the PR addresses.
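For instance, a PR template along these lines (a sketch; GitHub reads templates from `.github/PULL_REQUEST_TEMPLATE.md`, and the criterion IDs and issue numbers here are illustrative):

```markdown
## Success criteria addressed
<!-- Check every criterion this PR moves forward; link its issue. -->
- [ ] SC-1: Signup flow (refs #12)
- [ ] SC-2: p95 latency (refs #13)

## Evidence
<!-- CI run, screenshot, or metric output showing the criterion is met -->
```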
Example issue checklist (GitHub flavor):
```markdown
## Acceptance Tests for: User Signup
- [ ] Create account with email and password
- [ ] Receive confirmation email within 60s
- [ ] Logged-in user can create one resource
- [ ] End-to-end test passes in CI
```
Example `SUCCESS_CRITERIA.md` (JSON-like quick snippet):

```json
{
  "criteria": [
    {"id": 1, "description": "User can create account in < 90s", "metric": "time_to_signup_seconds <= 90"},
    {"id": 2, "description": "API 95p latency < 300ms", "metric": "p95_latency_ms < 300"}
  ]
}
```
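One advantage of a machine-readable criteria file is that a small script can check it against measured numbers. A minimal sketch, assuming a slight variation on the snippet above where each metric is split into a measurement name and a numeric threshold, and where the measurements are fabricated demo values rather than output from a real test run:

```python
import json

# Hypothetical criteria file contents: metric split into a measurement
# name and threshold so a script can evaluate it directly.
criteria_json = """
{"criteria": [
  {"id": 1, "description": "User can create account in < 90s",
   "measurement": "time_to_signup_seconds", "threshold": 90},
  {"id": 2, "description": "API 95p latency < 300ms",
   "measurement": "p95_latency_ms", "threshold": 300}
]}
"""

# Fabricated demo measurements; a real run would collect these from logs.
measured = {"time_to_signup_seconds": 74, "p95_latency_ms": 280}

results = {}
for c in json.loads(criteria_json)["criteria"]:
    value = measured[c["measurement"]]
    results[c["id"]] = "PASS" if value < c["threshold"] else "FAIL"
    print(f'{c["id"]}: {c["description"]} -> {results[c["id"]]}')
```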
Who signs off? Governance + Roles
Tie success criteria to your governance model (we covered project governance earlier). Someone needs sign-off.
- Product Owner / Team Lead: final decision on scope and priorities.
- Devs: responsible for automated and manual verification.
- QA / Peer reviewers: run acceptance tests and confirm results.
- Instructor / TA: final demo acceptance (if required for grading).
Make this explicit in your governance doc: for each criterion, list the owner and verification method.
Automate what you can — tests, CI, and demo scripts
Every success criterion that can be automated should be. Examples:
- Unit/integration tests for functional criteria.
- Lighthouse or pa11y runs for performance/accessibility.
- A demo script (e.g., `demo/run_demo.sh`) that sets up the DB, seeds test data, and launches the demo with predictable state.
CI snippet example (GitHub Actions):

```yaml
name: Verify Success Criteria
on: [pull_request]
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./scripts/run_unit_tests.sh
      - run: ./scripts/check_accessibility.sh
```
Presentation & grading rubrics — don't leave it to vibes
For your final presentation, create a simple rubric tied to success criteria. Example categories:
- Problem clarity (10 pts)
- Demonstrated success against criteria (30 pts)
- Technical depth & architecture (20 pts)
- Code quality & reproducibility (20 pts)
- Reflection & next steps (20 pts)
Attach this rubric to your README and rehearse the demo to show criterion evidence quickly (screenshots, metrics, CI badges).
Mini case study: "Study Buddy" flashcard app
Success Criteria (selected):
- Functional: Create, edit, delete flashcard — acceptance test passes in CI.
- Performance: Main study page loads < 400ms for 95% of requests.
- UX: Keyboard navigation works for 100% of main controls.
- Educational: Users who follow the spaced-repetition flow increase correct recall by 15% after 3 sessions.
- Compliance: All libraries MIT/Apache compatible; `LICENSE` file present.
Mapped actions:
- Each criterion → GitHub issue + CI test + assigned owner.
- Demo script seeds 20 cards and runs a script that simulates 3 study sessions; outputs recall-improvement metric for the demo.
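The recall-improvement number that script reports can be computed very simply. A sketch with fabricated session results (a real demo script would read them from the simulation's logs):

```python
# Each session: cards answered correctly out of cards studied (fake data).
sessions = [
    {"correct": 10, "total": 20},  # session 1 (baseline)
    {"correct": 12, "total": 20},
    {"correct": 13, "total": 20},  # session 3
]

baseline = sessions[0]["correct"] / sessions[0]["total"]
final = sessions[-1]["correct"] / sessions[-1]["total"]
improvement = (final - baseline) / baseline * 100  # percent improvement

print(f"recall improvement: {improvement:.0f}%")  # recall improvement: 30%
assert improvement >= 15, "educational criterion not met"
```

Printing the metric at the end of the demo run gives you criterion evidence on screen, not just a claim on a slide.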
Quick checklist before demo day
- SUCCESS_CRITERIA.md present and linked in README
- Each criterion has an issue with acceptance steps
- CI runs and indicates pass for automated criteria
- Demo script reproduces success evidence
- One team member assigned to present each criterion
- License & compliance checks complete
Key takeaways
- Write success criteria early. They turn opinions into tests.
- Make them measurable and owned. Tie each one to an issue, CI job, and person.
- Include functional, non-functional, UX, and compliance items. Your project isn't shipped until all are accounted for.
"Defining success is less romantic than coding, but it's the only way your demo becomes irrefutable." — your future, calmer self.
Now go update that repo: create SUCCESS_CRITERIA.md, open issues, and make your demo honest, repeatable, and impressive.