⚠️ Specifics such as internal metrics, customer names, architecture details, and unreleased features are omitted in line with my employment agreement. What follows focuses entirely on my decision-making process, frameworks, and approach as a PM.
PM Case Study · Oncology AI

CancerAI | Building Clinical Decision Support from Zero

How I led the 0→1 product build of an AI-powered tool for oncologists, from discovery through alpha launch.

0→1 Full lifecycle owned
Alpha Live with oncologists
Safety-First Clinical eval framework
The Problem

Oncologists managing complex cancer cases must simultaneously synthesize genomic sequencing reports, treatment history, published research, and clinical guidelines, often across disconnected legacy systems. The cognitive overhead is significant. Errors in this context aren't just expensive; they're life-threatening.

The core discovery insight: oncologists didn't need more information. They needed better synthesis: AI that could surface the right signal at the right moment in their workflow, without adding steps.

My Role

I owned the product function end-to-end for CancerAI, from initial discovery through alpha launch.

Key Product Decisions

1. Safety before features

The single most important early call was establishing a formal Go/No-Go framework for alpha readiness, with explicit Hard Gates around clinical safety, hallucination rate thresholds, PHI compliance, and critical incident resolution. This was non-negotiable in a clinical setting and gave the entire team a shared, unambiguous definition of "ready."
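To make the Hard Gate idea concrete, here is a minimal sketch of what a Go/No-Go check over gates like these could look like. All gate names, fields, and thresholds below are hypothetical illustrations, not the actual framework from the case study:

```python
# Hypothetical Go/No-Go "Hard Gate" sketch. Gate names, metric keys, and
# thresholds are illustrative only, not taken from the real framework.
from dataclasses import dataclass


@dataclass
class GateResult:
    name: str
    passed: bool
    detail: str


def evaluate_hard_gates(metrics: dict) -> list[GateResult]:
    """Return pass/fail for each hard gate given a dict of release metrics."""
    return [
        GateResult(
            "hallucination_rate",
            metrics["hallucination_rate"] <= metrics["hallucination_threshold"],
            f"{metrics['hallucination_rate']:.2%} vs ceiling "
            f"{metrics['hallucination_threshold']:.2%}",
        ),
        GateResult(
            "phi_compliance",
            metrics["phi_audit_passed"],
            "PHI handling audit result",
        ),
        GateResult(
            "critical_incidents",
            metrics["open_critical_incidents"] == 0,
            f"{metrics['open_critical_incidents']} unresolved critical incidents",
        ),
    ]


def go_no_go(metrics: dict) -> bool:
    # Hard gates are binary: a single failure is a No-Go.
    # There is no weighting or averaging across gates.
    return all(gate.passed for gate in evaluate_hard_gates(metrics))
```

The design point the sketch captures is that hard gates are not scored or traded off against each other; any single failure blocks the launch, which is what made the definition of "ready" unambiguous.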

2. Oncologist workflow, not AI showcase

Early prototypes were AI-first: impressive demos that didn't map to how oncologists actually work. I reoriented the product around the clinical decision moment: what question is the oncologist trying to answer right now, and how does this tool answer it without adding friction? This shifted us from a general assistant to a context-aware decision support tool.

3. Alpha scope discipline

With a small cohort of alpha users, the temptation was to ship broadly and iterate. Instead, I enforced strict scope: fewer features, higher confidence. The goal wasn't to impress. It was to generate usable signal about what was actually driving clinical value before scaling.

"Ship narrow, learn deep." The alpha wasn't about feature breadth; it was about getting enough real clinical interactions to validate the core value hypothesis before committing to a wider rollout.

Discovery Insights

User research with oncologists surfaced several non-obvious findings.

AI Safety Framework

Clinical AI requires a different evaluation standard than consumer products, so I built an evaluation framework to match.
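One building block such a framework needs is a hallucination-rate metric computed from clinician review of model outputs. The sketch below is purely illustrative, assuming a hypothetical review format where a clinician marks each output as supported or not; it is not the actual evaluation pipeline:

```python
# Illustrative only: computing a hallucination rate from hypothetical
# clinician-reviewed outputs. The review schema is an assumption, not
# the case study's real evaluation format.

def hallucination_rate(reviews: list[dict]) -> float:
    """reviews: one clinician verdict per model output,
    e.g. {"claim_supported": bool}. Returns the fraction flagged."""
    if not reviews:
        raise ValueError("empty review set")
    flagged = sum(1 for r in reviews if not r["claim_supported"])
    return flagged / len(reviews)


# A release gate would then compare the measured rate against a
# pre-agreed ceiling rather than a moving target:
reviews = [{"claim_supported": True}] * 98 + [{"claim_supported": False}] * 2
rate = hallucination_rate(reviews)  # 2 flagged out of 100 -> 0.02
```

The key property is that the metric is grounded in expert human judgment per output, not in automated proxy scores alone, which is part of what makes the clinical bar higher than a consumer one.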

Cross-Functional Execution

The science team was the most critical cross-functional dependency, and the most complex to manage. I worked to align on shared definitions of model performance, evaluation criteria, and what "good enough for clinical use" actually meant. Getting this alignment early prevented significant rework downstream.

Engineering, data science, and clinical stakeholders each had different definitions of readiness. My job was to hold a single standard across all of them without letting any one function's definition dominate.

What I'd Do Differently

PM Takeaway

Building clinical AI is a different discipline from consumer product. The bar for safety, evidence, and trust is higher, and the timeline to earn adoption is longer. The PM's job is to hold that bar without letting it become an excuse for not shipping. Both things are true at once.