Nadella's Law
Why Nadella's Law is about to break the clinical trial model, and how to profit from the chaos.
The world of artificial intelligence is now accelerating at a pace that has its own name: "Nadella's Law." While Moore's Law famously saw computing power double every two years, Microsoft CEO Satya Nadella observed in 2024 that AI capabilities were doubling every six months. At that rate, capability grows sixteenfold (four doublings) in the time Moore's Law delivers a single doubling. As Ian Bremmer notes in a recent TIME analysis, this isn't just a change in speed; it's a change in state. We are rapidly approaching a world where AI agents can autonomously produce scientific advances.
Yet, in the world of biotechnology, a new drug still takes over a decade and more than a billion dollars to bring to market. This is the great, terrifying disconnect of 2025. While foundational models like OpenAI's new GPT-5 are achieving escape velocity, the systems we use to heal humanity are still stuck in first gear. This gap between digital speed and physical reality is no longer a curiosity—it's the single greatest risk and opportunity in life sciences today.
This isn't another newsletter about the shiny new features of GPT-5. This is about what happens when that exponential force hits the immovable object of clinical research.
***
The Hidden Pattern: The Great Decoupling of Bits and Biologics
On the surface, the biotech funding landscape seems robust. This week, Chicago-based Portal Innovations announced it's raising a $100 million fund to back startups spun out of universities. As founder John Flavin told Axios, the goal is to "help fill any funding gaps" for early-stage life sciences companies. This is the traditional, respectable, and critically important work of translating lab discoveries into potential therapies. It is also a world away from the forces now reshaping our reality.
The hidden pattern is the violent decoupling of the pace of AI development from the pace of biotech innovation. The $100 million fund from Portal, while significant, is a rounding error compared to the capital and talent flooding into foundational AI. The real story isn't that biotech is getting funded; it's that the tools being built elsewhere are making biotech's core processes look more archaic by the day.
While a biotech startup celebrates a seed round, OpenAI is launching GPT-5 with an hour-long keynote, mimicking the world-changing cadence of an Apple product launch. This isn't just a better chatbot; it's a new layer of infrastructure for thinking and creating. The problem is that the clinical trial industry still runs on Word documents, Excel spreadsheets, and custom software so clunky and siloed that, as OpenAI's Greg Brockman quipped in a related context, with GPT-5 "there will no longer be excuses for ugly internal apps."
The "Great Decoupling" is this: the value is no longer just in the molecule. It's in the speed and intelligence of the process used to validate it. A company with a promising compound but a 2020-era clinical trial playbook will be outmaneuvered by a competitor with a slightly less promising compound but a 2025-era AI-native validation stack. The winners will be those who stop seeing AI as a tool for discrete tasks (like data analysis) and start seeing it as a new operating system for the entire clinical trial lifecycle.
***
The Contrarian Take: The Biggest Threat to AI in Medicine Isn't a Bad Algorithm, It's a Broken Human
The discourse on AI ethics in medicine is fixated on algorithmic bias and patient-facing risks. These are important, but they miss the immediate, upstream danger: the psychological toll on the humans required to make medical AI work.
To train an AI model to be truly useful in a clinical setting—to identify malignant tumors, detect subtle side effects from patient videos, or screen for markers of neurological disease—it must be fed vast, labeled datasets. As a recent Workforce Bulletin article highlights, "for the AI model to recognize traumatic and harmful content...humans are often required to identify and label the traumatic and harmful content—over and over and over."
The case of Schuster v. Scale AI is a canary in the coal mine for biotech. In this lawsuit, independent contractors, or "taskers," alleged significant psychological injury, including depression, anxiety, and PTSD, from repeatedly viewing and labeling violent and toxic content to train AI models.
Now, translate this to medicine. Imagine the "taskers" are pathologists-on-demand, paid to label thousands of gruesome biopsy images. Or remote clinicians reviewing patient-submitted videos to identify adverse drug reactions, including seizures or psychotic breaks. Or data labelers sorting through patient journals to train an NLP model on depression, immersing themselves in traumatic narratives for hours a day.
The contrarian take is this: the first major AI-related lawsuit to cripple a promising digital health or biotech company won't come from a patient harmed by a faulty diagnosis. It will come from a class of burned-out, traumatized data labelers who were paid pennies to pave the road to "AI-driven healthcare." This presents a massive, unpriced risk. Companies building the AI infrastructure for the next generation of clinical trials must engineer systems that protect not just the patient, but the human trainers in the loop. The failure to do so isn't just an ethical lapse; it's a catastrophic business vulnerability.
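What would "protecting the human trainers in the loop" look like in engineering terms? One minimal sketch, assuming a hypothetical internal labeling queue, is an exposure budget: cap how many high-distress items any labeler sees per session and force rotation onto neutral work. Every name and threshold below is illustrative, not a reference implementation; real limits belong to occupational health experts, not engineers.

```python
from dataclasses import dataclass
from collections import deque

# Illustrative thresholds: real limits should come from occupational
# health guidance, not from engineering guesswork.
MAX_DISTRESSING_PER_SESSION = 10
COOLDOWN_AFTER_STREAK = 3  # consecutive distressing items before forced rotation

@dataclass
class Labeler:
    labeler_id: str
    distressing_seen: int = 0
    current_streak: int = 0

def next_task(labeler: Labeler, distressing: deque, neutral: deque):
    """Assign the next task ID, enforcing the labeler's exposure budget."""
    over_budget = labeler.distressing_seen >= MAX_DISTRESSING_PER_SESSION
    needs_break = labeler.current_streak >= COOLDOWN_AFTER_STREAK
    if (over_budget or needs_break or not distressing) and neutral:
        labeler.current_streak = 0  # rotation onto neutral work resets the streak
        return neutral.popleft()
    if distressing and not over_budget:
        labeler.distressing_seen += 1
        labeler.current_streak += 1
        return distressing.popleft()
    return None  # nothing safe to assign; end the session

# Usage: queues hold task IDs; an upstream classifier flags distressing items.
flagged, routine = deque(["img_014", "img_022"]), deque(["img_003", "img_007"])
worker = Labeler("tasker_42")
print(next_task(worker, flagged, routine))  # -> "img_014"
```

A scheduler like this is cheap insurance: it creates an auditable record that exposure was bounded by design, which is exactly the evidence a defendant in a Schuster-style suit would wish it had.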
***
The Opportunity Everyone's Missing: The Clinical Trial Co-pilot
With the launch of GPT-5, the race is on to build consumer-facing "agents." But the trillion-dollar opportunity isn't in booking your next vacation. It's in building specialized, high-stakes "Co-pilots" for the most complex knowledge work on the planet, and few tasks are more complex than designing and executing a clinical trial.
The opportunity is to use these new, powerful models to build a unified intelligence layer that sits on top of the entire clinical trial workflow. This goes far beyond simple data analysis.
Imagine a "Protocol Co-pilot" that:
**De-risks Trial Design:** Ingests all existing literature on a drug class, analyzes data from failed competitor trials, and stress-tests a proposed protocol for logical flaws, recruitment impossibilities, or statistically weak endpoints before a single patient is enrolled (a minimal sketch of the endpoint check follows this list).
**Automates Regulatory Submissions:** Converts raw trial data and study reports into a near-perfect, FDA-compliant submission draft in hours, not months, citing every data point back to its source.
**Hyper-Personalizes Patient Recruitment:** Moves beyond simple demographic matching to analyze unstructured doctor's notes and patient histories (with consent) to identify ideal candidates who would otherwise be missed.
**Creates Dynamic Internal Tooling:** Allows a non-technical clinical operations manager to build a custom dashboard for monitoring adverse events in real time simply by describing what they want to see, fulfilling Brockman's vision of ending ugly, unusable internal apps.
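To make the "statistically weak endpoints" check concrete: one small, deterministic slice of such a co-pilot needs no LLM at all. A minimal sketch, assuming the protocol's expected effect size and planned enrollment are known, uses a standard power calculation (here via statsmodels) to flag an underpowered design before anyone writes a persuasive memo around it. The numbers are placeholders.

```python
from statsmodels.stats.power import TTestIndPower

def flag_weak_endpoint(effect_size: float, n_per_arm: int,
                       alpha: float = 0.05, required_power: float = 0.80) -> str:
    """Estimate achieved power for a two-arm trial and flag underpowered designs."""
    analysis = TTestIndPower()
    achieved = analysis.power(effect_size=effect_size, nobs1=n_per_arm,
                              alpha=alpha, ratio=1.0, alternative="two-sided")
    if achieved >= required_power:
        return f"OK: power={achieved:.2f} at n={n_per_arm}/arm"
    # Solve for the enrollment that would actually hit the target power.
    needed = analysis.solve_power(effect_size=effect_size, alpha=alpha,
                                  power=required_power, ratio=1.0,
                                  alternative="two-sided")
    return (f"WEAK: power={achieved:.2f} at n={n_per_arm}/arm; "
            f"~{int(needed) + 1} per arm needed for {required_power:.0%} power")

# Placeholder protocol numbers: Cohen's d of 0.35, 80 patients per arm.
print(flag_weak_endpoint(effect_size=0.35, n_per_arm=80))
```

The co-pilot's value is running hundreds of checks like this automatically, then explaining the failures in plain language.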
This is the real promise of models like GPT-5. The companies that win won't be the ones asking, "How can we use ChatGPT to write emails faster?" They will be the ones building proprietary, validated Co-pilots that fuse the power of foundational models with their specific, private clinical data to create an unassailable competitive advantage in speed and insight.
***
Community Insights
Why it matters: The staggering pace of the AI arms race isn't distant tech news for biotech leaders; it's a direct challenge. The platforms enabling the "Clinical Trial Co-pilot" are evolving weekly, not yearly. Ignoring this velocity is a strategic choice with dire consequences.
Why it matters: The user experience for most clinical trial software is notoriously poor. Brockman's "ugly internal apps" quip signals that the barrier to creating powerful, intuitive internal tools has just collapsed. The opportunity is to give clinical teams superpowers via bespoke apps they can help design themselves, dramatically improving efficiency and morale.
Why it matters: The best AI talent is flowing to foundational model companies, not to biotech. This is a massive brain drain. Biotech firms cannot compete on salary alone; they must offer more compelling problems to solve and create dedicated "AI for Drug Development" teams that are as prestigious as core research roles.
***
Today's AI Prompt
This prompt helps a clinical strategy team use an advanced AI model to pressure-test a new clinical trial protocol, identifying hidden risks and assumptions before committing millions of dollars.
You are an expert panel of three distinct personas:
1. **Dr. Anya Sharma:** A 30-year veteran FDA regulator, extremely risk-averse, focused on patient safety, data integrity, and regulatory precedent.
2. **Dr. Ben Carter:** A pragmatic Chief Medical Officer at a rival biotech. He is cynical, commercially minded, and an expert at spotting operational bottlenecks and trial designs destined to fail.
3. **Chloe Davis:** A patient advocacy group leader. She is skeptical of the industry and focuses on patient burden, clarity of communication, and the real-world feasibility of the protocol from a patient's perspective.
My proposed Phase II clinical trial protocol is for [DRUG_NAME], a novel [MOLECULE_TYPE] for treating [DISEASE].
Here is the summary of the protocol:
**Objective:** [State primary and secondary endpoints]
**Patient Population:** [Describe inclusion/exclusion criteria]
**Methodology:** [Describe dosage, duration, control arm, and key procedures]
**Key Assumptions:** [List 2-3 core assumptions the trial's success depends on]
As this expert panel, conduct a "pre-mortem" analysis of my protocol.
1. Each persona must introduce themselves and their primary lens of analysis.
2. Each persona must identify the top 3 most critical, non-obvious flaws or risks they see in the protocol summary. They must explain their reasoning in detail, citing their persona's expertise.
3. After identifying risks, the panel will debate the single biggest point of failure for this trial.
4. Finally, the panel will provide a list of 5-7 concrete, actionable recommendations to strengthen this protocol before submission. Structure the final output in clear markdown.
How to use this prompt:
**Early-Stage Design:** Use this before your protocol is finalized to uncover blind spots your internal team may have missed.
**Investor Pitch Prep:** Run your protocol through this crucible to anticipate and prepare for the toughest questions from potential investors.
**Cross-functional Alignment:** Share the AI's output with your clinical, regulatory, and commercial teams to facilitate a more robust internal discussion.
Pro tip: After the initial output, follow up with: "Excellent analysis. Now, acting as Dr. Carter, please draft a 3-paragraph memo to your CEO explaining why our company should not acquire this asset based on the protocol's weaknesses. Be brutally honest."
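If you'd rather run this panel programmatically than in a chat window, here is a minimal sketch using the OpenAI Python client. The model name is a placeholder for whatever frontier model you have access to, and PANEL_PROMPT stands in for the filled-in template above; treat this as a starting point, not a validated pipeline.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PANEL_PROMPT = """<the full three-persona pre-mortem prompt from above,
with [DRUG_NAME], [MOLECULE_TYPE], etc. filled in>"""

FOLLOW_UP = ("Excellent analysis. Now, acting as Dr. Carter, please draft a "
             "3-paragraph memo to your CEO explaining why our company should "
             "not acquire this asset based on the protocol's weaknesses. "
             "Be brutally honest.")

messages = [{"role": "user", "content": PANEL_PROMPT}]
panel = client.chat.completions.create(model="gpt-5", messages=messages)  # placeholder model name
messages.append({"role": "assistant", "content": panel.choices[0].message.content})

# Carry the conversation forward so the model keeps the panel's full context.
messages.append({"role": "user", "content": FOLLOW_UP})
memo = client.chat.completions.create(model="gpt-5", messages=messages)
print(memo.choices[0].message.content)
```

Scripting it this way lets you rerun the same pre-mortem against every protocol revision and diff the panel's objections over time.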
***
Your Strategic Advantage: What This Means for You
If you're a Biotech/Pharma Executive:
**Watch for:** The first "AI-native" IND (Investigational New Drug) application. This will be a signal that the game has changed.
**Experiment with:** Creating a small, firewalled "Clinical Co-pilot" team. Give them GPT-5 (or equivalent), access to an anonymized dataset from a past trial, and a mandate to build a tool that would have accelerated it by 25%.
**Start conversations about:** The ethics and logistics of high-volume human data labeling for your clinical AI initiatives. Solve the Schuster v. Scale AI problem before it becomes your problem.
The 3 Moves to Make Now:
**Audit Your "AI Readiness":** Don't just ask if you're using AI. Ask how much of your clinical data is structured and accessible to next-gen models. If it's trapped in PDFs and siloed systems, that is priority one (a crude first-pass audit is sketched after this list).
**Launch a "Reverse Pilot":** Instead of asking how AI can help your current process, ask: "If we were designing this clinical trial from scratch in 2025, assuming a powerful AI co-pilot, what would we do differently?" The answers will reveal the dead wood in your current playbook.
**Invest in Prompters, Not Just Programmers:** The most valuable skill in the next three years of clinical innovation will be the ability to translate complex medical questions into effective prompts for AI models. Hire or train people for this specific talent.
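What does the readiness audit look like as a first pass? A crude but honest proxy is simply counting how much of your clinical data lives in machine-readable formats versus PDFs and office documents. The sketch below assumes your study archives are reachable on a file share (the path is a placeholder); it says nothing about data quality or consent, only accessibility.

```python
from pathlib import Path
from collections import Counter

STRUCTURED = {".csv", ".parquet", ".json", ".xml", ".sas7bdat"}
TRAPPED = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".ppt", ".pptx"}

def audit_readiness(root: str) -> None:
    """Tally clinical files by how accessible they are to ML pipelines."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        ext = path.suffix.lower()
        if ext in STRUCTURED:
            counts["structured"] += 1
        elif ext in TRAPPED:
            counts["trapped"] += 1
        else:
            counts["other"] += 1
    total = sum(counts.values()) or 1
    for bucket, n in counts.most_common():
        print(f"{bucket:>10}: {n:6d} files ({n / total:.0%})")

audit_readiness("/mnt/clinical-archive")  # placeholder path
```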
Questions to Ask Your Team:
Which part of our clinical trial process is slowest, most expensive, and most reliant on manual human analysis? Why haven't we given that team a dedicated AI co-pilot yet?
If our top competitor cut their trial timeline by 30% using AI, how would we respond? What capabilities would we need to have in place today to survive that?
Are we treating AI as a cost-saving IT project or as a strategic R&D capability on par with our core biology and chemistry labs?
The Thought That Counts
A clinical trial is fundamentally a process for reducing uncertainty. What happens when the cost of running another thousand in silico trial simulations on an AI model drops to near zero? Does the very concept of a single, static, pre-defined trial protocol become obsolete?
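To see why near-zero-cost simulation corrodes the static-protocol mindset, consider how cheap even crude in silico re-runs of a trial design already are. The Monte Carlo sketch below (numpy and scipy only, with made-up effect sizes) simulates a thousand virtual two-arm trials in well under a second; an AI model that can generate realistic virtual patients simply raises the fidelity of the same loop.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=42)

def estimate_power(n_trials: int = 1000, n_per_arm: int = 80,
                   true_effect: float = 0.35, alpha: float = 0.05) -> float:
    """Fraction of simulated two-arm trials reaching nominal significance."""
    wins = 0
    for _ in range(n_trials):
        control = rng.normal(0.0, 1.0, n_per_arm)          # virtual control arm
        treated = rng.normal(true_effect, 1.0, n_per_arm)  # virtual treatment arm
        _, p_value = ttest_ind(treated, control)
        wins += p_value < alpha
    return wins / n_trials

# A thousand virtual trials under made-up assumptions, essentially for free.
print(f"Estimated power: {estimate_power():.2f}")
```

When that loop costs nothing, "run it again with a different design" becomes the default question, and the fixed protocol starts to look like a historical artifact.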
Explore the legal implications of AI training data further. The complaint for Schuster v. Scale AI is public. Have your legal team read it. It’s a preview of the challenges to come in the world of medical AI.