Highlights from our CEO Patrick Grady’s Substack, Unvarnished

Why Does Getting New Drugs To Market Get Harder While Everything Else Gets Easier?

If you’ve spent decades in life sciences, you’ve felt this paradox daily.
Here's a first-principles analysis of what's actually happening.

The paradox that shouldn't exist

Computing power doubles every 18 months. Drug development costs double every 9 years. These diverging trends create an absurdity that anyone in pharmaceutical R&D has experienced: the most scientifically advanced era in history produces the least efficient drug discovery in modern memory.
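As a rough back-of-the-envelope sketch (assuming only that the two doubling periods quoted above hold steady), the gap compounds quickly:

```python
# Back-of-the-envelope comparison of the two doubling rates quoted above.
# Assumes compute capability doubles every 18 months and drug development
# costs double every 9 years (108 months); both figures are as quoted in
# the text, not a model of either trend.

COMPUTE_DOUBLING_MONTHS = 18
COST_DOUBLING_MONTHS = 9 * 12

def growth(months: float, doubling_period: float) -> float:
    """Multiplier after `months`, given a fixed doubling period."""
    return 2 ** (months / doubling_period)

for years in (9, 18, 27):
    months = years * 12
    compute = growth(months, COMPUTE_DOUBLING_MONTHS)
    cost = growth(months, COST_DOUBLING_MONTHS)
    print(f"{years} yrs: compute x{compute:,.0f}, cost x{cost:,.0f}, "
          f"gap x{compute / cost:,.0f}")

# Prints roughly:
#   9 yrs:  compute x64,      cost x2, gap x32
#   18 yrs: compute x4,096,   cost x4, gap x1,024
#   27 yrs: compute x262,144, cost x8, gap x32,768
```

However the exact figures are measured, the point stands: within a single career, the gap between what computation can do and what a new drug costs widens by orders of magnitude.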

The prevailing explanations for the diverging trend lines—low-hanging fruit, biological complexity, regulatory burden—don't hold up to scrutiny.

[Chart: the diverging trend lines of computing power and drug development cost. Source: ResearchGate]

The prevailing explanations are all symptoms

Biological complexity and regulatory burden are consequences, not causes. The real failure lies deeper. Six years of working alongside scientists and observing patterns across dozens of organizations point to a different picture: this isn't about biology being hard. It's about architecture evolving in the wrong direction. A root-cause analysis reveals four interconnected obstacles.


The Artisanal Colony

Science operates as a colony of isolated workshops. Each lab reinvents workflows. Each vendor builds walled gardens. Scientists spend days stitching CSVs instead of experimenting—human capital consumed with janitorial work. The result: systemic diseconomies of scale where more investment produces less innovation.
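To make "stitching CSVs" concrete, here is a minimal sketch of the glue work in question; the file names, column headings, and unit conventions are hypothetical stand-ins for two instruments' exports, not any real vendor's format:

```python
# Minimal sketch of routine CSV "stitching": two hypothetical instrument
# exports describe the same assay but disagree on column names and units.
import pandas as pd

# Hypothetical export from instrument A: columns well, conc_nM, signal.
plate_a = pd.read_csv("reader_a_export.csv")
# Hypothetical export from instrument B: columns "Well ID", "Conc (uM)", "RFU".
plate_b = pd.read_csv("reader_b_export.csv")

# Reconcile schemas and units by hand before any analysis can start.
plate_b = plate_b.rename(
    columns={"Well ID": "well", "Conc (uM)": "conc_nM", "RFU": "signal"}
)
plate_b["conc_nM"] = plate_b["conc_nM"] * 1000  # convert uM to nM

combined = pd.concat([plate_a, plate_b], ignore_index=True)
combined.to_csv("combined_assay.csv", index=False)
```

Multiply this by every instrument, every vendor schema, and every one-off naming convention, and "days stitching CSVs" stops looking like an exaggeration.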

The Myth of Quantity

Biopharma confuses data volume with value. Terabytes of heterogeneous, unstructured data become data swamps. Without structure and context, AI cannot reason; it can only approximate. The limiting factor isn't compute; it's data architecture. No amount of computational power can extract signal from noise when the data itself is fundamentally unfit for purpose.

The Insufficiency of Compute

More GPUs won't fix broken epistemology. Scientific data are heterogeneous, context-dependent, and ontologically diverse. Models cannot infer what data never encode, and no amount of processing can extract meaning from data that were never designed to carry it. The bottleneck is data semantics, not computational power.
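As a simple illustration (the record shapes below are hypothetical, not any particular system's format), the same reading can be stored as a bare value or as a record that carries the context a model would need in order to reason about it:

```python
# The same measurement, stored two ways; both shapes are hypothetical.

# A bare value encodes nothing a model could reason over.
raw = 0.42

# A contextualized record carries the "structure and context" described
# above: what was measured, how, on what sample, and in which units.
contextualized = {
    "value": 0.42,
    "unit": "absorbance_au",
    "analyte": "protein_X",                 # hypothetical analyte name
    "method": "UV-Vis absorbance, 280 nm",
    "sample_id": "S-001",
    "instrument": "spectrophotometer_07",   # hypothetical instrument ID
    "timestamp": "2024-03-14T09:30:00Z",
}
```

No amount of compute recovers the second form from the first; the semantics either travel with the data or they don't.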

The Economic Trap

Every biopharma company rebuilds the same commodity infrastructure for internal use, despite 90% commonality across organizations. Vendors defend proprietary data formats and opaque schemas. Collective learning is economically disincentivized, wasting more than $10 billion annually instead of compounding on shared innovation.

Bottom-line impact on biopharma

These aren't abstract problems. They're the daily reality you navigate:

10M+ proprietary data silos fragmenting scientific knowledge

90% of the R&D stack is common across biopharma, yet treated as bespoke

$10B+ wasted annually on duplicated infrastructure efforts

How these failures compound

These root causes don't exist in isolation. They reinforce each other, creating a system that gets worse with scale.

1. Fragmented Data Architecture

Thousands of incompatible instruments, systems, and vendors generate unstructured data in proprietary silos, a foundation fundamentally at odds with the era of AI.

2. Data Swamps

Without context and structure, vast data lakes become noise. AI cannot reason over incoherent inputs, so its promise collapses into vanity projects that never scale beyond curated pilots.

3. Human Capital Waste

Scientists become digital janitors: reconciling metadata, stitching CSVs, and wrangling file formats instead of experimenting. The industry's underlying architecture was never designed for scale, interoperability, or machine reasoning, and it cannot serve the era of AI.

4. Failed AI Promises

Vanity projects and curated pilots that never scale. No network effects. No compounding learning. The limiting factor in Scientific AI is not compute, not models, not imagination — it is data architecture and cultural inertia.

5. Diseconomies of Scale

Every additional experiment increases friction. Every merger adds entropy. Productivity falls while costs explode. Science, the very enterprise meant to industrialize knowledge, remains trapped in a pre-industrial mode of production.

More than a vicious cycle, this is a system that might as well have been designed for failure, one where every attempted solution that doesn't address the architecture makes the problem worse.

The path forward

The crisis isn't biological or financial. It's architectural—and the people who understand this best are the ones living with the consequences.

You've probably intuited this: the industry wastes billions solving the same problems because economic incentives push everyone toward proprietary solutions for commodity infrastructure. What if the 90% of the R&D stack that is common were treated as shared infrastructure instead of a competitive moat?

The answer isn't better software. It's a different architecture: standardized, AI-native infrastructure that lets organizations compound on shared innovation instead of duplicating the same integrations and analyses.

This is about liberating expertise, not replacing it, and giving scientists back the time currently consumed by digital janitorial work so they can focus on what they went into this field to do: discover.

Want to see how this plays out in detail?

Perspectives from our CEO

TetraScience CEO Patrick Grady has spent years working at the intersection of enterprise software, AI, and data infrastructure. His Substack, Unvarnished, breaks down the root causes, challenges conventional wisdom, and maps the path forward—written for the people leading the work.

Subscribe to Unvarnished for first-principles thinking from someone who's been in the same trenches before.
