Highlights from our CEO Patrick Grady’s Substack, Unvarnished
Computing power doubles every 18 months. Drug development costs double every 9 years. These opposing forces create an absurdity that anyone in pharmaceutical R&D has experienced: the most scientifically advanced era in history produces the least efficient drug discovery in modern memory.
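A back-of-the-envelope calculation makes the gap concrete. The doubling periods below are the figures quoted above; the 18-year horizon is simply an illustrative window.

```python
# Back-of-the-envelope: how far two opposing doubling curves drift apart.
# Doubling periods come from the figures quoted above; the horizon is illustrative.
COMPUTE_DOUBLING_YEARS = 1.5   # computing power doubles every 18 months
COST_DOUBLING_YEARS = 9.0      # cost to develop a new drug doubles every 9 years

def growth_factor(years: float, doubling_period: float) -> float:
    """Multiplicative growth after `years` for a quantity that doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

horizon = 18  # years
compute_gain = growth_factor(horizon, COMPUTE_DOUBLING_YEARS)  # 2**12 = 4096x more compute
cost_growth = growth_factor(horizon, COST_DOUBLING_YEARS)      # 2**2  = 4x more expensive

print(f"Over {horizon} years: ~{compute_gain:,.0f}x the computing power, "
      f"yet each new drug costs ~{cost_growth:.0f}x more to develop.")
```

Run it and the asymmetry is stark: thousands of times more compute over the same window in which each approved drug only gets more expensive.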
The prevailing explanations for the diverging trend lines—low-hanging fruit, biological complexity, regulatory burden—don't hold up to scrutiny.

Figure: The diverging trend lines of computing power and drug development costs. Source: ResearchGate
Science operates as a colony of isolated workshops. Each lab reinvents workflows. Each vendor builds walled gardens. Scientists spend days stitching CSVs instead of experimenting—human capital consumed with janitorial work. The result: systemic diseconomies of scale where more investment produces less innovation.
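If you have lived this, the "janitorial work" looks something like the sketch below: two hypothetical instrument exports for the same assay, with mismatched column names and units, reconciled by hand. Every file, column, and value here is invented for illustration.

```python
# Illustrative only: two hypothetical instrument exports describing the same assay,
# each with its own column names and units, manually reconciled into one table.
import pandas as pd

# Vendor A export: sample ids under "SampleID", concentration in ng/mL
vendor_a = pd.DataFrame({
    "SampleID": ["S-001", "S-002"],
    "Conc_ng_mL": [12.4, 8.7],
})

# Vendor B export: sample ids under "well_sample", concentration in µg/mL
vendor_b = pd.DataFrame({
    "well_sample": ["S-002", "S-003"],
    "conc_ug_mL": [0.0087, 0.0153],
})

# The "janitorial" step: rename columns, convert units, merge, dedupe.
vendor_a = vendor_a.rename(columns={"SampleID": "sample_id", "Conc_ng_mL": "concentration_ng_ml"})
vendor_b = vendor_b.rename(columns={"well_sample": "sample_id", "conc_ug_mL": "concentration_ng_ml"})
vendor_b["concentration_ng_ml"] *= 1000  # µg/mL -> ng/mL

merged = pd.concat([vendor_a, vendor_b], ignore_index=True).drop_duplicates("sample_id")
print(merged)
```

Multiply that by thousands of instruments, vendors, and file formats, and the scale of the wasted effort becomes clear.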
Biopharma confuses data volume with value. Terabytes of heterogeneous, unstructured data become data swamps. Without structure and context, AI cannot reason—it can only approximate. The limiting factor isn't compute, it's data architecture. No amount of computational power can extract signal from noise when the data itself is fundamentally unfit for purpose.
More GPUs won't fix broken epistemology. Scientific data are heterogeneous, context-dependent, and ontologically diverse. Models cannot infer what data never encode, and no amount of processing can extract meaning from data that was never designed to carry it. The bottleneck is data semantics, not computational power.
Every biopharma rebuilds the same commodity infrastructure for internal use, despite 90% commonality across organizations. Vendors defend proprietary data formats and opaque schemas. Collective learning is economically disincentivized, wasting tens of billions annually instead of compounding on shared innovation.
Thousands of incompatible instruments, systems, and vendors generate unstructured data in proprietary silos. This is fundamentally incompatible with the era of AI.

Without context and structure, vast data lakes become noise. AI cannot reason on incoherent inputs. AI’s promise collapses into vanity projects that never scale beyond curated pilots.
Scientists become digital janitors, reconciling metadata, stitching CSVs, and wrangling file formats instead of experimenting. The industry's underlying architecture was never designed for scale, interoperability, or machine reasoning, and it has no place in the era of AI.
Vanity projects and curated pilots that never scale. No network effects. No compounding learning. The limiting factor in Scientific AI is not compute, not models, not imagination — it is data architecture and cultural inertia.
Every additional experiment increases friction. Every merger adds entropy. Productivity falls while costs explode. Science, the very enterprise meant to industrialize knowledge, remains trapped in a pre-industrial mode of production.
The crisis isn't biological or financial. It's architectural—and the people who understand this best are the ones living with the consequences.
You've probably intuited this: the industry wastes billions solving the same problems because the economic incentives push everyone toward proprietary solutions for commodity infrastructure. What if 90% of the R&D stack were treated as shared infrastructure instead of competitive moats?
The answer isn't better software. It's a different architecture: standardized, AI-native infrastructure that lets organizations compound on shared innovation instead of duplicating the same integrations and analyses.
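What "standardized, AI-native" can mean in practice is data that carries its own context. The sketch below is one hypothetical illustration, not any vendor's actual format: a self-describing measurement record whose schema makes units, protocol, and instrument explicit.

```python
# Hypothetical sketch: a self-describing measurement record whose fields carry the
# context (units, protocol, instrument) that downstream models need to reason over it.
from dataclasses import dataclass, asdict
import json

@dataclass
class Measurement:
    sample_id: str
    assay: str               # ideally a controlled-vocabulary or ontology term
    analyte: str
    value: float
    unit: str                # explicit unit, not implied by a column name
    instrument_model: str
    protocol_version: str

record = Measurement(
    sample_id="S-0042",
    assay="ELISA",
    analyte="IL-6",
    value=12.4,
    unit="ng/mL",
    instrument_model="PlateReader-X",
    protocol_version="v2.1",
)

# Serialized with its context intact, the same record is legible to any organization or model.
print(json.dumps(asdict(record), indent=2))
```

The point isn't this particular schema; it's that once structure and context travel with the data, integrations stop being bespoke and learning can compound across organizations.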
This is about liberating expertise, not replacing it, and giving scientists back the time currently consumed by digital janitorial work so they can focus on what they went into the field to do: discover.

TetraScience CEO Patrick Grady has spent years working at the intersection of enterprise software, AI, and data infrastructure. His Substack, Unvarnished, breaks down the root causes, challenges conventional wisdom, and maps the path forward—written for the people leading the work.