Riffing on Machines of Loving Grace
Exploring some implications of Dario Amodei's "geniuses in a datacenter" for biology.
Introduction
Dario Amodei's essay, Machines of Loving Grace, has been living rent-free in my head since he published it two months ago. In it, he lays out a positive vision for how superhuman AI systems could accelerate biological progress. Niko McCarty also wrote a response essay, Levers for Biological Progress, and Adam Marblestone wrote a tweet thread (which is begging to be expanded into an essay). These inspired me to write up my own thoughts riffing on the implications of the sort of biological acceleration Dario describes.
Sitting down to write, I felt overwhelmed by the number of threads there were to pull on, so rather than going for comprehensiveness, I decided to “riff” on a few implications I find particularly compelling:
Molecular design is ripe for acceleration
AIs would be superhuman experiment planners
Automation could finally penetrate into early stage exploratory research
AIs will like modular therapeutics even more than I do (high bar)
AIs’ discoveries will surprise, and likely upset (some of) us
These are based on Dario’s assumptions in the post with a few of my own tacked on. The next section lays out where I agree with, expand on, and disagree with Dario’s (and Niko’s) assumptions to set the stage for the “riffs”.
Admittedly, I am a bit nervous about posting this essay because it’s highly uncertain, but I decided to anyway because I would love to hear other people’s thoughts on this topic. From experience, I know that putting my own (inevitably imperfect) ones out there is the best way to do that. Given my goal is to hear from you, don’t hesitate to comment/email me/etc. with thoughts!
Assumptions about what geniuses in data centers will and won’t be capable of
Dario describes his assumptions about future AI systems as follows:
By powerful AI, I have in mind an AI model—likely similar to today’s LLM’s in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.
The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.
Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
I’ll now add a few additional ones.
Relative to Dario, I expect the capability frontier to be more jagged, at least initially. I assume AI systems will likely be more superhuman at tasks involving coding, mathematics, and other clearly verifiable domains long before they dramatically exceed human capabilities across the board. I still expect them to have superhuman capability as biologists, but I imagine their programming and mathematical ability will exceed their biological capability on a relative basis. Similar to Dario, I also assume they'll still be constrained by Moravec's Paradox when it comes to physical interaction with the world. This means tasks that seem trivial to us, like manipulating delicate lab equipment in novel ways, will remain challenging unless they can be carried out via programmable automation. I expect their interface to the physical world will primarily be through software and specialized automation rather than general-purpose robotics.
These AIs would have some key operational advantages over humans. As Dario mentions and in line with Jacob Steinhardt's analysis of GPT-2030, they would likely operate at roughly 10x faster "clock speeds" than humans. Combined with their tirelessness, this means that “thinking” will be much cheaper for them than it is for us.
As dramatized by Richard Ngo’s Tinker and discussed by Jacob Steinhardt here, I expect future AI systems to be able to directly add new senses for data modalities with abundant data. In the same way ChatGPT, Claude, and Gemini are now natively multimodal, future systems will be able to directly perceive protein sequences, structures, molecules, etc. without requiring a tool to translate to images or text as humans require. This enhanced perception, combined with their ability to process vast amounts of scientific literature and experimental data, is part of the reason I expect these systems to have substantially better biological intuition than humans despite potentially operating through more restricted physical interfaces.
My main disagreements with Dario and Niko
Per the above, I agree with many of Dario’s assumptions. I also really like Niko’s ideas for how we can work on enabling developments. But at times, I worry that they both fall prey to a tendency to imagine these superintelligent systems simply doing what we do today, just faster and better - discovering more CRISPR-like systems, designing better delivery vectors, training machine learning models, or finding more drug targets. While they'll certainly do all of that, it feels like this portrayal both understates the transformative impact of these systems and results in a “missing mood” around what such transformation would feel like to observe.
Many of the below sections spell out the implications of this, but the main point I want to make here is that it’s easy to round off “geniuses in a datacenter” to “image of a few smart people spending a bit of time thinking”, but this is very much not that. If Dario’s assumptions are correct, we’d be talking about an entire small city of entities with unprecedented superhuman scientific understanding running at 10x human speed. Taking this seriously (and literally) causes me to diverge from both the vibe of some of Dario’s claims as well as some of Niko’s concrete claims about these systems sharing bottlenecks we face.
Regarding Dario’s vibe, while the substance of what he claims would be transformative, it sounds a bit mundane when he describes it. (I appreciate that this balances others’ tendencies in the opposite direction and demonstrates that Dario is a thoughtful, measured thinker who is capable of staying grounded.) As I discuss below, in addition to 10x more CRISPRs, I’d expect superhuman AI systems to make shocking discoveries that could upend entire paradigms. Rather than imagining being given new tools that we just go and use to cure a bunch of diseases, observing and participating in this process might feel like watching aliens occasionally drop inscrutable artifacts, tell us that things we believed to be fundamental are wrong, and describe entirely new areas of science in terms that require significant deciphering. Since we’re assuming these systems would be aligned, they could presumably help us with all of this and ease the process as much as possible, but I still imagine it would feel pretty crazy and involve disruption, strife, and even existential angst.
Regarding Niko’s claims, while I strongly support the ideas he describes and agree that certain bottlenecks (such as predicting and testing human translation) will be challenging even for superhuman AI systems, I think others will prove more surmountable by “geniuses in a datacenter” than he seems to think. It may not extend as far as feats such as the one described here, but I would expect superhuman systems with especially strong programming skills, mathematical capabilities, and knowledge of the literature to get surprisingly further than we naively expect using computational capabilities and existing knowledge/data. The first riff section describes the sorts of accomplishments and progress I’d imagine in the molecular domain, and the section after that discusses in more detail how I’d expect AI systems to get more out of planning than we do. These hopefully give a sense of where and why I see current bottlenecks as surmountable by hypothetical AI systems.
Zooming back out, in the same way it’s easy to imagine breakthrough science as people wearing white lab coats mechanically puttering away in the lab following the high school version of the scientific method vs. people making radical leaps based on quasi-mystical intuitions, it’s easy to fall into a similar trap when imagining these future systems. This section is my attempt to remind myself and others to try and avoid this. Instead, even in the extremely beneficial scenario we all hope for, I expect working with these systems would involve profound moments of shock, awe, and paradigm-shattering insight of the same magnitude as past shifts.
Riffs
Reminder that, beyond being conditional on the above assumptions, all of these opinions should be taken as speculative and my best guesses today rather than highly confident claims that I know will come to pass.
Molecular design is ripe for acceleration
Of the many areas of biology, molecular design feels especially ripe for acceleration. This may seem surprising because this field is already advancing rapidly. In just the past few years, we’ve gotten AlphaFold (1-3), lots of exciting tools and results for de novo protein design, and many more modeling results than I could cover here, all showing the promise of AI for molecular prediction and design.
But if you zoom out from the areas where we have gained ground, primarily structure prediction, binding prediction, and other “simple” functions, the gap between what’s possible and what we can reliably do today becomes clear. Evolution, an imperfect but extremely persistent molecular engineer, has produced marvels of molecular complexity and precision which are still well beyond our capabilities: molecular motors like those employed by ATP synthase and bacterial flagella, copy machines like DNA polymerase, massive viral protein complexes like the giant mimivirus, self-assembling targeted cellular syringes, molecular filters and measurement devices, and many more. While we have achieved limited de novo construction of some of these (pores, nanocages, simple rotors), we are clearly extremely far from evolution’s ceiling, let alone physics’.
This gap between what evolution has produced and our current capabilities shows that there’s enormous headroom for improvement. On top of this long runway, I believe molecular engineering has particularly high “returns to intelligence”. Here's why:
We see high returns to intelligence and compute already! AlphaFold, RFDiffusion, AlphaProteo, ESM, and other models that are sure to come all show what’s possible today with enough compute, ML and engineering capabilities, and effort. Part of this was enabled by a “data overhang” from the PDB and sequence databases, but there’s reason to suspect superhuman coding and focused effort can help unblock the data bottleneck as well. This could happen through improvements to MD (discussed below) as well as a range of other ideas. On top of that, while gathering structure and molecular function data is by no means cheap, relative to the rest of bio, it’s more elastic to money and powerful automation than areas with regulatory, ethical, and temporal (e.g., it takes time to see if a mutation affects an adult organism in a predicted way) barriers. For example, there is no regulatory barrier to spending 10x more on cryo-EM to generate 10x the number of resolved structures, potentially targeted at specific types of data. It would be expensive, but it’s different from testing more in humans, where there are safety and ethical risks that we can’t just spend $ to remove.
Superhuman programming and mathematical skill unlock transformative improvements to molecular dynamics. Rewriting the entire MD stack to be GPU-accelerated while spending 10x the thought-years on optimizations and approximations is fully in scope as the sort of thing superhuman AI systems would make possible. Even a concerted effort from a relatively small team at MSR, obviously building upon the shoulders of giants, gave us this. If dramatic improvements could enable MD to scale to the relevant space and time scales, then at minimum, that could expand the size and diversity of available data dramatically, adding important dynamical information, and might substantially improve the generalization of models trained on said data by helping them learn physics.
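As a concrete (toy) anchor for what “the MD stack” actually spends its time on, here is a minimal pure-Python sketch of the velocity-Verlet integrator that sits in the inner loop of essentially every MD engine. The single 1-D harmonic “bond” and all constants are illustrative placeholders, not a real force field:

```python
import math

def velocity_verlet(x, v, force, mass, dt, steps):
    """Toy velocity-Verlet integrator: the inner loop MD engines spend
    nearly all their time in (here one particle, one dimension)."""
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / mass) * dt * dt   # position update
        f_new = force(x)                            # recompute forces
        v += 0.5 * (f + f_new) / mass * dt          # velocity update
        f = f_new
    return x, v

# Harmonic "bond" with spring constant k = 1: the analytic period is 2*pi.
k, m, dt = 1.0, 1.0, 0.01
x, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, dt, int(2 * math.pi / dt))
# After one full period the particle should return near (x=1, v=0), and the
# total energy 0.5*m*v**2 + 0.5*k*x**2 should stay near its initial 0.5.
energy = 0.5 * m * v**2 + 0.5 * k * x**2
```

The engineering problem is making billions of such steps, over millions of atoms with realistic force fields, fast enough to reach biologically relevant time scales, which is exactly where tireless superhuman optimization effort could compound.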
AIs will be able to add senses for perceiving and reasoning about molecular systems. Outside of smell, which isn’t very useful for molecular engineering, humans can’t directly sense the nanoscopic world, which limits our reasoning about molecules to being mediated by a visual interface like PyMOL or to relying on logic. Future AIs won’t face this same limitation. Per the assumptions section, I expect future AI systems to be able to integrate “new senses” for the molecular world that make perceiving it analogous to how we perceive through sight, touch, etc.
To understand why this matters, consider how human mechanical intuition operates. While we're not as precise as our measuring tools, we develop reliable intuitions about the physical world through experience. We can often predict whether an object will be too heavy or whether a nail will fit in a hole without explicit calculation. This ability has proven especially helpful in our species’ history for prototyping and exploring potential designs. Similarly, future AIs could develop integrated world models that give them an intuitive understanding of nanoscopic phenomena like molecular binding. Rather than just processing abstract representations, they could develop something closer to a direct "sense" of molecular behavior and use it to think much more effectively about molecular design.

Superhuman programming will enable lots of other improvements to the molecular engineering workflow. Above, I talked about how superhuman programming and mathematical skill could potentially supercharge progress in molecular dynamics, but the impact would go far beyond that. Conservatively, superhuman AI programmers could compress decades of progress on developing bespoke tools for computer-aided molecular design into a single year, leaving them with powerful tools perfectly adapted to their uneven capabilities and integrated with future molecular models as well as MD frameworks.
Even without postulating self-replicating nanotech or the ability to perfectly predict interactions in vivo, accelerated progress in molecular design could provide us a much more powerful therapeutic interventional toolkit. Here are just a few examples of what might become possible:
Next-gen delivery systems that are perfectly immune-cloaked, can be targeted to the organ/tissue/cell type with adjustable precision, can accommodate hundreds or even thousands of kilobases of DNA/RNA and/or proteins, and work with inducible payloads that can be tuned to specific expression levels.
DNA, RNA, and epigenetic editors capable of performing gene-sized edits but with off-target rates well below the natural, error-corrected error rate of DNA polymerase, and which disassemble upon successful editing.
CAR-T cell therapies with extremely precise, combinatorial regulatory logic based on designed receptors.
Synthetic receptors designed to react to precise signals.
Protein switches which could be attached to the above or other molecules for further programmability.
Measurement methods would also benefit from superhuman molecular design capabilities. Again, a glimpse of what could be possible includes:
Biocompatible, safe molecular recording devices for non-destructively capturing cellular events.
Custom nanopores for detecting a huge range of metabolites and molecules.
Much better DNA sequencers (and synthesizers) using designed proteins.
More speculatively, I suspect superhuman AI systems would invent entire new classes of molecules optimized for engineerability and predictability. We can already see hints of this direction in current protein design work, where researchers have successfully created modular molecular structures like coiled coil peptides, protein rotary motors, beta-solenoids, and nanocages. These structures share key properties that make them amenable to rational design: they're highly symmetric, composed of repeating units, and their behavior can be predicted from first principles. Nature itself demonstrates how such engineerable components can be combined with programmable control mechanisms - consider how cells use phosphorylation cascades to implement complex logical operations. AI systems would likely push this engineering-oriented approach much further, developing entire libraries of predictable molecular building blocks that could be assembled into increasingly sophisticated machines.
AIs would be superhuman experiment planners
For AI systems operating at >10x human speeds, every hour spent thinking would cost them far less than it costs us. But, as Niko discusses in his essay, experimental time would remain constant - cells would still take days to grow, mouse experiments could still take weeks, and clinical trials would, at least initially, still take years to complete. This fundamental asymmetry between thought-time and experimental time would drive these systems to approach experimental planning very differently than we do.
While we already try to carefully plan our experiments, the depth of our planning is limited by our thinking speed, working memory capacity, and programming capabilities. AI systems would face none of these constraints. They could spend thousands of subjective hours analyzing the literature, looking for subtle patterns and inconsistencies across papers that might inform experimental design, drilling down into methods sections and individual figures, and scrutinizing every single decision.
Their superhuman programming capabilities would also transform experimental planning. They could write sophisticated simulation code to predict experimental outcomes, build custom analysis pipelines to extract maximum signal from their data, and develop new statistical frameworks for experimental design. Need to model how different variables might interact? They'd build it. Want to extract more information from each experimental condition? They'd find a way.
The result would be experimental plans that seem almost impossibly thorough by our standards - carefully controlling for every known confounder, incorporating multiple parallel readouts, and structured to extract maximum information from each precious hour of experimental time. While this level of planning might seem excessive to us, for systems operating at AI speeds and scales, it would be the obvious way to minimize expensive experimental time.
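For flavor, here is a minimal sketch of the kind of simulation-based design work described above: estimating, before touching a pipette, how many replicates an experiment needs to reliably detect an effect. It is pure Python, and the effect size, noise level, and known-variance one-sided z-test are all simplifying placeholder assumptions:

```python
import random

def detection_power(effect, noise_sd, n_reps, n_sims=2000, z_crit=1.96, seed=0):
    """Estimate, via simulation, the probability that an experiment with
    n_reps replicates per arm detects a true effect of the given size
    (treated vs. control, one-sided z-test with known noise)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        treated = [rng.gauss(effect, noise_sd) for _ in range(n_reps)]
        control = [rng.gauss(0.0, noise_sd) for _ in range(n_reps)]
        diff = sum(treated) / n_reps - sum(control) / n_reps
        se = noise_sd * (2 / n_reps) ** 0.5  # std. error of the difference
        if diff / se > z_crit:
            hits += 1
    return hits / n_sims

# Sweep replicate counts to find the cheapest design with adequate power
# for a hypothetical effect, before committing scarce wet-lab time.
powers = {n: detection_power(effect=1.0, noise_sd=1.0, n_reps=n)
          for n in (3, 6, 12, 24)}
```

With these made-up numbers, estimated power climbs steeply with replicate count, and an AI planner could run this kind of sweep across thousands of candidate designs, readouts, and analysis strategies in subjective hours before spending a single hour of wall-clock experimental time.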
Automation could finally break into discovery-phase research
When these systems do need to run physical experiments, I expect they'll find automation much more approachable than we do today. Even without solving general robotics, AI systems could dramatically expand the use of lab automation in early-stage research. Today, one of the biggest barriers to automation adoption isn't technical limitations but rather the high fixed cost of programming automated workflows. Researchers often decide it's not worth writing code for a procedure they could just do manually, especially given the expected pain of having to debug and iterate on the workflow. But for AI systems who speak in code and possess unparalleled industriousness and patience (e.g. for watching to make sure the tip doesn’t hit the bottom of the well), spinning up and QCing bespoke automation software would be trivial rather than a burden.
This shift in the trade-offs of automation programming, combined with continued improvements in computer vision, could enable much more flexible and cost-effective lab automation. Imagine an OpenTrons system monitored by computer vision, with an AI that never tires of watching operations in real-time at many times human speed. Multiple AI systems could continuously improve the underlying code and models controlling such systems. The AIs' faster processing speeds - operating at roughly 10x human rates - would help overcome many of the flexibility challenges that currently make lab automation impractical for discovery-phase research.
Of course, this wouldn't solve all automation challenges, particularly the complex choreography required between different robotic systems for full automation. However, these coordination problems could potentially be mitigated through advanced work cells and hybrid approaches that keep humans involved in key steps while automating an increasing portion of the workflow. The key insight is that superhuman programming capability fundamentally changes the calculus around when automation makes sense.
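To illustrate how cheap “bespoke automation software” might become, here is a hypothetical guard-rail loop of the sort an AI might wrap around a liquid handler. The Step type, the tolerance, and the fake dispense/observe callbacks standing in for robot control and a computer-vision volume estimate are all invented for this sketch:

```python
from dataclasses import dataclass

@dataclass
class Step:
    well: str
    volume_ul: float

def run_with_monitoring(steps, dispense, observe, max_retries=2, tol_ul=1.0):
    """Execute each pipetting step, check the observed dispensed volume
    against the plan, retry drifting steps, and flag persistent failures
    for a human instead of silently corrupting the experiment."""
    log = []
    for step in steps:
        for attempt in range(max_retries + 1):
            dispense(step)
            seen = observe(step.well)
            if abs(seen - step.volume_ul) <= tol_ul:
                log.append((step.well, attempt, "ok"))
                break
        else:
            log.append((step.well, max_retries, "flag_for_human"))
    return log

# Simulated hardware: well B2 under-dispenses on its first attempt only.
state = {"B2_failures": 1}
def fake_dispense(step):   # stand-in for the real robot call
    pass
def fake_observe(well):    # stand-in for a camera-based volume estimate
    if well == "B2" and state["B2_failures"] > 0:
        state["B2_failures"] -= 1
        return 42.0        # short dispense
    return 50.0

log = run_with_monitoring([Step("A1", 50.0), Step("B2", 50.0)],
                          fake_dispense, fake_observe)
# log records A1 succeeding immediately and B2 succeeding on its retry.
```

The point is not this particular loop but that writing, testing, and babysitting hundreds of such bespoke wrappers is exactly the kind of fixed cost that stops humans from automating one-off protocols and would cost a tireless AI programmer almost nothing.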
AIs might favor modular therapeutic platforms
If my assumptions and arguments from the prior sections hold, superintelligent AIs will be extremely skilled molecular engineers, programmers, and mathematical modelers who have deeply analyzed all existing literature. Given a goal of achieving results quickly and safely, I believe that they'd favor platform technologies that let them solve hard problems once and leverage those solutions across many programs.
Today, powerful modalities like antibodies, gene therapies, and cell therapies are constrained by manufacturing costs, delivery challenges, and safety concerns. For the reasons discussed above around molecular engineering capabilities, these core platform-level problems may be much more tractable for our future AI friends. If they can solve these problems, then the economic logic for platform therapeutic modalities becomes much more compelling due to massive benefits of amortizing fixed costs across many therapeutic programs.
Once they’d derisked the safety of a targeted delivery system, for instance, that same validated system could be reused across many different therapeutic payloads, leaving payload safety and efficacy as the primary risks. This would create powerful economies of scale: high upfront investment to solve platform challenges, but then lower costs and faster development for each new therapeutic. Safety learning could accumulate across programs, and multiple therapeutic candidates could be tested in parallel once the platform is validated. All these things could be true today but would become an even bigger advantage if foundational engineering capabilities are sufficient to solve the core challenges mentioned above and in a world where in vivo validation is an even larger relative bottleneck. (I owe much to Darkome by Hannu Rajaniemi for its realistic fictional portrayal of a world in which much of this comes to pass.)
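The amortization logic here is just arithmetic, but it's worth making concrete. A toy cost model with entirely made-up numbers:

```python
def cost_per_therapy(platform_fixed, per_program_marginal, n_programs):
    """Amortized cost of one therapeutic when a validated platform's
    fixed derisking cost is shared across n_programs programs."""
    return platform_fixed / n_programs + per_program_marginal

# Hypothetical numbers (in $M): the platform costs 4x more to derisk
# than a bespoke one-off program, but each added program is cheap.
bespoke_cost = 500.0
platform_costs = [cost_per_therapy(2000.0, 100.0, n) for n in (1, 5, 20)]
# With one program the platform loses badly; by twenty it wins decisively.
```

The crossover point depends entirely on the (made-up) ratio of fixed to marginal costs, which is exactly the ratio that superhuman molecular engineering would shift in the platform's favor.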
Of course, this platform-first approach faces challenges. The regulatory framework would need to continue to evolve to handle truly modular approaches. Some diseases may require completely novel mechanisms that don't fit neatly into existing platforms. And there will always be tension between optimizing platforms versus developing specific therapeutics. But given the imperative for rapid progress and the AIs' likely capabilities, I would expect strong bias toward modular therapeutic platforms that would allow derisking major components of safety and manufacturing once, followed by safer rapid iteration on therapeutic applications.
AIs will make surprising, controversial, paradigm-breaking discoveries
Consider Michael Levin's work on bioelectricity and cellular decision-making. His research hasn't just given us new tools - it may fundamentally change how we think about biological systems. The idea that bioelectric patterns can encode and control complex anatomical outcomes challenges the traditional bottom-up, genetic-first view of development. If he’s right, cells and tissues can exhibit goal-directed behavior and information processing at scales we didn’t know were possible, controlled by relatively coarse electric potential signals. This is a Kuhnian paradigm shift, not another tool in the toolbox.
If we really have “geniuses in a datacenter”, I’d expect them to make multiple discoveries that upend entire paradigms or areas of understanding. They won't just be filling in our existing maps of biology - they'll be revealing entirely new territories we didn't know existed. Perhaps they'll discover fundamental principles about biological organization that make as much of a difference to our understanding as the discovery of DNA did. Maybe they'll reveal entirely new layers of biological information processing, or uncover deep patterns in how living systems maintain and repair themselves that completely change our therapeutic approach.
Just like human insights of this nature involve scientific resistance and conflict, these developments would too. And not just in science. Relativity and quantum mechanics may have contributed to the loss of faith in rationalism by many intellectuals in the early 20th century. Imagine having a firehose of viewpoint shifts of similar magnitude in biology (and other fields). Not knowing what they’d be makes it hard to guess at the impact, but some disruption is to be expected.
This also has important implications for how we think about the impact of AI on biological progress. It implies that, in addition to our societal capacity to benefit from improvements, we’ll be rate limited by our ability to trust and buy into what could be pretty shocking breakthroughs from these systems. Among other things, our already ailing scientific review mechanisms could become a serious bottleneck.
Conclusion
In multiple places in this essay, I’ve pushed back against the tendency to assume “geniuses in a datacenter” would face all the same bottlenecks we do. However, this shouldn’t be mistaken for a belief that working to remove as many bottlenecks as possible is futile. I think it’s one of the highest leverage things we can and should work on today[1]. Even if AI systems will eventually do a growing share of the heavy lifting, right now we need to lay groundwork - especially around translation and experimental systems. Perfect protein design won’t matter nearly as much if we can’t figure out how to cheaply and quickly validate safety in humans.
And while this essay explores the implications of the “geniuses in a datacenter” scenario, my views on how accurate that set of assumptions will prove are continuously in flux. Accounting for all of this, it continues to make sense to me to simultaneously prepare for transformative developments while working to solve today's concrete challenges, remaining open, flexible, and aggressively integrating AI along the way.
If this scenario does come to pass, things will be pretty crazy, hopefully in a good way!
Acknowledgments
Willy Chertman and Eryney Marrogi for helpful feedback on this essay. Dario, Niko, and Adam Marblestone for writing essays which provoked me to write mine. Richard Ngo and Hannu Rajaniemi for Tinker and Darkome, which both gave me a vivid sense of relevant future scenarios.
[1] With another being thinking about how we ensure differential progress works for us rather than against us.