“I want to be one of the greats”: Lisan al Gaib indeed.
Billion Dollar Bioproducts: Good list of underrated bioproducts with current or potential billion dollar markets. My personal favorites:
On sincerity: Not super recent but I either never read this one from Joe or read it and forgot it.
Bayer CEO Bill Anderson Previews 2025 Turnaround Plans (paywalled) and one former senior employee’s take: Pharma organizations are fascinating as bureaucratic model organisms, but believable case studies of their strategies are hard to find. As a result, I’m always on the lookout for grounded, concrete tidbits about their strategy and leadership. Here, the CEO of Bayer is interviewed about his ongoing turnaround strategy. Read alongside the second link, it’s an interesting study of how hard “change management” is at scale, especially once you move beyond the pablum of a consulting slide.
Scientists aiming to bring back wooly mammoth create wooly mice: Progress from Colossal, resulting in very cute, “wooly” mice. To me, the real story here is that, yes, if you don’t mind having a nonzero failure rate, multi-gene embryo editing works. As Shelby said, imagine what happens if we scale to 50 or 1,000 edits.
AI as the engine, humans as the steering wheel: Vitalik sketching out a mechanism through which humans remain in the driver’s seat while the car is a hypersonic rocket powered by AI. I have been interested in mechanism design for over a decade and would strongly consider working on it were I not even more interested in AI / bio. Keeping in mind that this means I’m an optimist in this sense, I really like this idea and, more generally, wish we saw more experimentation in this space. Early experiments I like include Deep Funding (discussed in the post), Metaculus’s AI forecasting benchmark series, and Democratic Fine-tuning.
Towards Institutions of Inhuman Trustworthiness and Transparency: Very related to the prior article, argues for what I’d call credibly neutral entities (journalists, historians, etc.) based on fully auditable AI systems. While I used to think societal impact fears were overrated, I’ve come around to the idea that having AIs aligned to our personal needs is not the default trajectory and that not having them aligned as such at least poses a real risk to our personal empowerment. This article does a good job of showing how the future could be better or worse in this respect, even if we condition on reasonably aligned superhuman, but perhaps not superintelligent, AI.
Why Pytorch is an amazing place to work… and Why I’m joining Thinking Machines: Horace is one of those “if you know, you know” people: if you’re doing ML, you know he’s GOATed; if you’re not, you might not know who he is because he’s not meme-ing on Twitter or vagueposting about AGI. I mostly found this interesting as a peek into how the PyTorch org operates, given that it has been a consistently effective steward of PyTorch for so many years from within Meta.
Integrating AI agents into companies: Austin Vernon with an interesting take on the nuts & bolts of integrating AI agents into corporations. I have some knee-jerk skepticism because a lot of this feels like advice that works for a very small subset of remote companies, such as GitLab, without providing a massive advantage, and totally fails for most. At the same time, it’s both true that 1) Henry Ford-esque insights into business organization may be required to get the full benefit of AI and 2) we haven’t had them yet. This post gestures in directions that seem fruitful for finding such insights.
The rise of AI organizations looks similar to software-oriented startups but supercharged. Putting the entire context of an organization in text is easier from the start than patching an existing organization. The best companies will treat AI agent waiting time like Toyota does inventory. And having AI speed from the beginning will make iteration cycles much faster.
Democratizing Computer-Aided Drug Development: Reflections on the state of computer-aided drug development from one of Rowan’s founders (focused on chemistry, i.e. small molecules). Good tidbits like:
Better instrumentation and analytical tooling has revolutionized chemistry over the past sixty years, and better design & simulation tools can do the same over the next sixty years. But as we’ve seen with NMR and mass spectrometry, enabling technologies must become commonplace tools usable by lots of people, not arcane techniques reserved for a rarefied caste of experts. Only when computational chemistry undergoes the same transition can we fulfill the vision that Van Drie outlined years ago—one in which every bench scientist can employ the predictive tools once reserved for specialists, and in which computers can amplify the ingenuity of expert drug designers instead of attempting to supplant it.
Alpha Shock: I discovered this by way of the prior post and loved it. A speculative science fiction story about doing chemistry in 2037 by Pat Walters and Mark Murcko, both currently at Relay. Considering this was written in 2012, it holds up shockingly well. Nowadays, it would probably have more deep learning, but that’s a totally forgivable omission. Sadly, some of its “predictions” for 2021 proved overly optimistic:
> Requirements for electronic submission of chemical structures and data accompanying any journal submission had been instituted in 2021. Back in the 2010’s there had been a significant backlash due to the inability of researchers to reproduce scientific results from across all the disciplines, but computational papers were uniformly the worst. The resulting near-total breakdown (highlighted by several leading academics being fired for incompetence and outright fraud) finally brought about an emphasis on reproducibility and a new willingness among computational chemists and biologists to create a shared software infrastructure and develop software tools that could be easily enhanced and integrated. This level of collaboration led to the development of standard ontologies and to the first practical applications of the semantic web.

That said, lots of fun vision boarding for people like me who view thinking about future scientific UXs as a relaxing hobby:

> In the end, Sanjay’s competitive nature won out. There was no way that he was going to let Dmitri win. After all, Sanjay was a “drug hunter,” a species that had been rare for more than 100 years. He remembered that Paul, one of his students in the Republic of Texas, had developed a new algorithm for selectively enumerating all of the chemical reactions stored in the public BeilPubCAS repository. Sanjay accessed Paul’s Python code and deployed it onto the Amazon Hyper-Cloud. He then accessed the existing structures from the WCR and cross-referenced these against experimental data for available compounds and related targets. With the structures in hand, Sanjay ran the WCR’s AutoStere protocol to identify potential isosteric binding sites in the structures of interest. The virtual tethering protocol identified a site that might be able to restrict the loop movement and allow him to design a single agent to block both targets.
> With the structures and custom function in hand, Sanjay was ready to initiate the docking study. But despite recent advances in the TIP32P** water model, Sanjay still didn’t completely trust the predicted protein-ligand binding energetics. Next, he transferred the experimental data into the Google Predictive Analytics engine and quickly designed a new empirical function to fit the experimental data. Now he launched the dynamic docking simulator, dropping the empirical function into the hopper. He always preferred to run his docking simulations using the slow protocols that incorporated solvated dynamics—“less haste less waste,” as his 107-year-old grandmother still liked to say as she carefully pruned her rose bushes. A progress bar appeared in front of him showing “10^30 molecules remaining, 2,704 h 15 min to completion.” Sanjay quickly stopped the process and constrained the search to only those molecules that fell within the applicability domain of his empirical function. This reduced the search to 10^12 molecules and allowed the analysis to complete in a few minutes.

> After a bit of visual inspection to confirm the results of his docking study, Sanjay moved on to the next step. He knew that slow binding kinetics could provide a means of lowering the dose for his compound. To check this, he ran a few seconds of real-time MD on each of the top 50,000 hits from the docking study. A quick scan of the results turned up 620 structures that appeared to have the required residence time. Sanjay submitted all these structures to PPKPDS, the Primate Pharmacokinetic and Pharmacodynamic Simulator, a project developed through a collaboration of industry, academia, and the World Drug Approval Agency. Of the compounds submitted, 52 appeared to have the necessary PK profile, including the ability to be actively transported into the brain. All but a few were predicted to be readily synthesizable.

> He felt satisfied that he could turn this problem over to his network of students.
The Experience of Maxing Out One’s Cognitive Horsepower: I appreciate people seen as “high cognitive status” sharing these experiences. This is also where I diverge the most from at least some SV types, in that I think “you can just learn it in two weeks” is total BS for the subjects in which humanity has built the most knowledge (math, physics, chemistry).
> Basically, almost everybody who pursues serious math eventually reaches a level at which there’s just not enough scaffolding to justify continuing.
>
> It’s not a hard threshold at which you’re suddenly incapable of learning more advanced math, but rather a soft threshold at which the amount of time and effort required to learn begins to skyrocket until it’s effectively no longer a productive use of your time (when you consider the opportunity cost).
>
> People get off the train and stop learning math once it begins to feel too inefficient. This isn’t even a math-specific thing – the same thing plays out everywhere else in life. In anything you do, once the progress-to-work ratio gets too low, you’re going to lose interest and focus on other endeavors where your progress-to-work ratio is higher.
On the other hand, as Dan Luu says, getting to 95th percentile isn’t that hard.
50 Thoughts on DOGE: I try as hard as I can to stay far away from any current hot button politics here and on Twitter, but I enjoyed this and felt like it was a measured enough take that it was in spiritual alignment with the intentions behind my avoidance.
Life, Liberty, and Superintelligence: “Scholarly analyst” (source: Dario) Dean Ball:
Perhaps, rather than conceiving of AI as something that “watches over” humans, we should conceive it is a new kind of tool—or even a force of nature we have discovered—that we use to ascend to new heights. To do this, though, we will need to build the kind of society that cultivates such ambition in all productive domains of human life. Perhaps this is the new “economic setup” to which we should aspire, rather than one based on preemptive safety, unending “risk management,” and universal basic income.
Viewed in this light, the better purpose of “AI policy” is not to create guardrails for AI — though most people agree some guardrails will be needed. Instead, our task is to create the institutions we will need for a world transformed by AI—the mechanisms required to make the most of a novus ordo seclorum. America leads the world in AI development; she must also lead the world in the governance of AI, just as our constitution has lit the Earth for two-and-a-half centuries. To describe this undertaking in shrill and quarrelsome terms like “AI policy” or, worse yet, “AI regulation,” falls far short of the job that is before us.
Danaher: Lean Capital Allocation: Triggered by Anand’s tweet, I went and re-read Cedric on Danaher. Realistically, if we want more smart people to do boring things in bio, we need more people interested in process improvement and wielding capital effectively. As Danaher says, “common sense, vigorously applied”:
DBS is simple. At its core, it’s just a set of tools that remind people what to do: To stay focused on what matters. To use visual tools. To keep meetings short and focused and email only what’s necessary. To manage the little details. To measure what matters and improve on those measurements by doing a little every day versus taking big leaps in spurts. To benchmark to the best and be willing to accept the realities that others are getting better too. To hire humble and transparent folks. To develop internal talent so that when you get promoted, someone is ready to take your role. And to get rid of those who don’t live those principles. None of this is rocket science. There are no new paradigms. Danaher and Fortive employees aren’t expected to reinvent the wheel. They’re expected to make that wheel go a little faster and smoother every single day.
Lessons from the Industrial Titans: Fascinating interview transcript I stumbled upon while recently re-reading Cedric’s post (above) about Danaher. Here, Patrick O’Shaughnessy interviews the authors of Lessons from the Titans. I’ve since gone and read the full book, but this interview was a good appetizer. Among other things, as someone whose formative career moments largely occurred during ZIRP, this helps me understand the sort of psychological hurdle a company like TSMC faces in becoming AGI-pilled. “Only the paranoid survive” goes both ways.
How I’ve run major projects: Characteristically lucid, example-filled post from Ben Kuhn on running major projects, especially during crisis moments. So much embodied wisdom in here. Many people who have managed similar projects have had to (re)discover these tips over the years, but lacked Ben’s initiative, comprehensive understanding, and clarity in writing it all down. I am confident I’ll be re-sharing this post many many times in the future.
You exist in the long context: Interesting meditations on long-context LLMs, framed around a clever game. I’ve been waiting for people to create novel experiences & games using LLMs, and the one in this post is simple but still one of the most compelling examples I’ve seen.
Good Research Takes are Not Sufficient for Good Strategic Takes: Neel Nanda, a creative, thoughtful mechanistic interpretability researcher at DeepMind discusses how having good research taste and good strategic sense are correlated but not as highly as many seem to think. I largely agree and would add that good strategy often involves a generous helping of “common sense, vigorously applied”, which can be a hard pill for pioneering researchers to swallow.
Writing for LLMs: Gwern with some thoughts on what to write for the AIs. I think I read the original version of this, but mostly forgot it outside of the parts he repeated on Dwarkesh’s podcast. Based on this, I intend to write some more tacit-knowledge posts about programming and maybe even some interesting autobiographical stuff. One challenge here is that there’s no easy way to write only for the AIs, but I suppose not blasting out links is a good second-best option.
Good conversations have lots of doorknobs: Fun post that puts names to features of good (and bad) conversations I’ve observed but never crystallized. Adam consistently shows me what psychology could be. I agree with this:
While takers deserve some redemption, givers deserve some scrutiny. On day one of Improv 101 they’ll tell you not to ask questions in a scene because it puts undue pressure on your partner. “Hey, what are you doing?” “Uhh I’m making things up in an improv scene.” Similarly, refusing to take the spotlight in a conversation may seem generous, but in fact can burden the other person to keep the show going. (“What’s up?” is one of the most dreadful texts to get; it’s short for “Hello, I’d like you to entertain me now.”) And asking your partner question after question and resenting them when they don’t return the favor isn’t generosity; it’s social entrapment, like not telling your friends that it’s your birthday and then seething that they didn’t get you cake.
Towards a scale-free theory of intelligent agency: Still digesting, but this is Richard Ngo taking inspiration from multi-agent theories of mind to rethink fundamental models of goal-oriented agents.
What does it mean for a technology to scale?: Ben Reinhardt on what makes scaling up technology hard and why intuitions from software, a near zero marginal cost business, can be very misleading. Reasons discussed here are why I’m curious to see how people going after AI for materials tackle the valley of death for novel materials.
Asian Conglomerate Series: Cedric Chin writes a series of cases on Asian tycoons. I always appreciate people both willing to dig into lots of concrete cases and then try and understand the underlying levers that unify (and differentiate) them. On top of that, I'm currently on a bit of an industrials kick, so this is a must read for me.
This is one of the core tenets of learning business pattern matching, which we’ve already discussed in our previous instalment — business is an ill-structured domain, meaning that concepts will always be slightly novel on a case-by-case basis. This core pattern is no different: the variation is revealing of the business ecosystems the cases are set in.