George Church, a Harvard professor and a famed geroscientist, is also known as a serial entrepreneur who has co-founded dozens of biotech companies. While Church maintains that he’s involved in them all, one company has been receiving an unusual amount of his attention: Lila Sciences, where he has assumed the role of Chief Scientist.
On its website, Lila sets a lofty goal: to build a “Scientific Superintelligence.” In practice, this involves creating an array of AI models and, perhaps Lila’s defining feature, building huge robotic labs to quickly test AI-generated hypotheses and feed the data back into the model.
This company, founded in 2023, has raised $550 million and is now valued at $1.3 billion. It has already made several promising discoveries and appears well on its way to revolutionizing the way we do science. We spoke with Dr. Church to learn more about this giant startup and his role in it.
You’ve co-founded many startups, but usually you retain an advisory role without investing too much energy in the company. With Lila, things seem different. You’re the chief scientist, and you said in an interview that you really want to invest a lot of your time in Lila. What’s different this time?
With previous companies, I have put a fair amount of effort into them, at least for periods of time, and they’re all advisory roles when you come right down to it. Even my own laboratory at Harvard Medical School is an advisory role.
But Lila is special in the sense that I’ve been working on computational biology and AI for many years, and I keep looking around for the big players. Who’s got the most interesting story that could conceivably fit in with other things that I’m trying to do, like longevity? Lila is not a longevity company, but it is a science AI company.
In particular, most of the AI companies are scraping the internet and seeing what they can do to use natural language processing to sort through what’s in the abstracts and maybe even the supplementary material for articles. Lila is using that a little bit, but by far the biggest component is new empirical data. So, rather than relying on the open data that everybody has, the goal is to develop new proprietary data.
It’s partly based on my observation that every new technology we develop, within the first year or so, can redo almost everything in history up to that point in that field and then start multiplying it by factors of 10. So, there’s no reason to try to scrape the poorly configured ancient history. It’s better to go forward where you’ve constructed it so that it’s maximally compatible with AI and with whatever ultimate applications you have in mind. That’s a big game changer for me. It struck me as the best thing you can do with AI.
In fact, there are other things you can do with AI that I think are not as useful and potentially a little bit more dangerous. I’m not a doomsayer or anything, but I’m just saying the ratio of benefit to risk is not appealing for going after artificial general intelligence. That’s not as likely to benefit society as just working on AI applied to science.
Here, I want to quote a press release from Lila. It says, “Lila’s mission is to responsibly achieve scientific superintelligence.” So, they are talking about superintelligence.
Scientific, not general superintelligence.
Still, “responsibly” implies that this can also be done irresponsibly and probably do some damage. So, how do you do it responsibly, and what could happen if we fail?
I think the first step in doing it responsibly is keeping the focus narrow. In other words, if you’re dealing with a natural intelligence (a group of people) and you give them total power, infinite resources, and no guidance on what they’re going to do with it, there’s a certain chance they’ll go rogue. I think it’s even more so if you’re dealing with an intelligence that’s completely alien, but that’s not what we’re doing.
We’re not empowering a superintelligence to consider whether humans are the best thing or not. That’s a dangerous question, at least for starters, until the new intelligence has had the time that the old intelligence has had to get adjusted to the real world and to evolve. So, you don’t want to just rush into it and say, “Okay, here are the keys to the kingdom,” and start asking philosophical questions that can lead to extermination. You want to keep it narrow.
Beyond that, “responsibly” means making sure that you are being transparent about it and that the software is capable of explaining why it’s doing things.
About that: there’s always a tradeoff between interpretability and a model’s power. Where are you in this debate? Would you prefer a weaker but more interpretable AI or a stronger but less interpretable one?
I lean on the interpretability side. It’s not an either-or, but… we’re in science. Few engineers are willing to just pull a rabbit out of a hat, just a black box. Scientists and engineers, by and large, want to know the mechanism. The FDA likes to know mechanisms. Typically, the autocatalytic loop where you learn something and then you invent something is better if it’s mechanistically grounded. So, I lean pretty heavily in the direction of interpretability, explainability, transparency, et cetera, and also it’s safer.
I just honestly think that we will soon be faced with this dilemma, where we will have to choose between the power of the model to do things and its actual interpretability, but maybe we’re not there yet.
If you look at the human scientist experience, the most powerful sciences are the ones that are articulated mechanistically on a solid foundation rather than treated as black boxes. The black boxes tend to include artifacts and dead ends. Most of the progress in science and engineering has been part of community efforts with strong mechanistic underpinnings.
Let’s move to Lila’s model. Can you give me any more details? For instance, is the reasoning happening on the human language level?
Well, it’s really models, plural. There’s the meta-model that I’ve already mentioned, which is obtaining proprietary data using new high-throughput methods. Then there is a language model that we interface with the way essentially everybody interfaces with computers at this point, through natural language. And then there are specific models for each scientific enterprise.
We’re getting better and better at the meta-level of learning from the specific models to make the next specific model, but the bitter lesson is to not over-engineer and over-educate; it’s to let the data speak for themselves. I think that has been borne out again and again in the Lila experience. We’ve done about 12 different model systems, and for each one, there are usually some industry milestones or standards. We can ask, how do we stand? In almost every case, we’ve managed to exceed whatever the milestone points were at the time. These are wildly different fields of science, and we were able to pass the milestones. So, it probably means we’re on the right track.
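To make this stack of models and the experiment-to-data loop more concrete, here is a minimal sketch of one design-build-test-learn cycle. It is purely illustrative: the class names, the random “proposals,” and the simulated lab are assumptions made for the example and do not describe Lila’s actual software.

```python
# Illustrative sketch of a closed "propose -> run experiment -> retrain" loop.
# DomainModel and run_robotic_lab are hypothetical stand-ins, not Lila's systems.
import random

class DomainModel:
    """Toy stand-in for a field-specific model that scores candidate designs."""
    def __init__(self):
        self.observations = []              # accumulated empirical data

    def propose(self, n=8):
        # A real system would be model-guided; here proposals are random numbers.
        return [random.random() for _ in range(n)]

    def update(self, results):
        # "Retrain" by simply storing the new measurements.
        self.observations.extend(results)

def run_robotic_lab(candidates):
    """Simulated high-throughput experiment: measure each candidate with noise."""
    return [(c, c ** 2 + random.gauss(0, 0.01)) for c in candidates]

model = DomainModel()
for cycle in range(3):                      # a few design-build-test-learn cycles
    designs = model.propose()
    data = run_robotic_lab(designs)         # new empirical data
    model.update(data)                      # feed the results back into the model
    print(f"cycle {cycle}: {len(model.observations)} observations so far")
```

In a real platform, the proposal step would be model-guided and the lab step would dispatch to automated instruments, but the feedback structure is the same.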
We’ve seen companies that build foundation models in biology, companies that use AI to create drugs, and companies that develop robotic labs. Sometimes they combine two of those. I don’t think I’ve ever seen a company that combines all three. You already said this is in part about getting proprietary data that you can feed into the model. Can you expand on this?
I think there are two conventional sources of large data. One of them is PubMed and things like that, and the other is theoretical. But as powerful as AI is, both at scraping the internet with natural language and at doing theoretical constructs, the empirical is something that’s often ignored.
We’ve gotten to an era where we can make very large libraries for certain fields of science. We can make material libraries; DNA, RNA, and protein libraries; cellular libraries, and they can be barcoded and multiplexed effectively. Then it becomes a question of how clever the human-AI team can be at analyzing and manufacturing these libraries.
With AAV (adeno-associated virus), we designed a million changes that were highly diverse, but then how can you test them, when their behavior might not be easy to predict with computational simulations alone? So, we injected them directly into primates. Putting a million designed, not random, structures into primates simultaneously saves a lot of money relative to, say, doing a million primates, each with a single injection, which was and still is the standard practice.
You can get things that are very hard to simulate. You can get a variant that is a hundred times better at getting through to the brain while detargeting everything else. In principle, to simulate that in a computer, you’d have to know all the possible ligand binding sites throughout the endothelium and maybe throughout the entire “surface-ome” of the body. But instead, if you do it empirically, you get a perfect simulation, 100% correct, and you get it quickly. You get a million at once, and you can actually ask about all the different tissues. It’s as if you got a million constructs times hundreds of different cell types.
This is sometimes called natural computing. It’s just as valid as von Neumann silicon computers or quantum computers. I think that’s a fundamental new capability. And you’re right, there are very few that try to do all these things at once, but there is something synergistic about it that we anticipated, and we have not been disappointed.
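As a rough illustration of what reading out such a multiplexed screen can look like, here is a small sketch that computes per-tissue enrichment of barcoded variants relative to the injected pool. The barcode counts and tissue names are invented for the example; a real screen would involve on the order of a million barcodes and sequencing-derived counts per tissue.

```python
# Sketch of per-tissue barcode enrichment for a multiplexed library screen.
# All numbers are invented; they only illustrate the bookkeeping.

injected_pool = {"bc1": 1000, "bc2": 1000, "bc3": 1000}   # barcode counts before injection

tissue_counts = {                                         # barcode counts recovered per tissue
    "brain": {"bc1": 50, "bc2": 900, "bc3": 10},
    "liver": {"bc1": 800, "bc2": 30, "bc3": 700},
}

def enrichment(tissue):
    """Barcode fraction in a tissue divided by its fraction in the injected pool."""
    tissue_total = sum(tissue_counts[tissue].values())
    pool_total = sum(injected_pool.values())
    return {
        bc: (tissue_counts[tissue][bc] / tissue_total) / (injected_pool[bc] / pool_total)
        for bc in injected_pool
    }

for tissue in tissue_counts:
    scores = enrichment(tissue)
    top = max(scores, key=scores.get)
    print(tissue, {bc: round(s, 2) for bc, s in scores.items()}, "top variant:", top)
```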
Let’s go back for a second to that “scientific superintelligence.” I’m not asking if AI will replace human scientists. I’m asking how soon it will replace human scientists, and does it bother you in any way?
I think it’s kind of like saying, “How soon will automobiles replace runners and horsemen? How soon will jets replace all of those?” They don’t. It’s almost always a hybrid system. It’s like, “How soon will my cerebral cortex replace my cerebellum?” Why would you bother? They both do specialized tasks.
I think we already have, and we’ve had for years, things that computers could do way better than humans, starting with math, calculations, especially where speed is an issue, and then they did chess, Jeopardy, and Go. But there still are things where the hybrid system is likely to persist for science in particular.
I know this argument, but I think it’s too optimistic. It’s true that for our entire history, technology has been helping us, but now it looks like it’s finally going to replace us. Automobiles did replace horsemen almost completely; it’s just that those horsemen had other fields they could migrate to. This might not be the case this time. Do you honestly believe that for the foreseeable future, human scientists will remain relevant, will possess something that the models won’t?
The correct answer is I don’t know, but I will still speculate a little. First, we are still very efficient: the 20-watt brain versus megawatt GPU farms. Second, there’s considerable skepticism about whether a machine can think outside the box, or even think of what the box should be, in order to plan new experiments.
I also think humans are not necessarily a fixed target. It’s not like machines are progressing exponentially and biotechnology is standing still. They’re both progressing exponentially. To me, it’s not clear that humans won’t be augmented in some way. Given that we’re already ahead in energy efficiency, we might just get further and further ahead rather than falling behind. I’m not making a strong prediction there. I’m just saying there are a lot of assumptions being made as to whether A will replace B or whether it’s going to be some hybrid system.
I agree. I just think that technically, creating such a hybrid system is really hard.
We already have a hybrid system. Every human is augmented and vice versa: the computer is currently augmented by the queries that we come up with. We’re coming up with a non-random set of prompts, and so far, it seems like we’re prompting the computer in ways that it wouldn’t prompt itself.
Even if Lila’s vision is fully realized in a few years, we will still have a lot of downstream bottlenecks. What are these bottlenecks that science will face, and what can be done about them?
Of course, one of the classic bottlenecks in therapeutics has been FDA approval or the equivalent in other countries, but that’s changing. I think the FDA has always been an agent of change, even if people sometimes don’t see it. The FDA loves it when scientists come across a new method or technology that is safe and effective.
For example, the COVID vaccine was a brand new technology, at least in terms of FDA approval, and it got approved very quickly, in 11 months. Baby KJ went from birth to cure in seven months. I think that will probably still be a bottleneck, and rightly so, because we do want things to be safe and effective, but it is starting to widen.
Typically, funding is a bottleneck until it’s not. I think clever scientists, and in the future, clever AI-plus-scientists, will come up with ways to reduce costs. For example, the cost of reading and writing DNA has dropped by 20 million-fold over less than two decades. And of course, electronics have a similar story over longer periods of time.
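As a back-of-the-envelope check, a 20-million-fold drop over roughly 18 to 20 years (the exact window is an assumption here) works out to an average improvement of about 2.3x to 2.5x per year:

```python
# Implied average yearly improvement from a 20-million-fold cost drop;
# the 18-year window is assumed from "less than two decades".
fold, years = 20_000_000, 18
print(fold ** (1 / years))   # ~2.5; with a 20-year window it is ~2.3
```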
Certainly, a bottleneck for GPUs is energy. They’re talking more and more about locating near hydroelectric plants and investing in fusion power that doesn’t exist yet, but the alternative is to bring down the power consumption per FLOP. You want to get the energy per FLOP to be as low as possible, that is, the FLOPs-per-joule as high as possible. These are all bottlenecks that I think are addressable.
Another bottleneck that maybe hasn’t been considered very often (and also part of the reason we think that our biotechnology is static and the AI is dynamic) is that we’re not allowed to mess with the human brain very much. This is for ethical reasons, but that asymmetry may vanish in a variety of ways. It could be that silicon systems will demand more ethics, or “wetware” living brains might come up with ways to modify themselves that would be considered ethical, and they’ll probably converge on the same level of sentience and ethics of modification.
Let me know if you don’t want to be dragged into politics, but I’m trying to understand the net impact of the current administration. On one hand, it has cut down research funding, hurting a lot of people and research in our field. On the other hand, it seems to be very AI-friendly, and on the FDA level, they are now more open towards new testing modalities like organoids. Do you have an opinion on all that?
I don’t mind responding briefly. I’m a beneficiary on the organoid front. We do a lot of research in organoids, in particular brain organoids and embryos. I think that every now and then it’s helpful to science and society to stir things up a little bit. There will be winners and losers, and if there are enough losers, then there will be a backlash.
Both in science and in government, experiments can’t last for long, and they can’t fail for long. They can’t cause hardship for long. Science probably has a longer payoff that’s tolerated, especially if it’s inexpensive, but almost everything that the government does ends up being expensive and hurting somebody enough that it becomes a cause for a pushback.
So, I like the idea of doing experiments, even economic ones, but one has to be cautious that it’s a limited time. And you can see that the latest elections reflected some disappointment. In the midterms, we might see even more disappointment. It’s a feedback system. You do a radical experiment, and if you luck out, then everybody votes for you. Time will tell, but this is not something that’s going to play out in centuries. This is something that’s going to play out in months.
I have been thinking lately about the difference in public opinion on science and AI. It seems that the public generally loves science but vilifies AI. How do we get out of this?
It’s not quite that black and white. The public occasionally doesn’t like or trust certain kinds of science. With AI, part of it has to do with what Hollywood and screenwriters are writing about. If they see a new technology, it’s an opportunity to create both dystopias and utopias.
I think you need a large benefit as a prerequisite. One of the reasons GMOs were not popular is that the benefit wasn’t clear to many. The same thing is kind of true of AI. Most people either didn’t care about getting answers from Google, or if they did, the old pre-AI Google search was good enough. So, it really depends on convincing them that things that are positively affecting the economy and their health are actually due to AI.
Can you give me your vision for AI in biology several years from now, a decade, maybe two? What will it be able to do? How would it change science and human aging?
I think AI has proven that it’s really useful for protein design, and protein structure prediction as well. And aging has proven that it is the ultimate disease. It involves possibly every subsystem of our physiology. So, it’s a perfect candidate for systems that can handle high complexity, and AI is one of those.
It’s also something you can de-black-box by doing experiments to test things. You can say, “Oh, this is how this is working.” In fact, AI can help design experiments that not only screen these big libraries I was talking about but also, when you get the answers, yield mechanistic interpretations that you can test.
It’s not that I disagree, I just think that everything about AI is a race. So maybe we will be able, theoretically, to de-black-box everything. We might just not have the time and the resources to do it because everyone’s racing to the goal.
That’s a fair statement, but it could be that the people that get to the goal faster are the ones that are working on mechanisms. It’s not incompatible with the history of science. Every now and then, people come up with a clever way to go a little bit faster.
It’s going to be empirical, and I think that we’re well on our way to solving some of the big engineering tasks we need to solve to get both longevity and age reversal. Age reversal or disease reversal is what’s going to get FDA approval, and longevity is going to come along for the ride.
I think we have these exponential biotechnologies, which until recently did not depend on AI yet were exponential nonetheless. When you add them to AI, it might get us faster to mechanisms and faster to, let’s say, polypharmacy, where you need multiple drugs to handle all the different tissues, and each tissue might have a slightly different aging program.
So, you might need a very large number of drugs working in some kind of coherent way, and maybe devices to help the feedback loop. Those devices might be biological, or they might be electronic, or some hybrid.
It’s a great point about polypharmacy; I do see future anti-aging therapies as a complex array, where people will have to constantly do things to stay young (still worth it, though). So, fast forward to 10 years from now. Given how fast things are changing, do you even have this vision, this mental model of where we are going to be in a decade or two?
10 years used to be just barely enough time to do one clinical trial. Now you can do many clinical trials in parallel, but I think we soon will be doing clinical trials in less than a year. That means 20 years is 20 cycles of these things happening in parallel.
I think longevity and reversal of age-related diseases seem a lot less mysterious now than they used to, and all the exponential technologies are applicable. I would not be surprised if age-related diseases, and for that matter, diseases of poverty, will be easily solved 20 years from now.
View the article at lifespan.io