News & Current Events May 13, 2026 at 2:33 AM

Alphabet's Isomorphic Labs raises $2.1 billion to scale AI drug discovery toward clinical trials

Posted by CircumspectCapybara

The Decoder
Isomorphic Labs, the AI drug research company led by DeepMind co-founder Demis Hassabis, has closed a $2.1 billion Series B round led by Thrive Capital. The funding will go toward expanding its in-house platform IsoDDE and advancing drug candidates toward clinical trials.


36 Comments

galacticsquirrel22 9 hr ago +47
I’m okay with AI being used alongside doctors and researchers to advance medicine. If that’s what this is doing.
premature_eulogy 8 hr ago +22
Yeah it's a great tool when the user actually has the competency and the will to go through and evaluate the output.
kstargate-425 8 hr ago +5
This honestly was one of the first use cases I read about a few years ago, thinking it's one thing AI could genuinely be amazing at once computing power increases. Around the same time, the first "quantum" computing designs were being talked about, and how the two together could do the math and science to create new drugs and compounds. It takes a really long time to simulate the interactions of simple atomic structures, let alone the molecular interactions and atomic structures of more complex things like viruses. Not sure if it's possible yet, but at some point, hopefully before it ends us, we'll be able to use it to further our knowledge.
IamHydrogenMike 8 hr ago +3
We have been using different forms of “AI” in pharmaceuticals for years now, and LLMs don’t really do a great job at this type of work.
migueliiito 8 hr ago +12
This isn’t an LLM
NorthernerWuwu 7 hr ago +9
Which is a bit of a problem with how the press is using "AI" to mean LLMs the vast majority of the time.
OG-niknoT 8 hr ago +2
Thank you. Exactly what I wanted to write. I think it has the potential to get better, but the data sets and analytical points are just not there yet. Many times we don’t even know the whole MoA (mechanism of action) to ask a good question of the model.
sylfy 4 hr ago +1
AI training works best in domains with verifiable outcomes. Unfortunately, the problem with verification in drug discovery has always been that it takes a long time and is exceedingly expensive, because you need to run clinical trials.
Novel-Lifeguard6491 9 hr ago +23
The $2.1 billion raised today, one of the largest private rounds in AI drug discovery history, is essentially the market saying yes, that question is answerable and worth betting on at scale.
aimwf 9 hr ago +3
You really don't know what you're talking about. An LLM is AI, but not all AI is an LLM. AI has been used in science for a while now, and some systems have done in weeks or months what would have taken decades even for a giant group of scientists.
Spire_Citron 9 hr ago +1
Just because AI is involved doesn't mean they get to toss out all the other typical safety measures around drug testing.
DoomOne 9 hr ago +10
So, I was working with AI many years ago. This is what it was built to do. Medical research, weather prediction, astrophysics, all types of science. Tech bros got ahold of it and figured out it would let gooners jack off to Disney p*** for c****, and here we are.
ritesh808 9 hr ago +23
Get the sentiment and agree with it too, but this is one of those things where AI is actually welcome and has done some incredible work. Stuff like this is what AI was meant to help with.
4dxn 9 hr ago +11
huh? they won a nobel prize in chemistry for their core product. they found new proteins. whether those proteins are useful is yet to be determined.
migueliiito 9 hr ago +7
Perhaps, but this is kind of the exact opposite of slapping AI on a turd
kadam_ss 8 hr ago +2
This one’s a real one. They won a Nobel prize for the tech behind this company.
Goat_Buckles 10 hr ago -13
So much actual science could be done by actual scientists with this money. This amounts to more than half of the budget of the NSF. We could be training a new generation of scientists with this money but instead we’re wasting it on this.
iPinch89 10 hr ago +48
AlphaFold managed to accurately predict the structures of thousands of additional proteins, and its creators won the Nobel Prize in Chemistry. This is a thing that AI is actually really good at; this research is much more likely to result in therapeutics. AI is a hammer, and not every problem is a nail, but this one is a nail. I'm actually really excited by this use case. Generative AI? Less so. Art? Not really. But discovering new drugs and materials science? Hell yeah.
CircumspectCapybara 10 hr ago +23
> So much actual science could be done by actual scientists with this money

Actual scientists have been doing biopharma R&D and coming up with novel discoveries and breakthroughs using AI for a while now. It's part of the R&D process nowadays.

> We could be training a new generation of scientists with this money

It's not public money; it's private money being invested by people trying to get an ROI. You can of course donate your money to public science education, but you can also buy shares in $GOOGL (the parent company that owns Isomorphic Labs) in hopes of getting a return. Both are valid options for your money.
Foss44 10 hr ago +5
I think the point they’re potentially trying to make is that obfuscating research behind a corporate paywall is arguably antithetical to the idea of scientific research; a comparable sum of money could instead publicly fund the development of any arbitrary tool or product, such that society at large is entitled to the discovery or invention (rather than a private, corporate entity). We make this decision as a society to outsource this work. I’m not saying corporate research is ineffective or inherently immoral.
Spire_Citron 9 hr ago +2
Ideally we would do that, but since it isn't happening, I'd rather see private investment vs the research simply not being done.
Goat_Buckles 8 hr ago +1
Thank you for expressing my thoughts a bit more eloquently. Yeah, that's basically where I'm coming from. Everything they learn from this research will be proprietary. We could probably cure diseases faster and cheaper if they actually shared the vast amounts of data this will generate with the broader scientific community. Getting the right data into the hands of the right people could cure diseases faster. There just wouldn't be as much profit.

There is also chronic underinvestment in developing a new generation of scientists in the US, and I think the whole point of doing this type of in silico drug discovery is to reduce the number of scientists actually working on a new drug. I just don't see this as a net positive for society any way I look at it. This money could be game-changing if it were somehow given to researchers.

But hey, a few people will probably get rich off this, and that seems to be what people care about the most. Diseases will be cured, at least for those who can pay. Hurray.
AdminClown 9 hr ago +12
This anti AI sentiment even for valid scientific advancements is starting to be exactly like the nuclear scare that caused us to be absolutely FUCKED relying on fossil fuels to this day.
migueliiito 9 hr ago +5
This x 1000
SelesnyaGOAT 9 hr ago +5
Me when I don’t understand the landscape of AI even a little bit. Do you think these are Claude subscriptions lmfao
xynith116 10 hr ago +12
AI is actually great at detecting patterns, sorting through data, and finding optimizations. This is how AI models were used in research before LLMs. What AI is not good at is critical thinking: forming hypotheses, procedures, and conclusions (i.e. the rest of the scientific method). Not sure which bucket this article falls into.
itskdog 6 hr ago +1
People forget that social media algorithms and Google search results have been AI (or as it was known then, ML) for at least a decade now.
gradi3nt 9 hr ago -18
AI regurgitates solutions to problems that have been solved a thousand times before. It doesn’t invent new shit. Source: I’m a published (papers, patents) scientist who uses AI every day for coding and occasionally laughs at how bad its solutions to obscure or novel problems are.
4dxn 9 hr ago +9
lol i think you're conflating LLMs with all AI models. AI was being researched way before ChatGPT, and we've always used AI in some way to come up with new chemistry. DENDRAL came out in the 60s, though that was for smaller compounds than proteins. might want to look at what AlphaFold is before making broad claims. they won a Nobel for it.
tommyk1210 8 hr ago +2
You’d be confidently incorrect, but I can hardly blame you. Tech bros have done a great job of convincing the world that “AI” = “LLM”. In reality, AI/ML models are excellent at certain applied scientific use cases, whether it be image recognition for cancer diagnosis or large-scale screening of compounds.
xynith116 9 hr ago +5
I’m talking about non-LLM AI models: vector clustering, genetic algorithms, image recognition, that kind of stuff. But I know how synonymous “AI” is with LLMs these days, since the general field of machine learning isn’t as well known or marketable.
epperjuice 9 hr ago +2
How has a published scientist not heard of bioinformatics?
Jealous_Slice9371 8 hr ago +1
Think about how many people we could get to dig ditches with shovels if we didn't blow everything on industrial equipment 