A secretive, Google-backed lab is using artificial intelligence to invent new medicines for humanity's worst illnesses. But can we trust drugs designed by a mind that isn't human?
Those models will almost certainly be essentially the same transformer architecture that the LLMs use, simply because transformers beat most other architectures in almost every field where people have tried them.
An LLM is, after all, just a classifier with an unusually large set of classes (all possible tokens) that gets applied repeatedly.
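A toy sketch of that view, where `model` is a stand-in for any network that maps a token sequence to per-position logits over the vocabulary (a placeholder, not any particular library's API): each generated token is literally one argmax over the classifier's classes.

```python
import torch

def generate_greedy(model, prompt_ids: torch.Tensor, n_new: int) -> torch.Tensor:
    """Autoregressive decoding as repeated classification."""
    ids = prompt_ids
    for _ in range(n_new):
        logits = model(ids)            # one classification pass: a score for every token in the vocab
        next_id = logits[-1].argmax()  # pick the most likely "class" for the next position
        ids = torch.cat([ids, next_id.view(1)])
    return ids
```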
A quick search turns up that AlphaFold 3, which is what they are using for this, is a diffusion architecture, not a transformer. It works more like the image generators than like the GPT text generators. It isn’t really the same as “the LLMs”.
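For contrast with the token-by-token loop above, here is a hedged sketch of generic diffusion-style sampling: start from noise and repeatedly denoise. The blending schedule is a toy for illustration only; this is the general idea, not AlphaFold 3's actual training or sampling recipe.

```python
import torch

def sample_diffusion(denoiser, shape, n_steps: int = 50) -> torch.Tensor:
    """Generic iterative denoising: the whole output is refined at every step."""
    x = torch.randn(shape)                       # start from pure noise
    for t in reversed(range(1, n_steps + 1)):
        x_hat = denoiser(x, t)                   # model proposes a cleaner version
        w = (t - 1) / n_steps                    # toy blending schedule, illustration only
        x = w * x + (1 - w) * x_hat              # move part of the way toward the prediction
        if t > 1:
            x = x + 0.05 * torch.randn(shape)    # re-inject a little noise between steps
    return x
```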
I skimmed the paper, and it seems pretty cool. I’m not sure I quite follow the “diffusion model-based architecture” it mentions, but it sounds interesting.
I will admit I didn’t check, because it was late and the article failed to load.
I just remember reading several papers 1-2 years ago on things like cancer-cell segmentation, where the ‘classical’ UNet architecture was beaten either by pure transformers or by UNets with attention gates added on all horizontal (skip) connections.
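For anyone who hasn't seen those: an attention gate reweights a skip connection using a signal from the coarser decoder level, in the spirit of Attention U-Net (Oktay et al., 2018). A minimal sketch, with illustrative channel counts and names, not any specific paper's exact configuration:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate on a UNet skip connection."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # gate comes from the coarser decoder level; upsample to match the skip's resolution
        g = nn.functional.interpolate(gate, size=skip.shape[2:], mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(torch.relu(self.w_skip(skip) + self.w_gate(g))))
        return skip * attn  # suppress irrelevant regions before concatenation
```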
I’m not talking about the specifics of the architecture.
To the layman, “AI” refers to a range of general-purpose language models that are trained on “public” data and possibly enriched with domain-specific datasets.
There’s a significant material difference between using that kind of probabilistic language completion and using a model that directly predicts the results of complex processes (which is likely what’s being discussed in the article).
It’s not specific to the article in question, but it is really important for people not to conflate these approaches.
Actually, I agree. I guess I was just still annoyed after having just read about how LLMs are somehow not neural networks, and in fact not machine learning at all…
Btw, you can absolutely finetune LLMs on classical regression problems if you have the required data (and care more about prediction quality than statistical guarantees). The resulting regressors are often quite good.
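A hedged sketch of what that looks like: pool the backbone's hidden states, attach a small linear head, and train with plain MSE instead of token prediction. `backbone` is a placeholder for any pretrained model returning hidden states of shape [batch, seq_len, dim], not a specific library's API.

```python
import torch
import torch.nn as nn

class LLMRegressor(nn.Module):
    """Pretrained transformer backbone + linear regression head."""
    def __init__(self, backbone: nn.Module, hidden_dim: int):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h = self.backbone(token_ids)    # [batch, seq_len, dim]
        pooled = h.mean(dim=1)          # simple mean pooling over tokens
        return self.head(pooled).squeeze(-1)

def train_step(model, opt, token_ids, targets):
    # Ordinary supervised regression: no vocabulary, no sampling.
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(token_ids), targets)
    loss.backward()
    opt.step()
    return loss.item()
```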