A secretive, Google-backed lab is using artificial intelligence to invent new medicines for humanity's worst illnesses. But can we trust drugs designed by a mind that isn't human?
A quick search turns up that AlphaFold 3, which is what they are using for this, is a diffusion architecture, not a transformer. It works more like the image generators than the GPT text generators. It isn't really the same as "the LLMs".
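In case the distinction is unclear, here's a toy sketch of the two generation styles (just an illustration, not AlphaFold 3's actual code or anything close to it): diffusion models refine a whole noisy sample over many denoising steps, while GPT-style models emit one token at a time conditioned on what came before.

```python
import torch

# Placeholders for learned networks; in real systems these are neural nets.
def toy_denoiser(x: torch.Tensor, t: int) -> torch.Tensor:
    """Stand-in for a network that removes a bit of noise at step t."""
    return 0.9 * x

def toy_next_token(prefix: list[int]) -> int:
    """Stand-in for a network that predicts the next token from the prefix."""
    return len(prefix)

def diffusion_sample(shape=(1, 8), steps=50) -> torch.Tensor:
    # Diffusion (image-generator) style: start from pure noise and refine
    # the whole output jointly, a little at every step.
    x = torch.randn(shape)
    for t in reversed(range(steps)):
        x = toy_denoiser(x, t)
    return x

def autoregressive_sample(length=8) -> list[int]:
    # GPT (LLM) style: emit one token at a time, each conditioned on
    # everything generated so far.
    tokens: list[int] = []
    for _ in range(length):
        tokens.append(toy_next_token(tokens))
    return tokens

print(diffusion_sample().shape)   # torch.Size([1, 8])
print(autoregressive_sample())    # [0, 1, 2, 3, 4, 5, 6, 7]
```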
I skimmed the paper, and it seems pretty cool. I'm not sure I quite follow the "diffusion model-based architecture" it mentions, but it sounds interesting.
I will admit I didn't check because it was late and the article failed to load.
I just remember reading several papers 1-2 years ago on things like cancer-cell segmentation where the 'classical' U-Net architecture was beaten by either pure transformers or U-Nets with added attention gates on all skip connections.
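For anyone unfamiliar, an attention gate on a U-Net skip connection looks roughly like this. A minimal PyTorch sketch in the spirit of Attention U-Net (Oktay et al. 2018), with simplified projections rather than any specific paper's code; the channel sizes in the example are made up:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate applied to a U-Net skip connection."""

    def __init__(self, skip_channels: int, gating_channels: int, inter_channels: int):
        super().__init__()
        # Project skip features and the coarser gating signal into a shared space.
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        # Collapse to a single-channel attention map in [0, 1].
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Resize the gating signal to the skip connection's spatial size.
        gate = nn.functional.interpolate(
            gate, size=skip.shape[2:], mode="bilinear", align_corners=False
        )
        attn = self.relu(self.theta(skip) + self.phi(gate))
        attn = self.sigmoid(self.psi(attn))  # (N, 1, H, W) attention map
        return skip * attn                   # suppress irrelevant skip features

# Example: gate a 64-channel skip map with a 128-channel decoder feature map.
skip = torch.randn(1, 64, 128, 128)
gate = torch.randn(1, 128, 64, 64)
gated = AttentionGate(64, 128, 32)(skip, gate)
print(gated.shape)  # torch.Size([1, 64, 128, 128])
```

The idea is that the coarser decoder features decide which regions of the encoder's skip features are worth passing through, instead of concatenating everything unconditionally.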