ForceGen: Using A Diffusion Model To Help Design Novel Proteins

Although proteins are composed of only a small number of distinct amino acids, this apparent simplicity quickly vanishes when considering the sheer number of possible sequences, not to mention the many ways in which a single 1D protein sequence can fold into a 3D protein shape with a specific functionality. Although natural evolution has done much of the legwork here already, figuring out new sequences and their functionality is a daunting task, and one to which deep learning algorithms are increasingly being applied. As [Bo Ni] and colleagues report in a research article in Science Advances, the hardest challenge is designing a protein sequence that yields a desired functionality, and they demonstrate a way to use a generative model to speed up this process.

They set out to design proteins with specific mechanical properties, using the known unfolding characteristics of various protein sequences to train a diffusion model. This approach is thus more akin to the technology behind image generation algorithms like DALL-E than to LLMs. With the trained diffusion model it was then possible to generate likely sequences whose properties could then be simulated, with favorable results.
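To give a sense of what that looks like in practice, below is a minimal sketch of conditional diffusion sampling over a protein sequence. This is not the paper's ForceGen implementation: the denoiser stub, the DDPM-style noise schedule, and the toy target curve are all illustrative stand-ins for a trained network conditioned on mechanical unfolding data.

```python
# A minimal sketch of conditional diffusion sampling for protein sequences.
# Everything here (the denoiser stub, shapes, schedule) is illustrative and
# NOT the authors' ForceGen implementation.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues
SEQ_LEN, N_AA, N_STEPS = 64, len(AMINO_ACIDS), 200

rng = np.random.default_rng(0)

def denoiser(x_t, t, target_curve):
    """Placeholder for a trained network that predicts the noise added at
    step t, conditioned on the desired unfolding force curve. A real model
    would be trained on simulated unfolding data; here we return zeros
    just so the sampling loop runs."""
    return np.zeros_like(x_t)

# Linear noise schedule (beta_t) and the derived alpha products.
betas = np.linspace(1e-4, 0.02, N_STEPS)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

# Desired mechanical response the generation is conditioned on (toy data).
target_curve = np.linspace(0.0, 1.0, 32)

# Reverse diffusion: start from pure noise over per-residue logits and
# iteratively denoise toward a clean sequence representation.
x = rng.standard_normal((SEQ_LEN, N_AA))
for t in reversed(range(N_STEPS)):
    eps_hat = denoiser(x, t, target_curve)
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    x = (x - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:  # add noise at every step except the last
        x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)

# Decode the denoised logits to a concrete amino acid sequence, which
# could then be handed off to a simulation for validation.
sequence = "".join(AMINO_ACIDS[i] for i in x.argmax(axis=1))
print(sequence)
```

The key design point is the conditioning input: the same sampling loop works for any target property, as long as the denoiser was trained with that property as context.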

As a tool for working with large data sets, such a diffusion model could be useful in fields well beyond protein synthesis, automating tedious tasks and conceivably speeding up discoveries.

GETMusic Uses Machine Learning To Generate Music, Understands Tracks

Music generation guided by machine learning can make for great projects, but there's not usually much apparent control over the results. The system makes what it makes, and it's an achievement if the results are not obvious cacophony. That's all different with GETMusic, which allows for a much more involved approach because it understands and is able to create music by tracks. Among other things, this means one can generate a basic rhythm and melody first, then add new elements on top while leaving the existing ones unchanged, as sketched below.
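As a rough illustration of that track idea, here is a minimal sketch of track-level conditioning: a piano-roll-like score holds one row of note tokens per track, and a boolean mask pins the user-provided tracks so generation only fills in the rest. The data layout, token values, and the apply_condition helper are all hypothetical, not GETMusic's actual format.

```python
# A toy illustration (not GETMusic's actual data format) of track-level
# conditioning: given tracks are pinned by a mask, and only the unpinned
# cells are left for the model to fill in.
import numpy as np

N_TRACKS, N_COLS = 4, 16            # e.g. drums, bass, melody, pads
rng = np.random.default_rng(1)

# A piano-roll-like score: one row of note tokens per track (0 = empty).
score = np.zeros((N_TRACKS, N_COLS), dtype=int)
score[0] = rng.integers(1, 4, N_COLS)   # pretend we already wrote drums
score[2] = rng.integers(1, 8, N_COLS)   # ...and a melody

# The mask marks which cells are user-provided (True) vs. to-generate (False).
known = np.zeros_like(score, dtype=bool)
known[0] = known[2] = True

def apply_condition(generated, score, known):
    """Overwrite model output with the pinned tracks, so the existing
    rhythm and melody survive generation untouched."""
    return np.where(known, score, generated)

generated = rng.integers(0, 8, score.shape)   # stand-in for model output
result = apply_condition(generated, score, known)
assert (result[0] == score[0]).all() and (result[2] == score[2]).all()
```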

GETMusic can make music from scratch or guided by examples, and under the hood it uses a diffusion-based approach similar to the method behind AI image generators like Stable Diffusion. We've previously covered how Stable Diffusion works; here the same basic principles guide the model from random noise to useful tracks of music instead of images.
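To make the noise-to-tracks idea concrete, here is a hedged sketch of a discrete, absorbing-state-style diffusion sampler, one reasonable way to run diffusion over token data like note events rather than pixels. The score starts fully masked (the token-data analogue of pure noise), and a growing fraction of cells is committed at each step. The predict_tokens stub stands in for a trained denoiser and just proposes random tokens, so this shows the sampling loop rather than the real model.

```python
# A hedged sketch of discrete (absorbing-state) diffusion over note tokens.
# This is an illustration of the sampling idea, not GETMusic's code.
import numpy as np

MASK, VOCAB = -1, 8           # mask token and toy note-token vocabulary size
N_TRACKS, N_COLS, N_STEPS = 4, 16, 8

rng = np.random.default_rng(2)
x = np.full((N_TRACKS, N_COLS), MASK)    # "pure noise": everything masked

def predict_tokens(x):
    """Placeholder for the trained denoiser: propose a token for every cell.
    A real model would condition on the already-committed cells."""
    return rng.integers(0, VOCAB, x.shape)

for step in range(N_STEPS):
    proposal = predict_tokens(x)
    still_masked = np.argwhere(x == MASK)
    if len(still_masked) == 0:
        break
    # Commit progressively more cells as denoising proceeds.
    n_commit = int(np.ceil(len(still_masked) / (N_STEPS - step)))
    picks = still_masked[rng.choice(len(still_masked), n_commit, replace=False)]
    x[picks[:, 0], picks[:, 1]] = proposal[picks[:, 0], picks[:, 1]]

assert (x != MASK).all()      # the whole score is now filled in
print(x)
```

Combined with the masking from the previous sketch, pinned cells would simply never be counted as masked, which is what lets existing tracks pass through generation unchanged.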

Just a few years ago we saw a neural network trained to generate Bach, and while it was capable of moments of brilliance, it didn't produce uniformly listenable output. GETMusic is on an entirely different level. The model and code are available online, and there is a research paper to accompany it.

You can watch a video putting it through its paces just below the page break, and there are more videos on the project summary page.
