Retrotechtacular: Understanding Protein Synthesis Through Interpretive Dance

With the principles of molecular biology very much in the zeitgeist these days, we thought it would be handy to provide some sort of visual aid to help our readers understand the complex molecular machines at work deep within each cell of the body. And despite appearances, this film using interpretive dance to explain protein synthesis will teach you everything you need to know.

Now, there are those who go on and on about the weirdness of the 1960s, but as this 1971 film from Stanford shows, the 60s were just a warm-up act for the really weird stuff. The film is a study in contrasts, with the setup being provided by the decidedly un-groovy Paul Berg, a professor of biochemistry who would share the 1980 Nobel Prize in Chemistry for his contributions to nucleic acid research. His short sleeves and skinny tie stand in stark contrast to the writhing mass of students capering about on a grassy field, acting out the various macromolecules involved in protein synthesis. Two groups form the subunits of the ribosome, a chain of balloon-headed students acts as the messenger RNA (mRNA) that codes for a protein, and little groups standing in for the transfer RNA (tRNA) molecules that carry the amino acids float in and out of the process.

The level of detail, at least as it was understood in 1971, is impressively complete, with soloists representing things like T-factor and the energy-carrying molecule GTP. And while we especially like the puff of smoke representing GTP’s energy transfer, we strongly suspect a lot of other smoke went into this production.

Kitsch aside, and with apologies to Lewis Carroll and his Jabberwock, you’ll be hard-pressed to find a modern animation that captures the process better. True, a more traditional animation might make the mechanistic aspects of translation clearer, but the mimsy gyre and gimble of this dance really emphasize the role random Brownian motion plays in macromolecular processes. And you’ll never see the term “tRNA” again without thinking of this film.

Continue reading “Retrotechtacular: Understanding Protein Synthesis Through Interpretive Dance”

The Protein Folding Break-Through

Researchers at DeepMind have proudly announced a major breakthrough in predicting static folded protein structures with a new program known as AlphaFold 2. Protein folding has been an ongoing problem for researchers since 1972, when Christian Anfinsen speculated in his Nobel Prize acceptance speech that the three-dimensional structure of a given protein should be algorithmically determined by the one-dimensional DNA sequence that encodes it. When you hear protein, you might think of muscles and whey powder, but the proteins mentioned here are chains of amino acids that fold into complex shapes. Cells use these proteins for almost everything. Many of the enzymes, antibodies, and hormones inside your body are folded proteins. We’ve discussed why protein folding is important as well as covered recent advancements in cryo-electron microscopy used to experimentally determine the structure of folded proteins.

The shape of proteins largely controls their function, and if we can predict their shape, we get much closer to predicting how they interact. AlphaFold 2 only predicts the static folded state; given the sheer number of interactions that can change a protein’s conformation, dynamic protein structures are still out of reach. Even so, DeepMind’s technical achievement is hard to overstate: for a typical protein, there are an estimated 10^300 different configurations.
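
To get a feel for where a number like 10^300 comes from, here’s a back-of-the-envelope, Levinthal-style estimate. The per-residue conformation count and chain length below are illustrative round numbers of our choosing, not DeepMind’s figures:

```python
# Back-of-the-envelope, Levinthal-style estimate (illustrative
# round numbers: ~10 conformations per residue, ~300 residues).
conformations_per_residue = 10
residues = 300
total = conformations_per_residue ** residues  # 10^300 configurations

# Even sampling a trillion configurations per second, brute-force
# search is hopeless; hence the excitement over learned prediction.
seconds_per_year = 60 * 60 * 24 * 365
years = total // (10**12 * seconds_per_year)
print(f"exhaustive search: ~10^{len(str(years)) - 1} years")  # ~10^280
```

The exact exponent isn’t the point; the point is that exhaustive search was never on the table.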

Out of the 180 million protein sequences in protein databases, only 170,000 have had their structures identified. Technologies like the cryo-electron microscope make the process of mapping their structure easier, but it is still complex and tedious to go from sequence to structure. AlphaFold 2 and other folding algorithms are tested against this 170,000-member corpus to determine their accuracy. The previous highest-scoring algorithm of 2016 had a median global distance test (GDT) score of 40 (on a scale of 0 to 100, with 100 being the best) in the most difficult category (free modeling). In 2018, AlphaFold made waves by pushing that up to the high 50s. AlphaFold 2 brings that GDT up to 87.
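
For a feel for what that score actually measures: GDT_TS takes the fraction of a model’s alpha-carbon atoms that land within 1, 2, 4, and 8 ångströms of their positions in the experimentally determined structure, then averages the four fractions. A simplified sketch (the real metric optimally superimposes the two structures first, which we skip here):

```python
import numpy as np

def gdt_ts(predicted: np.ndarray, experimental: np.ndarray) -> float:
    """Simplified Global Distance Test (Total Score).

    Both arguments are (N, 3) arrays of alpha-carbon coordinates in
    angstroms. The real metric superimposes the structures before
    measuring; this sketch assumes they are already aligned.
    """
    distances = np.linalg.norm(predicted - experimental, axis=1)
    cutoffs = (1.0, 2.0, 4.0, 8.0)  # CASP's standard thresholds
    fractions = [np.mean(distances <= c) for c in cutoffs]
    return 100.0 * float(np.mean(fractions))  # 0-100, higher is better

# Sanity check: a perfect prediction scores 100.
coords = np.random.rand(100, 3) * 50.0
print(gdt_ts(coords, coords))  # 100.0
```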

At this point in time, it is hard to determine what sort of effects this will have on the drug industry, healthcare, and society in general. Research has traditionally run in one direction: create the protein, identify what it does, then figure out its structure. AlphaFold 2 represents an avenue toward running that whole process backward. Whether the next goal is to map all the proteins encoded in the human genome or to find new, more effective drug treatments, we’re quite excited to see what becomes of this landmark breakthrough.

Continue reading “The Protein Folding Break-Through”

Templateize Your Timetable With EPaper Templates

To date, e-paper technology has been great for two things: displaying static black and white text, and luring hackers with the promise of a display that is easy on the eyes and runs forever. But poor availability of bare panels has made the second (we would say more important) goal slow to materialize. One of the first projects that comes to mind is using such a display to show ambient information like a daily weather summary, train schedules, and calendar appointments. Usually this means rolling your own software stack, but [Christopher Mullins] has put together a shockingly complete toolset for designing and updating such parameterized displays called epaper_templates.

To get it out of the way first, there is no hardware component to epaper_templates. It presupposes you have an ESP32 and a display chosen from a certain list of supported models. A quick search on our favorite import site turned up a wide variety of options for bare panels and prebuilt devices (ESP32 and display, plus other goodies) starting at around $40 USD, so this should be a low threshold to cross.

Once you have the device, epaper_templates provides the magic. [Christopher]’s key insight is that an ambient display is typically composed of groups of semi-static data displayed in a layout that never changes. The only variation is updates to the data, which is fully parameterized: temperature is always an integer in Fahrenheit, train schedules are lists of minutes and hours, etc. Layouts like this aren’t difficult to make, but they require the developer to reimplement lots of boilerplate. To make them easy to generate, epaper_templates provides a fully featured web UI that lets the user freely customize a layout, then exports it as JSON which the device consumes.
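
To make the idea concrete, a layout export might look something like the hypothetical sketch below. The actual field names and schema are defined by the epaper_templates project itself, so check the repo before copying anything verbatim:

```python
import json

# Hypothetical sketch of a layout export; the real schema (field
# names, formatter options, etc.) is defined by epaper_templates.
layout = {
    "texts": [
        {"x": 10, "y": 20, "font": "FreeSans12pt",
         "static": "Next train:"},
        {"x": 130, "y": 20, "font": "FreeSans12pt",
         "variable": "next_train_minutes"},  # filled in at runtime
        {"x": 10, "y": 110, "font": "FreeSans9pt",
         "variable": "last_update",
         "formatter": {"type": "time", "format": "%H:%M"}},  # strftime
    ],
    "lines": [
        {"x1": 0, "y1": 40, "x2": 296, "y2": 40},  # section divider
    ],
}
print(json.dumps(layout, indent=2))
```

The layout is fixed; only the variable slots ever get re-rendered.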

The sample layout configured in the video below

The web UI is shockingly capable, especially by the standards of the embedded web. (Remember, it’s hosted on the ESP32 itself!) The user can place text and configure fonts and styles. Once placed, the text can be set to static strings or tied to variables, and if the string is a timestamp it can be formatted with a standard strftime format string.

To round out the feature set, the user can place images and lines to divide the display. Once the display is described, everything becomes simple to update programmatically. The ESP can be configured to subscribe to certain MQTT topics from which it will receive updates, or if that is too much infrastructure, there is a handy REST API which accepts JSON objects containing variables or bitmaps to update on the device.
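
Pushing fresh data could then be as simple as the sketch below. The hostname, endpoint path, and variable names here are placeholders of ours; the project’s README documents the actual REST routes and payload format:

```python
import requests

# Hypothetical update push over the REST API. Hostname, endpoint
# path, and variable names are placeholders; consult the
# epaper_templates README for the real API.
ESP32 = "http://epaper-display.local"

variables = {"next_train_minutes": "7", "outdoor_temp_f": "43"}
resp = requests.put(f"{ESP32}/api/v1/variables", json=variables)
resp.raise_for_status()
print("display variables updated")
```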

We’re totally blown away by the level of functionality in epaper_templates! Check out the repo for more detail about its capabilities. For a full demo that walks through configuring a UI with train arrival times, weather (both instant temperature and a forecast with icons), and date/time, check out the video after the break. Source for the example is here, but be sure to check out examples/ in the repo for more examples.

Continue reading “Templateize Your Timetable With EPaper Templates”

New Microscope Directly Images Protein Atoms

There’s an old joke that you can’t trust atoms — they make up everything. But until fairly recently, there was no real way to see individual atoms. You could infer things about them using X-ray crystallography or measure their pull on tiny probes using atomic force microscopes, but not take a direct image. Until now. Two laboratories recently used cryo-electron microscopy to directly image atoms in a protein molecule with a resolution of about 1.2 x 10^-7 millimeters, or 1.2 ångströms. The previous record was 1.54 ångströms.

Recent improvements in electron beam technology helped, as did a device that ensures electrons that strike the sample travel at nearly the same speeds. The latter technique resulted in images so clear, researchers could identify individual hydrogen atoms in the apoferritin molecule and the water surrounding it.

Continue reading “New Microscope Directly Images Protein Atoms”

So What Is Protein Folding, Anyway?

The current COVID-19 pandemic is rife with problems that hackers have attacked with gusto. From 3D printed face shields and homebrew face masks to replacements for full-fledged mechanical ventilators, the outpouring of ideas has been inspirational and heartwarming. At the same time there have been many efforts in a different area: research aimed at fighting the virus itself.

Getting to the root of the problem seems to have the most potential for ending this pandemic and getting ahead of future ones, and that’s the “know your enemy” problem that the distributed computing effort known as Folding@Home aims to address. Millions of people have signed up to donate cycles from spare PCs and GPUs, and in the process have created the largest supercomputer in history.

But what exactly are all these exaFLOPS being used for? Why is protein folding something to direct so much computational might toward? What’s the biochemistry behind this, and why do proteins need to fold in the first place? Here’s a brief look at protein folding: what it is, how it happens, and why it’s important.

Continue reading “So What Is Protein Folding, Anyway?”

Supercon: Alex Hornstein’s Adventures In Hacking The Lightfield

We are all familiar with the idea of a hologram, either from the monochromatic laser holographic images you’ll find on your bank card or from fictional depictions such as Princess Leia’s distress message from Star Wars. And we’ve probably read about how laser holograms work, with a split beam of coherent light recombined to fall upon a photographic plate. They require no special glasses or headsets and possess both stereoscopic and spatial 3D rendering, in that you can view both the 3D Princess Leia and your bank’s logo (or whatever is on your card) as 3D objects from multiple angles. So we’re all familiar with that holographic end product, but what we probably aren’t so familiar with is what they represent: the capture of a light field.

In his Hackaday Superconference talk, co-founder and CTO of holographic display startup Looking Glass Factory Alex Hornstein introduced us to the idea of the light field, and how its capture is key to understanding the mechanics of a hologram.

Capturing the light field with a row of GoPro cameras.

His first point is an important one: he expands the definition of a hologram from its conventional form as one of those monochromatic laser-interference photographic images into any technology that captures a light field. This is, he concedes, a contentious barrier to overcome. To do that, he first has to explain what a light field is.

When we take a 2D photograph, we capture all the rays of light that are incident upon something that is a good approximation to a single point, the lens of the camera involved. The scene before us has of course countless other rays that are incident upon other points or that are reflected from surfaces invisible from the single point position of the 2D camera. It is this complex array of light rays which makes up the light field of the image, and capturing it in its entirety is key to manipulating the result. This is true no matter the technology used to bring it to the viewer. A light field capture can be used to generate variable focus 2D images after the fact as is the case with the Lytro cameras, or it can be used to generate a hologram in the way that he describes.
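
One classic payoff of holding the whole light field is synthetic refocusing, the trick behind the Lytro’s after-the-fact focus: shift each captured view in proportion to its camera offset and average the stack, and whatever depth plane lines up stays sharp while everything else blurs. A minimal sketch, assuming views captured at evenly spaced positions along a single rail:

```python
import numpy as np

def refocus(views: list[np.ndarray], shift_px: float) -> np.ndarray:
    """Shift-and-add synthetic refocusing over a 1D camera rail.

    Each view is translated horizontally in proportion to its offset
    from the central camera position, then the stack is averaged.
    Objects at the depth plane selected by `shift_px` line up and
    stay sharp; everything else smears into blur. (np.roll wraps at
    the image edges; a real implementation would crop instead.)
    """
    center = (len(views) - 1) / 2.0
    shifted = [
        np.roll(v, int(round((i - center) * shift_px)), axis=1)
        for i, v in enumerate(views)
    ]
    return np.mean(shifted, axis=0)

# Sweeping shift_px refocuses through the scene after capture, the
# same after-the-fact trick the Lytro cameras offered.
```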

One possible future use of the technology, a virtual holographic aquarium.

The point of his talk is that complex sorcery isn’t required to capture a light field, something he demonstrates in front of the audience with a volunteer and a standard webcam on a sliding rail. Multiple 2D images are taken at different points, which can be combined to form a light field. It matters less that not every component of the light field has been captured than that there is enough to create the holographic image from the point of view of the display. And since he happens to be head honcho at a holographic display company, he can show us the result. Looking Glass Factory’s display panel uses a lenticular lens to combine the multiple images into a hologram, and is probably one of the most inexpensive ways to practically display this type of image.
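
Replicating the webcam-on-a-rail capture at home is about as simple as hardware hacks get; a rough sketch of the software side might look like this (the camera index, view count, and filenames are our own placeholder choices, and sliding the camera between shots is up to you or your stepper motor):

```python
import cv2

# Rough sketch of gathering views with one webcam on a sliding rail,
# in the spirit of the on-stage demo. Camera index, view count, and
# filenames are placeholder choices.
NUM_VIEWS = 24

camera = cv2.VideoCapture(0)
views = []
for i in range(NUM_VIEWS):
    input(f"Rail position {i + 1}/{NUM_VIEWS}: press Enter to capture")
    ok, frame = camera.read()
    if not ok:
        raise RuntimeError("camera read failed")
    views.append(frame)
    cv2.imwrite(f"view_{i:02d}.png", frame)
camera.release()
# The saved frames can now be fed to a refocusing routine like the
# one above, or tiled into the multi-view format a lenticular
# display expects.
```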

Since the arrival of the Lytro cameras a year or two ago, the concept of a light field has been in the air, but it has more often been surrounded by proprietary marketing woo. This talk breaks through that to deliver a clear explanation of the subject, and is a fascinating watch. Alex leaves us with news of some of the first light-field-derived video content being put online, and with some decidedly science-fiction possible futures for the technology. Even if you aren’t planning to work in this field, you will almost certainly encounter it over the next few years.

Continue reading “Supercon: Alex Hornstein’s Adventures In Hacking The Lightfield”

Way To Go, Einstein; His Time Spent Being Wrong

When you hear someone say “Einstein”, what’s the first thing that pops into your head? Is it high IQ… genius… or maybe E=mc²? Do you picture his wild grey hair shooting in all directions as he peacefully folds the pages back from his favorite book? You might even think of nuclear bombs, clocks and the Nobel Prize. It will come as a surprise to many that these accomplishments were a very small part of his life. Indeed, Einstein turned the world of classical physics upside down with his special theory of relativity, and he was only in his mid-twenties when he did so.

What about the rest of his life? Was Einstein a “one-hit wonder”? What else did he put his remarkable mind to? Surely he tackled other dilemmas that plagued the scientific world during his moment in history. He was a genius after all… arguably one of the smartest people ever to have walked the earth. His very name has become synonymous with genius. He pulled the rug out from under Isaac Newton, whose theories had held the universe together for more than two centuries. He talked about enigmatic concepts like space and time with an elegance that laid bare the beauty hidden within their simplicity. Statues have been made of him. His name and face are recognizable across the globe.

But when you hear someone say “Einstein”, do you think of a man who spent the better part of his life… being wrong? You should.

Continue reading “Way To Go, Einstein; His Time Spent Being Wrong”