Still Got Film To Scan? This Lego And Raspberry Pi Scanner Is For You

There was a time, during the early years of mass digital photography, when a film scanner was a common sight. A small box usually connected to a USB port, it had a slot for slides or negatives. In 2020 they’re a rare breed, but never fear! [Bezineb5] has a solution in the shape of an automated scanner using a Raspberry Pi and a mechanism made of Lego.

The Lego mechanism is a sprocket feeder that moves the film past the field of view of an SLR camera. The software on the Pi runs in a Docker container, and features a machine learning approach to spotting frame boundaries. This is beyond the capabilities of the Pi itself, so the work is offloaded to a Google Coral accelerator.
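
For a rough idea of what that offload looks like in code, here’s a minimal Python sketch of running a TensorFlow Lite model on a Coral Edge TPU using the standard tflite_runtime delegate pattern. The model file name and the frame-boundary score are hypothetical stand-ins, not [Bezineb5]’s actual code:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load a (hypothetical) frame-boundary classifier compiled for the Edge TPU.
interpreter = Interpreter(
    model_path="frame_boundary_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def boundary_score(image):
    """Return the model's confidence that this strip of film shows a frame edge."""
    # The model expects a batch of one image matching its input dtype.
    data = np.expand_dims(image, axis=0).astype(input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], data)
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])[0]
```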

The whole process is automated, with the Pi controlling not only the Lego but also the camera, to the extent of retrieving the photos from it to the Pi. There’s a smart web interface to control everything, making the process — if you’ll excuse the pun — a snap. There’s a video of it in action, which you can see below the break.

We’ve featured many film scanner projects over the years; one that remains memorable is this 3D printed lens mount.

Continue reading “Still Got Film To Scan? This Lego And Raspberry Pi Scanner Is For You”

The Protein Folding Break-Through

Researchers at DeepMind have proudly announced a major breakthrough in predicting static folded protein structures with a new program known as AlphaFold 2. Protein folding has been an ongoing problem for researchers since 1972, when Christian Anfinsen speculated in his Nobel Prize acceptance speech that the three-dimensional structure of a given protein should be algorithmically determined by the one-dimensional DNA sequence that describes it. When you hear protein, you might think of muscles and whey powder, but the proteins mentioned here are chains of amino acids that fold into complex shapes. Cells use these proteins for almost everything. Many of the enzymes, antibodies, and hormones inside your body are folded proteins. We’ve discussed why protein folding is important, and we’ve also covered recent advancements in cryo-electron microscopy used to experimentally determine the structure of folded proteins.

The shape of proteins largely controls their function, and if we can predict their shape then we get much closer to predicting how they interact. AlphaFold 2 only predicts the static folded state; given the sheer number of interactions that can change a protein, dynamic protein structures are still out of reach. Even so, DeepMind’s technical achievement is hard to overstate. For a typical protein, there are an estimated 10^300 different configurations.

Out of the 180 million protein sequences in the protein database, only 170,000 have had their structures identified. Technologies like the cryo-electron microscope make the process of mapping their structure easier, but it is still complex and tedious to go from sequence to structure. AlphaFold 2 and other folding algorithms are tested against this 170,000-member corpus to determine their accuracy. The previous highest-scoring algorithm in 2016 had a median global distance test (GDT) score of 40 (on a scale of 0 to 100, with 100 being best) in the most difficult category (free-modeling). In 2018, AlphaFold made waves by pushing that up to the high 50s. AlphaFold 2 brings that GDT up to 87.
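
To put those numbers in context, GDT is usually reported as GDT_TS: after superimposing the predicted and experimental structures, you average the percentage of residues that land within 1, 2, 4, and 8 Å of their experimental positions. A back-of-the-envelope sketch, assuming the C-alpha coordinates are already superposed, looks something like this:

```python
import numpy as np

def gdt_ts(predicted, experimental):
    """GDT_TS score (0-100) for two already-superposed N x 3 coordinate arrays."""
    # Per-residue positional error in angstroms.
    distances = np.linalg.norm(predicted - experimental, axis=1)
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    # Average, over the four cutoffs, of the fraction of residues within each cutoff.
    return 100.0 * np.mean([(distances <= c).mean() for c in cutoffs])
```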

At this point in time, it is hard to determine what sort of effects this will have on the drug industry, healthcare, and society in general. Traditionally, research has meant first creating a protein, then identifying what it does, and finally figuring out its structure. AlphaFold 2 represents an avenue towards doing that whole process completely backward. Whether the next goal is to map all the proteins encoded in the human genome or find new, more effective drug treatments, we’re quite excited to see what becomes of this landmark breakthrough.

Continue reading “The Protein Folding Break-Through”

“Enhance” Is Now A Thing, But Don’t Believe What You See

It was a trope all too familiar in the 1990s — law enforcement in movies and TV taking a pixellated, blurry image, and hitting the magic “enhance” button to reveal suspects to be brought to justice. Creating data where there simply was none before was a great way to ruin immersion for anyone with a modicum of technical expertise, and spoiled many movies and TV shows.

Of course, technology marches on and what was once an utter impossibility often becomes trivial in due time. These days, it’s expected that a sub-$100 computer can easily differentiate between a banana, a dog, and a human, something that was unfathomable at the dawn of the microcomputer era. This capability is rooted in the technology of neural networks, which can be trained to do all manner of tasks formerly considered difficult for computers.

With neural networks and plenty of processing power at hand, there has been a flood of projects aiming to “enhance” everything from low-resolution human faces to old film footage, increasing resolution and filling in data that simply isn’t there. But what’s really going on behind the scenes, and is this technology really capable of accurately enhancing anything?
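
As a taste of what these tools do under the hood, here’s a minimal sketch of neural-network upscaling with the dnn_superres module from opencv-contrib-python. The pre-trained ESPCN model file has to be downloaded separately, and the extra detail is, of course, an educated guess by the network rather than recovered data:

```python
import cv2

# Requires opencv-contrib-python and a pre-trained model file (e.g. ESPCN_x4.pb),
# which must be downloaded separately.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")
sr.setModel("espcn", 4)           # algorithm name and upscaling factor

low_res = cv2.imread("blurry_suspect.png")
enhanced = sr.upsample(low_res)   # 4x larger, with detail hallucinated by the network
cv2.imwrite("enhanced.png", enhanced)
```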

Continue reading ““Enhance” Is Now A Thing, But Don’t Believe What You See”

Robots Can Finally Answer, Are You Talking To Me?

Voice assistants, love them or hate them, are becoming more and more commonplace. One problem for voice assistants is the situation of multiple devices listening in the same place. When a command is given, which device should answer? Researchers at CMU’s Future Interfaces Group [Karan Ahuja], [Andy Kong], [Mayank Goel], and [Chris Harrison] have an answer: smart assistants should try to infer whether the user is facing the device they want to talk to. They call it direction-of-voice, or DoV.

Currently, smart assistants use a simple race to see which device heard the command first. The reasoning is that the device you are closest to will likely hear it first. However, in situations with echoes, or when you’re equidistant from multiple devices, the outcome can seem arbitrary to the user.

The implementation of DoV uses an Extra-Trees classifier from the Python scikit-learn toolkit. Several other machine learning algorithms were considered, but ultimately efficiency won out and Extra-Trees was selected. Another interesting facet of the research was determining what facing really means. The team had human ‘listeners’ stand in for smart assistants. A ‘talker’ would speak the key phrase while the ‘listener’ determined whether the talker was facing them or not. Based on this definition of facing, the system can determine if someone is facing the device with 90% accuracy, rising to 93% with per-room calibration.
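
The classifier side of that pipeline is plain scikit-learn. A minimal sketch, using synthetic placeholder data where the real system would use acoustic features extracted from each utterance, might look like this:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: in the real system each row would be acoustic features
# from one utterance, labelled facing the device (1) or not facing it (0).
X, y = make_classification(n_samples=1000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("Facing/not-facing accuracy:", clf.score(X_test, y_test))
```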

Both their algorithm and the data they collected have been open-sourced on GitHub. Perhaps when you’re building your own voice assistant, you can incorporate DoV to improve wake-word accuracy.

Continue reading “Robots Can Finally Answer, Are You Talking To Me?”

Training A Neural Network To Play A Driving Game

Often, when we think of getting a computer to complete a task, we contemplate creating complex algorithms that take in the relevant inputs and produce the desired behaviour. For some tasks, like navigating a car down a road, the sheer multitude of input data and its relationship to the desired output is so complex that it becomes near-impossible to code a solution. In these cases, it can make more sense to create a neural network and train the computer to do the job, as one would a human. On a more basic level, [Gigante] did just that, teaching a neural network to play a basic driving game with a genetic algorithm.

The game itself is a basic top-down 2D driving game. The AI is given the distance to the edge of the track along five lines projected from the front of the vehicle at different angles. The AI also knows its speed and direction. Given these seven numbers, it calculates the outputs for steering, braking, and acceleration to drive the car.
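
A control policy like that can be tiny. Purely as a sketch, with layer sizes and activations that are our guesses rather than [Gigante]’s actual architecture, the whole thing boils down to a couple of matrix multiplications:

```python
import numpy as np

class DriverNet:
    """Tiny feedforward policy: 7 sensor inputs -> hidden layer -> steering, throttle, brake."""
    def __init__(self, hidden=8):
        self.w1 = np.random.randn(7, hidden) * 0.5
        self.b1 = np.zeros(hidden)
        self.w2 = np.random.randn(hidden, 3) * 0.5
        self.b2 = np.zeros(3)

    def act(self, sensors):
        # sensors: five distances to the track edge, plus speed and heading.
        h = np.tanh(sensors @ self.w1 + self.b1)
        steer, throttle, brake = np.tanh(h @ self.w2 + self.b2)
        return steer, throttle, brake
```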

To train the AI, [Gigante] started with 650 AIs and picked the best performer, which just barely managed to navigate the first two corners. This AI was marked as the parent of the next generation, and its offspring were created by applying random mutations. Each generation showed some improvement, with [Gigante] picking the best performers each time to parent the next generation. Within just four iterations, some of the cars are able to complete a full lap. With enough training, the cars are able to complete the course at great speed without hitting the walls at all.
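
The genetic algorithm itself is just as simple in outline: score every network on the track, keep the best one, and breed the next generation by copying it with small random tweaks. A sketch of that loop, building on the hypothetical DriverNet above and with a placeholder fitness function, could look like this:

```python
import copy
import numpy as np

def mutate(parent, rate=0.1):
    """Copy the parent network and jitter its weights with Gaussian noise."""
    child = copy.deepcopy(parent)
    for w in (child.w1, child.b1, child.w2, child.b2):
        w += rate * np.random.randn(*w.shape)
    return child

def evolve(fitness, generations=50, population=650):
    best = DriverNet()
    for _ in range(generations):
        # Keep the current champion and fill the rest of the population with mutants.
        candidates = [best] + [mutate(best) for _ in range(population - 1)]
        # fitness() would run each candidate around the track and return, say, distance covered.
        best = max(candidates, key=fitness)
    return best
```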

It’s a great example of machine learning and the use of genetic algorithms to improve fitness over time. [Gigante] points out that there’s no need for a human in the loop either, if the software is coded to self-measure the fitness of each generation. We’ve seen similar techniques used to play Mario, too. Video after the break.

Continue reading “Training A Neural Network To Play A Driving Game”

Attempting To Generate Photorealistic Video With Neural Networks

Over the past decade, we’ve seen great strides made in the area of AI and neural networks. When trained appropriately, they can be coaxed into generating impressive output, whether it be in text, images, or simply in classifying objects. There’s also much fun to be had in pushing them outside their prescribed operating region, as [Jon Warlick] attempted recently.

[Jon]’s work began using NVIDIA’s GauGAN tool. It’s capable of generating pseudo-photorealistic images of landscapes from segmentation maps, where different colors in a 2D image represent things such as trees, dirt, mountains, or water. After spending much time toying with the software, [Jon] decided to see if it could be pressed into service to generate video instead.

The GauGAN tool only takes in a single segmentation map and outputs a single image, so [Jon] had to get creative. A video was generated and exported as individual frames, and each frame was fed to GauGAN as its own segmentation map. The output frames from GauGAN were then reassembled into a video.
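
The plumbing for that kind of frame-by-frame round trip is easy to sketch with ffmpeg; the stylize() step below is a placeholder, since the real frames were pushed through NVIDIA’s GauGAN tool rather than a local function:

```python
import pathlib
import shutil
import subprocess

frames = pathlib.Path("frames")
out = pathlib.Path("out")
frames.mkdir(exist_ok=True)
out.mkdir(exist_ok=True)

def stylize(src, dst):
    # Placeholder: in the real workflow each segmentation map was run through
    # GauGAN to produce a pseudo-photorealistic frame.
    shutil.copy(src, dst)

# Explode the source video into numbered PNG segmentation maps.
subprocess.run(["ffmpeg", "-i", "segmentation.mp4", str(frames / "%05d.png")], check=True)

for frame in sorted(frames.glob("*.png")):
    stylize(frame, out / frame.name)

# Stitch the generated frames back together into a video.
subprocess.run(["ffmpeg", "-framerate", "30", "-i", str(out / "%05d.png"),
                "-c:v", "libx264", "-pix_fmt", "yuv420p", "result.mp4"], check=True)
```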

The results are somewhat psychedelic, as one would expect. GauGAN’s single-image workflow means there is only coincidental consistency between consecutive frames, creating a wild, shifting vista. While it’s not a technique we expect to see used for serious purposes anytime soon, it’s a great experiment in seeing how far the technology can be pushed. It’s not the first time we’ve seen such technology used to create full motion video, either. Video after the break.

Continue reading “Attempting To Generate Photorealistic Video With Neural Networks”

Tube Amp Is Modeled With The Power Of AI

There is a certain magic and uniqueness to hardware, particularly when it comes to audio. Tube amplifiers are well-known and well-loved by audio enthusiasts and musicians alike. However, that uniqueness comes at a price: the gear takes up space and cannot be configured outside the bounds of what it was designed to do. [keyth72] has decided to take it upon themselves to recreate the smooth sound of the Fender Blues Jr., a small tube guitar amp. But rather than using hardware or standard audio software, the magic of AI was thrown at it.

In some ways, recreating a transformation is exactly what AI is designed for: there’s a clear, recordable input and a corresponding output. In this case, [keyth72] recorded several guitar sessions with the guitar audio sent through the device they wanted to recreate. Using WaveNet, they created a model that applies the transform to input audio in real-time. The Gain and EQ knobs were handled outside the model itself to keep things simple. Instructions on how to train your own model are included on the GitHub page.
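
In outline, training such a model is ordinary supervised learning on paired audio: the dry guitar signal goes in, and the network is penalized for deviating from the recorded amp output. The sketch below uses a plain stack of dilated 1D convolutions in PyTorch as a stand-in for the actual WaveNet implementation, so treat it as an illustration of the idea rather than [keyth72]’s code:

```python
import torch
import torch.nn as nn

class AmpModel(nn.Module):
    """Stack of dilated 1D convolutions: dry guitar samples in, amp-flavoured samples out."""
    def __init__(self, channels=16, layers=8):
        super().__init__()
        blocks, in_ch = [], 1
        for i in range(layers):
            # Dilation doubles each layer, widening the receptive field over the waveform.
            blocks += [nn.Conv1d(in_ch, channels, kernel_size=3,
                                 dilation=2 ** i, padding=2 ** i), nn.Tanh()]
            in_ch = channels
        blocks.append(nn.Conv1d(channels, 1, kernel_size=1))
        self.net = nn.Sequential(*blocks)

    def forward(self, x):          # x: (batch, 1, samples)
        return self.net(x)

model = AmpModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(dry, amped):
    """One optimization step on matching (batch, 1, samples) slices of the recordings."""
    optimizer.zero_grad()
    loss = loss_fn(model(dry), amped)
    loss.backward()
    optimizer.step()
    return loss.item()
```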

While the model is only approximating the real hardware, it still sounds quite impressive, and perhaps the next time you need the particular sound of a home-built amp or guitar pedal, you might reach for your computer instead.

Continue reading “Tube Amp Is Modeled With The Power Of AI”