Creating Surreal Short Films From Machine Learning

Ever since we first saw the nightmarish artwork produced by Google DeepDream and the ridiculous faux paintings produced from neural style transfer, we’ve been aware of the ways machine learning can be applied to visual art. With commercially available trained models and automated pipelines for generating images from relatively small training sets, it’s now possible for developers without theoretical knowledge of machine learning to easily generate images, provided they have sufficient access to GPUs. Filmmaker [Kira Bursky] took this a step further, creating a surreal short film that features characters and textures produced from image sets.

She began with about 150 photos of her face, 200 photos of film locations, 4600 photos of past film productions, and 100 drawings as the main datasets.

Using GAN models for nebulas, faces, and skyscrapers in RunwayML, she found the results from training on her face set ranged from disintegrated to realistic to painterly. Many of the images still evoke aspects of her original face, albeit distorted; whether that's the model finding features common to skyscrapers and faces or our own bias toward seeing faces everywhere is up to the viewer.

On the other hand, running the film location photos through models trained on faces and bedrooms produced abstract textures and “surreal and eerie faces like a fever dream”. Perhaps it's the lack of recognizable anchors – the familiar facial features – that gives the transformed images such a surreal feel.

[Kira] certainly uses these results to her advantage, brainstorming a concept for a short film revolving around a main character plagued by nightmares. While her objective was to convey a series of emotionally striking scenes, the models she used to produce them are also quite interesting.

She started off with MiDaS, a monocular depth estimation model created by a team of researchers from ETH Zurich and Intel. Given a single image, it assigns each region a depth relative to the rest of the scene. She also used Mask R-CNN to mask the backgrounds out of her generated faces, then composited the generated images in Photoshop to create the main character for her short film.
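MiDaS is published on PyTorch Hub, so pulling a relative depth map out of a single frame only takes a few lines. Here's a minimal sketch following the documented Hub usage – not [Kira]'s actual RunwayML pipeline, and the file names are placeholders:

```python
import cv2
import torch

# Load the lightweight MiDaS variant and its matching input transform
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
batch = transforms.small_transform(img)  # resize + normalize for the model

with torch.no_grad():
    prediction = midas(batch)
    # Upsample the low-resolution prediction back to the frame size
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

# Normalize to 0-255 and save as a grayscale depth map
depth = prediction.cpu().numpy()
depth = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth.png", depth)
```

Run per frame over an image sequence, maps like this are exactly what feeds the camera blur and displacement effects described below.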

To make the character walk, she used the Liquid Warping GAN, a framework for human motion imitation and appearance transfer created by a team from ShanghaiTech University and Tencent AI Lab. Using its 3D body mesh recovery module, she could take her original images and synthesize video of the character following reference footage of herself going through the motions of walking. Later on, she applied similar motion-transfer techniques to the faces, running them through the First Order Motion Model to simulate different emotions, and then joined the facial movements with her character in After Effects.

Bringing the results together, she used the depth-map videos to animate a 3D camera blur, giving viewers anchor points that make the footage less disorienting, and built a displacement map to heighten the sense of depth and movement within the scenes. In After Effects, she also overlaid dust and film grain effects to finish the look. The result is a surprisingly cinematic film made entirely of images and videos generated by machine learning models. With the help of the depth adjustments, it almost looks like something you might see in a nightmare.

Check out the result below:


Using An FPGA To Glitch The Olimex LPC-P1343

After trying out hardware hacking using an FPGA to interface with target hardware, [Grazfather] was inspired to try using the iCEBreaker (one of the many hobbyist FPGAs to have recently flooded the market) to build a UART-controllable glitcher for the Olimex LPC-P1343.

FPGA modules: the cmd module intercepts what the host computer sends over UART; the resetter holds the reset line until the target has restarted; the delay starts counting at reset and waits a configured number of cycles before asserting its signal; the trigger waits for the delay to finish before telling the pulse module to fire; and the pulse module, which works much like the delay module, drives the power multiplexer.

When the target board boots up, the bootROM reads a word from flash and uses it to decide whether the UART drops into a shell, and whether that shell is allowed to read out the flash. The mechanism exists for development: firmware gets debugged through the bootloader, and a device is only locked down when the firmware is production-ready. The vulnerability is that only a specific value read from address 0x2FC (together with the state of a few pins) locks the bootloader as intended, while any other value at that address causes the bootROM to consider the device unlocked. Essentially, the mechanism fails in the opposite way a lock ought to.
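A few lines of Python make the inverted logic obvious. The magic numbers below are the documented code read protection (CRP) constants for NXP's LPC parts – double-check them against the LPC1343 user manual; the point here is the shape of the check, not the exact values:

```python
CRP_WORD_ADDR = 0x2FC

# Documented NXP CRP levels (verify against the LPC1343 user manual)
CRP_LEVELS = {
    0x12345678: "CRP1",
    0x87654321: "CRP2",
    0x43218765: "CRP3",
}

def bootrom_check(word_at_0x2fc: int) -> str:
    """Mirrors the bootROM's decision: protection engages only on an
    exact match, so a single corrupted read falls through to
    'unlocked' -- the opposite of failing safe."""
    return CRP_LEVELS.get(word_at_0x2fc, "unlocked")

print(bootrom_check(0x87654321))  # CRP2: locked as intended
print(bootrom_check(0x87654320))  # one corrupted bit: wide open
```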

The goal is to get the CPU to misread the flash at the precise moment it should be reading that specific value, so that it jumps into the bootloader in the unlocked state. The FPGA sits between the host machine and the target board, taking commands over UART. It supports configuring the delay between resetting the target board and pulsing a ‘glitch voltage’, as well as resetting the target and firing the glitch. The primary reasons for using an FPGA over a microcontroller are precise timing – 83.3 ns resolution, one period of a 12 MHz clock – and freedom from jitter: a Raspberry Pi suffers from OS scheduling and competing processes, and a microcontroller's interrupts can throw off the timing.

The logic analyzer view

To simulate the various modules, [Grazfather] used Icarus Verilog as well as GTKWave to observe the waveforms generated. A separate logic analyzer observes the effects on real hardware.

With enough time, it's possible to brute force every combination of delay and width until you get a dump of the flash you're not meant to read. You can watch the pulse width step up to its maximum, at which point the delay is incremented and the range of widths is swept again.
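Host-side, that brute force is just a nested sweep. Here's a hedged sketch of what a driving script could look like – the serial commands and the success heuristic are made up for illustration, since the real protocol is defined by [Grazfather]'s cmd module:

```python
import serial  # pip install pyserial

glitcher = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

def attempt(delay_cycles: int, width_cycles: int) -> bytes:
    """Configure the glitcher, fire one glitch, return the target's reply."""
    # Command names are hypothetical; the real ones live in the cmd module
    glitcher.write(f"d {delay_cycles}\n".encode())
    glitcher.write(f"w {width_cycles}\n".encode())
    glitcher.write(b"g\n")  # reset the target and fire the glitch
    return glitcher.read(64)

# Sweep every width at each delay, exactly as described above
for delay in range(0, 10_000):
    for width in range(1, 50):
        reply = attempt(delay, width)
        if b"Synchronized" in reply:  # placeholder success check
            print(f"unlocked at delay={delay}, width={width}")
            raise SystemExit
```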


Tired Of Fruit Ninja? Try Vegetable Assassin Using An ESP32 Sword

In a world where ninjas no longer rule the social hierarchy, where can a ninja-wannabe practice their sword fighting skills? In the popular Introduction to Embedded Systems class at the Massachusetts Institute of Technology, a team of students made their own version of the popular mobile game Fruit Ninja with a twist – you’re fighting your true nemesis, vegetables.

Vegetable Assassin supports single and multiplayer modes, with players slicing vegetables on a screen using fake swords whose sensors track the players' motion. In the web-based game, each sword reports its orientation to the game session over a WebSocket connection to a server, while the game itself is generated and rendered in the browser by a 3D JavaScript library. The team chose WebSockets over MQTT – which also uses a persistent TCP connection, and with lower overhead – because WebSockets offer the broadest browser support.

An onboard ESP32 microcontroller and IMU track the sword's movements, and the game begins by calibrating those movements to the play area. Orientation data is produced by the Madgwick algorithm, a 9-degrees-of-freedom sensor fusion filter that takes 3-axis readings from the sword's gyroscope, accelerometer, and magnetometer and outputs the absolute orientation of the sword.
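The Madgwick filter is easy to experiment with from Python before porting anything to a microcontroller. Here's a minimal sketch using the open source ahrs package – an assumption on our part, since the team's implementation runs on the ESP32 – with made-up sensor readings to show the call shape:

```python
import numpy as np
from ahrs.filters import Madgwick  # pip install ahrs

madgwick = Madgwick()  # default gain and sample rate
q = np.array([1.0, 0.0, 0.0, 0.0])  # initial orientation quaternion

def fuse(gyr, acc, mag):
    """One 9-DoF (MARG) update: gyro in rad/s, accelerometer in m/s^2,
    magnetometer in the units the library expects (see its docs)."""
    global q
    q = madgwick.updateMARG(q, gyr=gyr, acc=acc, mag=mag)
    return q

# One fabricated sample; real values stream from the sword's IMU
print(fuse(np.array([0.01, -0.02, 0.005]),   # slight rotation
           np.array([0.0, 0.0, 9.81]),       # gravity only
           np.array([20.0, 5.0, -40.0])))    # local magnetic field
```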

The sword and the browser both connect to the same channel on the server through a WebSocket connection, identified by a session ID much like a web chat room. A separate statistics server allocates session IDs and keeps persistent game data such as high scores.
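That channel pattern is a classic WebSocket relay. As a rough sketch of the idea – using Python's websockets package rather than the team's actual server code, with a made-up join message as the first frame – the whole thing fits in a few lines:

```python
import asyncio
import collections

import websockets  # pip install websockets

sessions = collections.defaultdict(set)

async def relay(ws):
    # The first message from a sword or browser names the session to join
    session_id = await ws.recv()
    sessions[session_id].add(ws)
    try:
        async for message in ws:  # e.g. orientation quaternions as JSON
            for peer in set(sessions[session_id]):
                if peer is not ws:
                    await peer.send(message)  # fan out to everyone else
    finally:
        sessions[session_id].discard(ws)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # serve forever

asyncio.run(main())
```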

As for the graphics, Three.js, a WebGL library, creates the scene and camera and drives the game from the browser's animation frame loop. Other scripts load the 3D models for the fruits and vegetables, update their positions using the Cannon.js physics engine, and render the UI elements within the game.

Curious? The project site has the microcontroller code to build your own sword that you can use to play the demo. If you don’t have an ESP32 and accelerometer handy you can play Vegetable Assassin in your browser instead.

Homemade Masks In A Time Of Shortage

Due to the worldwide COVID-19 pandemic, there has been a huge shortage of N95 masks. The folks at Smart Air have been working on designs for a DIY mask that may be able to protect those who haven't been able to secure masks of their own. While there's an abundance of memes about the materials people have used as makeshift filters, there is some very real science behind which materials can effectively protect us from the virus.

According to a study performed at Cambridge University during the 2009 H1N1 flu pandemic, surgical masks performed best at capturing Bacillus atrophaeus bacteria (0.93-1.25 microns) and Bacteriophage MS2 virus (0.023 microns), but vacuum cleaner bags, tea towels, and cotton T-shirts were not far behind. The coronavirus measures 0.1-0.2 microns, well within the particle size range the tests covered.

As it turns out, homemade cotton masks may be quite effective alternatives – not to mention reusable. The study also found that doubling up the layers did little to improve protection against viruses. On the other hand, breathability proved to be a significant design consideration: while vacuum cleaner bags may be quite effective at keeping out small particles, they aren't as comfortable or as easy to breathe through as cotton masks.

Have you tried making your own cotton masks? In a time when hospitals are running low on surgical masks, it’s possibly the best option for helping to keep much-needed medical supplies in the hands of those helping at the front line.

[Thanks to pie for the tip!]

Drones Can Undertake Excavations Without Human Intervention

Researchers from Denmark’s Aarhus University have developed a method for autonomous drone scanning and measurement of terrains, allowing drones to independently navigate themselves over excavation grounds. The only human input is a starting location and the desired cliff face for scanning.

For researchers studying quarries, capturing data about gravel, walls, and other natural and man-made formations is important for understanding the properties of the terrain. Gathering that data can be expensive, though, since it takes considerable skill to manually fly a drone while keeping its camera steady and perpendicular to the wall being captured.

The process they designed uses a Gaussian model to predict the wind encountered near the wall, estimating its strength from the measurements it receives as the drone moves. The feedback control system combines nonlinear model predictive control (NMPC) with a PID controller to calculate the values sent to the drone's motor controller, while a long short-term memory (LSTM) model handles the predictions. The system has been successfully tested in a chalk quarry in Denmark and will continue to be tested as its algorithms are improved.
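The PID half of that control loop is simple enough to sketch in a few lines. This is a generic textbook PID, not the researchers' controller – the gains, setpoint, and toy plant model are all placeholders:

```python
class PID:
    """Textbook PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float, dt: float) -> float:
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hold a 2 m standoff from the cliff face (gains are invented for the demo)
pid = PID(kp=1.2, ki=0.05, kd=0.3)
distance = 3.0  # current measured distance to the wall, metres
for _ in range(100):
    correction = pid.update(setpoint=2.0, measured=distance, dt=0.02)
    distance += correction * 0.02  # toy plant: correction nudges the drone
print(f"settled at {distance:.2f} m")
```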

Getting a drone to hover and move between GPS waypoints is easy enough, but once they need to maneuver around obstacles it starts getting tricky. Research like this will be invaluable for developing systems that help drones navigate in areas where their human operators can’t reach.

[Thanks to Qes for the tip!]

Using IR LEDs To Hide In Plain Sight

Getting by without falling under the gaze of surveillance cameras hardly seems possible nowadays – from malls to street corners, it's increasingly common for organizations to use surveillance cameras to keep patrons in check. While freedom of assembly is considered a basic human right in documents such as the US Constitution and the Universal Declaration of Human Rights, it is not a right that is respected everywhere in the world. Oftentimes, governments enforcing order will identify individuals using image recognition programs, preventing them from assembling or demonstrating against their government.

Freedom Shield, built by engineer [Nick Bild], is an attempt at breaking away from the status quo and giving people a choice in whether they want to be seen. The spectrum of light visible to humans maxes out around 740 nm, which lets the IR emissions go undetected by normal observers.

The project embeds 940 nm infrared (IR) LEDs in clothing to overwhelm the photodiodes of IR-sensitive cameras used for surveillance. Since light at that wavelength is invisible to humans, the LEDs don't interfere with normal behavior, making them an ideal way to hide in plain sight. Of course, using SMD LEDs rather than larger through-hole packages would make the lights even less conspicuous to the naked eye.
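If you're tempted to sew some emitters into a hoodie, sizing the current-limiting resistor is one line of arithmetic. The supply voltage, forward voltage, and current below are typical datasheet figures for 940 nm emitters, not values from [Nick]'s build:

```python
# Typical 940 nm IR LED figures (assumptions -- check your datasheet)
V_SUPPLY = 5.0          # volts, e.g. from a USB power bank
V_FORWARD = 1.4         # forward voltage drop per LED, volts
I_FORWARD = 0.02        # target forward current, amps (20 mA)
LEDS_PER_STRING = 2     # LEDs wired in series per resistor

r = (V_SUPPLY - LEDS_PER_STRING * V_FORWARD) / I_FORWARD
print(f"Use roughly a {r:.0f} ohm resistor per string")  # ~110 ohms
```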

The result doesn't perfectly obscure your face from cameras, but for a proof of concept it's certainly an example of how to avoid being tracked.


Printing Liquid Concrete

In the world of additive manufacturing, there are always new materials being added to the list of potential feedstocks for printing objects. A method of rapid liquid printing of concrete (RLPC) designed by [Anatoly Berezkin] of Stoneflower 3D makes it possible to print a large variety of shapes from concrete while avoiding the negative effects of fast dehydration. The technique is based on an approach to printing polyurethanes developed at MIT in 2017, in which a 3D object is physically drawn within a gel suspension while a chemical curing process hardens it. The gel keeps gravity from deforming the print and helps with the curing as well. Berezkin, an engineer and hobbyist working out of his garage, has published other work including print heads, ceramic printing, and micro printing sets.

One might be skeptical about whether the weight of the material could collapse the print partway through, or whether printing objects is simply unrealistic given how long concrete takes to cure. The demo shows the process being carried out in household items – bowls and Tupperware – combining affordable materials such as clay, concrete, and sand for the matrix and mortar. The viscous clay is strong enough to act as a scaffold, holding the concrete structure in place as it is printed. And as the video demonstrates, at least for simple objects, the process is relatively fast.

RLPC doesn't require toxic chemicals or proprietary components such as special gels and suspensions, and immersing the finished print in a humid environment is superior to the standard liquid-deposition process for hardening concrete. Moreover, the process needs only clay or set-retarded mortar for the matrix and mortar paste to become the concrete. It's advertised as eco-friendly, but the sheer simplicity of the required materials alone makes this a promising technique.
