Can You Remembrandt Where This Is From?

A group of researchers has built an algorithm for finding hidden connections in artwork.

The team, made up of computer scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Microsoft, used paintings from the Metropolitan Museum of Art and Amsterdam’s Rijksmuseum to demonstrate these hidden connections, which link artworks that share similar styles, such as Francisco de Zurbarán’s The Martyrdom of Saint Serapion (above left) and Jan Asselijn’s The Threatened Swan (above right). They were initially inspired by the “Rembrandt and Velázquez” exhibition at the Rijksmuseum, which demonstrated similarities between the two artists’ work despite the former hailing from the Protestant Netherlands and the latter from Catholic Spain.

The algorithm, dubbed “MosAIc”, differs from probabilistic generative adversarial network (GAN)-based projects that generate artwork, since it focuses on image retrieval instead. Rather than relying solely on obvious factors such as color and style, the algorithm also tries to uncover meaning and theme. It does this by constructing a data structure called a conditional k-nearest neighbor (KNN) tree, in which branches off a central image lead to progressively similar images. To query the structure, these branches are followed until the closest match to an image in the dataset is found. In further iterations, the algorithm prunes unpromising branches to improve its query time on new images.
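To make the idea concrete, here is a minimal sketch of conditional nearest-neighbor retrieval in Python. It is not the team’s MosAIc code: the embeddings are random stand-ins (in practice they would come from a pretrained network), and the “culture” metadata is an invented example of a condition.

```python
# Minimal sketch of conditional nearest-neighbour image retrieval.
# The embeddings are random stand-ins; in practice they would come from a
# pretrained CNN. This is NOT the team's MosAIc code, just an illustration
# of the "query under a condition" idea.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Pretend catalogue: 1000 artworks, each with a 512-D embedding and metadata.
embeddings = rng.normal(size=(1000, 512))
cultures = rng.choice(["Dutch", "Spanish", "Chinese"], size=1000)

def conditional_query(query_vec, condition, k=5):
    """Return the k nearest artworks whose metadata satisfies the condition."""
    mask = cultures == condition           # restrict the search space first
    index = NearestNeighbors(n_neighbors=k).fit(embeddings[mask])
    dist, idx = index.kneighbors(query_vec.reshape(1, -1))
    return np.flatnonzero(mask)[idx[0]], dist[0]

# Find the closest Chinese works to a (random) Dutch query image.
query = embeddings[np.flatnonzero(cultures == "Dutch")[0]]
matches, distances = conditional_query(query, "Chinese")
print(matches, distances)
```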

One result of running the algorithm against the museum collections was a link between the Dutch Double Face Banyan and a Chinese ceramic figurine, a connection the team traced to the flow of porcelain and iconography from China to the Netherlands between the 16th and 20th centuries.

A surprising result of this study was discovering that the approach could also be applied to find problems with deep neural networks, which are used for creating deepfakes. While GANs often have blind spots in their models, struggling to recreate certain classes of photos, MosAIc was able to overcome these shortcomings and accurately reproduce realistic images.

While the team admits that their implementation isn’t the most optimized version of KNN, their main objective was to present a broad conditioning scheme that is simple but effective in practice. Their hope is to inspire researchers in related fields to consider multi-disciplinary applications for their algorithms.

Let’s Take A Closer Look At This Robotic Airship

It’s not a balloon, however shiny its exterior may seem. This miniature indoor robotic airship, created by the University of Auckland mechanical engineering research group [New Dexterity], is an asymmetric system exploring the possibilities of an open-source helium-based airship.

Why a helium airship, as opposed to a fixed-wing aircraft? The group wanted to experiment with the advantages of lighter-than-air (LTA) travel, namely higher mobility and looser path planning constraints. LTA airships also have a less obstructed field of view and fewer locomotion issues. While unmanned aerial vehicles (UAVs) may be capable of hovering in one place, their lift is generated by rotor thrust, which drains their batteries on the order of minutes. LTA airships can hover for far longer.
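How much does the helium actually buy you? Here is a back-of-the-envelope buoyancy estimate in Python; the envelope volumes are illustrative guesses, not the group’s published numbers.

```python
# Back-of-the-envelope buoyancy estimate (an illustration, not the paper's numbers):
# net lift is the density difference between air and helium times envelope volume.
RHO_AIR = 1.225     # kg/m^3 at sea level, ~15 C
RHO_HELIUM = 0.166  # kg/m^3 under the same conditions

def net_lift_kg(envelope_volume_m3: float) -> float:
    """Payload (envelope + gondola + everything else) a helium volume can support."""
    return (RHO_AIR - RHO_HELIUM) * envelope_volume_m3

for volume in (0.25, 0.5, 1.0):   # plausible indoor envelope sizes in m^3
    print(f"{volume:4.2f} m^3 of helium lifts about {net_lift_kg(volume) * 1000:.0f} g")
```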

The design was created for educational and research purposes, focusing on the financial feasibility of manufacturing the platform, the environmental impact of the materials, and the helium loss through the balloon-like envelope. Measuring these parameters lets the researchers study factors such as the cost of indoor commercial balloons and the mechanical properties of balloon materials.

The airship gondola was designed and 3D printed in a modular fashion, then attached to the envelope with Velcro. The gondola sits on the envelope’s plane of horizontal symmetry for flight stability, and several configurations were tested for the side rotor angle.

The group open-sourced their CAD files and ROS interface for controlling the airship. They primarily use off-the-shelf components such as Raspberry Pi boards, propellers, a single brushed DC motor driver carrier, and LiPo batteries, for a total cost of $90 for the platform, with an additional $20 for the balloon and initial helium filling. The price is comparable to the cost of indoor blimps like the Blimpduino 2.0.

You can check out the completed airship below, where the team demonstrates its path following capabilities using a carrot-chasing algorithm. And if you’re interested in learning more about the gotchas of building lighter-than-air vehicles, check out [Sophi Kravitz’s] blimp talk from Hackaday Belgrade.
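For the curious, here is a minimal 2D sketch of the carrot-chasing idea in Python: project the vehicle’s position onto the current path segment, place a “carrot” a fixed distance further along, and steer toward it. The lookahead distance and waypoints are made up, and the group’s actual ROS-based controller will of course differ.

```python
import numpy as np

def carrot_point(p, wp_a, wp_b, lookahead):
    """Project position p onto segment wp_a->wp_b and place a 'carrot'
    a fixed distance further along the path."""
    seg = wp_b - wp_a
    seg_len = np.linalg.norm(seg)
    t = np.clip(np.dot(p - wp_a, seg) / np.dot(seg, seg), 0.0, 1.0)
    along = min(t * seg_len + lookahead, seg_len)   # move ahead, don't overshoot
    return wp_a + along * seg / seg_len

def heading_command(p, carrot):
    """Desired heading: point straight at the carrot."""
    d = carrot - p
    return np.arctan2(d[1], d[0])

# One control step: airship at (1, 0.5) following the segment (0,0) -> (5,0).
pos = np.array([1.0, 0.5])
carrot = carrot_point(pos, np.array([0.0, 0.0]), np.array([5.0, 0.0]), lookahead=0.8)
print(carrot, np.degrees(heading_command(pos, carrot)))
```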

Continue reading “Let’s Take A Closer Look At This Robotic Airship”

This Soap Dispenser Will Crush Your Germs

When it comes to cleaning your hands, [Arnov Sharma] is not messing around. He built an automatic soap dispenser using ultrasonic sensors, a stepper motor for activating the pump, and 3D printed components for housing the soap bottle – a spectacular display of over-engineering. At least he won’t need to stand in line at the supermarket for a motion-detecting soap dispenser anytime soon.

Initially, he had the idea to build the dispenser using a common servo motor-based method, which would involve a motor pushing down on the soap bottle’s plunger to dispense soap. Instead, he opted for a different approach that ended up being fairly straightforward in theory, even if the execution is pretty involved.

Model of the soap dispenser made in Fusion 360

He started off by 3D printing the compartment where the soap bottle would sit and the structural support for the Z-axis rail that pushes down on the soap bottle. It’s similar to the type of linear actuator you might find in a 3D printer or PCB mill, where a motor turns a screw that moves the carriage along the rail. (We presume the linear rail came first, and the ultrasonic soap dispenser second.)

In this build, two additional rods help support the lever pressing down on the soap dispenser.

The setup is controlled by an Arduino, which triggers the movement of the linear actuator when it receives a signal from the ultrasonic sensor. He’s shared the model files and Arduino code for other makers curious about building a similar project. Check out his video to see the soap dispenser in action – the stepper motor definitely makes for a much more powerful plunge than you might expect.
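The build runs on an Arduino, but the control flow is simple enough to sketch out. Purely as an illustration, here is the same loop written in Python for a Raspberry Pi-style setup; the pin numbers, distance threshold, and step counts are all assumptions, not values from [Arnov]’s code.

```python
# Illustrative Python version of the control flow (the actual build runs on an
# Arduino). Pin numbers, thresholds, and step counts below are assumptions.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24          # HC-SR04 ultrasonic sensor pins (hypothetical)
STEP, DIR = 20, 21           # stepper driver STEP/DIR pins (hypothetical)
HAND_THRESHOLD_CM = 10       # dispense when a hand is closer than this
PLUNGE_STEPS = 400           # how far the carriage travels to press the pump

GPIO.setmode(GPIO.BCM)
GPIO.setup([TRIG, STEP, DIR], GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    """Ping the ultrasonic sensor and convert echo time to centimetres."""
    GPIO.output(TRIG, True); time.sleep(10e-6); GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2      # speed of sound, there and back

def move(steps, down=True):
    """Pulse the stepper driver; direction pin selects press or retract."""
    GPIO.output(DIR, down)
    for _ in range(steps):
        GPIO.output(STEP, True);  time.sleep(0.001)
        GPIO.output(STEP, False); time.sleep(0.001)

try:
    while True:
        if distance_cm() < HAND_THRESHOLD_CM:
            move(PLUNGE_STEPS, down=True)     # press the pump
            move(PLUNGE_STEPS, down=False)    # retract
            time.sleep(2)                     # debounce: one hand, one squirt
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```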

Continue reading “This Soap Dispenser Will Crush Your Germs”

Filmmaking From Home With Projection Mapping

Stuck at home in self-quarantine, artist and filmmaker [Kira Bursky] had fewer options than normal for her latest film project. While a normal weekend film sprint would have involved collaborating with actors, set designers, and cinematographers in a frenzied attempt to finish in less than 48 hours, she instead chose to indulge in her curiosity for projection mapping, a technique that involves projecting visuals onto three-dimensional or flat surfaces.

For the images to map properly onto a surface, the surface first has to be scanned so the projection software can warp the flat image and produce the illusion of light wrapping around the object. The work is done in layers, in software similar to Photoshop, making it easier for the designer to organize the different interacting components of their animation.
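As a rough illustration of what “warping the flat image” means, a single flat surface seen from an angle can be handled with a perspective (homography) warp. Lightform figures out the geometry automatically from camera captures; this little Python/OpenCV sketch just warps a test image onto four hand-picked corner points.

```python
# Rough illustration of the "map a flat image onto a surface" step.
# Lightform does this (and much more) automatically from camera captures;
# here we just warp a test image onto four manually chosen corner points.
import numpy as np
import cv2

# A flat 400x300 gradient standing in for an animation frame.
frame = np.tile(np.linspace(0, 255, 400, dtype=np.uint8), (300, 1))
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

h, w = frame.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
# Where the surface's corners appear in the projector's output (made-up values).
dst = np.float32([[80, 60], [560, 100], [540, 420], [60, 380]])

H = cv2.getPerspectiveTransform(src, dst)              # 3x3 homography
projected = cv2.warpPerspective(frame, H, (640, 480))  # what the projector shows
cv2.imwrite("projected.png", projected)
```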

[Kira] used a tool called Lightform to design her projections, which relies on a camera to calibrate the location of the surface and a projector to display the visuals. Her animated figures are drawn with loose lines and characterized by their slow gradients and ethereal movements. In the background of her film, a rhythmic sound plays while she brings the figures closer to view. Their outlines come into greater focus until the figures transform into her physical body, which also dances with the meandering lights.

Check out the short film below.

Continue reading “Filmmaking From Home With Projection Mapping”

Engineers Develop A Brain On A Chip

Our abilities to multitask, to quickly learn complex maneuvers, and to instantly recognize objects even as infants are just some of the ways that human brains make use of our billions of synapses. Biologically, our brain requires fluid-filled cavities, nerve fibers, and numerous other cells and connections in order to function. That isn’t the case with a new kind of brain recently announced by a team of MIT engineers in Nature Nanotechnology. Unlike a typical human brain, this new “brain-on-a-chip” fits on a piece of confetti.

When you take a look at the chip, it looks more like a tiny metal carving than any neurological organ. The technology behind the chip is based on memristors – silicon-based components that mimic the signal transmission of synapses. The name is a portmanteau of “memory” and “resistor”: they are passive circuit elements that maintain a relationship between the time integrals of the current through and the voltage across the element. As resistance varies, tiny read charges are able to access a history of the applied voltage, an effect that arises from hysteresis and other non-linear properties of passive circuitry.

These properties are best observed at the nanoscale, where they aren’t dwarfed by other electronic and field effects. A tiny positive and negative electrode are separated by a “switching medium”, the space between the two electrodes. Voltage applied to one end causes ions to flow through the medium, forming a conduction channel to the other end, and these ions make up the electrical signal transmitted through the circuit.
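A toy simulation makes the “memory” part easy to see. The Python sketch below uses the classic HP linear-drift memristor model with made-up parameters – not the physics of the MIT devices – and shows the resistance changing as charge flows through the element.

```python
# Toy simulation of the HP linear-drift memristor model (an illustration of the
# hysteresis described above, not the device physics of the MIT chip).
import numpy as np

R_ON, R_OFF = 100.0, 16e3   # ohms: fully doped / fully undoped resistance
D = 10e-9                   # device thickness, m
MU = 1e-14                  # dopant mobility, m^2 s^-1 V^-1
dt, steps = 1e-4, 20_000

w = 0.1 * D                           # initial width of the doped region
t = np.arange(steps) * dt
v = np.sin(2 * np.pi * 5 * t)         # 5 Hz sinusoidal drive, 1 V amplitude
i = np.zeros(steps)

for k in range(steps):
    m = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance depends on state w
    i[k] = v[k] / m
    w += MU * (R_ON / D) * i[k] * dt           # state drifts with charge flow
    w = min(max(w, 0.0), D)                    # clamp to physical bounds

# The i-v curve traced out is a pinched hysteresis loop: the resistance
# "remembers" the history of the applied voltage.
r = v[1:] / i[1:]
print(f"memristance ranged from {r.min():.0f} to {r.max():.0f} ohm over the sweep")
```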

To fabricate these memristors, the researchers used alloys of silver for the positive electrode and copper alongside silicon for the negative electrode. They sandwiched the two electrodes around an amorphous medium and patterned the stack on a silicon chip tens of thousands of times to create an array of memristors. To train the memristors, they ran the chips through visual tasks, storing images and reproducing them until cleaner versions were produced. These new devices join a growing body of research into neuromorphic computing – electronics that function similarly to the way the brain’s neural architecture operates.

Electronics capable of making instantaneous decisions without consulting other devices or the Internet open the door to portable artificial intelligence systems. Though we already have software capable of simulating synaptic behavior, neuromorphic computing devices could vastly increase the capability of hardware to do tasks once thought to belong solely to the human brain.

Creating Surreal Short Films From Machine Learning

Ever since we first saw the nightmarish artwork produced by Google DeepDream and the ridiculous faux paintings produced from neural style transfer, we’ve been aware of the ways machine learning can be applied to visual art. With commercially available trained models and automated pipelines for generating images from relatively small training sets, it’s now possible for developers without theoretical knowledge of machine learning to easily generate images, provided they have sufficient access to GPUs. Filmmaker [Kira Bursky] took this a step further, creating a surreal short film that features characters and textures produced from image sets.

She began with about 150 photos of her face, 200 photos of film locations, 4600 photos of past film productions, and 100 drawings as the main datasets.

via [Kira Bursky]
Using GAN models for nebulas, faces, and skyscrapers in RunwayML, she found the results from training on her face set to be by turns disintegrated, realistic, and painterly. Many of the images continue to evoke aspects of her original face, with distortions, though whether that is the model identifying a feature common to skyscrapers and faces or our own bias towards facial recognition is up to the viewer.

On the other hand, running the film set photos through models trained on faces and bedrooms produced abstract textures and “surreal and eerie faces like a fever dream”. Perhaps, unlike the familiar anchors of facial features, it’s the lack of recognizable characteristics in the transformed images that gives them such a surreal feel.

[Kira] certainly uses these results to her advantage, brainstorming a concept for a short film that revolves around her main character experiencing nightmares. Although her objective was to use her results to convey a series of emotionally striking scenes, the models she uses to produce these scenes are also quite interesting.

She started off with the MiDaS model, created by a team of researchers from ETH Zurich and Intel, to generate monocular depth maps, which assign each region of an image a depth relative to the rest of the scene. She also used Mask R-CNN to mask out the backgrounds in the generated faces, and combined her generated images in Photoshop to create the main character for her short film.
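If you want to play with depth maps yourself, the MiDaS repository publishes models through torch.hub. The sketch below loosely follows that published usage; model names and transforms can differ between releases, so treat it as a starting point rather than gospel.

```python
# A rough sketch of getting a monocular depth map out of MiDaS via torch.hub,
# loosely following the usage published in the intel-isl/MiDaS repository
# (exact model names and transforms may differ between releases).
import cv2
import torch

model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
batch = transforms.small_transform(img)

with torch.no_grad():
    pred = model(batch)
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().cpu().numpy()

# Normalise to 0-255 so the depth map can be viewed or fed to After Effects.
depth = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("frame_depth.png", depth)
```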

via [Vox]
To simulate the character walking, she used the Liquid Warping GAN, a framework for human motion imitation and appearance transfer created by a team from ShanghaiTech University and Tencent AI Lab. This let her take her original images and, using a 3D body mesh recovery module, synthesize results from reference poses of herself going through the motions of walking. Later on, she applied similar motion tracking techniques to her faces, running them through the First Order Motion Model to simulate different emotions, and joined the facial movements with her character in After Effects.

Bringing the results together, she animated a 3D camera blur using the depth map videos, providing anchor points for the viewer to keep the result from being disorienting, and created a displacement map to heighten the sense of depth and movement within the scenes. In After Effects, she also overlaid dust and film grain effects to give the final result a crisper look. The result is a surprisingly cinematic film made entirely of images and videos generated from machine learning models. With the help of the depth adjustments, it almost looks like something that you might see in a nightmare.

Check out the result below:

Continue reading “Creating Surreal Short Films From Machine Learning”

Using An FPGA To Glitch The Olimex LPC-P1343

After trying out hardware hacking using an FPGA to interface with target hardware, [Grazfather] was inspired to try using the iCEBreaker (one of the many hobbyist FPGAs to have recently flooded the market) to build a UART-controllable glitcher for the Olimex LPC-P1343.

FPGA Modules (The cmd module intercepts what the host computer sends over UART, the resetter holds the reset line until the target is reset, the delay starts counting on reset and waits for a configured number of cycles before sending its signal, the trigger waits for the delay to finish before telling the pulse module to fire, and the pulse module works similarly to the delay module and outputs to the power multiplexer.)

When the target board boots up, the bootROM reads the flash and determines whether the UART drops into a shell and whether that shell can be used to read out the flash. This is meant for developing firmware and debugging it in the bootloader, only flashing a locked version when the firmware is production-ready. The vulnerability is that only a specific value read from address 0x2FC (together with the state of a few pins) locks the bootloader in the expected way; any other value at that address causes the bootROM to consider the device unlocked. Essentially, the mechanism is the opposite of how a lock ought to work.
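Written out as pseudologic in Python, the check looks something like the sketch below. This is not the actual bootROM code; the CRP magic words come from NXP’s documentation for LPC parts rather than the write-up, and the pin handling is simplified to a single flag.

```python
# Pseudologic of the check described above, written out in Python for clarity.
# NOT the actual bootROM code: the point is only the shape of the logic, where
# one magic word at 0x2FC enables protection and *everything else* leaves the
# part unlocked.
CRP_WORDS = {0x12345678, 0x87654321, 0x43218765}   # CRP levels from NXP docs

def bootrom_decides(word_at_0x2fc: int, isp_pin_low: bool) -> str:
    if word_at_0x2fc in CRP_WORDS:
        return "locked: ISP flash read-out restricted"
    if isp_pin_low:
        return "unlocked: drop into the ISP/bootloader shell"
    return "boot user code"

# A single misread of that one word during the glitch window flips the outcome:
print(bootrom_decides(0x12345678, isp_pin_low=True))   # locked
print(bootrom_decides(0x12345679, isp_pin_low=True))   # glitched read -> unlocked
```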

The goal is to get the CPU to misread the flash at the precise moment it is supposed to be reading that specific value, so that it jumps to the bootloader in the unlocked state. The FPGA sits between the host machine and the target board, communicating with the host via UART. It supports configuring the delay between resetting the target board and pulsing a ‘glitch voltage’, as well as resetting the target board and firing the glitch. The primary reasons for using an FPGA over a microcontroller are that the FPGA allows for precise timing (83.3 ns resolution) and removes worries about jitter: a Raspberry Pi is subject to OS scheduling and other processes, and microcontrollers can have interrupts messing up the timing.

The logic analyzer view

To simulate the various modules, [Grazfather] used Icarus Verilog as well as GTKWave to observe the waveforms generated. A separate logic analyzer observes the effects on real hardware.

With enough time, it is possible to brute force combinations of delay and width until you get a dump of the flash you’re not meant to read. You can see how the pulse width grows until it reaches the maximum, at which point the delay is incremented and the width values are swept again.
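The host side of such a sweep is just a couple of nested loops over UART. The command format and responses below are hypothetical – the write-up doesn’t spell out the glitcher’s exact protocol – but the shape of the brute force looks like this:

```python
# Hypothetical host-side sweep over (delay, width): the actual UART command
# format of [Grazfather]'s glitcher isn't spelled out here, so the commands
# and responses below are made up to show the shape of the brute force.
import serial  # pyserial

PORT = "/dev/ttyUSB0"
MAX_WIDTH = 200        # in 83.3 ns FPGA cycles (assumed)
MAX_DELAY = 50_000

with serial.Serial(PORT, 115200, timeout=2) as glitcher:
    for delay in range(MAX_DELAY):
        for width in range(1, MAX_WIDTH):
            glitcher.write(f"g {delay} {width}\n".encode())  # arm + fire (hypothetical)
            reply = glitcher.read(256)
            if b"Synchronized" in reply:       # target came back up in ISP mode
                print(f"unlocked with delay={delay}, width={width}")
                raise SystemExit
```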

Continue reading “Using An FPGA To Glitch The Olimex LPC-P1343”