Someone setting down an ArUco tag

Make Your Own Virtual Set

An old adage says that out of cheap, fast, and good, you can only pick two. So if, like [Philip Moss], you’re trying to make a comedy series quickly and on a limited budget, you’ll have to take some shortcuts to still make it good. One shortcut [Philip] took was to do away with the physical set and make it all virtual.

If you’ve heard about the production of a certain western-style space cowboy show that uses a virtual set, you probably know what [Philip] did. But for those who haven’t been following along, the idea is to have a massive LED wall and to track where the camera is. By creating a 3D set, you can render it to the LED wall so that the perspective is correct for the camera. While a giant LED wall was a little out of [Philip]’s budget, good old green screen fabric wasn’t. The idea was to set up a large green screen backdrop, put in some props, get some assets online, and film the different shots needed. Since the camera keeps track of where it is in the virtual room, things like calculating perspective become easy. They also placed large ArUco tags around the set to help Unreal know where objects are. You can put a virtual wall right where the actors think there’s a wall, or a table exactly where you put a real table covered in green cloth.
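
For the curious, here’s roughly what reading those tags looks like in software. This is a minimal sketch using OpenCV’s ArUco module (the 4.7+ API from opencv-contrib-python); the post doesn’t say exactly which tool fed tag positions into Unreal, so treat the file name and dictionary choice as assumptions:

```python
import cv2

# Hypothetical frame grab from the green screen footage.
frame = cv2.imread("set_frame.png")

# Standard 4x4 marker dictionary; an assumption, not necessarily what
# [Philip] printed for the shoot.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Find tag corners and IDs in the frame.
corners, ids, _rejected = detector.detectMarkers(frame)
if ids is not None:
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    print(f"found tags: {ids.ravel().tolist()}")
```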

Initially, the camera was tracked using a Vive tracker and LiveLink, though the tracking wasn’t smooth enough while moving to be used for anything but static shots. However, this wasn’t a huge setback, as they could move the camera, start a new shot, and not have to change the set in Unreal or fiddle with compositing. Later on, they switched from the Vive to a RealSense camera and found it much smoother, though it did tend to drift.
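
“Smooth enough” is the crux of camera tracking, and a common software band-aid for jittery pose samples is a simple exponential moving average. This is a generic sketch of that idea, not necessarily anything [Philip]’s pipeline actually did:

```python
def smooth(poses, alpha=0.2):
    """Exponentially smooth a stream of (x, y, z) samples; lower alpha = smoother."""
    out, state = [], poses[0]
    for p in poses:
        # Blend each new sample with the running state.
        state = tuple(alpha * new + (1 - alpha) * old for new, old in zip(p, state))
        out.append(state)
    return out

# Made-up jittery tracker positions in meters.
raw = [(0.00, 1.50, 2.00), (0.05, 1.52, 1.98), (0.02, 1.49, 2.03)]
print(smooth(raw))
```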

The end result, called ‘Age of Outrage’, was pretty darn good. Sure, it’s not perfect, but it doesn’t jump out and scream “rendered set!” the way CGI TV shows of the ’90s did. Not too shabby considering the hardware and software used to create it!

Tracking Drone Flight Path Via Video, Using Cameras We Can Get

Calculating a three-dimensional position from two-dimensional projections is a literal textbook exercise in geometry, but those examples are the “assume a spherical cow” type of simplification, applicable only in an ideal world where the projections come from mathematically perfect cameras at precisely known locations with infinite resolution. Making things work in the real world is a lot harder. But not only have [Jingtong Li, Jesse Murray et al.] worked through the math of tracking a drone’s 3D flight from 2D video, they’ve released their MultiViewUnsynch software on GitHub so we can all play with it.
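
The textbook version of the problem is easy enough to sketch: with two perfectly known camera projection matrices, one OpenCV call recovers the 3D point. MultiViewUnsynch’s contribution is making this work when the cameras aren’t so cooperative. The matrices and image coordinates below are made-up, normalized examples:

```python
import cv2
import numpy as np

# Camera 1 at the origin, camera 2 shifted one unit along x, both with
# identity intrinsics (normalized image coordinates).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# The drone as seen at the same instant by both cameras.
pts1 = np.array([[0.10], [0.02]])
pts2 = np.array([[-0.10], [0.02]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()
print(X)  # -> [0.5 0.1 5.0]
```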

Instead of laboratory-grade optical instruments, the cameras used in these experiments are the kind available at the local consumer electronics store. A table in their paper Reconstruction of 3D Flight Trajectories from Ad-Hoc Camera Networks (arXiv:2003.04784) lists several Huawei cell phone cameras, a few Sony digital cameras, and a GoPro 3. The cameras don’t need to be placed in any particular arrangement, because their positions are calculated from the video footage itself. Correlating overlapping footage from dissimilar cameras is a challenge in itself, since these cameras record at varying framerates, ranging from 25 to 59.94 frames per second. Furthermore, they all have rolling shutters, which adds an extra variable since the scanlines in a frame are captured at slightly different times. This is not an easy problem.
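
To get a feel for the synchronization half of the problem: before any triangulation can happen, each camera’s 2D track has to be resampled onto a shared timeline. MultiViewUnsynch actually solves for the time offsets as part of its optimization; in this hedged sketch the 0.3 s offset is simply assumed known:

```python
import numpy as np

def to_timeline(t, xy, t_common):
    """Linearly interpolate a 2D detection track onto shared timestamps."""
    return np.stack([np.interp(t_common, t, xy[:, 0]),
                     np.interp(t_common, t, xy[:, 1])], axis=1)

# Synthetic tracks: a 25 fps phone and a 59.94 fps camera that started
# recording 0.3 s later, both watching the same drone.
t_a = np.arange(0.0, 2.0, 1 / 25.0)
t_b = 0.3 + np.arange(0.0, 2.0, 1 / 59.94)
xy_a = np.column_stack([100 * t_a, np.full_like(t_a, 240.0)])
xy_b = np.column_stack([100 * (t_b - 0.3), np.full_like(t_b, 236.0)])

t_common = np.arange(0.5, 1.5, 0.02)
track_a = to_timeline(t_a, xy_a, t_common)
track_b = to_timeline(t_b - 0.3, xy_b, t_common)  # undo the known offset
print(track_a[0], track_b[0])
```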

There is a lot of interest in tracking drone flights, especially those flying where they are not welcome. And not everyone has the budget for high-end equipment or the permission to emit electromagnetic signals. MultiViewUnsynch is not quite there yet: it tracks a single target, and the video files were processed after the fact. The eventual goal is to evolve this capability to track multiple targets on live video, and hopefully help reduce frustrating public embarrassments.

[IROS 2020 Presentation video (duration 14:45) requires free registration, available until at least Nov. 25th 2020.]

Hallucinating Machines Generate Tiny Video Clips

Hallucination is the erroneous perception of something that’s actually absent, or in other words: a possible interpretation of training data. Researchers from MIT and UMBC have developed and trained a generative machine-learning model that learns to generate tiny videos at random. The hallucination-like, 64×64 pixel clips are somewhat plausible, but also a bit spooky.

The machine-learning model behind these artificial clips is capable of learning from unlabeled “in-the-wild” training videos and relies mostly on the temporal coherence of subsequent frames as well as the presence of a static background. It learns to disentangle foreground objects from the background and extracts the overall dynamics from the scenes. The trained model can then be used to generate new clips at random (as shown above), or from a static input image (as shown in pairs below).
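
The core compositing trick is simple to write down: the network produces a moving foreground stream, a static background, and a soft mask that blends them per pixel and per frame. Here’s that combination step in NumPy, with random arrays standing in for the network outputs:

```python
import numpy as np

T, H, W, C = 32, 64, 64, 3               # 32-frame, 64x64 RGB clip

foreground = np.random.rand(T, H, W, C)  # moving foreground stream
mask = np.random.rand(T, H, W, 1)        # soft blending mask in [0, 1]
background = np.random.rand(H, W, C)     # single static background frame

# Per-pixel blend; the background broadcasts across all 32 frames.
video = mask * foreground + (1 - mask) * background
print(video.shape)  # (32, 64, 64, 3)
```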

Currently, the team limits the clips to a resolution of 64×64 pixels and a duration of 32 frames in order to keep down the amount of required training data, which still comes to 7 TB. Despite obvious deficiencies in terms of photorealism, the little clips were judged “more realistic” than real clips by about 20 percent of the participants in a psychophysical study the team conducted. The code for the project (Torch7/LuaJIT) can already be found on GitHub, together with a pre-trained model. The project will also be shown in December at the 2016 NIPS conference.

FPGAs Keep Track Of Your Ping Pong Game

It’s graduation time, and you know what that means! Another great round of senior design projects tackling some pretty unique problems. [Bruce Land] sent in a great one from Cornell, where students have been working on a project that uses FPGAs and a few video cameras to keep score of a ping pong game.

The system works by processing a live NTSC feed of a ping pong game. The ball is painted a particular color to aid in detection, and the FPGAs that process the video keep track of where the net is, how many times the ball bounces, and whether the ball has been hit by a player. With all of this information, the system can keep track of the score of the game, which is displayed on a monitor near the table. Now the players are free to concentrate on their game and don’t have to worry about keeping score!
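
The actual detection here runs as FPGA logic on the live NTSC stream, but the color-keying idea is easy to illustrate in software. This sketch thresholds a frame in HSV space and finds the centroid of the painted ball; the file name and hue range are made-up examples:

```python
import cv2

frame = cv2.imread("pingpong_frame.png")    # hypothetical frame grab
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Keep only pixels near the ball's (assumed) bright yellow paint.
mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))

# Centroid of the surviving pixels approximates the ball position.
m = cv2.moments(mask)
if m["m00"] > 0:
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    print(f"ball at ({cx}, {cy})")
```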

This is a pretty impressive demonstration of FPGAs and video processing with applications well beyond ping pong. What would you use it for? It’s always interesting to see what students are working on; core concepts from these experiments tend to make their way into their professional lives later on. Maybe they’ll even take this project to the next level and build an actual working ping pong robot to pair with their scoring system!


Transcribing Piano Rolls With Python

Piano Roll

Perforated rolls of paper, called piano rolls, are used to input songs into player pianos. The image above was taken from a YouTube video showing a player piano playing a Gershwin tune called Limehouse Nights. There’s no published sheet music for the song, so [Zulko] decided to use Python to transcribe it.

First off, the video was downloaded from YouTube and processed with the MoviePy library to create a single image plotting the notes. Using a Fourier transform, the horizontal spacing between the note tracks was found. This allowed the image to be rescaled so that one pixel corresponded to one key.
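
A rough sketch of those first steps might look like the following, assuming the roll scrolls vertically past the camera: grab one scanline per frame to unroll the roll into a single tall image, then take an FFT of the column brightness profile to find the regular spacing between key tracks. The file name, scanline row, and MoviePy 1.x import are assumptions:

```python
import numpy as np
from moviepy.editor import VideoFileClip  # MoviePy 1.x style import

clip = VideoFileClip("limehouse_nights.mp4")

# One scanline per frame stacks up into an unrolled image of the roll.
rows = [frame[100, :, 0] for frame in clip.iter_frames()]
roll = np.array(rows)

# Periodic key tracks show up as a strong peak in the FFT of the
# average column brightness.
profile = roll.mean(axis=0)
spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
freq = np.fft.rfftfreq(profile.size)
key_spacing = 1 / freq[1:][spectrum[1:].argmax()]  # dominant period, in pixels
print(f"estimated key spacing: {key_spacing:.1f} px")
```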

With that done, each column could be assigned to a specific note on the piano. That takes care of the pitches, but the note durations require more processing. The Fourier transform is applied again, this time to determine the length of a quarter note. With this known, the notes can be quantized and a duration assigned to each.
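
Once the quarter-note length is known, quantizing is just arithmetic: snap each note’s pixel length to the nearest sixteenth-note multiple and express it in beats. The numbers here are illustrative, not [Zulko]’s actual measurements:

```python
import numpy as np

onsets = np.array([0, 48, 96, 120, 144])     # note start positions (pixels)
offsets = np.array([48, 96, 120, 144, 192])  # note end positions (pixels)
quarter = 48.0                               # quarter-note length from the FFT

durations = offsets - onsets
sixteenth = quarter / 4
quantized = np.round(durations / sixteenth) * sixteenth  # snap to 16th grid
print(quantized / quarter)  # durations in beats: [1.  1.  0.5 0.5 1. ]
```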

Once the notes and durations are known, it’s time to export sheet music. For this, LilyPond, an open source language for music notation, was used; it converts plain ASCII text into a sheet music PDF. The final result is a playable score of the piece, which you can watch after the break.
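
Since LilyPond input is just text, the export step boils down to string formatting. A hypothetical two-bar example, rendered to PDF by running `lilypond limehouse.ly`:

```python
# (pitch, denominator) pairs: c'4 is a quarter-note middle C, g'1 a whole note.
notes = [("c'", 4), ("d'", 4), ("e'", 2), ("g'", 1)]

body = " ".join(f"{pitch}{dur}" for pitch, dur in notes)
with open("limehouse.ly", "w") as f:
    f.write('\\version "2.24.0"\n{ ' + body + " }\n")
```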
