Raspberry Pis And A Video Triptych

A filmmaker friend of [Thomas] mentioned that she would like to display a triptych at the 2015 Venice Art Walk. This is no ordinary triptych with a frame for three pictures – this is a video triptych, with three displays each showing a different video, and everything running in sync. Sounds like a cool engineering challenge, huh?

The electronics used in the build were three Raspberry Pi 2s and a trio of HDMI displays. Power is provided by a 12V, 10A switching supply with 5V step-down converters for the Pis. The chassis is a bunch of aluminum bars and U-channel encased in an extremely well-made Arts and Crafts-style frame. So far, nothing out of the ordinary.

Putting three monitors and three Pis in a frame isn’t the hard part of this build; keeping three different videos playing in sync is. For this, [Thomas] networked the Pis through an Ethernet hub and had each one play its video from a RAM disk with omxplayer. One of the Raspberry Pis serves as the master, commanding the slaves to start, stop, and rewind their videos on cue. According to [Thomas], it’s a somewhat hacky solution, with a bunch of sleep statements at the beginning of the script to let the boot processes finish. It’s a beautiful build, though, and if you ever need to command multiple displays to play in sync, this is how you do it.
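The master/slave arrangement above could be sketched as follows. This is a minimal, hypothetical version in Python: plain UDP datagrams stand in for whatever wire format [Thomas] actually used (the comments below mention OSC), and the slave IP addresses are placeholders.

```python
import socket

# Placeholder addresses for the two slave Pis on the hub.
SLAVE_ADDRS = [("192.168.1.11", 9000), ("192.168.1.12", 9000)]

def send_command(cmd, addrs=SLAVE_ADDRS):
    """Master side: fire a one-word command ("play", "stop", "rewind")
    at every slave as a UDP datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for addr in addrs:
        sock.sendto(cmd.encode("ascii"), addr)
    sock.close()

def wait_for_command(port=9000):
    """Slave side: block until the master sends the next command."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    data, _ = sock.recvfrom(64)
    sock.close()
    return data.decode("ascii")
```

Each slave would sit in a loop around `wait_for_command()` and translate the received word into a player action; the sleeps at boot mentioned above would simply run before this loop starts.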

20 thoughts on “Raspberry Pis And A Video Triptych”

      1. Yeah, I considered it but since I needed the hub to network the machines anyway, I decided to use the hub for OSC as well because it seemed simpler from both a wiring and scripting standpoint. Actually, technically, I didn’t HAVE to network the machines for the final product but as I was putting it together, I needed the ability to constantly log into the three machines to tweak the OS and the main script. Also, a cheap hub is under $10 so it’s not like it broke the bank.

  1. Too bad there doesn’t seem to be a video. I’m not familiar with the effect they are trying to achieve. I guess I don’t understand the necessity of the “perfect sync”. Probably makes more sense if I saw it in action. Nicely done either way. The frame does a nice job of tying it all together.

  2. “In order to avoid continually reading from the SD card, I created a ram disk so my script could copy the movie file to it and allow omxplayer to play back from there.”

    Curious why this was done. Reading from the SD card causes minimal wear, or none if you disable update of the “file last accessed” timestamp. Perhaps SD access caused timing issues.

    1. I actually didn’t know that reading from the SD card caused minimal wear. I think it’s a reflex from using hard drives. I wanted to do whatever I could to ensure this would continually play without any problems. Also, as you surmise, at some point, I was having some timing issues and I wanted to take SD access out of the equation.
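The RAM-disk staging step could look something like this sketch. It assumes a Linux system (such as Raspbian) where /dev/shm is already a RAM-backed tmpfs mount, so no explicit `mount` call is needed; the fallback to the original path is just defensive.

```python
import os
import shutil

def stage_to_ramdisk(movie_path, ramdisk="/dev/shm"):
    """Copy the movie onto a tmpfs mount so playback never touches
    the SD card. /dev/shm is RAM-backed on most Linux systems; if it
    isn't present, just play from the original location."""
    if not os.path.isdir(ramdisk):
        return movie_path
    dest = os.path.join(ramdisk, os.path.basename(movie_path))
    shutil.copy(movie_path, dest)
    return dest
```

The script would call this once at startup and hand the returned path to omxplayer.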

      1. I can understand eliminating any possible source of issue. Having done something similar, I know your project sounds easier than it was. Have experienced dropped commands on another backend player/platform, variations in time it takes for the player to start playing, delays sending commands over TCP/IP due to Nagle’s algorithm, etc.

I definitely would not call it “perfect”. It’s more of a “got it working well enough to serve its purpose within a short time frame”. I would say that the biggest weakness, as far as I can tell, is that I am controlling omxplayer by using pexpect to mimic keyboard commands. I wish there were a more direct API way to control it. Also, though omxplayer plays back the movies very efficiently, seeking, rewinding, and otherwise moving the “playback head” around the file are a little clunky. I looked at other options, and it does seem like some people have compiled a hardware-accelerated VLC on the raspi, but that was a bit too much for me to take on in the short time I had to get this done.
          I will also look into ways to reduce TCP/IP delays, thanks for the tip on that.

          1. I’ve been working on a media player frontend that can be linked across multiple computers, so that the various MPCs throughout my house can play the same thing in multiple rooms. To avoid audio “echo” between the rooms, sync has to be better than the Haas Effect (aka Precedence Effect) limits, typically considered 25-35ms. Though I found I could still hear echo until I cut those limits by half, which was challenging to accomplish given both the variable delays of the wireless connections, and backend player limits.

            I noticed you said you’re seeking, and that is something you may want to avoid. All player backends I’ve tried will only seek to a keyframe, which depending on the format and encoding options may be many seconds apart. Though in your case, you are the creator of the video, and so could theoretically encode all video to be 100% keyframes.
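The all-keyframe re-encode suggested above can be done with ffmpeg by forcing a GOP length of 1. Here is a hypothetical invocation, built as a Python argument list; it assumes an ffmpeg build with libx264, and note the file size will grow substantially since no inter-frame compression remains.

```python
def all_keyframe_cmd(src, dst):
    """Build an ffmpeg command that re-encodes a video so every frame
    is a keyframe (GOP size 1), making any seek land exactly on a
    keyframe. Audio is passed through untouched."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-g", "1",        # GOP length 1 => all frames are keyframes
        "-c:a", "copy",
        dst,
    ]
```

The list would be handed to something like `subprocess.run(all_keyframe_cmd("in.mp4", "out.mp4"))` as an offline preprocessing step on the three videos.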

VLC does have a native API, at least under Windows (my target OS), probably in Linux too. Though it only reports position every 0.1s, which was hard to interpolate to the needed accuracy, and I spent a week trying to recompile it without even getting close to success. I’m currently using MPlayer, which I can modify for the needed precision by changing a single byte in the executable with a hex editor. It’s controlled via standard streams, like simulated keyboard input, which isn’t ideal, but with some experimentation I figured out how to control it adequately without dropped commands.

            I’ll be happy to help with any questions or issues you have, either here, or I can friend you on Facebook.
