Video With Sensor Data Overlay Via Arduino Mega

If you haven’t been paying attention, big wheel trikes are a thing. There are motor-driven versions as well as OG pedal-pushing types. [Flux Axiom] is of the OG flavor (you only get one link, now it’s on you) and has written an Instructable showing how to achieve some nice-looking on-screen data that he syncs up with the video for a professional-looking finished product, which you can see in the video after the break.

[Flux Axiom] is using an Arduino Mega in his setup along with a cornucopia of sensors, and all their data is logged to an SD card. All the code used in his setup is available in his GitHub repository, and he was also nice enough to include the calibration process he used for the sensors in the same download.
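The core trick that makes the sync possible is that every record written to the SD card carries a millisecond timestamp, so the data can later be lined up frame-by-frame with the video. A minimal sketch of what one such CSV record might look like (the field names and layout here are our assumptions for illustration, not [Flux Axiom]'s actual schema; check his GitHub repository for the real format):

```cpp
#include <cstdio>
#include <string>

// Format one timestamped CSV record, as a logger might write it to the
// SD card. Fields (time in ms, speed in km/h, RPM) are hypothetical.
std::string logRecord(unsigned long timeMs, float speedKph, int rpm) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%lu,%.1f,%d", timeMs, speedKph, rpm);
    return std::string(buf);
}
```

On the Arduino side the same line would simply be written with the SD library's `File::println()`; the overlay tool then matches each record's timestamp to the video's elapsed time.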

Sadly, [Flux Axiom] uses freedom-hating software for combining the video and data: Race Render 3 is his current solution, and he is pleased with the results. Leave it in the comments if you have an open source solution for combining video and data that we can offer him as a replacement.

Edit: Corrected spelling of handle.

17 thoughts on “Video With Sensor Data Overlay Via Arduino Mega”

      1. I’m trying to think of the simplest hardware to make this happen and can’t come up with it.

        I would suspect some type of FPGA pass-through that manages to harvest the video from the GoPro but you’d have a heck of a time writing out all of the video on the other side, right?

        1. Just give the camera an actual physical HUD and project the sensor data on that, then it just becomes part of the recorded video.

          You just need to solve the rendering issue in real time. This guy seems to have done a lot with very limited resources on that front. But I suspect that a second MCU would be required to render the data flowing off the first.

        2. If you can afford the FPGA that can hold a video encoder, and the video encoder code for the FPGA. The problem is that you have to add the overlay to the video stream and then encode it. The encoding is the problem. And if you don’t have the raw video you have to re-encode the whole thing, in real time preferably.

  1. I appreciate the helpful comments here as I continue to develop this project. I’m working on implementing Dashware instead of using Race Render. DW is really not user friendly, but I realize the benefit of having the whole project be open source. When I started this I really couldn’t believe that someone hadn’t come up with a solution for all this yet. Expect a big update from me soon on the Instructable as well as GitHub. Brandon, do you mind correcting my name in the article though? It’s not Flux Axium, it’s Flux Axiom. Thank you –

  2. Hello, nice demonstration. I want to do the same thing with a drone: mix (overlay) the video stream with information coming from the Arduino (GPS, …) and display it on a composite feed. Which path should I take?
    Thank you for your reply.
