GPS Overlays Give Real Life Racing A Video Game Feel

Racing is certainly exciting for the person rocketing around the track fast enough to get the speedometer into the triple digits, and tends to be a decent thrill for the spectators if they’ve got good seats. But if you’re just watching raw race videos on YouTube from the comfort of your office chair, it can be a bit difficult to appreciate. There’s a lack of context for the viewer, and it can be hard to get the same sense of speed and position that you’d have if you saw the event firsthand.

In an effort to give his father’s racing videos a bit more punch, [DusteD] came up with a clever way of adding video game style overlays to the recordings. The system provides real-time speed, lap times, and even a miniature representation of the track complete with a marker to show where the action is taking place. The end result is that recordings of Dad’s exploits on the track could pass as gameplay footage from Gran Turismo (we know GT doesn’t have motorcycles, but you get the idea).

The first part of the system is the tracker itself, which consists of a GPS receiver, an Arduino Pro Micro, and an SD card module. [DusteD] powers the device with two 18650 cells in parallel, and a DC-DC boost converter to step it up to 5V. Everything is contained in a 3D printed enclosure that he designed in OpenSCAD, with the only external elements being a toggle switch, a momentary switch, and most critically, a set of LEDs.

These LEDs play into the second part of the system, the software. The blinking LEDs are positioned so they’ll get picked up by the camera, which is then used to help synchronize the data stored on the SD card with the video. [DusteD] came up with some software that takes the speed and position information from the card and turns it into PNG files with transparent backgrounds. These are then placed on top of the video with the help of FFmpeg. It takes a little adjustment to get everything lined up properly, but as the video after the break shows, the end result is very impressive.
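
As a rough sketch of that pipeline (the log format, file names, and resolution here are assumptions rather than [DusteD]'s actual code, and the log is taken to be already resampled to one entry per video frame), the overlays could be generated with Pillow and composited with FFmpeg's overlay filter:

```python
import csv
import os
import subprocess
from PIL import Image, ImageDraw

FPS = 30  # assumed frame rate of the camera footage

# Read the logged samples; these column names are an assumption. x and y
# are taken to be track-map coordinates already normalised to 0..1.
samples = []
with open("gps_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        samples.append((float(row["speed_kmh"]), float(row["x"]), float(row["y"])))

# Render one transparent PNG per video frame: a speed readout plus a dot
# on a 300x300 mini-map in the top-right corner.
os.makedirs("overlay", exist_ok=True)
for i, (speed, x, y) in enumerate(samples):
    img = Image.new("RGBA", (1920, 1080), (0, 0, 0, 0))
    draw = ImageDraw.Draw(img)
    draw.text((60, 960), f"{speed:5.1f} km/h", fill=(255, 255, 255, 255))
    cx, cy = 1560 + x * 300, 60 + y * 300
    draw.ellipse([cx - 6, cy - 6, cx + 6, cy + 6], fill=(255, 0, 0, 255))
    img.save(f"overlay/{i:06d}.png")

# Composite the PNG sequence over the footage. Synchronising amounts to
# trimming the log so sample 0 lines up with the frame where the LED blinks.
subprocess.run([
    "ffmpeg", "-i", "race.mp4",
    "-framerate", str(FPS), "-i", "overlay/%06d.png",
    "-filter_complex", "[0:v][1:v]overlay",
    "-c:a", "copy", "out.mp4",
], check=True)
```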

This build reminds us of the Raspberry Pi powered GPS helmet camera we featured a few years back, and it’s interesting to see how the two projects achieved what’s essentially the same goal in different ways.


Hummingbirds, 3D Printing, And Deep Learning

Setting camera traps in your garden to see what local wildlife is around is quite popular. But [Chris Lam] has just one subject in mind: the hummingbird. He devised a custom setup to capture the footage he wanted using some neat tech.

To attract the hummingbirds, [Chris] used an off-the-shelf feeder — no need to re-invent the wheel there. To obtain the closeup footage required, a 4K action cam was used. This was attached to the feeder with a 3D-printed mount that [Chris] designed.

When it came to detecting the presence of a hummingbird in the video, there were various approaches that could have been considered. On the hardware side, PIR and ultrasonic distance sensors are popular for projects of this kind, but [Chris] wanted a pure software solution. The commonly used motion detection libraries for this type of project might have fallen over here, since the whole feeder was swinging in the air on a string, so [Chris] opted for machine learning.

A ResNet architecture was used to run a classification on each frame, to determine whether or not the image contained a hummingbird. The initial attempt was not greatly successful, but after cropping the image to a smaller area around the feeder, classification accuracy greatly increased. After a bit of FFmpeg magic, the selected snippets were concatenated to make one video containing all the interesting parts; you can see the result in the clip after the break.
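
As a sketch of how that per-frame check might look (the framework and model aren't spelled out, so this leans on a stock ImageNet-trained ResNet from torchvision, which happens to include a hummingbird class; the crop box and confidence threshold are placeholder values):

```python
import cv2
import torch
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()
# ImageNet includes a hummingbird class; look up its index from the metadata.
hummingbird_idx = weights.meta["categories"].index("hummingbird")

CROP = (600, 200, 1320, 920)   # assumed box around the feeder (x0, y0, x1, y1)
THRESHOLD = 0.3                # assumed confidence cutoff

cap = cv2.VideoCapture("feeder.mp4")
hits = []                      # indices of frames judged to contain a bird
frame_idx = 0
with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x0, y0, x1, y1 = CROP
        crop = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2RGB)
        tensor = preprocess(torch.from_numpy(crop).permute(2, 0, 1))
        probs = model(tensor.unsqueeze(0)).softmax(dim=1)[0]
        if probs[hummingbird_idx] > THRESHOLD:
            hits.append(frame_idx)
        frame_idx += 1
cap.release()
print(f"kept {len(hits)} of {frame_idx} frames")
```

Grouping consecutive hits into time ranges and cutting them out is then the FFmpeg bookkeeping that produces the final highlight reel.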

It seems that machine learning and wildlife cams are a match made in heaven. We’ve already written about a proof-of-concept project which identifies different animals in the footage when motion is detected.



Time-Stretching Zoetrope Animation Runs Longer Than It Should

3D printers have long since made it easy for anyone to make three-dimensional zoetropes, but did you know you can take advantage of a fourth dimension by stretching time? Previously, the duration of a zoetrope animation was however long it took for the platform to rotate once; to keep it interesting for longer, you filled out the scene with concentric rings of animations. [Kevin Holmes], [Charlie Round-Turner], and [Johnathan Scoon] have instead come up with a way to make their animations last for multiple rotations, more than three in one example. If you’re not at all familiar with these 3D zoetropes, you might want to check out this simpler version first.

Their project name is 4-Mation, but they call the time-stretching technique animation multiplexing. One way to implement it is to use one long spiral beginning in the center and ending at the platform’s periphery. It’s the spiral path that makes the animation last longer.

In their Fish eating Fish animation, the spiral is of a small fish which exits a clam at the center and gets progressively larger as it spirals outward until it swallows another fish located in a ring at the periphery. Of course, when you look at it with a properly timed strobe light, there is no spiral. Instead, it appears as though a bunch of fish move more or less radially out from the center. The second video embedded below walks through the animation step by step, making it easier to follow the intricacies of what’s going on.
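
To put some rough numbers on the idea (these are illustrative values, not the 4-Mation design), here's a sketch that lays figures out along such a spiral and shows why the perceived animation outlasts a single rotation:

```python
FRAMES_PER_REV = 18   # figures passing a fixed point per platform revolution
TURNS = 3             # how many times the spiral wraps around the platform
R_INNER, R_OUTER = 30.0, 140.0   # spiral start/end radii in mm
RPM = 60.0            # platform speed

total_frames = FRAMES_PER_REV * TURNS

# One placement per figure: equal angular steps, steadily growing radius.
# Under a strobe firing FRAMES_PER_REV times per revolution, each figure
# appears to jump to the next placement, so a single fish seems to travel
# the whole spiral from centre to rim.
placements = []
for i in range(total_frames):
    angle = (i * 360.0 / FRAMES_PER_REV) % 360.0
    radius = R_INNER + (R_OUTER - R_INNER) * i / (total_frames - 1)
    scale = 0.5 + 0.5 * i / (total_frames - 1)   # fish grows as it spirals out
    placements.append((round(angle, 1), round(radius, 1), round(scale, 2)))

strobe_hz = RPM / 60.0 * FRAMES_PER_REV
animation_seconds = total_frames / strobe_hz     # TURNS revolutions, not one

print("first/last placements:", placements[0], placements[-1])
print(f"strobe at {strobe_hz:.1f} Hz, animation lasts {animation_seconds:.1f} s "
      f"({TURNS} revolutions instead of 1)")
```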

Other features include built-in strobe lighting and both manual and phone app control. This project is the subject of a Kickstarter campaign, so normally details of the electronics would be absent. But clearly [Kevin] is familiar with Hackaday and sent in some additional info, which you can find below along with the videos.


Rediffusion Television: Early Cable TV Delivered Like Telephone

Recently I spent an enjoyable weekend in Canterbury, staying in my friend’s flat with a superb view across the rooftops to the city’s mediaeval cathedral. Bleary-eyed and in search of a coffee on the Sunday morning, I was immediately drawn to one of her abode’s original built-in features. There on the wall in the corner of the room was a mysterious switch.

Housed on a standard-sized British electrical fascia was a 12-position rotary switch, marked with letters A through L. It was an unexpected thing to see in the 21st century, and one probably unfamiliar to most people under about 40: I’d found something I’d not seen since my university days in the early 1990s, a Rediffusion selector switch.

If you have cable TV, there is probably a co-axial cable coming into your home. It is likely to carry a VHF signal, either a series of traditional analogue channels or a set of digital multiplexes. “Cable ready” analogue TVs had wideband VHF tuners to allow the channels to be viewed, and on encrypted systems there would have been a set-top box with its own analogue tuner and decoder circuitry.

Your digital cable TV set-top box will do a similar thing, giving you the channels you have subscribed to as it decodes the multiplex. At the dawn of television transmission though, none of this would have been possible. Co-axial cable was expensive and not particularly high quality, and transistorised wideband VHF tuners were still a very long way away. Engineers designing the earliest cable TV systems were left with the technology of the day derived from that of the telephone networks, and in Britain at least that manifested itself in the Rediffusion system whose relics I’d found.


Object Detection, With TensorFlow

Getting computers to recognize objects has been a historically difficult problem in computer science, but with the rise of machine learning it is becoming easier to solve. One of the tools that can be put to work in object recognition is an open source library called TensorFlow, which [Evan] aka [Edje Electronics] has put to work for exactly this purpose.

His object recognition software runs on a Raspberry Pi equipped with a webcam, and also makes use of OpenCV. [Evan] notes that this opens up a lot of creative low-cost detection applications for the Pi, such as setting up a camera that detects when a pet is waiting at the door to be let inside or outside, counting the number of bees entering and exiting a beehive, or monitoring parking spaces at an office.

This project uses a number of other toolkits as well, including Protobuf. It also makes extensive use of Python scripts, but if you’re comfortable with that and you have an application for computer vision, [Evan]’s tutorial will get you started.
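
The core loop of such a setup is surprisingly compact. The sketch below is a generic example assuming a TensorFlow 2 detection SavedModel and a USB webcam; [Evan]'s tutorial covers the actual TensorFlow Object Detection API installation on the Pi, Protobuf compilation and all:

```python
import cv2
import tensorflow as tf

# Any detection SavedModel with the standard output dictionary will do;
# the path here is a placeholder.
detect_fn = tf.saved_model.load("ssdlite_mobilenet_saved_model")

cap = cv2.VideoCapture(0)          # the Pi's webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = tf.convert_to_tensor(rgb, dtype=tf.uint8)[tf.newaxis, ...]
    detections = detect_fn(batch)

    h, w = frame.shape[:2]
    boxes = detections["detection_boxes"][0].numpy()   # normalised ymin,xmin,ymax,xmax
    scores = detections["detection_scores"][0].numpy()
    for (ymin, xmin, ymax, xmax), score in zip(boxes, scores):
        if score < 0.5:            # arbitrary confidence cutoff
            continue
        cv2.rectangle(frame, (int(xmin * w), int(ymin * h)),
                      (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Turning that into, say, a pet-at-the-door notifier is then mostly a matter of checking whether a detection of the right class sits inside the door region for a few consecutive frames.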


Bringing Augmented Reality To The Workbench

[Ted Yapo] has big ideas for using Augmented Reality as a tool to enhance an electronics workbench. His concept uses a camera and projector system working together to detect objects on a workbench, and project information onto and around them. [Ted] envisions virtual displays from DMMs, oscilloscopes, logic analyzers, and other instruments projected onto a convenient place on the actual work area, removing the need to glance back and forth between tools and the instrument display. That’s only the beginning, however. A good camera and projector system could read barcodes on component bags to track inventory, guide manual PCB assembly by projecting which components go where, display reference data, and more.
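
The geometry underneath that vision is a camera-to-projector mapping: once you know where a few projected reference points land in the camera image, a homography converts any detected object's camera coordinates into projector pixels so information can be drawn right next to it. A minimal OpenCV sketch, with placeholder calibration points standing in for markers you'd actually project and detect:

```python
import cv2
import numpy as np

# Four projector pixels where calibration markers were displayed...
proj_pts = np.float32([[100, 100], [1820, 100], [1820, 980], [100, 980]])
# ...and where the camera saw those markers on the bench (placeholder values).
cam_pts = np.float32([[312, 245], [1605, 230], [1580, 870], [330, 890]])

# Homography that maps camera coordinates into projector coordinates.
H, _ = cv2.findHomography(cam_pts, proj_pts)

def to_projector(x, y):
    """Map a point found in the camera image to projector pixels."""
    pt = np.float32([[[x, y]]])
    px, py = cv2.perspectiveTransform(pt, H)[0][0]
    return int(px), int(py)

# Suppose the vision side found an instrument at (850, 500) in the camera image:
canvas = np.zeros((1080, 1920, 3), np.uint8)      # projector framebuffer
px, py = to_projector(850, 500)
cv2.putText(canvas, "instrument label here", (px + 20, py),
            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
cv2.imshow("projector", canvas)   # shown full-screen on the projector in practice
cv2.waitKey(0)
```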

An open-source, accessible machine vision system working in tandem with a projector would open a lot of doors. Fortunately [Ted] has prior experience in this area, having previously written the computer vision code for room-scale dynamic projection environments. That’s solid experience that he can apply to designing a workbench-scale system as his entry for The Hackaday Prize.