Beating Super Hexagon with OpenCV and DLL Injection

Every few months a game comes along which is so addictive, players can’t seem to put it down – no matter how frustrating it may get. Last year one of those games was Super Hexagon. After fighting his way through several levels, [Val] decided that designing a bot to beat the game would be more efficient than doing it himself. Having played a few rounds of Super Hexagon ourselves, we can’t fault him on that front!

At its core, Super Hexagon is a simple game. Walls move from the screen edges toward a ship located near the center of the screen. The player uses the arrow keys to “orbit” the ship around a central shape. Avoid getting crushed by the walls, and you’re golden. However, the entire game board is constantly spinning, expanding, contracting, flashing, and generally doing things to disorient the player while ever more complex wall patterns move in to kill you. In short, Super Hexagon makes Touhou bullet hell games look like a cakewalk.

The first step in beating the game is to capture the screen. [Val] tried Fraps and VLC, but lags of 2 seconds or more were not going to work. Then [Val] turned to DLL injection. Super Hexagon calls the OpenGL function glutSwapBuffers() to implement double buffering. Every frame of the game is rendered in the background. Once rendering is complete, glutSwapBuffers() is called to swap the buffers, and the process starts over again. [Val] changed the game code so that his own frame capture function would be called instead of glutSwapBuffers(). Once he was done capturing the game’s video buffer, [Val] called the real glutSwapBuffers() function. It worked perfectly.
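
For the curious, the hook might look something like this. It’s a minimal sketch which assumes the injector has already patched the game’s import table; the function names here are ours, not [Val]’s:

```cpp
// Illustrative hook body for an injected DLL. We assume the injector has
// redirected the game's glutSwapBuffers() import to hookedSwapBuffers()
// and stored the original function pointer for us.
#include <windows.h>
#include <GL/gl.h>
#include <vector>

typedef void (*SwapBuffersFn)(void);
static SwapBuffersFn realGlutSwapBuffers = nullptr; // set by the injector

void processFrame(const unsigned char *rgb, int width, int height); // bot logic

void hookedSwapBuffers(void)
{
    GLint vp[4];
    glGetIntegerv(GL_VIEWPORT, vp); // x, y, width, height of the viewport

    // Copy the freshly rendered back buffer before it is displayed.
    std::vector<unsigned char> frame(static_cast<size_t>(vp[2]) * vp[3] * 3);
    glReadPixels(0, 0, vp[2], vp[3], GL_RGB, GL_UNSIGNED_BYTE, frame.data());

    processFrame(frame.data(), vp[2], vp[3]); // hand the image to the bot

    realGlutSwapBuffers(); // then let the game swap buffers as usual
}
```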

Now that he had an image, [Val] used OpenCV to process it. Although the game is graphically very noisy, only a few colors are used at any one time. It didn’t take much work to come up with an algorithm that creates a binary image of the walls and the ship itself.
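
A minimal sketch of that kind of segmentation in the OpenCV C++ API might look like this. The fixed threshold is our assumption; the real algorithm has to cope with the game’s constantly shifting palette:

```cpp
// Rough sketch: separate walls and ship from the background. Walls and
// ship are drawn brighter than the pulsing background, so converting to
// grayscale and applying a global threshold yields a usable binary image.
#include <opencv2/opencv.hpp>

cv::Mat extractWalls(const cv::Mat &frameBgr)
{
    cv::Mat gray, binary;
    cv::cvtColor(frameBgr, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY); // 128 is a guess
    return binary;
}
```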

[Val] cast rays from the center of each wall through the center of the screen. The ray which traveled farthest before intersecting another wall marked the best escape route. This simple solution worked, but only for about 40 seconds. At that point, Super Hexagon would start throwing more complex patterns, and the AI would fail. The final solution was to add an accessibility condition which also took into account how much space was available between the various approaching walls. This new version of the AI was able to beat the game.
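
Here’s a simplified take on that first ray-casting heuristic, just to show the idea. This version marches rays outward from the screen center over the binary wall image, rather than casting through wall centers as [Val] did, and the names are ours:

```cpp
// Sketch: pick the direction that stays clear of walls the longest.
#include <opencv2/opencv.hpp>
#include <cmath>

double bestEscapeAngle(const cv::Mat &walls, cv::Point2f center, int numRays = 360)
{
    double bestAngle = 0.0, bestDist = -1.0;
    for (int i = 0; i < numRays; ++i) {
        double angle = 2.0 * CV_PI * i / numRays;
        double dist = 0.0;
        for (;; dist += 1.0) { // march outward one pixel at a time
            int x = static_cast<int>(center.x + dist * std::cos(angle));
            int y = static_cast<int>(center.y + dist * std::sin(angle));
            if (x < 0 || y < 0 || x >= walls.cols || y >= walls.rows) break;
            if (walls.at<unsigned char>(y, x)) break; // hit a wall
        }
        if (dist > bestDist) { bestDist = dist; bestAngle = angle; }
    }
    return bestAngle; // steer the ship toward this heading
}
```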

So was this a more efficient method than grinding through Super Hexagon manually? Since [Val] now knows all about DLL injection and OpenCV, we sure think it was!

Click past the break to see [Val]’s bot in action!

Continue reading “Beating Super Hexagon with OpenCV and DLL Injection”

ATtiny85 Does Over The Air NTSC

[CNLohr] has made a habit of using ATtiny microcontrollers for everything, and one of his most popular projects is using an ATtiny85 to generate NTSC video. With a $2 microcontroller and eight pins, [CNLohr] can put text and simple graphics on any TV. He’s back at it again, only this time the microcontroller isn’t plugged into the TV.

The ATtiny in this project is overclocked to 30MHz or so using the on-chip PLL. That, plus a few wires of sufficient length, means this chip can generate and broadcast NTSC video.
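
For a sense of scale, here’s some back-of-the-envelope NTSC timing at that clock speed. These are our illustrative numbers, not constants from [CNLohr]’s firmware:

```cpp
// NTSC line timing at an assumed 30 MHz core clock. Each scanline lasts
// about 63.5 us and begins with a ~4.7 us horizontal sync pulse; hobby
// projects usually send a non-interlaced 262-line field ("240p").
constexpr unsigned long F_CPU_HZ        = 30000000UL;
constexpr unsigned int  CYCLES_PER_LINE = 1905; // 63.5 us * 30 MHz
constexpr unsigned int  CYCLES_HSYNC    = 141;  // 4.7 us * 30 MHz
constexpr unsigned int  LINES_PER_FIELD = 262;  // non-interlaced NTSC field
```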

[CNLohr] mentions that it should be possible to use this board to transmit closed captioning directly to a TV. If you’re looking for the simplest way to display text on a monitor with an AVR, there ya go: a microcontroller and two wires. He’s unable to actually test this, as he lost the remote for his tiny TV from the turn of the millennium. Because there’s no way for [CNLohr] to enable closed captioning on his TV, he can’t build the obvious application for this circuit: a closed caption Twitter bot. That doesn’t mean you can’t.

Video below.

Continue reading “ATtiny85 Does Over The Air NTSC”

How Green Screen Worked Before Computers

If you know anything about how films are made, then you have probably heard of the “green screen” before. The technique is also known as chroma key compositing, and it’s generally used to merge two images or videos together based on color hues. Usually you see an actor filmed in front of a green background. Using video editing software, the editor can then replace that specific green color with another video clip. This makes it look like the actor is in a completely different environment.
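
If you’re curious what that video editing software is doing under the hood, here’s a minimal sketch of a digital chroma key using OpenCV. The green range is our guess and would be tuned for the actual screen and lighting:

```cpp
// Sketch of a digital chroma key: pixels close to the key color take the
// background, everything else keeps the foreground. Assumes fg and bg are
// the same size.
#include <opencv2/opencv.hpp>

cv::Mat chromaKey(const cv::Mat &fg, const cv::Mat &bg)
{
    cv::Mat hsv, mask, out;
    cv::cvtColor(fg, hsv, cv::COLOR_BGR2HSV);
    // Hue ~60 is green on OpenCV's 0-179 hue scale.
    cv::inRange(hsv, cv::Scalar(45, 80, 80), cv::Scalar(75, 255, 255), mask);
    fg.copyTo(out);       // start with the foreground
    bg.copyTo(out, mask); // paste background wherever the key color matched
    return out;
}
```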

It’s no surprise that with computers, this is a very simple task. Any basic video editing software will include a chroma key function, but have you ever wondered how this was accomplished before computers made it so simple? [Tom Scott] posted a video to explain exactly that.

In the early days of film, the studio could film the actor against an entirely black background. Then, they would copy the film over and over at higher and higher contrast until they ended up with a black background and a white silhouette of the actor. This film could be used as a matte. Working with an optical printer, the studio could then perform a double exposure to combine film of a background with the film of the actor. You can imagine that this was a much more cumbersome process than making a few mouse clicks.

For the green screen effect, studios could use specialized optical filters. They could apply a filter that blocked a specific wavelength of green, then film the actor through it. The resulting matte could then be combined with the footage of the actor and the background film using the optical printer. It’s very similar to the older technique with the black background.

Electronic analog video allowed some other interesting tricks to perform the same basic effect. [Tom] explains that the analog signal contained information about the various colors that needed to be displayed on the screen. Electronic circuits were built that could watch for a specific color (green) and replace that part of the signal with the signal from the background video. Studios even went so far as to record both the actor and a model simultaneously, using two cameras that were mechanically linked together to make the same movements. The signals could then be run through this special circuit and the combined image recorded in real time.

There are a few other examples in the video, and the effects that [Tom] uses to describe these old techniques go a long way toward helping you understand the concepts. It’s crazy to think of how complicated this process can be, when nowadays we can do it in minutes with the computers we already have in our homes.

Continue reading “How Green Screen Worked Before Computers”

Generating Video With The PIC

[bekeband] recently came across an old industrial monitor. It’s small, monochrome, has a beautiful green phosphor, and does not accept a composite signal. Instead, there’s a weird TTL input with connectors for horizontal sync, vertical sync, and video. Intrigued, [bekeband] brought it home and started working on a project that would drive this monitor. He succeeded, and with a chip we don’t see much of on the Hackaday tips line: a 16-bit PIC.

The project uses the dsPIC30F3011, a strange little 16-bit PIC in a 40-pin package. The board actually comes from an earlier project, and after connecting the horizontal sync, vertical sync, and video lines to this tiny board, [bekeband] started writing some code.
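
While [bekeband]’s firmware is the place to look for specifics, the general shape of a bit-banged TTL video driver looks something like this sketch. NTSC-like timing is assumed, and the pin helpers are hypothetical stand-ins for the dsPIC’s actual port writes:

```cpp
// Sketch of a TTL video frame loop: a vertical sync pulse marks the start
// of each frame, and every visible line begins with a horizontal sync
// pulse followed by the pixel data.
#include <cstdint>

void hsyncPulse();                        // hypothetical: pulse HSYNC low ~4.7 us
void vsyncPulse();                        // hypothetical: pulse VSYNC low for a few lines
void shiftOutLine(const uint8_t *pixels); // hypothetical: clock one line out the VIDEO pin

constexpr int VISIBLE_LINES  = 240;
constexpr int BYTES_PER_LINE = 40;        // 320 one-bit pixels packed 8 per byte

void drawFrame(const uint8_t framebuffer[VISIBLE_LINES][BYTES_PER_LINE])
{
    vsyncPulse();                         // mark the start of a new frame
    for (int line = 0; line < VISIBLE_LINES; ++line) {
        hsyncPulse();                     // start of each scanline
        shiftOutLine(framebuffer[line]);  // then the pixel data
    }
}
```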

There are two programs written for this board. The first is a static image tester that displays a single image on the CRT. The second is one that displays a simple animation, in this case, a horse running in place. It’s not the fanciest project, but it does work, and even though [bekeband] isn’t using a high-speed ARM, he is getting a reasonably high resolution out of this chip.

Video below.

Continue reading “Generating Video With The PIC”

Keep Tabs on Passing Jets with Pi and SDR

Obviously, Software Defined Radio is pretty cool, but for a lot of hackers it just takes the right project to get into it. Submitted for your approval is just that project. [Simon Aubury] has been using a Raspberry Pi and SDR to record video of planes passing overhead. The components are cheap and most places have planes passing by; this just might be the perfect project.

We’re not just talking static frames with planes passing through them, oh no. [Simon] used two hobby servos and some brackets to gimbal his Pi camera board. A DVB dongle allows the rig to listen in on the Automatic Dependent Surveillance-Broadcast (ADS-B) signals coming from the planes. This system is mandated for most commercial aircraft (deadlines for implementation vary). ADS-B consists of positioning data broadcast from planes on known frequencies and protocols. Once [Simon] locks onto this data he can accomplish a lot, like keeping the plane in the center of the video, establishing which flight is being recorded, and automatically uploading the footage. With such a marvelously executed build, we’re certain we will see more people giving it a try.
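
Keeping the plane centered boils down to turning a decoded position into servo angles. Here’s a rough sketch of that calculation; this is our code, not [Simon]’s, and a flat-earth approximation is reasonable at typical reception ranges:

```cpp
// Convert a decoded ADS-B position into azimuth and elevation for the
// pan and tilt servos, relative to the camera's own position.
#include <cmath>

constexpr double kDegPerRad      = 57.29577951308232;
constexpr double kMetersPerDegLat = 111320.0;

struct AimAngles { double azimuthDeg, elevationDeg; };

AimAngles aimAt(double camLat, double camLon,
                double planeLat, double planeLon, double planeAltMeters)
{
    double dNorth = (planeLat - camLat) * kMetersPerDegLat;
    double dEast  = (planeLon - camLon) * kMetersPerDegLat
                    * std::cos(camLat / kDegPerRad); // longitude lines converge
    double groundDist = std::hypot(dNorth, dEast);
    return {
        std::atan2(dEast, dNorth) * kDegPerRad,          // 0 = north, 90 = east
        std::atan2(planeAltMeters, groundDist) * kDegPerRad
    };
}
```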

[Simon] did a great job with the writeup too. Not only did he include a tl;dr, he also drilled down from a project summary right into the gritty details. Well-done documentation is itself worth celebrating!

Continue reading “Keep Tabs on Passing Jets with Pi and SDR”

Cairo Hackerspace Gets A $14 Projector

The Cairo hackerspace needed a projector for a few presentations during their Internet of Things build night, and of course Friday movie night. They couldn’t afford a real projector, but these are hackers. Of course they’ll be able to come up with something. They did. They found an old slide projector made in West Germany and turned it into something capable of displaying video.

The projector in question was a DIA projector at least forty years old, found during a trip to the Egyptian second-hand market. Besides the projector, the only required parts were a 2.5″ TFT display from Adafruit and a Nokia smartphone.

All LCD panels are actually transparent: the liquid crystal layer only modulates light passing through it. If you’ve ever dealt with a display with a broken backlight, you’ll know that any sufficiently bright light source will work in its place, like the lamp found in a slide projector. By carefully removing the back cover of the display, the folks at the Cairo hackerspace were able to get a small NTSC display that would easily fit inside their projector.

After that, it was simply a matter of putting the LCD inside the projector, getting the focus right, and mounting everything securely. The presentations and movie night were saved, all from a scrap heap challenge.

An External Autofocus for DSLRs

Most modern DSLR cameras support shooting full HD video, which makes them a great cheap option for video production. However, if you’ve ever used a DSLR for video, you’ve probably run into some limitations, including sluggish autofocus.

Sensopoda tackles this issue by adding an external autofocus to your DSLR. With the camera in manual focus mode, the device drives the focus ring on the lens. This allows for custom focus control code to be implemented on an external controller.

To focus on an object, the distance to it needs to be known. Sensopoda uses the HRLV-MaxSonar-EZ ultrasonic sensor for this task. An Arduino runs a control loop that implements a Kalman filter to smooth out the sensor readings. The filtered distance is then used to control a stepper motor attached to the focus ring.
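
A one-dimensional Kalman filter like the one described can be surprisingly small. Here’s a minimal sketch; the tuning constants are our assumptions, not Sensopoda’s actual values:

```cpp
// Minimal 1D Kalman filter for smoothing noisy sonar distance readings.
struct Kalman1D {
    double x = 0;    // estimated distance
    double p = 1;    // estimate uncertainty
    double q = 0.01; // process noise: how fast the subject can move (assumed)
    double r = 4.0;  // measurement noise of the ultrasonic sensor (assumed)

    double update(double z) {
        p += q;                 // predict: uncertainty grows between readings
        double k = p / (p + r); // Kalman gain: how much to trust the reading
        x += k * (z - x);       // correct the estimate toward the measurement
        p *= (1 - k);           // uncertainty shrinks after the update
        return x;
    }
};
```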

The design is interesting because it is rather universal; it can be adapted to run on pretty much any DSLR. The full writeup (PDF) gives all the details on the build.