New Part Day: SPI RAM and a Video Controller

Generating video signals with a microcontroller or an old CPU is hard, in case you haven’t noticed. If you’re driving even a simple NTSC or PAL display at one bit per pixel, you’re looking at a minimum of around 64 kB of RAM for the frame buffer. Most microcontrollers don’t have that much RAM on the chip, and the AVR video builds we’ve seen either have terrible color or relatively low resolution.

Here’s something interesting that solves the memory problem and also generates analog video signals. Yes, such a chip exists, and apparently it has been in the works for a very long time. It’s the VLSI VS23S010C-L, and it has 131,072 bytes of SRAM and a video display controller that supports NTSC and PAL output.

There are two chips in the family, one being an LQFP48 package, the other a tiny SMD 8-pin package. From what I can tell from the datasheets, the 8-pin version is only an SPI-based SRAM chip. The larger LQFP package is where the action is, with parallel and SPI interfaces to the memory, an input for the colorburst crystal, and composite video and sync out.

Judging from the datasheet (PDF), generating video with this chip is simply a matter of connecting an RCA jack, throwing a few commands at the chip over SPI, and pushing bits into the SRAM. That’s it. You’re not getting hardware acceleration, so you’ll have to draw everything pixel by pixel, but this looks like the easiest way to generate relatively high-resolution video with a single part.
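Just to make that workflow concrete, here’s a minimal Arduino-style sketch of what “a few commands over SPI, then push bits into SRAM” might look like. The opcodes, the address framing, and the register setup below are placeholders to be checked against the VS23S010 datasheet, not something verified on the part.

```cpp
// Rough Arduino sketch of the "SPI in, composite out" workflow. The opcodes
// and register writes are PLACEHOLDERS -- confirm everything against the
// VS23S010 datasheet before trusting it.
#include <SPI.h>

const int CS_PIN = 10;

// Classic serial-SRAM style opcodes; the real part may use different values.
const uint8_t OP_WRITE_SRAM = 0x02;  // placeholder
const uint8_t OP_READ_SRAM  = 0x03;  // placeholder

void sramWrite(uint32_t addr, uint8_t value) {
  digitalWrite(CS_PIN, LOW);
  SPI.transfer(OP_WRITE_SRAM);
  SPI.transfer((addr >> 16) & 0x01);   // 17-bit address for 128 kB (assumed framing)
  SPI.transfer((addr >> 8) & 0xFF);
  SPI.transfer(addr & 0xFF);
  SPI.transfer(value);
  digitalWrite(CS_PIN, HIGH);
}

void setup() {
  pinMode(CS_PIN, OUTPUT);
  digitalWrite(CS_PIN, HIGH);
  SPI.begin();
  SPI.beginTransaction(SPISettings(4000000, MSBFIRST, SPI_MODE0));

  // "Throw a few commands at the chip": program the video timing and picture
  // start registers per the datasheet. Register numbers are omitted here
  // because they are chip-specific -- see the datasheet's register map.

  // "Push bits into the SRAM": fill the frame buffer with a test pattern.
  for (uint32_t addr = 0; addr < 131072UL; addr++) {
    sramWrite(addr, addr & 0xFF);
  }
}

void loop() {}
```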

Thanks [antibyte] for the tip on this one.

Retrotechtacular: The Early Days of CGI

We all know what Computer-Generated Imagery (CGI) is nowadays. It’s almost impossible to get away from it in any television show or movie. It’s gotten so good that it can sometimes be difficult to tell the difference between the real world and the computer-generated world when they are mixed together on-screen. Of course, it wasn’t always like this. This 1982 clip from BBC’s Tomorrow’s World shows what the wonders of CGI were capable of in a simpler time.

In the earliest days of CGI, digital computers weren’t even really a thing. [John Whitney] was an American animator and is widely considered to be the father of computer animation. In the 1940s, he and his brother [James] started to experiment with what they called “abstract animation”. They pieced together old analog computers and servos to make their own devices capable of controlling the motion of lights and lit objects. While this process may be a far cry from the CGI of today, it is still animation performed by a computer. One of [Whitney’s] best-known works is the opening title sequence of [Alfred Hitchcock’s] 1958 film, Vertigo.

Later, in 1973, Westworld became the first feature film to use CGI. The film was a science fiction western-thriller about amusement park robots that become evil. The studio wanted footage of the robot’s “computer vision”, but they would need an expert to get the job done right. They ultimately hired [John Whitney’s] son, [John Whitney Jr], to lead the project. The process first required color-separating each frame of the 70mm film, because [John Jr] did not have a color scanner. He then used a computer to digitally modify each image to create what we would now recognize as a “pixelated” effect. The computer processing took approximately eight hours for every ten seconds of footage.

Continue reading “Retrotechtacular: The Early Days of CGI”

The Making Of The Hackaday Prize Video

As you’re probably aware, there’s a video announcing the launch of The Hackaday Prize blocking the front page of Hackaday right now. This is by design, and surprisingly we haven’t gotten any complaints of ‘not a hack’ yet. I’m proud of you. Yes, all of you.

Making this video wasn’t easy. The initial plans for it were something along the lines of the new Star Wars trailer. Then we realized we could do something cooler. The idea still had Star Wars in it, but we were going for the classics, and not the prequels. As much as we love spending two hours watching a movie about trade disputes, we needed to go to Tatooine.

I just wanted to go to Tosche Station

This meant building a prop. We decided on the moisture vaporators from Uncle Owen’s farm. It’s a simple enough structure to build at the Hackaspace in a weekend, and could be broken down relatively easily for transport to the shooting site. I’ve created a hackaday.io project for the actual build, but the basic idea is a few pieces of plywood, an iron pipe for the structural support, and some Coroplast and spray paint to make everything look like it’s been sitting underneath two suns for several decades.

Oh, I was the only person at the Hackaspace who knew what greebles were. That’s not pertinent in any way; I’d just like to point that out.

The Suit

The vaporator is the star of the show, but we also rented a space suit. No one expected Teflon-covered Beta cloth when we were calling up costume rental places, but the suit can really only be described as a space-suit-shaped piece of clothing. The inlet and outlet ports are resin, and the backpack is a block of foam. If anyone knows where we can get an Orlan spacesuit, or even a NASA IVA or Air Force high-altitude suit, let us know.

Credits

[Matt Berggren] led the prop build and starred in the assembly footage. [Aleksandar Braic] and [Rich Hogben] rented a ridiculous amount of camera equipment. On set for the hijinks were [Aleksandar “Bilke” Bilanovic], [Brian Benchoff] (me), [Jasmine Bracket], [Sophi Kravitz], and [Mike Szczys].

Beating Super Hexagon with OpenCV and DLL Injection

Every few months a game comes along that is so addictive players can’t seem to put it down – no matter how frustrating it may get. Last year one of those games was Super Hexagon. After fighting his way through several levels, [Val] decided that designing a bot to beat the game would be more efficient than doing it himself. Having played a few rounds of Super Hexagon ourselves, we can’t fault him on that front!

At its core, Super Hexagon is a simple game. Walls move from the screen edges toward a ship located near the center of the screen. The player uses the arrow keys to “orbit” the ship around a central shape. Avoid getting crushed by the walls, and you’re golden. However, the entire game board is constantly spinning, expanding, contracting, flashing, and generally doing things to disorient the player while ever more complex wall patterns move in to kill you. In short, Super Hexagon makes Touhou bullet hell games look like a cakewalk.

The first step in beating the game is to capture the screen. [Val] tried Fraps and VLC, but lags of 2 seconds or more were not going to work. Then [Val] turned to DLL injection. Super Hexagon calls the OpenGL function glutSwapBuffers() to implement double buffering. Every frame of the game is rendered in the background. Once rendering is complete, glutSwapBuffers() is called to swap the buffers, and the process starts over again. [Val] changed the game code such that his own frame capture function would be called instead of glutSwapBuffers(). Once he was done capturing the game’s video buffer, [Val] called the real glutSwapBuffers() function. It worked perfectly.
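The general shape of that trick looks something like the sketch below. This isn’t [Val]’s code: it uses the MinHook library to detour glutSwapBuffers(), and the GLUT module name and the glReadPixels() grab are our own assumptions about how the capture would work.

```cpp
// Sketch of a frame grabber injected into an OpenGL/GLUT game. Built as a DLL
// and loaded into the game process; MinHook patches glutSwapBuffers() so our
// function runs first, copies the back buffer, then calls the real swap.
#include <windows.h>
#include <GL/gl.h>
#include <MinHook.h>
#include <vector>

typedef void (*glutSwapBuffers_t)(void);
static glutSwapBuffers_t realSwapBuffers = nullptr;
static std::vector<unsigned char> frame;

static void hookedSwapBuffers(void) {
    GLint vp[4];
    glGetIntegerv(GL_VIEWPORT, vp);                    // x, y, width, height
    frame.resize(static_cast<size_t>(vp[2]) * vp[3] * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);               // tightly packed rows
    glReadPixels(vp[0], vp[1], vp[2], vp[3], GL_RGB, GL_UNSIGNED_BYTE, frame.data());
    // ... hand `frame` off to the vision / AI code here ...
    realSwapBuffers();                                 // let the game carry on
}

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID) {
    if (reason == DLL_PROCESS_ATTACH) {
        MH_Initialize();
        // "glut32.dll" is an assumption -- use whatever GLUT DLL the game loads.
        MH_CreateHookApi(L"glut32.dll", "glutSwapBuffers",
                         reinterpret_cast<LPVOID>(&hookedSwapBuffers),
                         reinterpret_cast<LPVOID*>(&realSwapBuffers));
        MH_EnableHook(MH_ALL_HOOKS);
    }
    return TRUE;
}
```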

Now that he had an image, [Val] used OpenCV to process it. Although the game is graphically very noisy, there are only a few colors used at any one time. It didn’t take much work to come up with an algorithm which would create a binary image of the walls and the ship itself.
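Something along these lines would do the job, assuming the captured frame arrives as a BGR cv::Mat; the thresholds are guesses that would have to be tuned per level and color scheme.

```cpp
// Turn a captured frame into a binary mask of walls + ship: convert to HSV
// and keep the bright, foreground-colored pixels, then clean up speckle.
#include <opencv2/opencv.hpp>

cv::Mat segmentObstacles(const cv::Mat& frameBgr) {
    cv::Mat hsv, mask;
    cv::cvtColor(frameBgr, hsv, cv::COLOR_BGR2HSV);
    // Keep anything bright enough to be foreground; the exact bounds are a guess.
    cv::inRange(hsv, cv::Scalar(0, 0, 180), cv::Scalar(180, 255, 255), mask);
    // Knock out single-pixel noise from the flashing background.
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));
    return mask;   // 255 = wall or ship, 0 = free space
}
```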

[Val] cast rays from the center of each wall through the center of the screen. The ray which was longest before intersecting another wall would be the best escape route. This simple solution worked, but only for about 40 seconds. At that point, Super Hexagon would start throwing more complex patterns, and the AI would fail. The final solution was to create an accessibility condition which also took into account how much space was available between the various approaching walls. This new version of the AI was able to beat the game.
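The simple version of that ray-casting idea (before the gap-width check was added) might look like the sketch below, given the binary mask from the previous step. The ray count and the starting radius are arbitrary choices.

```cpp
// Walk rays outward from the center of a binary mask (255 = wall, 0 = free)
// and return the angle of the ray that travels farthest before hitting a wall.
#include <opencv2/opencv.hpp>
#include <cmath>

double bestEscapeAngle(const cv::Mat& mask) {
    const cv::Point2f center(mask.cols / 2.0f, mask.rows / 2.0f);
    const int numRays = 360;
    const float startRadius = 30.0f;   // skip the ship and the central hexagon
    double bestAngle = 0.0;
    float bestDistance = 0.0f;

    for (int i = 0; i < numRays; i++) {
        const double angle = 2.0 * CV_PI * i / numRays;
        float r = startRadius;
        while (true) {
            const int x = static_cast<int>(center.x + r * std::cos(angle));
            const int y = static_cast<int>(center.y + r * std::sin(angle));
            if (x < 0 || y < 0 || x >= mask.cols || y >= mask.rows) break;
            if (mask.at<uchar>(y, x) != 0) break;   // hit a wall
            r += 1.0f;
        }
        if (r > bestDistance) {
            bestDistance = r;
            bestAngle = angle;
        }
    }
    return bestAngle;   // steer the ship toward this direction
}
```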

So was this a more efficient method than grinding through Super Hexagon manually? Since [Val] now knows all about DLL injection and OpenCV, we sure think it was!

Click past the break to see [Val]’s bot in action!

Continue reading “Beating Super Hexagon with OpenCV and DLL Injection”

ATtiny85 Does Over The Air NTSC

[CNLohr] has made a habit of using ATtiny microcontrollers for everything, and one of his most popular projects is using an ATtiny85 to generate NTSC video. With a $2 microcontroller and eight pins, [CNLohr] can put text and simple graphics on any TV. He’s back at it again, only this time the microcontroller isn’t plugged into the TV.

The ATtiny in this project is overclocked to 30 MHz or so using the on-chip PLL. That, plus a few wires of sufficient length, means this chip can generate and broadcast NTSC video.

[CNLohr] mentions that it should be possible to use this board to transmit closed captioning directly to a TV. If you’re looking for the simplest way to display text on a monitor with an AVR, there you go: a microcontroller and two wires. He’s unable to actually test this, as he lost the remote for his tiny TV from the turn of the millennium. Because there’s no way for [CNLohr] to enable closed captioning on his TV, he can’t build the obvious application for this circuit – a closed caption Twitter bot. That doesn’t mean you can’t.

Video below.

Continue reading “ATtiny85 Does Over The Air NTSC”

How Green Screen Worked Before Computers

If you know anything about how films are made, then you have probably heard about the “green screen” before. The technique is also known as chroma key compositing, and it’s generally used to merge two images or videos together based on color hues. Usually you see an actor filmed in front of a green background. Using video editing software, the editor can then replace that specific green color with another video clip. This makes it look like the actor is in a completely different environment.
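To see just how trivial the digital version is, here’s a bare-bones chroma key sketch using OpenCV. The green hue range is a guess that would be tuned for real footage.

```cpp
// Replace the green-screen pixels of a foreground frame with a background frame.
// Assumes both frames are the same size, in BGR.
#include <opencv2/opencv.hpp>

cv::Mat chromaKey(const cv::Mat& foregroundBgr, const cv::Mat& backgroundBgr) {
    cv::Mat hsv, greenMask;
    cv::cvtColor(foregroundBgr, hsv, cv::COLOR_BGR2HSV);
    // Hue ~35-85 covers typical chroma-key green; the S and V floors keep
    // shadows and desaturated areas from being keyed out.
    cv::inRange(hsv, cv::Scalar(35, 60, 60), cv::Scalar(85, 255, 255), greenMask);

    cv::Mat composite = foregroundBgr.clone();
    backgroundBgr.copyTo(composite, greenMask);   // swap in background where green
    return composite;
}
```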

It’s no surprise that with computers, this is a very simple task. Any basic video editing software will include a chroma key function, but have you ever wondered how this was accomplished before computers made it so simple? [Tom Scott] posted a video to explain exactly that.

In the early days of film, the studio could film the actor against an entirely black background. Then they would copy the film over and over at higher and higher contrast until they ended up with a black background and a white silhouette of the actor. This film could be used as a matte. Working with an optical printer, the studio could then perform a double exposure to combine film of a background with the film of the actor. You can imagine that this was a much more cumbersome process than making a few mouse clicks.
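A rough digital analogue of that optical process: crush the actor-on-black footage into a hard silhouette, then use the silhouette to pick between the two pieces of “film” per pixel. The threshold below is arbitrary.

```cpp
// Build a matte by thresholding the actor-on-black footage, then composite the
// actor onto a new background using that matte.
#include <opencv2/opencv.hpp>

cv::Mat matteComposite(const cv::Mat& actorOnBlackBgr, const cv::Mat& backgroundBgr) {
    cv::Mat gray, matte;
    cv::cvtColor(actorOnBlackBgr, gray, cv::COLOR_BGR2GRAY);
    // "Copy at higher and higher contrast" collapses to a single threshold here.
    cv::threshold(gray, matte, 40, 255, cv::THRESH_BINARY);   // white = actor

    cv::Mat composite = backgroundBgr.clone();
    actorOnBlackBgr.copyTo(composite, matte);   // drop the actor onto the background
    return composite;
}
```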

For the green screen effect, studios could actually use specialized optical filters. They could apply one filter that would ignore a specific wavelength of the color green. Then they could film the actor using that filter. The resulting matte could then be combined with the footage of the actor and the background film using the optical printer. It’s very similar to the older style with the black background.

Electronic analog video has some other interesting tricks to perform the same basic effect. [Tom] explains that the analog signal contained information about the various colors that needed to be displayed on the screen. Electronic circuits were built that could watch for a specific color (green) and replace the signal with one from the background video. Studios even went so far as to record both the actor and a model simultaneously, using two cameras that were mechanically linked together to make the same movements. The signals could then be run through this special circuit and the combined image recorded all simultaneously.

There are a few other examples in the video, and the effects [Tom] uses to demonstrate these old techniques go a long way toward helping you understand the concepts. It’s crazy to think of how complicated this process used to be, when nowadays we can do it in minutes with the computers we already have in our homes.

Continue reading “How Green Screen Worked Before Computers”

Generating Video With The PIC

[bekeband] recently came across an old industrial monitor. It’s small, monochrome, has a beautiful green phosphor, and does not accept a composite signal. Instead, there’s a weird TTL input with connectors for horizontal sync, vertical sync, and video. Intrigued, [bekeband] brought it home and started working on a project that would drive this monitor. He succeeded, and with a chip we don’t see much of on the Hackaday tips line: a 16-bit PIC.

The project uses the dsPIC30F3011, a strange little 16-bit PIC in a 40-pin package. The board actually comes from an earlier project, and after connecting the horizontal sync, vertical sync, and video lines to this tiny board, [bekeband] started writing some code.
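To give a feel for what that code has to do, here’s a structural sketch of driving a separate-sync TTL monitor: pulse HSYNC once per line, pulse VSYNC once per frame, and clock a one-bit-per-pixel frame buffer out on the video pin in between. It’s written for a generic AVR rather than [bekeband]’s dsPIC, and the timings, sync polarities, and resolution are all placeholders that a real monitor’s datasheet would dictate.

```cpp
// Separate-sync TTL video skeleton: HSYNC, VSYNC, and 1-bpp video on three pins.
// Timings and polarities are placeholders; busy-wait delays keep it readable.
#define F_CPU 16000000UL
#include <avr/io.h>
#include <util/delay.h>

#define HSYNC PB0
#define VSYNC PB1
#define VIDEO PB2

#define LINES 64
#define BYTES_PER_LINE 8                 // 64 x 64 pixels at 1 bit per pixel

static uint8_t framebuffer[LINES][BYTES_PER_LINE];

static void send_line(const uint8_t* row) {
    PORTB &= ~_BV(HSYNC);                // horizontal sync pulse (polarity assumed)
    _delay_us(4.0);
    PORTB |= _BV(HSYNC);
    for (uint8_t b = 0; b < BYTES_PER_LINE; b++) {
        for (int8_t bit = 7; bit >= 0; bit--) {
            if (row[b] & (1 << bit)) PORTB |= _BV(VIDEO);
            else                     PORTB &= ~_BV(VIDEO);
            _delay_us(0.5);              // "pixel clock", made up
        }
    }
    PORTB &= ~_BV(VIDEO);
    _delay_us(10.0);                     // blanking for the rest of the line
}

int main(void) {
    DDRB |= _BV(HSYNC) | _BV(VSYNC) | _BV(VIDEO);
    for (;;) {
        PORTB &= ~_BV(VSYNC);            // vertical sync pulse once per frame
        _delay_us(60.0);
        PORTB |= _BV(VSYNC);
        for (uint8_t line = 0; line < LINES; line++)
            send_line(framebuffer[line]);
    }
}
```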

There are two programs written for this board. The first is a static image tester that displays a single image on the CRT. The second displays a simple animation, in this case a horse running in place. It’s not the fanciest project, but it does work, and even though [bekeband] isn’t using a high-speed ARM, he is getting reasonably high resolution out of this chip.

Video below.

Continue reading “Generating Video With The PIC”