Augmented reality game could have come from the seventies

[Niklas Roy] sent in a project he just completed called PING! Augmented Pixel. At first glance the entire build is just a plain-Jane retro video game stuffed into an ATmega8, but looks can be deceiving. The video game is actually an augmented reality device that inserts a pixel into a video feed. The bouncing pixel can be manipulated through the camera – push the pixel with your hand and it goes off in another direction.

The project runs on an ATmega8 clocked at 16 MHz, and reads the video feed with the help of an LM1881 sync separator. There are no schematics, but he thankfully included the code for his project. Everything is set up for PAL video, but it could easily be adapted for NTSC. Any Hack A Day readers want to take up the challenge of building this from just a description?

[Niklas] says there’s no reason this couldn’t have been done by Atari in the late seventies. There were economic reasons for not putting out a video camera controller, of course, and the R&D department may have been too busy playing Breakout with their eyebrows.

Check out the demo of the augmented pixel after the break.

Comments

  1. aztraph says:

makes you wonder: “what if” this game had been done in the seventies, where would games be now?

    awesome build

  2. Tel says:

It would be interesting to see this taken further: the same idea added to a pair of those ‘gaming goggles’ (the ones that have video screens instead of lenses), with a camera used for image input, to make augmented reality glasses cheaper than you can buy them.
Similar to http://www.vuzix.com/consumer/products_wrap920vrbundle.html , but hacked together instead of paying $500.

  3. Matthieu says:

Isn’t augmented reality the act of bringing virtuality into reality? This should be augmented virtuality, right? (Does that word even exist?)

Except for that, this looks awesome. I wonder how he does the video processing on an ATmega8.

  4. s1500 says:

    If Atari made Kinect. :)

  5. Dave says:

    Very cool idea.

    @Matthieu- it’s not really video processing. He’s just using the LM1881 to give him Vsync and Hsync timing, and using the comparator input to look for darker spots (his hand) near the location of the white ball.

  6. MattQ says:

Yeah, they could have done this in the ’70s, but are we all forgetting what video cameras looked like back then? The giant shoulder-mount things? Now that I think about it, I’m not even sure if my family’s old camcorder from the ’80s even had video out on it, and while I’m sure my childhood memory is making it seem much bigger than it actually was, I’m pretty sure it too was huge. Camcorders today are to camcorders of the ’70s as iPods are to boomboxes of the ’80s.

  7. Doc Oct says:

@MattQ, those were big, but don’t forget they were basically stuffing a whole VCR in there too. The sensors weren’t /that/ much bigger than they are now. In fact, there was a line of camcorders that looked like Star Trek phasers, with the tape mechanics in an external satchel that you wore on your shoulder. The camera had a pistol grip and an optical viewfinder. I remember seeing some old electronics hobbyist magazines with articles on how to build short-range TV transmitters so you could set the tape deck down somewhere and not carry so much.

  8. SuperNuRd says:

I can’t imagine this being too difficult: all you do is scan the original environment with no disruptions, and then have a coordinated path react to foreign objects relative to the original scan.

  9. Phil Burgess says:

    Indeed something similar was done starting in the early 1970s; look up “Videoplace” on YouTube or Google. As I recall, that system required something like a PDP-10 augmented with lots of additional custom hardware. So yes, while this could’ve been done in the seventies, it would’ve filled a room and cost more than the whole house. :)

  10. Yann says:

I don’t know if it’s because the video signal was recorded independently and added to the TV in post to get good quality, but it looks fake to me. Too fluid, too accurate, too clean for something done using a puny 16 MHz AVR processor.

  11. Doc Oct says:

Welcome to the Internet. Nothing on here is real; everything is fake. Of course it couldn’t be an overlay on top of video. Video-titler machines used to do it all the time in the early ’80s, and those were run by a 6502 in a C64 or similar computer. Obviously if a 16 MHz AVR does it in 2011, it must be fake though! I mean, there’s absolutely, positively no other way to do it than to digitize the video, overlay the graphics, and then output the whole thing together. It’s not like you can just synchronize the graphics to the video and overlay them without ever digitizing the video first. I suppose the next thing you’re going to tell me is you have a nice bridge you’d like to sell me.

  12. Niklas says:

@Yann – you got it. My camera captured at 30 fps – the video signal was 25 fps. The original looked shitty, so I recorded the video signal separately and overlaid it afterwards. With some crappy rotoscoping skillz involved.

@Doc Oct – n00bs shout fake, hackers read the source code. It is not necessary to capture the full image for moving (and bouncing) this little pixel. As @Dave mentioned, I only captured the area around the pixel and checked the brightness against a threshold, adjusted by the pot that you can see on the right side of the console.

@Phil Burgess – this is very interesting. I didn’t know ‘Videoplace’ before, and the stuff looks impressive. But actually, what I am doing is a lot simpler. It’s really just an analog comparator that checks the brightness around the pixel. Cheap stuff. It would have been cheap and small even in the seventies.

  13. Yann says:

@Doc chill out, dude. I’m not paranoid, and I don’t go for fake as the first explanation. I totally understand what you’re saying about video switching. I never implied that the signal was fully digitized. BUT I said that the video of the animation on the CRT is very clean, which is hard to get when filming, and that it looks like it was done in post-production (which, again, doesn’t say that the AVR part is fake per se). What intrigued me was the very smooth animation (which, as far as I can remember, didn’t exist in the ’80s – all you could get was static text). Here the “ball” moves pixel by pixel. And when the bandwidth of an NTSC video signal is several MHz, not much is left for computation; most of the time is spent watching the sync clock. That’s without analysing the video level to do the augmented part (which requires some sort of digitization, even if done at a lower resolution to limit processing). All of this makes me very skeptical. Again, I’m not saying “I know for a fact it’s fake”, just that there are quite a few red herrings.

  14. Doc Oct says:

    @yann, if you can’t handle a little sarcasm then it’s your problem not mine. Everyone calls everything fake on the Internet.

  15. CRJEEA says:

@Tel :
I can envisage a pair of those goggles with two high-resolution mini cameras and a pair of microphones mounted on them to replicate the user’s normal view. Then, with the aid of some video processing, add features like zoom etc., and overlay information, maybe with the aid of GPS and text-to-speech (or vice versa) and OCR. (Would be great for the blind or deaf, etc.)

Maybe just for fun, add a 3-axis gyroscope and move the image with a slight lag to make the user feel drunk… haha

  16. CRJEEA says:

    btw… love this concept of digital environments being interactive via the physical world (:
    would love to see someone build this out of 70’s/80’s components just to “prove” it can be done and to show how big it would actually be :D

  17. t&p says:

    amazing
most systems would have a delay! The pixel did get stuck in his hand, but I doubt it’s from the delay of capturing the image and doing the collision detection
for the most part, I do wish the guy could go back in time!!! It may have been too much for a console, but for an arcade machine it would have worked!

  18. Niklas says:

    Ok, to reduce confusion, here’s how it works:

    Drawing onto the signal/image:
One output of the AVR is connected via a 1K resistor to the video signal. Switching this output HIGH raises the signal level by a few mV -> the image becomes brighter at that spot.

    Digitizing the video image:
Doesn’t happen. Instead, only the brightness of the area around the pixel is captured. Imagine a grid of 3×3 squares. The square in the middle is the pixel. When the signal is at this middle position, the output that draws on the video is switched HIGH. If the signal is within one of the eight areas surrounding that pixel, the AVR compares the brightness (voltage level) of that area with a threshold. That’s how an obstacle, and its position relative to the pixel, is detected.
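That neighborhood test boils down to a few comparisons per field. Here is a host-side sketch of the idea in plain C – the names and structure are mine, not the actual firmware, which does this in real time with the AVR's analog comparator:

```c
#include <stdbool.h>

/* Brightness samples of the 3x3 neighborhood, indexed g[dy+1][dx+1];
   the centre cell g[1][1] is the drawn pixel itself and is skipped. */
typedef unsigned char grid3x3[3][3];

/* An obstacle is any neighbouring cell darker than the threshold
   (a dark hand in front of a bright background). dx/dy receive the
   direction from the pixel toward the first obstacle found. */
static bool find_obstacle(unsigned char g[3][3], unsigned char threshold,
                          int *dx, int *dy)
{
    for (int y = 0; y < 3; y++)
        for (int x = 0; x < 3; x++) {
            if (x == 1 && y == 1)
                continue;              /* skip the pixel itself */
            if (g[y][x] < threshold) {
                *dx = x - 1;
                *dy = y - 1;
                return true;
            }
        }
    return false;
}

/* Bounce: reflect whichever velocity component points at the obstacle. */
static void bounce(int *vx, int *vy, int dx, int dy)
{
    if (dx != 0 && (dx > 0) == (*vx > 0)) *vx = -*vx;
    if (dy != 0 && (dy > 0) == (*vy > 0)) *vy = -*vy;
}
```

In the real circuit the "samples" never exist as bytes at all – the comparator output is read as a single bit while the beam is inside each neighbouring area, which is why no frame buffer is needed.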

    Calculating the animation:
When the beam (or signal) has finished drawing the lower white bar, there’s plenty of time to calculate the new position of the pixel before the next image has to be drawn. As it is all synced with the video signal, the animation is smooth. Couldn’t be smoother. An animation ‘pixel by pixel’ is also no problem, as it is all about counting video lines (y) or delaying within a specific video line (x).

This also explains why the starting animation is rendered smoothly.

And it also means that there is a delay until the pixel reacts: it reacts in the next image that is drawn. No magic here.
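In other words, the per-field update is an ordinary pong-style position step that runs during the vertical blanking interval. A minimal host-side sketch, with my own naming rather than the real code's:

```c
/* Pixel state: position in (column, line) terms plus velocity. */
struct ball { int x, y, vx, vy; };

/* Run once per field, after the last visible line has been drawn.
   Screen edges reflect the velocity, just like pong; a detected
   obstacle would flip vx/vy the same way. */
static void step(struct ball *b, int width, int height)
{
    b->x += b->vx;
    b->y += b->vy;
    if (b->x <= 0 || b->x >= width  - 1) b->vx = -b->vx;
    if (b->y <= 0 || b->y >= height - 1) b->vy = -b->vy;
}
```

Because the update happens exactly once per field, the motion is locked to the display refresh – which is why the animation looks perfectly fluid without any frame buffer.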

    Speed of the processor:
The AVR is clocked with a 16 MHz quartz crystal. The duration of one PAL video line is 64 µs => there are 1024 clock cycles per video line. That’s really sufficient for what the program has to do: which is mainly waiting, a bit of counting, and sometimes reading the internal comparator bit or switching an output.

    It’s all written down here:
    http://niklasroy.com/codes/ping.txt

  19. Chris says:

    @Niklas: That’s cool! I don’t doubt your video, or that it works exactly the way you say it does.

    The Atari 400, released in 1978, could use a light pen. I built one for it. I even found the article I built it from:

    http://www.atariarchives.org/creativeatari/Build_Your_Own_Light_Pen.php

What you’ve done is essentially implement a light pen in reverse. Instead of determining the timing at which a signal (the light pen’s phototransistor) turns on, you determine whether a signal (the video camera output) is on at a single, specific timing. No video capture necessary.
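That "light pen in reverse" duality can be made concrete with a toy model of one scanline – illustrative code only, with made-up names:

```c
#include <stdbool.h>

#define LINE_LEN 64   /* brightness samples across one toy scanline */

/* Light pen: the display draws the image, and you record WHEN the
   phototransistor sees the beam – the timing gives you the position. */
static int lightpen_position(const unsigned char line[LINE_LEN],
                             unsigned char threshold)
{
    for (int t = 0; t < LINE_LEN; t++)
        if (line[t] >= threshold)
            return t;
    return -1;  /* beam never seen */
}

/* The reverse: the position (i.e. the timing) is already known –
   just ask whether the camera signal is bright at that instant. */
static bool bright_at(const unsigned char line[LINE_LEN], int t,
                      unsigned char threshold)
{
    return line[t] >= threshold;
}
```

Same comparator, same timing discipline – only the direction of the question changes, which is why neither approach needs to store the image.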

So yes, that really could have been done in the ’70s – even without a CPU or memory at all. I’ve seen video tennis implementations done completely with analog and simple logic chips that could conceivably be modified to support this.

    Of course, video cameras were expensive beasts at the time, and not for games. I remember getting run off from playing with one at Sears in the 70’s, after I pointed it at the attached TV to generate a video feedback tunnel. An ignorant employee saw the screen, and honestly believed I was breaking it!

    No doubt that employee went on to have many children and grandchildren, who are now around to cry “fake” on HAD…
