Siggraph Best-Of 2005


Siggraph is a hotbed for tech prototype research and crazy art each year. [Dan Kaminsky] attended the conference last month and graced us with tons of pictures and descriptions of his favorite projects and pieces. Thanks Dan! Many of the exhibitors at Siggraph are hardware hackers and handheld gadget modders. Where possible we’ve linked to project pages and videos. We’ve gone ahead and added a few of our personal faves as well to round out this round-up. Get your groove on at this visualization and interaction party.

by Dan Kaminsky

Emerging Technologies

Earning ‘coolest thing since protrude, flow’ was
Andy Wilson‘s
display.  Somehow, he managed to create a rear projection screen that is actually transparent when seen from
behind.  To give you an idea of how cool this is — you know that old joke, where someone tries to scan a sheet of
paper by holding it up to the monitor?  Yeah, what if it wasn’t a joke?  Not only did Andy scan a page right
in front of me, he actually “grabbed the sides” of the projected images with his hands and proceeded to stretch and rotate
the photographed image.  Amazingly cool; I can’t wait until his videoconferencing demo (where your eyes can
actually align with the eyes of the person you’re speaking to) is up and running.
on Siggraph


Haptic Training: This was a really interesting approach to force feedback.  Kinesthetic knowledge — the ability
to make your body move in patterned ways automatically — is nontrivial to absorb.  This training methodology
overdrives a force feedback system such that if you’re doing something wrong, the tool you’re using will pull you into
correctness.  In a way, you’re receiving corrective data through the same channel that needs to be ultimately
corrected, as opposed to your brain having to translate advice through either language (“you’re doing it wrong!”) or
imagery (“see!  you’re doing it wrong!”).  This is a limited prototype, but it’s a really interesting idea.
on Siggraph
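The core idea — correction delivered through the same channel being corrected — can be sketched as a simple proportional controller. This is just an illustration of the concept, not the exhibit's actual control code; the gain constant and positions are made up.

```python
# Sketch of corrective force feedback: the haptic tool is pulled toward a
# reference trajectory in proportion to how far the trainee has strayed.
# The gain and coordinates here are invented for illustration.

def corrective_force(actual_pos, target_pos, gain=2.0):
    """Return a force vector pulling the tool toward the target position."""
    return tuple(gain * (t - a) for a, t in zip(actual_pos, target_pos))

# Tool has drifted 0.1 units right of the reference path:
# the force pushes it back toward correctness.
force = corrective_force(actual_pos=(0.6, 0.0), target_pos=(0.5, 0.0))
```

A real system would cap the force, smooth it, and track a moving reference point along the taught motion, but the principle is the same: the error itself becomes the teaching signal.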


Last year’s Emerging Technologies exhibit witnessed a burst of development focused on the realization that a
computer could analyze precisely how a stretchy substance was being stretched, pushed, or pulled, and then use that as
a user interface for “something”.  This year, there was actually something cool put together.  Video was
projected on a flexible screen.  Pushing the screen caused the area thus pushed to accelerate to a different point
in time — daytime would become night, in the distorted region.  This was actually really, really cool.
on Siggraph
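The day-to-night effect amounts to mapping how far a region is pushed to how far through the footage that region plays. A minimal sketch of that mapping (all numbers invented, not taken from the exhibit):

```python
# Sketch of the flexible-screen time warp: the deeper a region of the screen
# is pushed, the further ahead in the day-to-night footage it jumps.
# Depth range and frame count are made-up parameters.

def frame_offset(push_depth_mm, max_depth_mm=40.0, total_frames=600):
    """Map push depth to how many frames ahead that region of video plays."""
    depth = max(0.0, min(push_depth_mm, max_depth_mm))
    return int(total_frames * depth / max_depth_mm)

assert frame_offset(0) == 0      # untouched screen shows "now"
assert frame_offset(40) == 600   # a full push jumps all the way to night
```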



I swear, there’s some sort of secret X-Prize style contest out there for R2D2’s Princess Leia Projector.  You
can’t tell me that many geeks could love Star Wars and not one of them would get their hands on the DARPA budget… 
So this is another contender for “images floating in mid-air”.  They drop fog down a column of air, creating a
diffuse but relatively flat screen upon which to project images.  Using aligned front and rear projection
(projector alignment being mastered for various 3D stunts), someone can actually interact with the display without
casting a visible shadow.
on Siggraph


The Augmented Coliseum

Indoor positioning sucks.  Sure, life is great if you can see the GPS birds and don’t care where you are plus or
minus fifty feet, but short of some of the stranger tricks done with WiFi or IR lasers, figuring out where you are
inside a building is enormously complicated and quite expensive.  These guys were cheap, and they wanted to be
able to control their little robots with near-absolute knowledge of their position.  Did they use USB? 
Bluetooth?  Wifi?

Nah.  They put an optical sensor on the bottom of their robots, cracked open their laptops to full-open, and
used the screen itself to tell the robots where to go. Those…sick…minded…fools…

Totally worked, though :) 
on Siggraph
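One plausible way to make pixels tell a robot where it is — a sketch of the general structured-light technique, not necessarily the exhibitors' actual scheme — is to flash a temporal Gray-code pattern, so the optical sensor under the robot reads one bit of its screen coordinate per frame:

```python
# Sketch: encode each screen column's index as a Gray-code bit sequence
# flashed over successive frames. A robot's downward-facing light sensor
# reads the sequence and decodes its own column. (Repeat with row patterns
# for the y coordinate.) Frame count and resolution are assumptions.

def gray_encode(n):
    return n ^ (n >> 1)

def gray_decode(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def frames_for_column(x, bits=10):
    """Black/white bits the sensor at column x would see, MSB first."""
    g = gray_encode(x)
    return [(g >> (bits - 1 - i)) & 1 for i in range(bits)]

def column_from_frames(frame_bits):
    """What the robot computes from the brightness sequence it observed."""
    g = 0
    for b in frame_bits:
        g = (g << 1) | b
    return gray_decode(g)

# Round trip: a robot sitting at column 347 recovers 347 from ten frames.
assert column_from_frames(frames_for_column(347)) == 347
```

Gray code is the natural choice here because adjacent columns differ by a single bit, so a sensor straddling two pixels can only be off by one column rather than decoding garbage.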


Our bonus pick from this category: Ubiquitous Graphics is a whiteboard tracking system hacked together with a
handheld computer, letting you zoom in on or pull up extra data about a chunk of the projected image from the handheld.  The
handheld carries the same x/y position sensors used in the whiteboard’s tracking pens.
(preliminary pdf on the tech here,
video of preliminary tech here,
on Siggraph here).

Art Gallery

There are two ways a hack can make you laugh uncontrollably:  when it is just
hideous beyond all reason, or when it is so utterly beautiful, yet so very wrong, that your brain shudders through laughter in an attempt
to comprehend.

Yuta Nakayama is the sick, sick man who brought the
latter.  If one cameraphone is good, two must be better, for
then three dimensions become possible!  But rather than just leave this as idle speculation, Yuta borrowed the
design for a 19th century stereograph
and modded it to contain two 21st century cell phones.  Now possessing a device capable of taking three
dimensional images with two linked cell phones and displaying them to users, he took a photo of his device, printed it
out, and let users see his creation through…

…an actual 19th century stereographic viewer.





Tracert: actual
traceroute output cross stitched in grey embroidery silk on black aida cloth by Kate Pemberton.  We’re bummed that
it doesn’t re-stitch itself in real time yet:



Jean-Pierre Hébert’s Ulysse: On a much more somber note —
technology that actually creates “soft” art is surprisingly rare.  In fact, if you’re a geek, you may have no
idea what I mean by soft.  Here’s a hint:  Cyberpunk?  Not that.  Anyway, there’s something oddly
soothing about seeing a ball roll through sand of its own apparent volition, leaving patterns in a finely strewn
surface.  I want more.


Our bonus pick for the art gallery section:
Elf by Pascal Glissmann and Martina Höfflin.
Elf is a set of small sound and movement solar panel robots that are “released” into the wild then “trapped” in glass
jars.  We wish we lived in Germany so we could attend their various
building sessions.


Dipa: Play Equipment With Respiration-Sense, by Yohei Takahashi and Naohito Okude from Keio University. Dan’s description: “Meditation Tech!”:


So it turns out that there’s a pediatric illness that involves normally harmless nasal bacteria making their way
into the normally sterile inner ear.  Things end up pretty ugly after that, to the tune of $5B/year in
treatment.  Researchers looking to manage this situation use chinchillas as sample environments to test how fast,
given certain treatments, the disease spreads.  The problem has been that to measure growth rates, doctors have to
kill and dissect the chinchilla.  So, to get ten samples, you have to infect ten, then kill one after one day,
another the next, another the next, and assume that growth rates were constant across your set of rodents.

Besides being quite work intensive and, er, not exactly the most PETA-friendly stunt this side of a bucket of blood,
that’s a heck of an assumption.  So doctors created a modified version of the bacteria that glows (yes, that
Jellyfish luciferase gene gets around).  Since the more bacteria you have, the brighter you glow, you can measure
population levels by how much light you can detect through the skull.  And of course, where things are glowing
shows you the hot spots.
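The physics problem the poster tackles can be sketched in a few lines: detected light falls off roughly exponentially with the tissue it crosses (Beer-Lambert attenuation), so recovering the population means inverting that model — which is exactly why you need to characterize the "fluffy lens" first. Every constant below is invented for illustration.

```python
import math

# Back-of-the-envelope sketch of bioluminescent quantification (all
# constants made up): emitted light scales with population, then gets
# attenuated exponentially passing through tissue of a given depth.

def detected_intensity(population, photons_per_cell=1e-3, mu=2.0, depth_cm=0.5):
    """Light reaching the detector from `population` glowing bacteria."""
    emitted = population * photons_per_cell
    return emitted * math.exp(-mu * depth_cm)  # Beer-Lambert attenuation

def estimate_population(detected, photons_per_cell=1e-3, mu=2.0, depth_cm=0.5):
    """Invert the attenuation model to recover the population size."""
    return detected * math.exp(mu * depth_cm) / photons_per_cell

# Round trip: one million bacteria in, one million back out.
pop = 1e6
assert abs(estimate_population(detected_intensity(pop)) - pop) < 1e-3
```

The hard part, of course, is that for a real chinchilla `mu` and `depth_cm` vary across the skull — hence modeling the whole head rather than plugging in one number.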

The only question is — how can you quantify population levels when you’ve got a living, breathing, skull-intact
chinchilla sitting between you and your glowing population?  How can you tell the precise amount the rodent’s head
will alter the light coming from the bacterium?  Easy…just by…wait for it…

Modeling the fluffy lens.

Modeling the Fluffy Lens: Construction of the Virtual Chinchilla by
William C. Ray and Joseph A. Jurcisek from the
Columbus Children’s Research Institute:


So tech in the kitchen is generally the worst idea ever — no, I don’t want my refrigerator to badger me about buying
more food, thank you very much — which makes this poster all the more impressive.  These kids at the MIT Media Lab
actually found a couple things to do with kitchen tech that were good ideas.  First, they added LEDs to the
kitchen faucet, so you can see the temperature of the water at a glance (a talent previously reserved for people wearing
night vision goggles).  And second, they projected flames behind an active electric grill, making it much harder
not to know the surface was hot.  Heh, that’s pretty cool.
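The faucet idea boils down to a temperature-to-color mapping. Here is a minimal sketch of one such mapping — the thresholds and linear blend are our own invention, not the Media Lab's:

```python
# Sketch of the LED faucet idea (invented thresholds): blend the LED color
# from blue (cold) to red (hot) as water temperature rises.

def temp_to_rgb(temp_c, cold_c=10.0, hot_c=50.0):
    """Linearly blend from blue at cold_c to red at hot_c, clamped at the ends."""
    t = max(0.0, min(1.0, (temp_c - cold_c) / (hot_c - cold_c)))
    return (int(255 * t), 0, int(255 * (1 - t)))

assert temp_to_rgb(5) == (0, 0, 255)    # icy: pure blue
assert temp_to_rgb(60) == (255, 0, 0)   # scalding: pure red
```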

The Augmented Reality Kitchen by Chia-Hsun Lee,
Leonardo Bonanni and Ted Selker at the MIT Media Lab.  We’re still holding on to our
NetBSD toaster for the time being:


Fbz says: “And the winner of the best poster layout and execution at Siggraph 2005 goes to…”

Dan says: “With my sheets of paper, I prove you all wrong!”:


So once we finally got the ability to synthesize photorealistic images almost perfectly (anyone realize pretty much
every scene with the baby in Lemony Snicket was synthesized?), people started asking the obvious question…what if we
don’t want to render things perfectly? It took thousands of years before humans got photographs, and yet we did come up
with mechanisms for reproducing scenes and representing information. Environments that claim to represent the era
before photographs probably shouldn’t possess satellite imagery :) With
this hack, they don’t:  Automatically generated maps get paired with automatically generated woodcut maps.

Semantics-Guided Procedural Rendering for Woodcut Maps by Neeharika Adabala and
Kentaro Toyama from Microsoft Research India:


The Conference Floor

Dan: “Gigabit Ethernet Animation — Using each port’s LEDs!” (update:
video here)




The Ghetto Display:

When it comes to autostereoscopy — i.e. screens that are just 3D when you look at them, with no glasses on — don’t
believe the hype.  Displays that try to route a different flat image to each eye look uniformly awful. 
(Displays that synthesize a genuine 3D image, however, look OK but have bandwidth issues since they have to actually
create voxels at each represented depth.)  No, the way to go involves polarized glasses, long beloved by Imax,
that Star Trek 4-D ride, and the
fantastic DILA theatre that
was showing off scientific visualizations in 3D HD (and Spy Kids 3-D too).  The way light works, vertically
polarized light will not pass through a horizontal filter, and horizontally polarized light will not pass through a
vertical filter.  So with extremely cheap glasses, each eye can be given a perfectly unique full color image.

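What makes those cheap glasses work is Malus’s law: light polarized at angle theta to a filter passes with intensity proportional to cos²(theta). A quick sketch of why crossed filters keep each eye’s image separate:

```python
import math

# Malus's law: intensity transmitted through a polarizing filter is
# I = I0 * cos^2(theta), where theta is the angle between the light's
# polarization and the filter's axis.

def transmitted(intensity, angle_deg):
    return intensity * math.cos(math.radians(angle_deg)) ** 2

# Crossed filters (90 degrees) block the other eye's image almost entirely...
assert transmitted(1.0, 90) < 1e-9
# ...while a matching orientation (0 degrees) passes it untouched.
assert transmitted(1.0, 0) == 1.0
```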
Now, how to make our TFT monitors do it?  There’s this solution, which may very well be the first witnessed
product to qualify for the phrase “ghettofabulous” (also here
and here):



And there’s the right way to do it, which is to have two LCD screens stacked on top of each other, one color
filtering vertically polarized light, the other color filtering horizontally polarized light, with a funky final filter
that blocks any other polarization.  You’d think this was impossible, but Benq (yes, “the B stands for budget”)
makes one; I saw its output right there on the floor at SIGGRAPH…

…and forgot to take a photo of the company selling it w/ their drivers.  The biggest insult?  Benq’s
selling the kit for under a grand…that’s in my budget.

Lastly but not leastly:


Forget the green screen…just add an
out-of-focus camera.


There was a sale at Webcams R Us.
(pdf here ,
video here)  (Major
props to the hackaday crew if you get a homebrew of this up and running.)


That’s a nice drawing.
Let’s make it move.
(If you get a chance, download their application to test it out.  Really cool.)


That’s a nice photo. Let’s walk into it.
(video here)
(Not the annoying kind of popups you’re blocking with your browser.)


In conclusion, I would like to extend a huge thank you to [Dan Kaminsky] for the use of his personal photos and extensive comments. Without him, this roundabout round-up would
not have existed. May you, oh hackaday reader, be inspired to homebrew these technologies and write them up!

12 thoughts on “Siggraph Best-Of 2005”

  1. This was an incredible article. I downloaded the videos for the camera array and the pop-up photos, but I wish there was a video of the Gigabit Ethernet animation. (I wonder if one could port Tetris to an array of those things?)

  2. rudy–

    It was definitely a separate vendor — specifically, it was a company that was selling drivers to expand at least an OpenGL window into full 3D. I mailed Benq, trying to find out whose screen it was, to no avail.


    It was $25 to get into ETech.

