Siggraph Best-Of 2005


Siggraph is a hotbed for tech prototype research and crazy art each year. [Dan Kaminsky] attended the conference last month and graced us with tons of pictures and descriptions of his favorite projects and pieces. Thanks Dan! Many of the exhibitors at Siggraph are hardware hackers and handheld gadget modders. Where possible we’ve linked to project pages and videos. We’ve also added a few of our own personal faves to round out this round-up. Get your groove on at this visualization and interaction party.

by Dan Kaminsky

Emerging Technologies

Earning “coolest thing since Protrude, Flow” was Andy Wilson’s TouchLight display.  Somehow, he managed to create
a rear projection screen that is actually transparent when seen from
behind.  To give you an idea of how cool this is — you know that old joke, where someone tries to scan a sheet of
paper by holding it up to the monitor?  Yeah, what if it wasn’t a joke?  Not only did Andy scan a page right
in front of me, he actually “grabbed the sides” of the projected images with his hands and proceeded to stretch and rotate
the photographed image.  Amazingly cool; I can’t wait until his videoconferencing demo (where your eyes can
actually align with the eyes of the person you’re speaking to) is up and running.
(on Siggraph):

[Images: the TouchLight display in action]

Haptic Training: This was a really interesting approach to force feedback.  Kinesthetic knowledge — the ability
to make your body move in patterned ways automatically — is nontrivial to absorb.  This training methodology
overdrives a force feedback system such that if you’re doing something wrong, the tool you’re using will pull you into
correctness.  In a way, you’re receiving corrective data through the same channel that needs to be ultimately
corrected, as opposed to your brain having to translate advice through either language (“you’re doing it wrong!”) or
imagery (“see!  you’re doing it wrong!”).  This is a limited prototype, but it’s a really interesting idea.
(on Siggraph):

[Image: the haptic training rig]
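The overdrive idea is easy to sketch. In this toy model (the function name and gain value are my own guesses, not the exhibit's), the corrective force is a spring pulling the hand toward the target, scaled past 1:1 so the error itself carries the lesson:

```python
def corrective_force(actual, target, gain=2.5):
    """Spring-like haptic correction, overdriven: the force on the
    user's hand points from where the hand is toward where it should
    be, scaled by a gain > 1 so wrong motions get yanked back harder
    than a plain passive spring would pull."""
    return tuple(gain * (t - a) for a, t in zip(actual, target))

# A hand at x=0 that should be at x=2 gets pulled right, hard:
force = corrective_force((0.0, 0.0), (2.0, 0.0))
```

A real rig would run this at the haptic loop rate (typically around 1 kHz) and cap the force for safety.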

Last year’s Emerging Technologies exhibit witnessed a burst of development focused on the realization that a
computer could analyze precisely how a stretchy substance was being stretched, pushed, or pulled, and then use that as
a user interface for “something”.  This year, there was actually something cool put together.  Video was
projected on a flexible screen.  Pushing the screen caused the area thus pushed to accelerate to a different point
in time — daytime would become night, in the distorted region.  This was actually really, really cool.
(on Siggraph):

[Images: the time-displacement flexible screen]

I swear, there’s some sort of secret X-Prize style contest out there for R2D2’s Princess Leia Projector.  You
can’t tell me that many geeks could love Star Wars and not one of them would get their hands on the DARPA budget… 
So this is another contender for “images floating in mid-air”.  They drop fog down a column of air, creating a
diffuse but relatively flat screen upon which to project images.  Using aligned front and rear projection
(projector alignment being mastered for various 3D stunts), someone can actually interact with the display without
casting a visible shadow.
(on Siggraph):

[Image: the FogScreen]

The Augmented Coliseum

Indoor positioning sucks.  Sure, life is great if you can see the GPS birds and don’t care where you are plus or
minus fifty feet, but short of some of the stranger tricks done with Wifi or IR lasers, figuring out where you are
inside a building is enormously complicated and quite expensive.  These guys were cheap, and they wanted to be
able to control their little robots with near-absolute knowledge of their position.  Did they use USB? 
Bluetooth?  Wifi?

Nah.  They put an optical sensor on the bottom of their robots, opened their laptops out flat, and
used the screen itself to tell the robots where to go. Those…sick…minded…fools…

Totally worked, though :) 
(on Siggraph):

[Image: a robot driving across a laptop screen]
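One plausible way to get absolute position from a single downward-facing sensor (a sketch of the general structured-light trick, not necessarily what these exhibitors did) is to flash Gray-coded stripe patterns on the screen: after log2(width) frames, the light/dark bits the sensor saw spell out its column, and a second pass of horizontal stripes gives the row.

```python
def gray_encode(n):
    """Binary-reflected Gray code: neighbouring columns differ in
    exactly one bit, so a sensor sitting on a stripe edge misreads
    by at most one column instead of garbage."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code back to a plain binary column index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def bits_seen_at(x, width_bits):
    """Light/dark values the robot's sensor reads at column x over
    width_bits successive stripe frames (most significant bit first)."""
    g = gray_encode(x)
    return [(g >> b) & 1 for b in reversed(range(width_bits))]
```

Decode the bit sequence with `gray_decode` and the robot knows its pixel position to within one stripe.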

Our bonus pick from this category: Ubiquitous Graphics is a whiteboard tracking system hacked together with a
handheld computer so you can zoom in or see new data about a chunk of a projected image with the handheld.  The
handheld has the same x/y position sensors embedded in it that the whiteboard tracking pens use
(preliminary pdf on the tech here, video of preliminary tech here, on Siggraph here).

Art Gallery

There are two ways a hack can make you laugh uncontrollably: when it is just hideous beyond all compare, or when
it is so utterly beautiful, yet so very wrong, that your brain shudders through laughter in an attempt to comprehend.

Yuta Nakayama is the sick, sick man who brought the latter.  If one cameraphone is good, two must be better, for
then three dimensions become possible!  But rather than just leave this as idle speculation, Yuta borrowed the
design for a 19th century stereograph viewer and modded it to contain two 21st century cell phones.  Now
possessing a device capable of taking three dimensional images with two linked cell phones and displaying them to
users, he took a photo of his device, printed it out, and let users see his creation through…

…an actual 19th century stereographic viewer.

Wow.

[Images: the dual cell phone stereograph viewer]

Tracert: actual traceroute output cross-stitched in grey embroidery silk on black Aida cloth by Kate Pemberton.
We’re bummed that it doesn’t re-stitch itself in real time yet:

[Images: the cross-stitched traceroute]

Jean-Pierre Hébert’s Ulysse: On a much more somber note, technology that actually creates “soft” art is
surprisingly rare.  In fact, if you’re a geek, you may not have any idea what I mean by soft.  Here’s a hint:
Cyberpunk?  Not that.  Anyway, there’s something oddly soothing about seeing a ball roll through sand of its own
apparent volition, leaving patterns in a finely strewn surface.  I want more.

[Image: Ulysse drawing patterns in sand]

Our bonus pick for the art gallery section: Elf by Pascal Glissmann and Martina Höfflin. Elf is a set of small
solar-powered sound and movement robots that are “released” into the wild, then “trapped” in glass jars.  We wish
we lived in Germany so we could attend their various robot building sessions.

Posters

Dipa: Play Equipment With Respiration-Sense Interface, by Yohei Takahashi and Naohito Okude from Keio University.
Dan’s description: “Meditation Tech!”:

[Image: the Dipa poster]

So it turns out that there’s a pediatric illness that involves normally harmless nasal bacteria making their way
into the normally sterile inner ear.  Things end up pretty ugly after that, to the tune of $5B/year in
treatment.  Researchers looking to manage this situation use chinchillas as sample environments to test how fast,
given certain treatments, the disease spreads.  The problem has been that to measure growth rates, doctors have to
kill and dissect the chinchilla.  So, to get ten samples, you have to infect ten, then kill one after one day,
another the next, another the next, and assume that growth rates were constant across your set of rodents.

Besides being quite work intensive and, er, not exactly the most PETA-friendly stunt this side of a bucket of blood,
that’s a heck of an assumption.  So doctors created a modified version of the bacteria that glows (yes, that
luciferase gene gets around).  Since the more bacteria you have, the brighter you glow, you can measure
population levels by how much light you can detect through the skull.  And of course, where things are glowing
shows you the hot spots.

The only question: how can you quantify population levels when you’ve got a living, breathing, skull-intact
chinchilla sitting between you and your glowing population?  How can you tell precisely how much the rodent’s head
will alter the light coming from the bacteria?  Easy…just by…wait for it…

Modeling the fluffy lens.

Modeling the Fluffy Lens: Construction of the Virtual Chinchilla by
William C. Ray and Joseph A. Jurcisek from the
Columbus Children’s Research Institute:

[Image: the Modeling the Fluffy Lens poster]

So tech in the kitchen is generally the worst idea ever (no, I don’t want my refrigerator to badger me about buying
more food, thank you very much), which makes this poster all the more impressive.  These kids at the MIT Media Lab
actually found a couple of things to do with kitchen tech that were good ideas.  First, they added LEDs to the
kitchen faucet, so you can visually see the temperature of the water (a talent previously reserved for people wearing
night vision goggles).  And second, they projected flames behind an active electric grill, making it much harder
not to know the surface was hot.  Heh, that’s pretty cool.

The Augmented Reality Kitchen by Chia-Hsun Lee,
Leonardo Bonanni and Ted Selker at the MIT Media Lab.  We’re still holding on to our
netbsd toaster for the time being:

[Image: the Augmented Reality Kitchen poster]

Fbz says: “And the winner of the best poster layout and execution at Siggraph 2005 goes to…”

Dan says: “With my sheets of paper, I prove you all wrong!”:

[Image: the winning poster layout]

So once we finally got the ability to synthesize photorealistic images almost perfectly (anyone realize pretty much
every scene with the baby in Lemony Snicket was synthesized?), people started asking the obvious question…what if we
don’t want to render things perfectly? It took thousands of years before humans got photographs, and yet we did come up
with mechanisms for reproducing scenes and representing information. Environments that claim to represent the era
before photographs probably shouldn’t possess satellite imagery :) With this hack, they don’t: automatically
generated maps get paired with automatically generated woodcut maps.

Semantics-Guided Procedural Rendering for Woodcut Maps by Neeharika Adabala and
Kentaro Toyama from Microsoft Research India:

[Image: the woodcut maps poster]

The Conference Floor

Dan: “Gigabit Ethernet Animation — Using each port’s LEDs!” (update: video here) (from this company):

[Images: the Gigabit Ethernet LED animation]

The Ghetto Display:

When it comes to autostereoscopy — i.e. screens that are just 3D when you look at them, with no glasses on — don’t
believe the hype.  Displays that try to route a different flat image to each eye look uniformly awful. 
(Displays that synthesize a genuine 3D image, however, look OK but have bandwidth issues since they have to actually
create voxels at each represented depth.)  No, the way to go involves polarized glasses, long beloved by Imax,
that Star Trek 4-D ride, and the
fantastic DILA theatre that
was showing off scientific visualizations in 3D HD (and Spy Kids 3-D too).  The way light works, vertically
polarized light will not pass through a horizontal filter, and horizontally polarized light will not pass through a
vertical filter.  So with extremely cheap glasses, each eye can be given a perfectly unique full color
image.
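That crossed-filter behavior is Malus's law; here's a quick numerical check (a textbook formula, nothing specific to any vendor's kit):

```python
import math

def transmitted(i0, light_angle_deg, filter_angle_deg):
    """Malus's law: transmitted intensity is I0 * cos^2(theta), where
    theta is the angle between the light's polarization axis and the
    filter axis.  At 90 degrees essentially nothing leaks through,
    which is why each eye sees only its own image."""
    theta = math.radians(filter_angle_deg - light_angle_deg)
    return i0 * math.cos(theta) ** 2
```

Aligned filters pass everything, crossed filters pass (effectively) nothing, and 45 degrees passes half, which is also why cheap circular-vs-linear mismatches produce ghosting.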

Now, how to make our TFT monitors do it?  There’s this solution, which may very well be the first witnessed
product to qualify for the phrase “ghettofabulous” (also here and here):

[Images: the ghettofabulous polarized display]

And there’s the right way to do it, which is to have two LCD screens stacked on top of each other, one color
filtering vertically polarized light, the other color filtering horizontally polarized light, with a funky final
filter that blocks any other polarization.  You’d think this was impossible, but Benq (yes, “the B stands for
budget”) got one; I saw its output right there on the floor at SIGGRAPH…

…and forgot to take a photo of the company selling it with their drivers.  The biggest insult?  Benq’s
selling the kit for under a grand…that’s in my budget.

Lastly but not leastly:

Papers

Forget the green screen…just add an out-of-focus camera.  Elegant. (video here)

[Image: the defocus matting result]

There was a sale at Webcams R Us.
(pdf here, video here)  (Major props to the hackaday crew if you get a homebrew of this up and running.)

[Image: the camera array]

That’s a nice drawing.  Let’s make it move. (video here) (If you get a chance, download their application to test
it out.  Really cool.)

[Image: the animated drawing]

That’s a nice photo.  Let’s walk into it.
(video here)
(Not the annoying kind of popups you’re blocking with your browser.)

[Image: walking into a pop-up photo]

In conclusion, I would like to extend a huge thank you to [Dan Kaminsky] for the use of his personal photos and
extensive comments.  Without him, this roundabout round-up would not have existed.  May you, oh hackaday reader,
be inspired to homebrew these technologies and write them up!

Comments

  1. Bucky says:

    now this is hackaday at its finest! I can’t wait to explore some of this stuff further. major props to fabienne and dan.

  2. fucter says:

    whoa.
    this is the coolest thing ever.
    im very impressed!

  3. CaptSnuffy says:

    wow they really put a lot of work into these entries and they are REALLY impressive!

    great stuff!

  4. Steyr says:

    Man, I wish I could go to siggraph… :(

  5. n00bmaster says:

    I was like, uh, wondering.

    Is the hackaday podcast project eliminated. I found them quite cool, but I havent seen one in months.

  6. wildweasel says:

    This was an incredible article. I downloaded the videos for the camera array and the pop-up photos, but I wish there was a video of the Gigabit Ethernet animation. (I wonder if one could port Tetris to an array of those things?)

  7. Dan kaminsky says:
  8. fabienne says:

    n00bmaster: no the podcasts are not cancelled, there’s one in the works.

  9. markie says:

    wow (drops jaw), I would’ve loved to have seen some of this for real… These are some fine examples of doing smart things with stupid technology :D

  10. Gary Fixler says:

    This was stuff from the Emerging Tech wing, which I saw last year. This year they changed the ticket levels, and I was sent away right at the door, which killed, because that’s the primary reason I go to Siggraph. I’m glad I get to see some of it here.

    Here’s my gallery of all the cool tech out on the main floor if anyone’s intersted.

    http://flickr.com/photos/garyfixler/sets/694944/

  11. Rudy fink says:

    Was the 3d stereo display from Benq or was it from a seperate vendor? I was searching around for the kit but I was unable to locate it.

  12. dan kaminsky says:

    rudy–

    It was definitely a separate vendor — specifically, it was a company that was selling drivers to expand at least an OpenGL window into full 3D. I mailed Benq, trying to find out whose screen it was, to no avail.

    gary–

    It was $25 to get into ETech.

    –Dan
