We Don’t Need To Brainstorm Projects; Xkcd Does That For Us

[Randall Munroe], the guy behind our favorite webcomic xkcd, has given us yet another great project idea that follows on the heels of securing our valuables and silencing loud car stereos. The xkcd forum has been talking about how to implement this, and we'd like to hear what Hack A Day readers think about the idea.

The project isn't much different from 3D photography. [Carl Pisaturo] has done a lot of art and experimentation along these lines, much of which basically amounted to largish binoculars. A poster on the xkcd forum has already built this using mirrors, but we're wondering how much the parallax can be increased with that method. Two cameras and a smartphone would also allow automatic pan and tilt that corresponds to head movement.

We're not quite sure if this idea can be applied to astronomy. The angular resolution of the human eye is around one arcminute, while every star except the Sun has an annual parallax of less than one arcsecond. If anyone wants to try this out with a longer baseline (from Earth to Pluto, for example), we would suggest simulating it in Stellarium first. Seeing the moon as a sphere would be possible with a few hundred miles between cameras, though.
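
As a rough sanity check on those numbers, here's a back-of-the-envelope parallax calculation (a sketch in plain Python; distances are approximate):

```python
import math

def parallax_arcmin(baseline_miles, distance_miles):
    """Angular shift (in arcminutes) between two viewpoints separated by
    baseline_miles, looking at a target distance_miles away."""
    angle_rad = 2 * math.atan((baseline_miles / 2) / distance_miles)
    return math.degrees(angle_rad) * 60

MOON = 238_900        # average Earth-Moon distance, miles
PROXIMA = 25e12       # nearest star, roughly 25 trillion miles

print(parallax_arcmin(300, MOON))          # ~4.3 arcmin: comfortably above the eye's ~1 arcmin
print(parallax_arcmin(300, PROXIMA) * 60)  # ~2.5e-6 arcsec: hopelessly small for stars
```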

Tell us how you would build this in the comments, and be sure to send in your write-up if you manage to build it. We’ll put it up right away.

Thanks to [Theon144] for sending this in.

EDIT: Because the comments are actually bearing fruit, check out the thread on the Hack A Day forums for this post: link.

52 thoughts on “We Don’t Need To Brainstorm Projects; Xkcd Does That For Us”

    1. Yeah, sounds neat.

      Until you watch the videos titled “Watch VLBA Videos Of Black Holes In Action”, which should show a black hole tearing ass around the universe eating stuff, but are in fact a yellow circle with a purple circle around it.

  1. Getting a sense of depth feels like it would be incredibly difficult for your eyes to resolve. Even knowing the size of objects doesn't particularly help, as obviously it's relative to distance, and it's beyond anything we'd normally expect to see. Mainly, unless it's a planet or a cluster of stars or some other deep-space object, it will look like a single pixel on a huge-resolution screen. Astronomers regularly use binoculars (anything from cheap 10×50s to real astro bins with filters), plus there's a big binocular observatory:

    http://www.lbto.org/index.htm

    and the STEREO project does parallax, I think?

    http://www.nasa.gov/mission_pages/stereo/main/index.html

    1. The STEREO satellites are in their final positions now and are 180 degrees apart with no overlap, so no stereo image.

      I don't think the binocular observatory is going for any stereo separation, but probably more for interferometry, where a shift compared to the wavelength you are monitoring can give you a higher-resolution image.

  2. 3D photos of the moon sound interesting. You could just have a bunch of people on different sides of the continent take pictures of the moon at the same time (GMT, not local) and find two good east and west shots that match up.

      1. Yeah, you would probably have to coordinate and actually take the picture at the same time. A gibbous moon would probably be best: lots of moon, but still with a terminator.

        I’m going to be looking for multiple viewpoints for my landscape photos now. Combine that with a tilt shift for bonus points :)

  3. I think the xkcd method would work fine, up to a point. An obvious limitation is that you'd be mimicking your head without neck movement, giving us what? About 200 degrees of arc to play with? Not too bad. If I wanted to keep it simple, I'd grab one laptop, two USB webcams, a couple of Phidgets and R/C servos, and Cat5 USB extenders. This would give me (ideally) a football field worth of baseline. If I render the images as greyscale, I can display them on the laptop as an anaglyph. Maybe VirtualDub, AviSynth/ffdshow, or ffmpeg can do it in real time? I'd need to code up the camera panning controller using the Phidget SDK.
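
For what it's worth, the grab-two-webcams-and-make-an-anaglyph step could also be done in OpenCV rather than VirtualDub/AviSynth. A minimal sketch, assuming two UVC webcams enumerated as devices 0 and 1 (the Phidget panning control is left out):

```python
import cv2

left = cv2.VideoCapture(0)    # left-eye webcam (assumed device index)
right = cv2.VideoCapture(1)   # right-eye webcam (assumed device index)

while True:
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):
        break

    # Greyscale both views, then pack the left view into the red channel and
    # the right view into green+blue for a standard red/cyan anaglyph.
    grey_l = cv2.cvtColor(frame_l, cv2.COLOR_BGR2GRAY)
    grey_r = cv2.cvtColor(frame_r, cv2.COLOR_BGR2GRAY)
    grey_r = cv2.resize(grey_r, (grey_l.shape[1], grey_l.shape[0]))
    anaglyph = cv2.merge((grey_r, grey_r, grey_l))   # BGR order: B, G from right; R from left

    cv2.imshow("anaglyph", anaglyph)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

left.release()
right.release()
cv2.destroyAllWindows()
```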

    1. Rotating the cameras wouldn't be any good; once you went 90 degrees either way, they would both be looking at the same thing. You would need to rotate the entire football field in that example. You could pan up and down, but that's about all.

      1. Okay, so at least three webcams (more is better, but less necessary if you can just get more space between them) arranged radially around a single point. Automatically (or not) select the pair which are the farthest apart for the angle you are looking at. You could probably just buy/make three cheap networked (WiFi would be handy here) pan/tilt webcams and do the rest in software.
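
A rough sketch of that pair-selection logic (the camera positions here are made up; the idea is just to pick the pair whose baseline is most perpendicular to the viewing direction):

```python
import math
from itertools import combinations

# Hypothetical layout: three cameras arranged radially around a point, positions in metres.
cams = {"A": (0.0, 1.0), "B": (0.87, -0.5), "C": (-0.87, -0.5)}

def best_pair(view_angle_deg):
    """Return the camera pair whose baseline has the largest component
    perpendicular to the viewing direction (i.e. the widest apparent separation)."""
    vx = math.cos(math.radians(view_angle_deg))
    vy = math.sin(math.radians(view_angle_deg))

    def perp_baseline(pair):
        (x1, y1), (x2, y2) = cams[pair[0]], cams[pair[1]]
        bx, by = x2 - x1, y2 - y1
        # component of the baseline across the line of sight = |2D cross product|
        return abs(bx * vy - by * vx)

    return max(combinations(cams, 2), key=perp_baseline)

print(best_pair(0))    # looking along +x
print(best_pair(90))   # looking along +y: picks B-C, the widest pair across that line of sight
```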

  4. Hmm, another one to add to the “gotta try this one day” list, because I have several identical Sony CCD bullet cameras and some 800×600 res LCD video glasses that have two composite video inputs as well as two VGA inputs so it can show stereoscopic video. It can also do stereoscopic display from a single source that’s interlaced.

    The biggest hurdle will be accurately aligning the cameras. I remember from testing stereoscopic viewing with two cameras at eye distance apart that the alignment needed to be quite accurate for the depth perception to work well.

    1. If they’re servo-controlled (as opposed to aimed by hand or something), you could probably just use OpenCV to automatically adjust one to match the other as closely as possible.
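
Something along those lines might look like this: a sketch using ORB feature matching to estimate the pixel offset between the two views (how that offset maps onto servo steps is left to whatever servo interface the rig uses):

```python
import cv2
import numpy as np

def misalignment(frame_ref, frame_other):
    """Estimate the median pixel offset (dx, dy) of frame_other relative to
    frame_ref from ORB feature matches; returns None if nothing matched."""
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(cv2.cvtColor(frame_ref, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = orb.detectAndCompute(cv2.cvtColor(frame_other, cv2.COLOR_BGR2GRAY), None)
    if des1 is None or des2 is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if not matches:
        return None

    offsets = [(kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0],
                kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1]) for m in matches]
    dx, dy = np.median(offsets, axis=0)   # median is robust to a few bad matches
    return dx, dy
```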

    2. You’ll want to lock them left/right once you get them lined up or it’ll do screwy things with your depth perception. Human brains aren’t designed to cope with what happens to our view when the distance between our eyes is actively changing.

      At least… I don't think they are. Never experimented with it myself.

      1. Well, when you refocus your eyes on a target that's further/closer, both eyes rotate symmetrically to "lock on" to the target, i.e. center the point of interest in both eyes. I'm sure that with a sensible servo setup it would be possible to simulate this well enough that a human could adjust.
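
The convergence ("toe-in") angle involved is small and easy to compute, which is encouraging for a servo setup. A quick sketch (the baseline and distance values are placeholders):

```python
import math

def toe_in_deg(baseline_m, target_distance_m):
    """Angle each camera must rotate inward (from parallel) so both
    point at a target straight ahead at target_distance_m."""
    return math.degrees(math.atan((baseline_m / 2) / target_distance_m))

print(toe_in_deg(0.065, 2.0))    # human-ish eye spacing, target 2 m away: ~0.9 degrees
print(toe_in_deg(90.0, 1000.0))  # football-field baseline, target 1 km away: ~2.6 degrees
```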

    3. It's a lot easier than you think. Pick a vertical and horizontal spot in the middle of the horizon, overlay the two images, and align the cameras until that spot (not the whole image) is in the same place. Then view the images with the method described by Randall.
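
If it helps, a 50/50 overlay of the two views makes that by-eye alignment easier to judge. A small sketch, where left.jpg/right.jpg stand in for grabs from each camera:

```python
import cv2

a = cv2.imread("left.jpg")                    # hypothetical grab from camera 1
b = cv2.imread("right.jpg")                   # hypothetical grab from camera 2
b = cv2.resize(b, (a.shape[1], a.shape[0]))

blend = cv2.addWeighted(a, 0.5, b, 0.5, 0)    # 50/50 overlay of both views
h, w = blend.shape[:2]
cv2.drawMarker(blend, (w // 2, h // 2), (0, 0, 255), cv2.MARKER_CROSS, 40, 2)

cv2.imshow("tweak the cameras until the chosen spot lines up", blend)
cv2.waitKey(0)
```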

      1. I’ve been giving this setup some thought since posting and I’ve realised that yes it should be easier than I first thought to align the cameras correctly – point the first camera at the sky/horizon and then you only have to get the 2nd camera aligned to the first.

        Right then, I know what I’m going to be doing this afternoon :)

        *goes off to dig out the LCD video glasses, plugs, cable, soldering iron & power supplies*

      2. Soldered up two cameras with phonos & a power plug; one camera connection is wired with a 3.5mm stereo headphone plug/socket so I can put it on a 22-foot headphone extension cable I have. Powered it all up and it worked straight off.

        But it started raining heavily near the end of the ‘build’.

        That was over an hour ago and it’s still raining heavily.

        Good ole British summer weather :(

  5. It occurs to me that you can achieve the same effect pretty easily by taking a video looking out the window of a plane. You then delay the frames for one eye so that it is a few hundred feet behind the other eye and abracadabra – 3D.
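
A sketch of that frame-delay trick (window_seat.mp4 is a stand-in for the clip; the delay is something you'd tune for ground speed and altitude):

```python
import cv2
from collections import deque

DELAY_FRAMES = 30                            # bigger delay = bigger effective baseline
cap = cv2.VideoCapture("window_seat.mp4")    # hypothetical clip shot out of the plane window
buffer = deque(maxlen=DELAY_FRAMES)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
    if len(buffer) < DELAY_FRAMES:
        continue                             # wait until a frame from DELAY_FRAMES ago exists

    now = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    then = cv2.cvtColor(buffer[0], cv2.COLOR_BGR2GRAY)
    # Current frame feeds one eye, the delayed frame feeds the other: red/cyan anaglyph.
    anaglyph = cv2.merge((then, then, now))
    cv2.imshow("delayed-frame 3D", anaglyph)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break
```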

    1. Works just as long as there isn’t much movement of what you’re observing. For instance, a flock of geese would look really strange if in one eye their wings were up and in the other they were down.

    2. I have done that many times. Don't sit near the wing. Anything in the foreground will confuse your brain. Take a picture, wait a few seconds, and take another. To view the 3D, learn to use the cross-eyed method. It is simple and free. Put the two images on the screen side by side, as close as you can. Cross your eyes and watch the overlap. When you have a third picture at the overlap, focus on it. That is what takes practice. I can do it in a second.
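
The only computer-side trick for the cross-eyed method is putting the two shots side by side with left and right swapped. A quick sketch (the file names are made up; swap the order if the depth looks inverted):

```python
import cv2
import numpy as np

left = cv2.imread("first_shot.jpg")     # hypothetical: photo taken first
right = cv2.imread("second_shot.jpg")   # a few seconds (and a few hundred feet) later
right = cv2.resize(right, (left.shape[1], left.shape[0]))

# Cross-eyed viewing: the RIGHT eye looks at the LEFT-hand image, so the views are swapped.
pair = np.hstack((right, left))
cv2.imwrite("crosseye_pair.jpg", pair)
```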

  6. Tracking the moon is relatively simple; there are plenty of motorised, computerised mounts out there that will track pretty much anything you have a position for. Skywatcher do their SynScan/SynTrek mounts; SynScan has a handset with a 40,000-item database (I think, might be more) that runs on a PIC chip. They are easily controllable using software called EQMod and an FTDI cable in place of the controller handset.

    @josh malone, with the 'Very Long Baseline Array' mentioned in the first post, does that give you a sense of depth or just an enormous zoom?

  7. They have a big-ass telescope in a plane already: http://www.popsci.com/technology/article/2010-01/inside-sofia-nasas-airplane-mounted-telescope It's crazy – the plane moves around the telescope, if that makes sense, so that it keeps exactly in line with the object it's tracking.

    Getting two telescopes to point at exactly the same spot on the moon isn't particularly difficult, although I'm not sure if you could lock on. I know it's simple to track other deep-space objects and lock on using a second 'guide' scope on each of your two telescopes and mounts.

  8. I really do wonder if the brain could pull any real 3D-type depth from stars. Given that a light year is roughly 6 trillion miles and the nearest star is about 25 trillion or so miles away, I don't think we could actually resolve that kind of information; I think this is the main reason it all looks 2D when we look up into the night sky.

    Looking at the moon, some of the planets, galaxies and nebulae would give some kind of 3D information that the brain could process, but you'd be hard pushed to pull enough light in quickly enough to do it in real time without having to resort to a biiiig, expensive bit of glass.

    If anyone has a pair of anaglyph glasses: http://apod.nasa.gov/apod/ap070602.html (click on the image for a full-size version you can save).

    1. My experience has been that you can enjoy an *apparent* 3D effect in the night sky from your sleeping bag, if you can find a spot with sufficiently low light pollution. The differing apparent magnitudes of the stars, with the Milky Way as a backdrop, provides an interesting depth effect after one’s eyes have had time to adjust.

  9. If you were willing to wait six months, couldn't you take a picture or video of the sky, wait for the Earth to move to the opposite side of its orbit, and take another picture or video of the same patch of sky? I feel like with a 186-million-mile difference you might be able to see SOME depth, with the closer stars at least.
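
Some rough numbers on that (plain Python, values approximate): even over the full 2 AU baseline, the nearest star only shifts by about a second and a half of arc, well below the eye's roughly one-arcminute resolution, so any visible depth would have to come from exaggerating the shift in software rather than viewing the raw images.

```python
import math

AU_MILES = 93e6
baseline = 2 * AU_MILES        # opposite sides of Earth's orbit
proxima = 4.25 * 5.88e12       # ~4.25 light-years in miles

shift_arcsec = math.degrees(baseline / proxima) * 3600
print(shift_arcsec)            # ~1.5 arcsec for the nearest star
print(shift_arcsec / 60)       # ~0.026 arcmin, versus ~1 arcmin eye resolution
```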

  10. Woooow!

    I tried setting up the ‘rig’ I put together earlier, whilst it was still light; two Sony HAD 420-line CCD bullet cameras plugged into a set of LCD video glasses which have 2x VGA & 2x composite video inputs (i-visor DH4400VPD), and the depth effect is really quite something.

    I pointed each camera out of a different window, with about 8-10 feet between the cameras, looking up a hill with trees/gardens in front; to the left side are houses along the road going up the hill.

    The trees look really close and a bit small, and the depth effect on the slope of the hill makes it look like it goes back about 400-500 meters when in fact the furthest distance is around 200 meters.

    Hopefully tomorrow it won’t be tipping it down and there’ll be some decent clouds to try looking at with the cameras 22 feet apart!

      1. Yes, I can plug a video capture device (like my old Archos AV500) into each camera’s output in turn to grab a picture.

        I’ll post to the thread tomorrow, either with shots from the windows and/or some pictures of clouds in 3D.

        Fingers crossed the weather forecast is right and there’s sun & clouds instead of the blasted rain that I’m now drying off from after cycling to shops.

  11. Put two swarms of nanosatellites into the Earth-Sun L4 and L5 orbits. The swarms in each libration orbit could act in concert, each working as a single long-baseline optical array. Data could be combined between the two swarms and telescopes like Hubble, giving a full-sphere optical interferometer with a baseline of roughly 260 million kilometers. The nanosat swarms themselves would require only two launches and could be built on assembly lines. Ah, if only I had a tiny portion of NASA's budget…

  12. I've been playing with this on a smaller (larger?) scale – I have two Flip cams on my quadcopter spaced 6-9″ apart. When I play the recordings back it's as if I am a giant 2-3x my normal size. Flying at 4-6 meters altitude helps too.
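
That "giant" feeling lines up with the usual hyperstereo rule of thumb: the scene looks shrunk by roughly the ratio of camera baseline to eye spacing. A rough sketch, taking 2.5″ as a typical interocular distance:

```python
# Rough hyperstereo rule of thumb: apparent miniaturisation ~ baseline / eye spacing
EYE_SPACING_IN = 2.5                      # assumed typical human interocular distance, inches
for baseline_in in (6, 9):
    print(baseline_in, "in baseline ->", round(baseline_in / EYE_SPACING_IN, 1), "x giant")
# 6 in -> 2.4x, 9 in -> 3.6x: right in the "2-3x my normal size" ballpark
```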
