Have you ever heard of a wigglegram? They are made by shooting multiple pictures at once using multiple lenses, and the resulting stitched-together ‘gram is kind of a gif version of a stereographic image. It looks 3D, and it — well, it wiggles. The ones with a boomerang effect (i.e. a good loop) are especially prized.
Wigglegrams are often produced with Nishika quadrascopic cameras, which have naturally climbed in price as demand has grown. Nishikas have four lenses and create four separate half-frame images by splitting the four photos across two frames of film. In contrast, [Joshua]’s DIY lens uses three plastic lenses from disposable film cameras to put three images onto a single frame of film.
One real drawback is that the camera has to be close to the subject because the three lenses are so tightly packed. Another is that there is no viewfinder while using this lens: divider walls between the three lenses keep the images separate, and those walls have to extend all the way into the camera body. The Canon A-1’s viewfinder mirror does not allow for this, so [Joshua] pushed it up out of the way.
[Joshua]’s initial design approach to finding the ideal lens distance from the film plane was to do a bunch of calculations, but he ended up Goldilocks-ing it and iterating a bunch of times until it was just right. If you have a Canon SLR and want to build one of these, you’re in luck as far as the STLs go.
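For the curious, here’s a minimal sketch of the kind of first-pass calculation involved, using the thin-lens equation; the focal length and subject distance below are assumed round numbers, not figures from [Joshua]’s build.

```python
# Thin-lens back-of-the-envelope: solve 1/f = 1/s_o + 1/s_i for the
# lens-to-film distance s_i. The numbers here are illustrative guesses.

def image_distance(focal_length_mm: float, subject_distance_mm: float) -> float:
    """Return the image (lens-to-film) distance s_i in millimetres."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / subject_distance_mm)

# e.g. a ~32 mm disposable-camera lens focused on a subject 500 mm away
print(image_distance(32.0, 500.0))  # ~34.2 mm from lens to film plane
```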
What else can you do with a bunch of old disposable cameras? Build your own flash, of course.
[via r/functionalprint]
Hmm, I guess that I could grab a few frames from any moving-camera video I’ve shot and fill my hard drive with enough wigglegrams to drive me dizzy – no special hardware needed.
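That frame-grab approach needs no special hardware at all; here’s a rough sketch under assumed tooling — it uses the imageio package (with its ffmpeg plugin) and a hypothetical clip.mp4 to pull a few nearby frames from a handheld video and loop them into a boomerang-style GIF.

```python
# Rough sketch only: assumes imageio + imageio-ffmpeg are installed and
# that "clip.mp4" is some handheld video with a bit of camera movement.
import imageio

reader = imageio.get_reader("clip.mp4")             # hypothetical input clip
frames = [reader.get_data(i) for i in (0, 3, 6)]    # three frames, slightly apart
boomerang = frames + frames[-2:0:-1]                # forward then back for a clean loop
imageio.mimsave("wigglegram.gif", boomerang)        # write the wiggling GIF
```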
Is this really a thing now? Might appeal to those with short attention spans who don’t find conventional photos stimulating enough.
It’s another toy camera fad. It isn’t as popular as people claim nor is it going to last long. Toy camera fads never do.
They’ve been a thing since around 2000-ish; at least, that’s about how long I’ve been aware of them.
I’ve never seen anyone use a special camera for them, though. You usually just take a couple shots from a slightly different angle quickly and you’re done.
I actually have one of those Nishika cameras. I’m going to have to look into this more!
Pretty cool!
I wonder about the possibilities with video chat and a multi-lens (multi-angle) camera like that. With some head/eye tracking to find the viewing angle and compute/display the appropriate image. It wouldn’t be able to compensate for large movements, but for the minor, natural movements of someone sitting more or less still it may help improve the video conferencing experience (there’s a rough sketch of the view-selection idea after the links below).
Like:
https://hackaday.com/2021/05/20/project-starline-realizes-asimovs-3d-vision/
Also might work well with:
https://hackaday.com/2020/05/29/two-way-mirror-improves-video-conferencing/
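A minimal, purely hypothetical sketch of that view-selection idea: given a tracked head offset (normalized however your tracker reports it), pick which of the three simultaneously captured views to show. Everything here is illustrative and not tied to any real tracking API.

```python
# Hypothetical view selection: map a normalized horizontal head offset
# (-1.0 = far left, +1.0 = far right) to one of the captured views.

def select_view(head_offset_x: float, num_views: int = 3) -> int:
    """Return the index (0..num_views-1) of the view to display."""
    clamped = max(-1.0, min(1.0, head_offset_x))           # guard against tracker overshoot
    return round((clamped + 1.0) / 2.0 * (num_views - 1))  # scale [-1, 1] onto the view indices

# A slight lean left shows the leftmost lens's image, centered shows the middle one.
print(select_view(-0.8), select_view(0.0), select_view(0.9))  # -> 0 1 2
```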
It reminds me of how lizards wiggle their heads to get depth perception.
“When in the desert it is okay to talk to the lizards, but it is not okay if they talk back.”
One advantage of a special multi-lens camera, shooting multiple perspectives simultaneously, I suppose, is the ability to produce action shots with that surreal, frozen-in-time look. That thumbs-up portrait shot doesn’t show this off though. The subject should have been juggling some tennis balls or star-jumping or something instead.
Why do my replies to a post always show up at the bottom of the thread instead of immediately after the post I replied to? Yes, I did use the “Reply” button at the end of the post in question.
Well it seems that I can reliably post in-line replies to myself. That’s great.