Particle Paves Way For LTE Selfies

From cars to refrigerators, it seems as if every new piece of tech is connected to the Internet. For better or for worse, we’re deep into the “Internet of Things”. But what about your camera? No, not the camera in your smartphone; that one’s already connected to the Internet and selling your secrets to the highest bidder. Don’t you think your trusty DSLR could be improved by an infusion of Wide Area Networking?

Regardless of what your answer to that question might be, [Thomas Kittredge] decided his life would be improved by making his beloved Canon EOS Rebel T6 an honorary member of the Internet of Things. Truth be told, he says he hasn’t quite figured out an application for this project. But since he was looking to mess around with both the LTE-enabled Particle Boron development board and designing his own PCB for professional production, this seemed as good a way as any to get his feet wet.

The resulting board is a fairly simple “shield” for the Particle Boron that lets [Thomas] trigger up to two cameras remotely over the Internet or locally with Bluetooth. If LTE isn’t your sort of thing though, don’t worry. Since the Boron follows the Adafruit Feather specification, there’s a whole collection of development boards with various connectivity options that this little add-on can be used with.

In the GitHub repository, [Thomas] has put up the files for the PCB, the STLs for the 3D printed enclosure, and of course the firmware source code to load onto the Particle board. He currently has code to expose the two shutter triggers as functions in the Particle Cloud API, as well as a practical example that fires off the camera when specific words are used in a Slack channel.
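If you’re curious what exposing a shutter trigger to the Particle Cloud looks like, here’s a minimal sketch in the usual Particle (Wiring/C++) style. The pin assignments, pulse timing, and function name are placeholder assumptions for illustration, not [Thomas]’s actual firmware.

```cpp
// Hypothetical sketch: expose a camera shutter trigger as a Particle Cloud function.
// Pin numbers and the 200 ms pulse width are assumptions, not taken from the project.
const int SHUTTER_1 = D2;   // assumed GPIO driving the first shutter optocoupler
const int SHUTTER_2 = D3;   // assumed GPIO driving the second shutter optocoupler

// Cloud functions take a String argument and return an int.
int fireShutter(String which) {
    int pin = (which == "2") ? SHUTTER_2 : SHUTTER_1;
    digitalWrite(pin, HIGH);   // close the camera's remote-release contact
    delay(200);                // hold long enough for the camera to register
    digitalWrite(pin, LOW);
    return 1;
}

void setup() {
    pinMode(SHUTTER_1, OUTPUT);
    pinMode(SHUTTER_2, OUTPUT);
    Particle.function("shutter", fireShutter);   // callable over the Cloud API
}

void loop() {
    // Nothing to do here; triggering happens in the cloud function.
}
```

Once the device is online, the function can be hit from anywhere, for example with `particle call <device-name> shutter 1` from the CLI, which is also the sort of hook a Slack bot or webhook could use.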

Out for a little over a year, the Particle Boron is a fairly new addition to the world of cellular development boards. Historically we haven’t seen a whole lot of cellular-capable projects, likely because it’s been such a hassle to get them online, but with new boards like the Boron we might start seeing an uptick in random pieces of gear that have this form of connectivity and an internet-facing IP address. Surely nothing bad could come of this!

Super Simple Sensor Makes DSLR Camera Motion Sensitive

Do you have a need to photographically document the doings of warm-blooded animals? If so, a game camera from the nearest hunting supplier is probably your best bet. But if you don’t need the value-added features such as a weather-resistant housing that can be chained to a tree, this DIY motion trigger for a DSLR is a quick and easy build, and probably loads more fun.

The BOM on [Jeremy S Cook]’s build is extremely short – just a PIR sensor and an optoisolator, with a battery, a plug for the camera’s remote jack, and a 3D-printed bracket. The PIR sensor is housed in a shroud to limit its wide field of view; [Jeremy] added a second shroud when an even narrower field is needed. No microcontroller is needed because all it does is trigger the camera when motion is sensed, but one could be added to support more complicated use cases, like an intervalometer or constraining the motion sensing to certain times of the day. The video below shows the build and some quick tests.
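For anyone who does want to bolt a microcontroller onto this idea, a rough sketch of the basic loop might look something like the code below. The pin choices, pulse width, and lockout time are assumptions for illustration, not details from [Jeremy]’s build.

```cpp
// Hypothetical Arduino sketch: pulse the camera's remote-shutter optoisolator
// whenever the PIR sensor reports motion, then wait out a lockout period.
const int PIR_PIN     = 2;              // PIR sensor output (assumed)
const int TRIGGER_PIN = 3;              // drives the optoisolator LED (assumed)
const unsigned long LOCKOUT_MS = 5000;  // ignore re-triggers for 5 seconds

void setup() {
  pinMode(PIR_PIN, INPUT);
  pinMode(TRIGGER_PIN, OUTPUT);
}

void loop() {
  if (digitalRead(PIR_PIN) == HIGH) {
    digitalWrite(TRIGGER_PIN, HIGH);    // close the shutter contact
    delay(200);
    digitalWrite(TRIGGER_PIN, LOW);
    delay(LOCKOUT_MS);                  // so one passing animal isn't fifty frames
  }
}
```

From there it’s a short hop to checking a real-time clock before firing, or ignoring the PIR entirely and triggering on a timer for intervalometer duty.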

Speaking of intervalometers, we’ve seen quite a few of those over the years. From the tiny to the tinier to the electromechanical, people seem to have a thing for taking snapshots at regular intervals.

Continue reading “Super Simple Sensor Makes DSLR Camera Motion Sensitive”

This Raspberry Pi Is A Stereo Camera And So Much More

Over the years we have featured a huge array of projects built around the Raspberry Pi, but among them there is something that has been missing in all but a few examples. The Raspberry Pi Compute Module packs the essentials of a Pi onto a board with a form factor close to that of a SODIMM module, and it is intended as a way to embed a Pi inside a commercial product. It’s refreshing then to see [Eugene]’s StereoPi project, a PCB that accepts a Compute Module and provides interfaces for two Raspberry Pi cameras.

What makes this board a bit special is that, as well as the two camera connectors at the required spacing for stereophotography, it also brings out all the interfaces you’d expect on a regular Pi, so there is the familiar 40-pin expansion header as well as USB and Ethernet ports. It has a few extras such as a pin-based power connector and an on-off switch.

Where are they going with this one? So far we’ve seen demonstrations of the rig used to create depth maps with ROS (Robot Operating System). But even more fun is seeing the 3rd-person-view rig shown in the video below. You strap on a backpack that holds the stereo camera above your head, then watch yourself through VR goggles. Essentially you become the video game. We’ve seen this demonstrated before and now it looks like it will be easy to give it a try yourself as StereoPi has announced they’re preparing to crowdfund.
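ROS does the heavy lifting in that demo, but the core step, turning a left/right image pair into a disparity map, is easy to experiment with on its own. Here’s a minimal OpenCV sketch of the idea; the file names and matcher settings are placeholders rather than anything from the StereoPi software, and a real pipeline would rectify the images with the cameras’ calibration first.

```cpp
// Hypothetical example: compute a disparity (depth) map from a stereo pair
// using OpenCV's block matcher. File names and parameters are placeholders.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // 64 disparity levels and a 15-pixel block are reasonable starting values
    // for Pi-camera-sized images; tune for your own baseline and resolution.
    cv::Ptr<cv::StereoBM> matcher = cv::StereoBM::create(64, 15);
    cv::Mat disparity;
    matcher->compute(left, right, disparity);

    // Scale the 16-bit disparity down to 8 bits so it can be viewed as an image.
    cv::Mat view;
    cv::normalize(disparity, view, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("depth.png", view);
    return 0;
}
```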

So aside from the stereophotography, why is this special? The answer is that it is as close as possible to a fresh interpretation of a Raspberry Pi board without being from the Pi Foundation themselves. The Pi processors are not available to third-party manufacturers, so aside from the Odroid W (which was made in very limited numbers) we have never seen a significant alternative take on a Raspberry Pi-compatible board. The idea that this could be achieved through the Compute Module is one that we hope might be taken up by other designers, potentially opening a fresh avenue in the Raspberry Pi story.

The Raspberry Pi Compute Module has passed through two iterations since its launch in 2014, but probably due to the lower cost of a retail Raspberry Pi we haven’t seen it in many projects save for a few game consoles. If the advent of boards like this means we see more of it, that can be no bad thing.

Continue reading “This Raspberry Pi Is A Stereo Camera And So Much More”

Voice Controlled Camera For Journalist In Need

Before going into the journalism program at Centennial College in Toronto, [Carolyn Pioro] was a trapeze performer. Unfortunately a mishap in 2005 ended her career as an aerialist when she severed her spinal cord, leaving her paralyzed from the shoulders down. There are plenty of options in the realm of speech-to-text technology that enable her to write on a computer, but when she tried to find a commercial offering that would let her point and shoot a DSLR camera with her voice, she came up empty.

[Taras Slawnych] heard about [Carolyn]’s need for special camera equipment and figured he had the experience to do something about it. With an Arduino and a couple of servos to drive the pan-tilt mechanism, he came up with a small device which [Carolyn] can now use to control a Canon camera mounted to an arm on her wheelchair. There’s still some room for improvement (notably, the focus can’t currently be controlled by voice), but even in this early form the gadget has caught the attention of Canon’s Canadian division.

With a lavalier microphone on the operator’s shirt, simple voice commands like “right” and “left” are picked up and interpreted by the Arduino inside the device’s 3D printed case. The Arduino then moves the appropriate servo motor a set number of degrees. This doesn’t allow for particularly fine-tuned positioning, but when combined with movements of the wheelchair itself, gives the user an acceptable level of control. [Taras] says the whole setup is powered off of the electric wheelchair’s 24 VDC batteries, with a step-down converter to get it to a safe voltage for the Arduino and servos.
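The move-a-fixed-step-per-command approach is simple enough to sketch out. The snippet below assumes the recognized words arrive as single characters over a serial link (say, from a voice-recognition module); it illustrates the idea rather than copying [Taras]’s firmware, and the pin, step size, and limits are made up.

```cpp
// Hypothetical Arduino sketch: nudge a pan servo a fixed number of degrees
// per "left"/"right" command received over serial. All values are assumptions.
#include <Servo.h>

Servo panServo;
const int PAN_PIN  = 9;      // servo signal pin (assumed)
const int STEP_DEG = 10;     // degrees moved per command (assumed)
int panAngle = 90;           // start centered

void setup() {
  Serial.begin(9600);
  panServo.attach(PAN_PIN);
  panServo.write(panAngle);
}

void loop() {
  if (Serial.available()) {
    char cmd = Serial.read();
    if (cmd == 'L') panAngle = constrain(panAngle + STEP_DEG, 0, 180);
    if (cmd == 'R') panAngle = constrain(panAngle - STEP_DEG, 0, 180);
    panServo.write(panAngle);
  }
}
```

Coarse as fixed steps are, combined with repositioning the wheelchair itself they evidently give enough control to frame a shot.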

As we’ve seen over the years, assistive technology is one of those areas where hackers seem to have a knack for making serious contributions to the lives of others (and occasionally even themselves). The highly personalized nature of many physical disabilities, with specific issues and needs often unique to the individual, can make it difficult to develop devices like this commercially. But as long as hackers are willing to donate their time and knowledge to creating bespoke assistive hardware, there’s still hope.

Continue reading “Voice Controlled Camera For Journalist In Need”

Seeing Like Bees With Ultraviolet Photography

When it comes to seeing in strange spectrums, David Prutchi is the guy you want to talk to. He’s taken pictures of rocks under long-, medium-, and short-wave UV light, he’s added thermal imaging to consumer cameras, and he’s made cameras see polarization. There’s a lot more to the world than what the rods and cones on your retina can see, and David is one of the best at revealing it. For this year’s talk at the Hackaday Superconference, David is talking about DIY Ultraviolet Photography. It’s how bees see, and it’s the bee’s knees.

Continue reading “Seeing Like Bees With Ultraviolet Photography”

Arduino One Pixel Camera Sees All (Eventually)

Taking pictures in the 21st century is incredibly easy. So easy in fact that most people don’t even own a dedicated camera; from smartphones to doorbells, there are cameras built into nearly every electronic device we own. So in this era of ubiquitous photography, you might think that a very slow and extremely low resolution camera wouldn’t be of interest. Under normal circumstances that’s probably true, but this single pixel camera built by [Tucker Shannon] is anything but normal.

Continue reading “Arduino One Pixel Camera Sees All (Eventually)”

Supercon: Alex Hornstein’s Adventures In Hacking The Lightfield

We are all familiar with the idea of a hologram, either from the monochromatic laser holographic images you’ll find on your bank card or from fictional depictions such as Princess Leia’s distress message from Star Wars. And we’ve probably read about how the laser holograms work, with a split beam of coherent light recombined to fall upon a photographic plate. They require no special glasses or headsets and possess both stereoscopic and spatial 3D rendering, in that you can view both the 3D Princess Leia and your bank’s logo or whatever is on your card as 3D objects from multiple angles. So we’re all familiar with that holographic end product, but what we probably aren’t so familiar with is what they represent: the capture of a light field.

In his Hackaday Superconference talk, co-founder and CTO of holographic display startup Looking Glass Factory Alex Hornstein introduced us to the idea of the light field, and how its capture is key to understanding the mechanics of a hologram.

Capturing the light field with a row of GoPro cameras.

His first point is an important one: he expands the definition of a hologram from its conventional form, one of those monochromatic laser-interference photographic images, to any technology that captures a light field. This is, he concedes, a contentious barrier to overcome. To do that he first has to explain what a light field is.

When we take a 2D photograph, we capture all the rays of light that are incident upon something that is a good approximation to a single point, the lens of the camera involved. The scene before us has of course countless other rays that are incident upon other points or that are reflected from surfaces invisible from the single point position of the 2D camera. It is this complex array of light rays which makes up the light field of the image, and capturing it in its entirety is key to manipulating the result. This is true no matter the technology used to bring it to the viewer. A light field capture can be used to generate variable focus 2D images after the fact as is the case with the Lytro cameras, or it can be used to generate a hologram in the way that he describes.
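As a toy illustration of that refocus-after-the-fact trick, the classic approach with a row of evenly spaced viewpoints (a camera array, or one webcam slid along a rail) is shift-and-add refocusing: shift each view in proportion to its position and average the stack, so whatever sits at the chosen depth lines up and stays sharp while everything else blurs. The sketch below does this with OpenCV; the file names, view count, and shift factor are made-up placeholders.

```cpp
// Hypothetical shift-and-add refocusing over a row of images from a rail.
// All file names and constants are placeholders for illustration.
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    const int    numViews   = 9;
    const double pixPerView = 4.0;   // horizontal shift per view; selects the focal plane

    cv::Mat accum;
    for (int i = 0; i < numViews; ++i) {
        cv::Mat img = cv::imread("view_" + std::to_string(i) + ".png");
        if (img.empty()) return 1;

        // Shift each view relative to the centre of the rail.
        double shift = (i - numViews / 2) * pixPerView;
        cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, shift, 0, 1, 0);
        cv::Mat shifted;
        cv::warpAffine(img, shifted, M, img.size());

        cv::Mat f;
        shifted.convertTo(f, CV_32FC3);
        if (accum.empty()) accum = f; else accum += f;
    }

    accum /= numViews;               // average the aligned stack
    cv::Mat out;
    accum.convertTo(out, CV_8UC3);
    cv::imwrite("refocused.png", out);
    return 0;
}
```

Change `pixPerView` and the plane of sharp focus moves, which is the same kind of after-the-fact refocusing the Lytro cameras offer.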

One possible future use of the technology, a virtual holographic aquarium.

The point of his talk is that complex sorcery isn’t required to capture a light field, something he demonstrates in front of the audience with a volunteer and a standard webcam on a sliding rail. Multiple 2D images are taken at different points, which can be combined to form a light field. That not every component of the light field has been captured matters less than having enough of it to create the holographic image from the point of view of the display. And since he happens to be head honcho at a holographic display company, he can show us the result. Looking Glass Factory’s display panel uses a lenticular lens to combine the multiple images into a hologram, and is probably one of the most inexpensive ways to practically display this type of image.

Since the arrival of the Lytro cameras a year or two ago, the concept of a light field is one that has been in the air, though it has more often been shrouded in proprietary marketing woo. This talk breaks through that to deliver a clear explanation of the subject, and is a fascinating watch. Alex leaves us with news of some of the first light-field-derived video content being put online and with some decidedly science-fiction possible futures for the technology. Even if you aren’t planning to work in this field, you will almost certainly encounter it over the next few years.

Continue reading “Supercon: Alex Hornstein’s Adventures In Hacking The Lightfield”