CES2017: Astrophotography In The Eyepiece

If you’ve never set up a telescope in your back yard, you’ve never been truly disappointed. The Hubble can take some great shots of Saturn, nebulae, and other astronomical phenomena, but even an expensive backyard scope produces only smudges. To do astronomy properly, you’ll spend your time huddled over a camera and a computer, stacking images to produce something that almost lives up to your expectations.

At CES, Unistellar introduced a device designed to fit over the eyepiece of a telescope to do all of this for you.

According to the guys at Unistellar, this box contains a small Linux computer, camera, GPS, and an LCD. Once the telescope is set up, the module takes a few pictures of the telescope’s field of view, stacks the images, and overlays the result in the eyepiece. Think of this as ‘live’ astrophotography.
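
We can guess roughly how the stacking half works, since it's a standard trick: take a burst of short exposures and average them so random sensor noise cancels out. Here's a minimal sketch of that idea in Python with OpenCV; the camera index and frame count are placeholders, and Unistellar's actual pipeline hasn't been published:

```python
import cv2
import numpy as np

def stack_frames(camera_index=0, n_frames=20):
    """Average several short exposures so faint detail rises above the sensor noise."""
    cap = cv2.VideoCapture(camera_index)
    accumulator, captured = None, 0
    for _ in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            break
        frame = frame.astype(np.float32)
        accumulator = frame if accumulator is None else accumulator + frame
        captured += 1
    cap.release()
    # The mean of many noisy frames approximates a longer, cleaner exposure
    return (accumulator / captured).clip(0, 255).astype(np.uint8)

if __name__ == "__main__":
    cv2.imwrite("stacked.png", stack_frames())
```

A real pipeline would also register (align) the frames before averaging, since the sky drifts between exposures unless the mount tracks perfectly.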

In addition to making Jupiter look less like a Great Red Smudge, the Unistellar module adds augmented reality; it knows where the telescope is pointing and will add a label if you’re looking at any astronomical objects of note.

While I wasn’t able to take a look inside this extremely cool device, the Unistellar guys said they’ll be launching a crowdfunding campaign in the near future.

The Story of Kickstarting the OpenMV

Robots are the ‘it’ thing right now, computer vision is a hot topic, and microcontrollers have never been faster. These facts lead inexorably to the OpenMV, an embedded computer vision module that bills itself as the ‘Arduino of Machine Vision.’

The original OpenMV was an entry for the first Hackaday Prize, and since then the project has had a lot of success. There are tons of followers, plenty of users, and the project even had a successful Kickstarter. That last bit of info is fairly contentious — while the Kickstarter did meet the minimum funding level, there were a lot of problems bringing this very cool product to market. Issues with suppliers and community management were the biggest problems, but the team behind OpenMV eventually pulled it off.

At the 2016 Hackaday SuperConference, Kwabena Agyeman, one of the project leads for the OpenMV, told the story of bringing the OpenMV to market:

Continue reading “The Story of Kickstarting the OpenMV”

Simon Says Smile, Human!

The bad news is that when our robot overlords come to oppress us, they’ll be able to tell how well they’re doing just by reading our facial expressions. The good news? Silly computer-vision-enhanced party games!

[Ricardo] wrote up a quickie demonstration, mostly powered by OpenCV and Microsoft’s Emotion API, that scores your ability to mimic emoticon faces. So when you get shown a devil-with-devilish-grin image, you’re supposed to make the same face convincingly enough to fool a neural network classifier. And hilarity ensues!
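
[Ricardo]'s write-up has the details, but the general flow is easy to imagine: grab a webcam frame with OpenCV, ship it off to a cloud emotion-recognition service, and compare the returned scores against the face you were asked to pull. The endpoint URL, header, and response format below are illustrative guesses rather than the real Emotion API contract:

```python
import cv2
import requests

EMOTION_ENDPOINT = "https://example.com/emotion/recognize"  # placeholder, not the real API URL
API_KEY = "your-key-here"

def score_face(target_emotion="anger"):
    """Capture one webcam frame and ask a cloud service how well it matches the target emotion."""
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read from webcam")
    _, jpeg = cv2.imencode(".jpg", frame)
    resp = requests.post(
        EMOTION_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": API_KEY,
                 "Content-Type": "application/octet-stream"},
        data=jpeg.tobytes(),
    )
    resp.raise_for_status()
    # Assumed response shape: a list of faces, each with an emotion -> confidence dict
    scores = resp.json()[0]["scores"]
    return scores.get(target_emotion, 0.0)

if __name__ == "__main__":
    print("Devilish-grin score:", score_face("happiness"))
```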

Continue reading “Simon Says Smile, Human!”

Protecting Your Home Against Potato Invaders

Not sure where the potatoes were sneaking in, [24Gospel] did what any decent hacker would do: strapped a camera to a Raspberry Pi, hacked a bit on OpenCV, and built himself a potato detection system. Now those pesky Russets can’t get into the house without tripping the tuber alarm.


OK, seriously. [24Gospel] works for a potato farm as a systems/software developer. (How big does a potato farm have to be to require a dedicated software guy?) His system is still a first step, but the goal is to grade the potatoes, record data about size and defects, and even tell different potato types apart. And he’s found decent success so far, especially for the money. We don’t often build projects that need to operate in hostile environments, but we appreciate the nice plastic case and rugged adjustable steel frame that supports the Pi and camera over the sorting bed.
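
The grading step is classic OpenCV territory: segment the bright potatoes from the darker belt, then measure each blob. Here's a rough sketch of how that might look; the calibration factor, thresholds, and size classes are invented for illustration and aren't [24Gospel]'s values:

```python
import cv2

PIXELS_PER_MM = 2.5  # hypothetical calibration from camera height over the sorting bed

def grade_potatoes(image_path):
    """Segment potatoes from the belt and report an approximate size grade for each one."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding assumes the potatoes are brighter than the belt background
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    grades = []
    for c in contours:
        if cv2.contourArea(c) < 500:        # ignore dirt and specks
            continue
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        length_mm = max(w, h) / PIXELS_PER_MM
        grades.append("large" if length_mm > 75 else "medium" if length_mm > 45 else "small")
    return grades

if __name__ == "__main__":
    print(grade_potatoes("sorting_bed.jpg"))
```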

Even more, we applaud the hacker spirit here. [24Gospel] is obviously working in a serious production environment, but still he’s trying out new things in an attempt to make it work better. While it would be impossible to quantify the impact of this kind of on-the-job ingenuity, we bet it’s not insignificant. Why don’t we see more documented workplace hacks around here? Would the unsung heroes please stand up?

[via /r/raspberry_pi]

Counting Eggs With A Webcam

You’ll have to dig out your French dictionary (or Google Translate) for this one, but it is worth it. [Nicolas Giraud] has been experimenting with ways to use a webcam to detect the number of eggs chickens have laid in a chicken coop. This page documents these experiments using a number of different algorithms to automatically detect the number of eggs and notify the owner. The system is simple, built around a Pi running Debian Jessie Lite and a cheap USB webcam. An LED running off one of the GPIO pins illuminates the eggs, and the camera then captures the image for analysis.
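
The recipe is easy enough to reproduce in your own coop. A minimal sketch of the LED-plus-threshold approach might look like this; the GPIO pin, thresholds, and blob size are guesses rather than [Nicolas]'s actual values:

```python
import time
import cv2
import RPi.GPIO as GPIO

LED_PIN = 18  # hypothetical GPIO pin driving the illumination LED

def count_eggs():
    """Light the nest box, grab a frame, and count bright egg-sized blobs."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)
    GPIO.output(LED_PIN, GPIO.HIGH)
    time.sleep(0.5)                      # let the camera adjust to the light

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    GPIO.output(LED_PIN, GPIO.LOW)
    GPIO.cleanup()
    if not ok:
        return 0

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)    # eggs show up as bright blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) > 1000)  # skip small specks

if __name__ == "__main__":
    print("Eggs in the nest box:", count_eggs())
```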

Continue reading “Counting Eggs With A Webcam”

NASA Knows Where the Meteors Are

NASA has been tracking bright meteoroids (“fireballs”) using a distributed network of video cameras pointed upwards. And while we usually think of NASA in the context of multi-bazillion-dollar rocket ships, this operation is clearly shoestring. This is a hack worthy of Hackaday.


The basic idea is that with many wide-angle video cameras capturing the night sky, and a little bit of image processing, identifying meteoroids in the night sky should be fairly easy. When enough cameras capture the same meteoroid, one can use triangulation to back out the path of the meteoroid in 3D, estimate its mass, and more. It’s surprising how many there are to see on any given night.
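
The per-camera detection step is something you could prototype in an afternoon: difference consecutive frames and flag anything bright that wasn't there a moment ago, since the stars stay put while a fireball streaks. A hedged sketch with OpenCV (the thresholds are arbitrary, and this isn't how ASGARD itself works):

```python
import cv2

def detect_fireballs(video_path, brightness_threshold=40, min_area=30):
    """Flag frames where a bright transient appears against an otherwise static sky."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return []
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    hits, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)  # static stars cancel out, moving streaks remain
        _, mask = cv2.threshold(diff, brightness_threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if any(cv2.contourArea(c) > min_area for c in contours):
            hits.append(frame_idx)
        prev = gray
    cap.release()
    return hits

if __name__ == "__main__":
    print("Candidate fireball frames:", detect_fireballs("night_sky.mp4"))
```

Triangulating the 3D path is the harder part; that needs time-synchronized detections from at least two stations with known positions.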

You can watch the videos of a meteoroid event from any camera, watch the cameras live, and even download the meteoroid’s orbital parameters. We’re bookmarking this website for the next big meteor shower.

The work is apparently based on [Rob Weryk]’s ASGARD system, for which the code is unfortunately unavailable. But it shouldn’t be all that hard to hack something together with a single-board computer, camera, and OpenCV. NASA’s project is limited to the US so far, but we wonder how much more data could be collected with a network of cameras all over the globe. So which of you are going to take up our challenge? Build your own version and let us know about it!

Between this project and the Radio Meteor Zoo, we’re surprised at how much public information there is out there about the rocky balls of fire that rain down on us every night, and will eventually be responsible for our extinction. At least we can be sure we’ll get it on film.

Hallucinating Machines Generate Tiny Video Clips

Hallucination is the erroneous perception of something that’s actually absent – or, in other words, a possible interpretation of training data. Researchers from MIT and UMBC have developed and trained a generative machine-learning model that learns to generate tiny videos at random. The hallucination-like, 64×64 pixel clips are somewhat plausible, but also a bit spooky.

The machine-learning model behind these artificial clips is capable of learning from unlabeled “in-the-wild” training videos and relies mostly on the temporal coherence of subsequent frames as well as the presence of a static background. It learns to disentangle foreground objects from the background and extracts the overall dynamics from the scenes. The trained model can then be used to generate new clips at random (as shown above), or from a static input image (as shown in pairs below).
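
The foreground/background split boils down to a simple compositing equation: the generator produces a moving foreground stream, a per-pixel mask, and a single static background, and blends them frame by frame. A toy numpy version of that blend (shapes and names are ours, not the paper's notation):

```python
import numpy as np

def compose_video(foreground, mask, background):
    """Blend a generated moving foreground over a static background, per the two-stream idea.

    foreground: (frames, height, width, 3) generated foreground pixels
    mask:       (frames, height, width, 1) values in [0, 1] choosing foreground vs. background
    background: (height, width, 3) single static image broadcast across all frames
    """
    return mask * foreground + (1.0 - mask) * background

# Toy example with random tensors at the paper's 32-frame, 64x64 resolution
fg = np.random.rand(32, 64, 64, 3)
m = np.random.rand(32, 64, 64, 1)
bg = np.random.rand(64, 64, 3)
print(compose_video(fg, m, bg).shape)  # (32, 64, 64, 3)
```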

Currently, the team limits the clips to a resolution of 64×64 pixels and 32 frames in duration in order to decrease the amount of required training data, which is still at 7 TB. Despite obvious deficiencies in terms of photorealism, the little clips have been judged “more realistic” than real clips by about 20 percent of the participants in a psychophysical study the team conducted. The code for the project (Torch7/LuaJIT) can already be found on GitHub, together with a pre-trained model. The project will also be shown in December at the 2016 NIPS conference.