Ecology is a strange discipline. At its most basic, it’s the study of how living things interact with their environment. It doesn’t so much seek to explain how life works as how lives work together. A guiding principle of ecology is that life finds a way to exploit niches, subregions within the larger world with a particular mix of resources and challenges. It’s all quite fascinating.
But what does ecology have to do with Luka Mustafa’s talk at the 2018 Hackaday Belgrade Conference? Everything, as it turns out, and not just because Luka and his colleagues put IoT tools on animals and in their environments to measure and monitor them. It’s also that Luka has found a fascinating niche of his own to exploit, one on the edge of technology and ecology. As CEO of Institute IRNAS, a non-profit technology development group in Slovenia, Luka has leveraged his MEng degree, background in ham radio, and interest in LoRaWAN and other wide-area radio networks to explore ecological niches in ways that would have been unthinkable even 10 years ago, let alone in the days when animal tracking was limited by bulky radio collars.
Continue reading “Hackaday Belgrade: Luka Mustafa on Exploiting IoT Niches”
Getting a good measurement is a matter of using the right tool for the job. A tape measure and a caliper are both useful tools, but they’re hardly interchangeable for every task. Some jobs call for a hands-off, indirect way to measure small distances, which is where this image analysis measuring technique can come in handy.
Although it appears [Saulius Lukse] purpose-built this rig, which consists of a microscopic lens on a digital camera mounted to the Z-axis of a small CNC machine, we suspect that anything capable of accurately and smoothly translating a camera vertically could be used. The idea is simple: the height of the camera over the object to be measured is increased in fine increments, with an image acquired in OpenCV at each stop. The variance of the Laplacian is computed to assess the sharpness of each image, which when plotted against the frame number shows peaks where the image is most in focus. If you know the distance the lens traveled between peaks, you can estimate the height of the object. [Saulius] measured a coin using this technique and it was spot on compared to a caliper. We could see this method being useful for getting an accurate vertical profile of a more complex object.
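The sharpness-vs-height trick is easy to sketch. Here’s a minimal NumPy version of the idea (in a real pipeline you’d let `cv2.Laplacian` do the convolution; the step size and the crude two-peak picker below are our simplifying assumptions, not details from [Saulius]’s code):

```python
import numpy as np

def sharpness(gray):
    """Focus metric: variance of the discrete Laplacian.
    (cv2.Laplacian(img, cv2.CV_64F).var() computes the same thing in OpenCV.)"""
    g = gray.astype(float)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()

def object_height(scores, step_mm):
    """Estimate object height from per-frame focus scores taken at evenly
    spaced camera positions: the travel between the two sharpest frames
    (table surface in focus vs. top of object in focus).
    Crude: assumes the two highest scores belong to distinct peaks."""
    order = np.argsort(scores)[::-1]
    a, b = sorted(order[:2])
    return (b - a) * step_mm
```

With a 0.05 mm Z step, two focus peaks five frames apart would put the object at 0.25 mm tall; a proper implementation would find local maxima rather than just the two top scores.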
From home-brew lidar to detecting lightning in video, [Saulius] has an interesting skill set at the intersection of optics and electronics. We’re looking forward to what he comes up with next.
Those of you who’ve never had a real sourdough have never had real bread. Good food fights back a little when you eat it, and a proper sourdough, with its crispy crust and tangy center, certainly fits the bill. Sourdough aficionados, your humble writer included, all have recipes that we pretend are ancient family secrets while in reality we’re all just guessing. Sourdough is partly science, partly art, but mostly delicious black magic.
In an effort to demystify his sourdough process, [Justin Lam] has gone digital with this image processing sourdough starter monitor. Sourdough breads are leavened not by the addition of brewer’s yeast (Saccharomyces cerevisiae), but by the inclusion of a starter, a vibrant ecosystem of wild yeasts that is carefully nurtured, sometimes for years. Like any other living thing, it needs to be fed, a task that should happen at the point of maximum fermentation. Rather than guess when this might be, [Justin] used a Raspberry Pi Zero and PiCam to capture a time-lapse video of the starter as the beasties within give off their CO₂, causing it to expand up inside its container. A little Python does the work of thresholding and finding the top of the starter as it rises, allowing [Justin] to plot the height of the starter over time. He found that peak height, and therefore peak fermentation, occurs about six hours after feeding. He has used his data to better inform his feeding schedule and to learn how best to revive neglected starters.
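The threshold-and-find-the-top step might look something like this sketch (the threshold value, the bright-starter-on-dark-background assumption, and the 50% row-coverage cutoff are our guesses, not details from [Justin]’s code):

```python
import numpy as np

def starter_height_px(gray, thresh=128, min_frac=0.5):
    """Height of the starter in pixels: find the topmost row in which at
    least min_frac of the pixels are brighter than thresh (pale starter
    against a darker background), then measure down to the bottom of the
    frame. Threshold values here are illustrative guesses."""
    bright = gray > thresh
    rows = np.nonzero(bright.mean(axis=1) >= min_frac)[0]
    return 0 if rows.size == 0 else gray.shape[0] - rows[0]
```

Run that over every time-lapse frame, and the frame index with the maximum height (`np.argmax` over the series) marks peak fermentation, the six-hour mark [Justin] found.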
Surprisingly, this isn’t the first time we’ve discussed sourdough here. It seems that someone uses Git for iterative sourdough recipe development, and we once featured a foundry made from a pyrolyzed loaf of sourdough.
Continue reading “Raspberry Pi Tracks Starter Fermentation For Optimized Sourdough”
Things rarely go well when humans mix with wildlife. The problems are exacerbated in the suburbs, where bears dine on bird feeders and garbage cans, raccoons take up residence in attics, and coyotes make off with the family cat. And in the suburbs, nuisance wildlife can be an intractable problem because the options for dealing with it are so limited.
Not to be deterred in the battle to protect his roses, [dlf.myyta] built this motion-activated sentry gun to apply some watery aversion therapy to marauding deer. Shown in action below against a bipedal co-conspirator, the sentry gun has pretty much what you’d expect under the hood — Raspberry Pi, NoIR camera, a servo for aiming and a solenoid valve to control the water. OpenCV takes care of locating the intruders and swiveling the nozzle to center mass; since the deer are somewhat constrained by a fence, there’s no need to control the nozzle’s elevation. Everything is housed nicely in a plastic ammo can for portability and waterproofing. Any target that stands still for more than three seconds gets a hosing; we assume this is effective, but alas, no snuff films were provided.
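The detect-then-aim step reduces to finding the centroid of the motion pixels and mapping its x coordinate to a servo angle. A minimal sketch, assuming the camera and nozzle share an axis and a 60° lens field of view (both assumptions of ours, not details from the build):

```python
import numpy as np

def target_centroid(mask):
    """Centroid of the foreground pixels in a binary motion mask,
    e.g. the output of OpenCV background subtraction."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # nothing moving in frame
    return xs.mean(), ys.mean()

def pan_angle(cx, frame_width, fov_deg=60.0):
    """Map a centroid x coordinate to a pan angle relative to the frame
    center. fov_deg is an assumed horizontal field of view."""
    return (cx / frame_width - 0.5) * fov_deg
```

A timer around the centroid's position (fire only after it has held roughly still for three seconds) reproduces the dwell behavior described above.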
We’re not sure if [dlf.myyta]’s code can discern friend from foe, and in this litigious world, hosing the neighbor’s kid could be a catastrophe. Perhaps version 2.0 can include image recognition for target verification.
Continue reading “Auto-Tracking Sentry Gun Gives Deer a Super Soaking”
Some people look forward to the day when robots have taken over all our jobs and given us an economy where we can while our days away on leisure activities. But if your idea of play is drone racing, you may be out of luck if this AI pilot for high-speed racing drones has anything to say about it.
NASA’s Jet Propulsion Lab has been working for the past two years to develop the algorithms needed to let high-performance UAVs navigate typical drone racing obstacles, and from the look of the tests in the video below, they’ve made a lot of progress. The system is vision based, with the AI drones equipped with wide-field cameras looking both forward and down. The indoor test course has seemingly random floor tiles scattered around, which we guess provide some kind of waypoints for the drones. A previous video details a little about the architecture, and it seems the drones are doing the computer vision on-board, which we find pretty impressive.
Despite the program being bankrolled by Google, we’re sure no evil will come of this, and that we’ll be in no danger of being chased down by swarms of high-speed flying killbots anytime soon. For now we can take solace in the fact that JPL’s algorithms still can’t beat an elite human pilot like [Ken Loo], who bested the bots overall. But alarmingly, the human did no better than the bots on his first lap, which suggests that once the AI gets a little creativity and intuition like that needed to best a Go champion, [Ken] might need to find another line of work.
Continue reading “High-Speed Drones Use AI to Spoil the Fun”
They say the eyes are the windows to the soul. But with a new smartphone app, the eyes may be a diagnostic window into the body, one that might be used to catch a horrible disease early — pancreatic cancer. A research team at the University of Washington led by [Alex Mariakakis] recently described what they call “BiliScreen,” a smartphone app to detect pancreatic disease by imaging a patient’s eyes.
Pancreatic cancer is particularly deadly because it remains asymptomatic until it’s too late. One early symptom is jaundice, a yellow-green discoloration of the skin and the whites of the eyes as the blood pigment bilirubin accumulates in the body. By the time enough bilirubin accumulates to be visible to the naked eye, things have generally progressed to the inoperable stage. BiliScreen captures images of the eyes and uses image analysis techniques to detect jaundice long before anyone would notice. To control lighting conditions, a 3D-printed mask similar to Google’s Cardboard can be used; there’s also a pair of glasses that look like something from [Sir Elton John]’s collection that can be used to correct for ambient lighting. Results look promising so far, with BiliScreen correctly identifying elevated bilirubin levels 90% of the time, as compared to later blood tests. Their research paper has all the details (PDF link).
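The color-space intuition behind this kind of screening is simple, even if the calibrated model in the paper is not: bilirubin shifts the sclera toward yellow, which in RGB terms means the blue channel sags below red and green. This toy metric illustrates the idea only; it is emphatically not the BiliScreen algorithm, and the sclera mask is assumed to come from elsewhere:

```python
import numpy as np

def sclera_yellowness(rgb, mask):
    """Crude yellowness score for masked sclera pixels: how far the blue
    channel falls below the red/green average, normalized to [0, 1] for
    bright pixels. An illustrative stand-in, not a clinical measure."""
    px = rgb[mask].astype(float)       # (N, 3) array of sclera pixels
    rg = px[:, :2].mean()              # mean of red and green channels
    b = px[:, 2].mean()                # mean blue channel
    return (rg - b) / max(rg, 1e-9)
```

A pure white sclera scores 0; a strongly yellowed one approaches 1. The real system's job is the hard part: isolating sclera pixels reliably and calibrating scores against measured bilirubin levels, which is what the mask and glasses are for.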
Tools like BiliScreen could really make a difference in the early diagnosis and prevention of diseases. For an even less intrusive way to intervene in disease processes early, we might also be able to use WiFi to passively detect Parkinson’s.
Continue reading “Detecting Dire Diseases – with a Selfie?”
Ever wonder what’s inside a surface-mount inductor? Wonder no more as you watch this SMT inductor teardown video.
“Teardown” isn’t really accurate here, at least by the standard of [electronupdate]’s other component teardowns, like his looks inside LED light bulbs and das blinkenlights. “Rubdown” is more like it, because what starts out as a rather solid looking SMT component needs to be ground down bit by bit to reveal the inner ferrite and copper goodness. [electronupdate] embedded the R30 SMT inductor in epoxy and hand lapped the whole thing until the windings were visible. Of course, just peeking inside is never enough, so he set about analyzing the inductor’s innards. Using a little careful macro photography and some simple image analysis, he verified the component’s data sheet claims; as an aside, is anyone else surprised that a tiny SMT component can handle 30 amps?
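To put that 30 A figure in perspective, a quick current-density calculation is all it takes. The 1.2 mm conductor diameter below is a made-up example for illustration, not the dimension measured in the video:

```python
import math

def current_density(current_a, wire_diameter_mm):
    """Current density (A/mm^2) in a round conductor of the given diameter."""
    area_mm2 = math.pi * (wire_diameter_mm / 2.0) ** 2
    return current_a / area_mm2
```

For a hypothetical 1.2 mm winding, `current_density(30, 1.2)` works out to roughly 27 A/mm², far beyond the few A/mm² rule of thumb for ordinary hookup wire. Short, heavy windings bonded straight to big solder pads are what make those ratings plausible.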
Looking for more practical applications for decapping components? How about iPhone brain surgery?
Continue reading “What Lies Within: SMT Inductor Teardown”