Multi-Year Doorbell Project

Camera modules for the Raspberry Pi became available shortly after its release in the early ’10s. Since then there has been about a decade of projects eschewing traditional USB webcams in favor of this more affordable, versatile option. Despite all that development time there are still some hurdles to overcome, and [Esser50k] has some supporting software to drive a smart doorbell which helps to solve some of them.

One of the major obstacles to using the Pi camera module is that it can only be used by one process at a time. The PiChameleon software that [Esser50k] built is a clever workaround for this, which runs the camera as a service and allows for more flexibility in using the camera. He uses it in the latest iteration of a smart doorbell and intercom system, which uses a Pi Zero in the outdoor unit armed with motion detection to alert him to visitors, and another Raspberry Pi inside with a touch screen that serves as an interface for the whole system.
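
PiChameleon itself is the place to look for the real implementation, but the general pattern of wrapping the camera in a single service that hands out frames to any number of clients might look something like this rough Python sketch, where the port number and the use of the picamera library are assumptions rather than details of [Esser50k]'s code:

```python
# Rough sketch of a "camera as a service" pattern: one process owns the
# Pi camera and streams JPEG frames to any client that connects, so the
# single-consumer limitation of the camera module no longer matters.
# The port and the use of picamera are assumptions, not PiChameleon's API.
import io
import socket
import struct
import threading

import picamera  # legacy Pi camera library; picamera2 would work similarly


def serve_client(conn, camera, lock):
    """Send a continuous stream of length-prefixed JPEG frames to one client."""
    stream = io.BytesIO()
    try:
        while True:
            with lock:  # only one capture at a time on the shared camera
                camera.capture(stream, format="jpeg", use_video_port=True)
            frame = stream.getvalue()
            conn.sendall(struct.pack(">I", len(frame)) + frame)
            stream.seek(0)
            stream.truncate()
    except (BrokenPipeError, ConnectionResetError):
        conn.close()


def main():
    camera = picamera.PiCamera(resolution=(640, 480), framerate=15)
    lock = threading.Lock()
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 5000))
    server.listen()
    while True:
        conn, _ = server.accept()
        threading.Thread(target=serve_client, args=(conn, camera, lock),
                         daemon=True).start()


if __name__ == "__main__":
    main()
```

Clients then just connect to the socket and read length-prefixed JPEG frames, so the motion detector, the intercom display, and anything else can all share the one camera.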

The entire build process over the past few years was rife with learning opportunities, from technical design problems to plenty of user errors that caused failures. Some extra features have been added along the way that enhance the experience, such as automatically talking to strangers passing by. There are other unique ways of using machine learning on doorbells too, like this one that listens for a traditional doorbell sound and then alerts its user.

Continue reading “Multi-Year Doorbell Project”

Automated Drone Takes Care Of Weeds

Commercial industrial agriculture is responsible for providing food to the world’s population at an incredibly low cost, especially when compared to most of human history, when the majority of people were involved in agriculture. Now it’s a tiny fraction of humans that need to grow food, while the rest can spend their time in cities and towns largely divorced from the need to produce their own food to survive. But industrial agriculture isn’t without its downsides. Providing inexpensive food to the masses often involves farming practices that are damaging to the environment, whether that’s spreading huge amounts of synthetic, non-renewable fertilizers or blanket spraying crops with pesticides and herbicides. [NathanBuildsDIY] is tackling the latter problem, using an automated drone system to systematically target weeds and reduce his herbicide use.

The specific issue that [NathanBuildsDIY] is faced with is an invasive blackberry that is taking over one of his fields. To take care of it, he set up a drone with a camera and image recognition software which can autonomously fly over the field thanks to Ardupilot and a LiDAR system, differentiate the blackberry weeds from other non-harmful plants, and give them a spray of herbicide. Since drones can’t fly indefinitely, he’s also built an automated landing pad complete with a battery swap and recharge station, which allows the drone to fly essentially until it is turned off while using a minimum of herbicide in the process.
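
[NathanBuildsDIY]'s own code is the place to look for the real implementation, but the core of the detect-then-spray idea is simple: classify each camera frame and only open the sprayer valve when a blackberry is recognized with high confidence. A minimal sketch of that loop might look like the following, where the model file, class labels, and GPIO pin are placeholders rather than details from the build:

```python
# Sketch of a detect-then-spray loop: classify each camera frame and pulse
# the sprayer valve only when the "blackberry" class wins with high
# confidence. Model path, labels, and GPIO pin are illustrative guesses.
import time

import cv2
import numpy as np
import RPi.GPIO as GPIO
from tflite_runtime.interpreter import Interpreter

SPRAYER_PIN = 17                       # hypothetical valve relay pin
LABELS = ["background", "blackberry"]  # hypothetical class order
THRESHOLD = 0.8

GPIO.setmode(GPIO.BCM)
GPIO.setup(SPRAYER_PIN, GPIO.OUT, initial=GPIO.LOW)

interpreter = Interpreter(model_path="weed_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the model's expected input and normalize to [0, 1]
    h, w = inp["shape"][1], inp["shape"][2]
    img = cv2.resize(frame, (w, h)).astype(np.float32) / 255.0
    interpreter.set_tensor(inp["index"], img[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    if LABELS[int(np.argmax(scores))] == "blackberry" and scores.max() > THRESHOLD:
        GPIO.output(SPRAYER_PIN, GPIO.HIGH)   # short burst of herbicide
        time.sleep(0.5)
        GPIO.output(SPRAYER_PIN, GPIO.LOW)
```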

The entire setup, including drone and landing pad, cost less than $2000 and is largely open-source, which makes it accessible even for small-scale farmers. A depressing trend in farming is that the tools to make the work profitable are often only attainable for the largest, most corporate of farms. But a system like this is much more feasible for those working on a smaller scale, and the automation easily frees up time that the farmer can use for other work. There are other ways of automating farm work besides using drones, though. Take a look at this open-source robotics platform that drives its way around the farm instead of flying.

Thanks to [PuceBaboon] for the tip!

Continue reading “Automated Drone Takes Care Of Weeds”

Insect class-order-family-genus-species chart with drawn examples

Neural Network Identifies Insects, Outperforming Humans

There are about one million known species of insects – more than for any other group of living organisms. If you need to determine which species an insect belongs to, things get complicated quickly. In fact, distinguishing between certain closely related species can require a well-trained expert in that particular group, and experts’ time is often better spent on something else. This is where CNNs (convolutional neural networks) come in nowadays, and this paper describes a CNN doing just as well as, if not better than, human experts.
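
The paper is the place to go for the actual architecture and training details; as a rough illustration of the general approach, fine-tuning an ImageNet-pretrained backbone on labeled insect photos might look something like this PyTorch sketch, where the folder layout and class count are placeholders:

```python
# Minimal transfer-learning sketch: reuse an ImageNet-pretrained backbone
# and retrain only the final layer to separate insect taxa. The folder
# layout and class count are placeholders, not details from the paper.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 361  # placeholder: one output per taxon in the training set

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("insects/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```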

Continue reading “Neural Network Identifies Insects, Outperforming Humans”

A man performing push-ups in front of a PC

Machine Learning Helps You Get In Shape While Working A Desk Job

Humans weren’t made to sit in front of a computer all day, yet for many of us that’s how we spend a large part of our lives. Of course we all know that it’s important to get up and move around every now and then to stretch our muscles and get our blood flowing, but it’s easy to forget if you’re working towards a deadline. [Victor Sonck] thought he needed some reminders — as well as some not-so-gentle nudging — to get into the habit of doing a quick workout a few times a day.

To this end, he designed a piece of software that would lock his computer’s screen and only unlock it if he performed five push-ups. Locking the screen on his Linux box was as easy as sending a command through the network, but recognizing push-ups was a harder task for which [Victor] decided to employ machine learning. A Raspberry Pi with a webcam attached could do the trick, but the limited processing power of the Pi’s CPU might prove insufficient for processing lots of raw image data.
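
The write-up doesn't spell out exactly what that network command is, but assuming key-based SSH access and a systemd-based desktop, something as small as this sketch could do it:

```python
# One possible way to lock a remote Linux desktop over the network,
# assuming key-based SSH access and a systemd login session.
# The hostname is a placeholder, not [Victor]'s actual setup.
import subprocess


def lock_remote_screen(host="victors-desktop.local"):
    """Ask the remote machine to lock all of its active sessions."""
    subprocess.run(["ssh", host, "loginctl", "lock-sessions"], check=True)


if __name__ == "__main__":
    lock_remote_screen()
```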

[Victor] therefore decided on using a Luxonis OAK-1, which is a 4K camera with a built-in machine-learning processor. It can run various kinds of image recognition systems including Blazepose, a pre-trained model that can recognize a person’s pose from an image. The OAK-1 uses this to send out a set of coordinates that describe the position of a person’s head, torso and limbs to the Raspberry Pi through a USB interface. A second machine-learning model running on the Pi then analyzes this dataset to recognize push-ups.
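
[Victor]'s second model is his own, but to get a feel for how pose keypoints can be turned into a push-up counter, here's a simpler heuristic alternative that just watches the elbow angle computed from the shoulder, elbow, and wrist coordinates (the keypoint dictionary format here is an assumption for illustration):

```python
# Heuristic alternative to [Victor]'s second ML model: count push-ups by
# tracking the elbow angle computed from shoulder/elbow/wrist keypoints.
# The keypoint dictionary format is an assumption for illustration.
import math


def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow in degrees, from three (x, y) keypoints."""
    a = (shoulder[0] - elbow[0], shoulder[1] - elbow[1])
    b = (wrist[0] - elbow[0], wrist[1] - elbow[1])
    dot = a[0] * b[0] + a[1] * b[1]
    norm = math.hypot(a[0], a[1]) * math.hypot(b[0], b[1])
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


class PushupCounter:
    """Count a rep each time the arm goes from bent (down) to straight (up)."""

    def __init__(self, down_deg=90, up_deg=160):
        self.down_deg, self.up_deg = down_deg, up_deg
        self.down = False
        self.count = 0

    def update(self, keypoints):
        angle = elbow_angle(keypoints["left_shoulder"],
                            keypoints["left_elbow"],
                            keypoints["left_wrist"])
        if angle < self.down_deg:
            self.down = True
        elif angle > self.up_deg and self.down:
            self.down = False
            self.count += 1
        return self.count
```

Feed it the keypoints for each frame and unlock the screen once the count reaches five.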

[Victor]’s video (embedded below) is an entertaining introduction into the world of machine-learning systems for video processing, as well as a good hands-on example of a project that results in a useful tool. If you’re interested in learning more about machine learning on small platforms, check out this 2020 Remoticon talk on machine learning on microcontrollers, or this 2019 Supercon talk about implementing machine vision on a Raspberry Pi.

Continue reading “Machine Learning Helps You Get In Shape While Working A Desk Job”

Firmware Hints That Tesla’s Driver Camera Is Watching

Currently, if you want to use the Autopilot or Self-Driving modes on a Tesla vehicle you need to keep your hands on the wheel at all times. That’s because, ultimately, the human driver is still the responsible party. Tesla is adamant about the fact that functions which allow the car to steer itself within a lane, avoid obstacles, and intelligently adjust its speed to match traffic all constitute a driver assistance system. If somebody figures out how to fool the wheel sensor and take a nap while their shiny new electric car is hurtling down the freeway, they want no part of it.

So it makes sense that the company’s official line regarding the driver-facing camera in the Model 3 and Model Y is that it’s there to record what the driver was doing in the seconds leading up to an impact. As explained in the release notes of the June 2020 firmware update, Tesla owners can opt-in to providing this data:

Help Tesla continue to develop safer vehicles by sharing camera data from your vehicle. This update will allow you to enable the built-in cabin camera above the rearview mirror. If enabled, Tesla will automatically capture images and a short video clip just prior to a collision or safety event to help engineers develop safety features and enhancements in the future.

But [green], who’s spent the last several years poking and prodding at Tesla’s firmware and self-driving capabilities, recently found some compelling hints that there’s more to the story. As part of the vehicle’s image recognition system, which is usually tasked with picking up other vehicles or pedestrians, they found several interesting classes that don’t seem necessary given the official explanation of what the cabin camera is doing.

If all Tesla wanted was a few seconds of video uploaded to their offices each time one of their vehicles got into an accident, they wouldn’t need to run real-time image recognition on that feed configured to detect distracted drivers. While you could make the argument that this data would be useful to them, there would still be no reason to do it in the vehicle when it could be analyzed as part of the crash investigation. It seems far more likely that Tesla is laying the groundwork for a system that could give the vehicle another way of determining if the driver is paying attention.

Continue reading “Firmware Hints That Tesla’s Driver Camera Is Watching”

Twitter: It’s Not The Algorithm’s Fault. It’s Much Worse.

Maybe you heard about the anger surrounding Twitter’s automatic cropping of images. When users submit pictures that are too tall or too wide for the layout, Twitter automatically crops them to roughly a square. Instead of just picking, say, the largest square that’s closest to the center of the image, they use some “algorithm”, likely a neural network, trained to find people’s faces and make sure they’re cropped in.

The problem is that when a too-tall or too-wide image includes two or more people, and they’ve got different colored skin, the crop picks the lighter face. That’s really offensive, and something’s clearly wrong, but what?

A neural network is really just a mathematical equation, with the input variables being, in this case, convolutions over the pixels in the image, and training it essentially consists of picking the values for all the coefficients. You do this by applying inputs, seeing how wrong the outputs are, and updating the coefficients to make the answer a little more right. Do this a bazillion times, with a big enough model and dataset, and you can make a machine recognize different breeds of cat.
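
In code, that "see how wrong, nudge the coefficients" loop is just gradient descent. A toy version with a single linear model (plain numpy, no real convolutions, random filler data) looks like this:

```python
# Toy version of the training loop described above: apply inputs, measure
# how wrong the outputs are, and nudge every coefficient a little in the
# direction that makes the answer less wrong. The data is random filler.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # 200 samples, 10 input features
true_w = rng.normal(size=10)
y = X @ true_w + 0.1 * rng.normal(size=200)

w = np.zeros(10)                         # the coefficients we're fitting
learning_rate = 0.01

for step in range(1000):
    predictions = X @ w
    error = predictions - y              # how wrong are we?
    loss = np.mean(error ** 2)           # squared-error loss function
    gradient = 2 * X.T @ error / len(y)  # direction of steepest increase
    w -= learning_rate * gradient        # step the other way
    if step % 100 == 0:
        print(f"step {step}: loss {loss:.4f}")
```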

What went wrong at Twitter? Right now it’s speculation, but my money says it lies with either the training dataset or the coefficient-update step. The problem of including people of all races in the training dataset is so blatantly obvious that we hope that’s not the problem; although getting a representative dataset is hard, it’s known to be hard, and they should be on top of that.

Which means that the issue might be coefficient fitting, and this is where math and culture collide. Imagine that your algorithm just misclassified a cat as an “airplane” or as a “lion”. You need to modify the coefficients so that they move the answer away from this result a bit, and more toward “cat”. Do you move them equally from “airplane” and “lion” or is “airplane” somehow more wrong? To capture this notion of different wrongnesses, you use a loss function that can numerically encapsulate just exactly what it is you want the network to learn, and then you take bigger or smaller steps in the right direction depending on how bad the result was.
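
One concrete way to encode "some answers are more wrong than others" is to scale an ordinary loss by a per-class penalty. This toy sketch does exactly that with a hand-built penalty matrix; all the numbers are invented for illustration:

```python
# Toy illustration of "different wrongnesses": an ordinary cross-entropy
# loss scaled by a hand-built penalty matrix, so confusing a cat with a
# lion costs less than confusing it with an airplane. Numbers are invented.
import numpy as np

CLASSES = ["cat", "lion", "airplane"]
# PENALTY[true][predicted]: how bad is it to confuse these two classes?
PENALTY = np.array([
    [0.0, 1.0, 3.0],   # true cat:  lion is mildly wrong, airplane very wrong
    [1.0, 0.0, 3.0],   # true lion
    [3.0, 3.0, 0.0],   # true airplane
])


def weighted_loss(probs, true_idx):
    """Cross-entropy scaled by how wrong the most likely answer is."""
    cross_entropy = -np.log(probs[true_idx] + 1e-9)
    predicted_idx = int(np.argmax(probs))
    scale = 1.0 + PENALTY[true_idx][predicted_idx]
    return scale * cross_entropy


# Example: the model thinks a cat is probably an airplane
print(weighted_loss(np.array([0.2, 0.1, 0.7]), true_idx=0))
```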

Let that sink in for a second. You need a mathematical equation that summarizes what you want the network to learn. (But not how you want it to learn it. That’s the revolutionary quality of applied neural networks.)

Now imagine, as happened to Google, your algorithm fits “gorilla” to the image of a black person. That’s wrong, but it’s categorically differently wrong from simply fitting “airplane” to the same person. How do you write the loss function that incorporates some penalty for racially offensive results? Ideally, you would want them to never happen, so you could imagine trying to identify all possible insults and assigning those outcomes an infinitely large loss. Which is essentially what Google did — their “workaround” was to stop classifying “gorilla” entirely because the loss incurred by misclassifying a person as a gorilla was so large.

This is a fundamental problem with neural networks — they’re only as good as the data and the loss function. These days, the data has become less of a problem, but getting the loss right is a multi-level game, as these neural network trainwrecks demonstrate. And it’s not as easy as writing an equation that isn’t “racist”, whatever that would mean. The loss function is being asked to encapsulate human sensitivities, navigate around them and quantify them, and eventually weigh the slight risk of making a particularly offensive misclassification against not recognizing certain animals at all.

I’m not sure this problem is solvable, even with tremendously large datasets. (There are mathematical proofs that with infinitely large datasets the model will classify everything correctly, so you needn’t worry. But how close are we to infinity? Are asymptotic proofs relevant?)

Anyway, this problem is bigger than algorithms, or even their writers, being “racist”. It may be a fundamental problem of machine learning, and we’re definitely going to see further variations on the Twitter fiasco in the future as machine classification is increasingly asked to respect human dignity.

Garbage Can Takes Itself Out

Home automation is a fine goal but typically remains confined to lights, blinds, and other things that are relatively stationary and/or electrical in nature. There is a challenge there to be certain, but to really step up your home automation game you’ll need to think outside the box. This automated garbage can that can take itself out, for example, has all the home automation street cred you’d ever need.

The garbage can moves itself by means of a scooter wheel which has a hub motor inside and is powered by a lithium battery, but the real genius of this project is the electronics controlling everything. A Raspberry Pi Zero W is at the center of the build, controlling the motor via a driver board and receiving instructions from an Nvidia Jetson board on when to wheel the garbage can out to the curb. That board is needed because the creator, [Ahad Cove], didn’t want to be bothered to tell his garbage can to take itself out or even put it on a schedule. He instead used machine learning to detect when the garbage truck was headed down the street and instruct the garbage can to roll itself out then.
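
We can't vouch for the exact model [Ahad] runs on the Jetson, but the general shape of the trigger, run an object detector on a street-facing camera and poke the Pi over the network when a truck shows up, could look roughly like this sketch (the Pi's URL and the use of an off-the-shelf torchvision detector are assumptions):

```python
# Rough sketch of the Jetson side: watch the street with a pretrained COCO
# detector and notify the Raspberry Pi when a truck appears. The Pi's URL
# and the use of torchvision instead of an optimized TensorRT model are
# assumptions, not details from [Ahad]'s build.
import time

import cv2
import requests
import torch
from torchvision.models import detection
from torchvision.transforms.functional import to_tensor

TRUCK_CLASS_ID = 8  # "truck" in the COCO label set
PI_ENDPOINT = "http://garbage-can-pi.local:8080/take-out"  # hypothetical

model = detection.fasterrcnn_resnet50_fpn(
    weights=detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
model.eval()

cap = cv2.VideoCapture(0)
with torch.no_grad():
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        result = model([to_tensor(rgb)])[0]
        truck_seen = any(
            label.item() == TRUCK_CLASS_ID and score.item() > 0.8
            for label, score in zip(result["labels"], result["scores"]))
        if truck_seen:
            requests.post(PI_ENDPOINT, timeout=5)  # tell the can to roll out
            time.sleep(600)  # crude cooldown so it only fires once per pickup
```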

The only other thing needed to tie this build together was getting the garage door to open automatically for the garbage can. Luckily, and unbeknownst to him, [Ahad]’s garage door opener was already equipped with WiFi and had an available app, which made this a surprisingly easy part of the build. If you have a more rudimentary garage door opener, though, there are plenty of options available to get it on the internets.

Continue reading “Garbage Can Takes Itself Out”