The Ethics Of When Machine Learning Gets Weird: Deadbots

Everyone knows what a chatbot is, but how about a deadbot? A deadbot is a chatbot whose training data — the data that shapes how and what it communicates — comes from a deceased person. Now let’s consider the case of a fellow named Joshua Barbeau, who created a chatbot to simulate conversation with his deceased fiancée. Add to this the fact that OpenAI, provider of the GPT-3 API that ultimately powered the project, had a problem with this, as their terms explicitly forbid use of the API for (among other things) “amorous” purposes.

[Sara Suárez-Gonzalo], a postdoctoral researcher, observed that this story’s facts were getting covered well enough, but nobody was looking at it from any other perspective. We all certainly have ideas about what flavor of right or wrong saturates the different elements of the case, but can we explain exactly why it would be either good or bad to develop a deadbot?

That’s precisely what [Sara] set out to do. Her writeup is a fascinating and nuanced read that provides concrete guidance on the topic. Is harm possible? How does consent figure into something like this? Who takes responsibility for bad outcomes? If you’re at all interested in these kinds of questions, take the time to check out her article.

[Sara] makes the case that creating a deadbot could be done ethically, under certain conditions. Briefly, the key points are that both the person being mimicked and the one developing and interacting with the deadbot should have given their consent, complete with as detailed a description as possible of the scope, design, and intended uses of the system. (Such a statement is important because machine learning in general changes rapidly. What if the system or its capabilities someday no longer resemble what one originally imagined?) Responsibility for any potential negative outcomes should be shared by those who develop the system and those who profit from it.

[Sara] points out that this case is a perfect example of why the ethics of machine learning really do matter, and without attention being paid to such things, we can expect awkward problems to continue to crop up.

Automatic Water Turret Keeps Grass Watered

Summer is rapidly approaching (at least for those of us living in the Northern Hemisphere) and if you have a lawn to maintain at your home, now is the time to be thinking about irrigation. Plenty of people have built-in sprinkler systems to care for their turf, but these are little (if any) fun for any children who might like to play in the spray. This sprinkler solves that problem, functioning as an automatic water gun turret for anyone passing by.

This project was less a specific sprinkler build and more of a way to reuse some Khadas VIM3 single-board computers that the project’s creator, [Neil], wanted to use for something other than mining crypto. The boards have a neural processing unit (NPU) in them, which makes them ideal for computer vision projects like this. The camera input is fed into the NPU, which then directs the turret to the correct position via its yaw and pitch drivers. It’s built out of mostly aluminum extrusion and 3D printed parts, and the project’s page goes into great detail about all of the parts needed if you are interested in replicating the build.
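[Neil]’s control code isn’t reproduced here, but the detect-then-aim loop is straightforward to sketch. The snippet below uses OpenCV’s stock HOG person detector as a stand-in for the VIM3’s NPU model, and assumes the turret simply takes yaw and pitch angle offsets; the field-of-view numbers and the `send_to_turret()` call are placeholders, not part of the actual build.

```python
# Minimal sketch: find the largest person in the frame and convert the pixel
# offset from frame center into yaw/pitch angles for the turret. The detector,
# field-of-view numbers, and turret interface are all assumptions.
import cv2

CAM_FOV_X, CAM_FOV_Y = 60.0, 40.0   # assumed camera field of view, in degrees

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def frame_to_angles(frame):
    """Return (yaw, pitch) offsets toward the largest detected person, or None."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) == 0:
        return None
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])   # biggest target wins
    cx, cy = x + w / 2, y + h / 2
    fh, fw = frame.shape[:2]
    yaw = (cx / fw - 0.5) * CAM_FOV_X      # left/right offset from frame center
    pitch = (0.5 - cy / fh) * CAM_FOV_Y    # up/down offset
    return yaw, pitch

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    angles = frame_to_angles(frame)
    if angles:
        yaw, pitch = angles
        # send_to_turret(yaw, pitch)  # hypothetical driver call
        print(f"aim: yaw {yaw:+.1f}, pitch {pitch:+.1f}")
```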

[Neil] is also actively working on improving the project, especially around the turret’s ability to identify and track objects using OpenCV. We certainly look forward to more versions of this build in the future, and in the meantime be sure to check out some other automated sprinkler builds we’ve seen which solve different problems.

Continue reading “Automatic Water Turret Keeps Grass Watered”

Point Out Pup’s Packages With This Poop-Shooting Laser

When you’re lucky enough to have a dog in your life, you tend to overlook some of the more one-sided aspects of the relationship. While you are severely restrained with regard to where you eliminate your waste, your furry friend is free to roam the yard and dispense his or her nuggets pretty much at will, and fully expects you to follow along on cleanup duty. See what we did there?

And so dog people sometimes rebel against this lopsided power structure by leaving the cleanup till later — often much, much later, when locating the offending piles can be a bit difficult. So naturally, we now have this poop-shooting laser turret to helpfully guide you through your backyard cleanup sessions. It comes to us from [Caleb Olson], who leveraged his recent poop-posture monitor as the source of data for where exactly in the yard each deposit is located. To point them out, he attached a laser pointer to a cheap robot arm and used OpenCV to help line up the bright green spot on each poop.
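Lining a laser dot up on a target is a nice little closed-loop OpenCV exercise in its own right. As a rough sketch of the idea (the HSV thresholds, camera, and arm interface in [Caleb]’s actual build are assumptions here), you can segment the bright green dot, find its centroid, and nudge the arm until the dot lands on the target pixel:

```python
# Rough sketch: find the green laser dot in the frame and compute a small
# correction toward the target pixel. Thresholds and gains are guesses.
import cv2
import numpy as np

GREEN_LO = np.array([45, 80, 200])    # assumed HSV range for the laser dot
GREEN_HI = np.array([85, 255, 255])

def find_laser_dot(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GREEN_LO, GREEN_HI)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                    # dot not visible in this frame
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def nudge_toward(target_xy, frame, gain=0.01):
    """Return a small (dx, dy) step that walks the dot toward the target."""
    dot = find_laser_dot(frame)
    if dot is None:
        return 0.0, 0.0
    return gain * (target_xy[0] - dot[0]), gain * (target_xy[1] - dot[1])
```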

But wait, there’s more. [Caleb]’s code also optimizes his poop patrol route, minimizing the amount of pesky walking he has to do to visit each pile. And, the same pose estimation algorithm that watches the adorable [Twinkie] make her deposits keeps track of which ones [Caleb] stoops by, removing each from the worklist in turn. So now instead of having a dog control his life, he’s got a dog and a computer running the show. Perfect.
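We don’t know exactly how [Caleb] orders his stops, but a greedy nearest-neighbor pass over the recorded pile coordinates captures the spirit of a “minimize the walking” route; treat this purely as an illustration, not his actual code:

```python
# Greedy nearest-neighbor ordering of (x, y) pile coordinates.
import math

def patrol_route(start, piles):
    """Visit the closest remaining pile each time, starting from `start`."""
    route, here, todo = [], start, list(piles)
    while todo:
        nxt = min(todo, key=lambda p: math.dist(here, p))
        route.append(nxt)
        todo.remove(nxt)
        here = nxt
    return route

print(patrol_route((0, 0), [(5, 2), (1, 8), (3, 3)]))   # -> [(3, 3), (5, 2), (1, 8)]
```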

We joke, because poop, but really, this is a pretty neat exercise in machine learning. It does seem like the robot arm was a bit overkill, though — we’d have thought a simple two-servo turret would have been pretty easy to whip up.

Continue reading “Point Out Pup’s Packages With This Poop-Shooting Laser”

TapType: AI-Assisted Hand Motion Tracking Using Only Accelerometers

The team from the Sensing, Interaction & Perception Lab at ETH Zürich, Switzerland have come up with TapType, an interesting text input method that relies purely on a pair of wrist-worn devices that sense acceleration values as the wearer types on any old surface. By feeding the acceleration values from a pair of sensors on each wrist into a Bayesian-inference neural network classifier, which in turn feeds a traditional probabilistic language model (predictive text, to you and me), the resulting text can be entered at up to 19 WPM with 0.6% average error. Expert TapTypers report speeds of up to 25 WPM, which could be quite usable.
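The fusion of a per-tap classifier with a language prior is the key trick here. As a toy illustration only (the vocabulary, probabilities, and scoring below are made up, and the real TapType pipeline is far more sophisticated), candidate words can be scored by combining the classifier’s per-tap character probabilities with a unigram word prior:

```python
# Toy decoder: combine per-tap character probabilities with a word prior.
import math

WORD_PRIOR = {"hat": 0.40, "bat": 0.35, "hot": 0.25}   # assumed unigram model

def decode(tap_probs):
    """tap_probs: one dict of P(char | acceleration features) per tap."""
    best, best_score = None, -math.inf
    for word, prior in WORD_PRIOR.items():
        if len(word) != len(tap_probs):
            continue
        score = math.log(prior) + sum(
            math.log(p.get(c, 1e-9)) for p, c in zip(tap_probs, word))
        if score > best_score:
            best, best_score = word, score
    return best

taps = [{"h": 0.5, "b": 0.4}, {"a": 0.7, "o": 0.3}, {"t": 0.9}]
print(decode(taps))   # -> "hat"
```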

Details are a little scarce (it is a research project, after all) but the actual hardware seems simple enough, based around the Dialog DA14695, which is a nice Cortex-M33 based Bluetooth Low Energy SoC. This is an interesting device in its own right, containing a “sensor node controller” block that is capable of handling sensor devices connected to its interfaces, independent of the main CPU. The sensor device used is the Bosch BMA456 3-axis accelerometer, which is notable for its low power consumption of a mere 150 μA.

Users can “type” on any convenient surface.

The wristband units themselves appear to be a combination of a main PCB hosting the BLE chip and supporting circuit, connected to a flex PCB with a pair of the accelerometer devices at each end. The assembly was then slipped into a flexible wristband, likely constructed from 3D printed TPU, but we’re just guessing really, as the progression from the first embedded platform to the wearable prototype is unclear.

What is clear is that the wristband itself is just a dumb data-streaming device, and all the clever processing is performed on the connected device. Training of the system (and subsequent selection of the most accurate classifier architecture) was performed by recording volunteers “typing” on an A3-sized keyboard image, with finger movements tracked by a motion tracking camera while the acceleration data streams from both wrists were recorded. There are a few more details in the published paper for those interested in digging into this research a little deeper.

The eagle-eyed may remember something similar from last year, from the same team, which correlated bone-conduction sensing with VR type hand tracking to generate input events inside a VR environment.

Continue reading “TapType: AI-Assisted Hand Motion Tracking Using Only Accelerometers”

Learn Sign Language Using Machine Vision

Learning a new language is a great way to exercise the mind and learn about different cultures, and it’s great to have a native speaker around to improve the learning experience. Without one, it’s still possible to learn via videos, books, and software. The task gets much more complicated when trying to learn a language that isn’t spoken, though, like American Sign Language. This project allows users to learn the ASL alphabet with the help of computer vision and some machine learning algorithms.

The build uses a MobileNetV2-based computer vision model trained to recognize each sign in the ASL alphabet. A sign is shown to the user on a screen, and the user needs to demonstrate the sign to the computer in order to progress. To do this, OpenCV running on a Raspberry Pi with a PiCamera is used to analyze the frames of the user in real time. The user is shown pictures of the correct sign, and is rewarded when the correct sign is made.
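The check-the-sign loop boils down to classifying a camera frame and comparing the result with the letter currently being prompted. A minimal sketch of that loop is below, assuming a Keras MobileNetV2 model fine-tuned on ASL alphabet images; the model file name, label order, and confidence threshold are all hypothetical, and the Glasgow team’s actual code may be structured quite differently:

```python
# Minimal sketch: classify a webcam frame and check it against the prompted letter.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]   # assumed class order
model = load_model("asl_mobilenetv2.h5")                    # hypothetical model file

def classify(frame):
    img = cv2.resize(frame, (224, 224))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0
    probs = model.predict(img[np.newaxis], verbose=0)[0]
    idx = int(np.argmax(probs))
    return LABELS[idx], float(probs[idx])

cap = cv2.VideoCapture(0)
target = "A"                     # letter currently shown to the learner
ok, frame = cap.read()
if ok:
    letter, conf = classify(frame)
    if letter == target and conf > 0.8:
        print("Correct! On to the next letter.")
```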

While this only works for alphabet signs in ASL currently, the team at the University of Glasgow that built this project is planning on expanding it to include other signs as well. We have seen other machines built to teach ASL in the past, like this one which relies on a specialized glove rather than computer vision.

Continue reading “Learn Sign Language Using Machine Vision”

Training Doppler Radar With Smart Watch IMU Data For Activity Recognition

When it comes to interpreting sensor data automatically, it helps to have a large data set, both for validation and, when machine learning (ML) is involved, for training. Creating such a data set with carefully tagged and categorized information is a long and tedious process, which is where the idea of cross-domain translation comes into play, as in the case of using millimeter wave (mmWave) radar sensors to recognize the activity of e.g. building occupants with the IMU2Doppler project at Smash Lab of Carnegie Mellon University.

The sensors most commonly used to classify human motion are inertial measurement units (IMUs), combining accelerometers and gyroscopes, which are found in everything from smartphones to smart watches and fitness bands. For these devices it’s common to classify measurement patterns as matching a particular activity, such as walking, jogging, or brushing one’s teeth. This makes them both well-defined and very accessible.

As for why a mmWave-based Doppler radar would be preferred for monitoring e.g. building occupants: it avoids the privacy issues of cameras and the inconvenience of equipping people with a body-worn IMU. Using Doppler radar, it would theoretically be possible for people to track activities within their own home, as well as in a medical setting to ensure patients are safe, or at a gym to track performance or equipment usage, all without cameras or personal sensors. In the past, we’ve seen a similar approach that used targeted laser beams.
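The core trick is letting the well-understood IMU side do the labeling. As a minimal sketch of that idea (the actual IMU2Doppler method involves proper domain adaptation and is considerably more involved), imagine synchronized IMU and Doppler recordings: an existing IMU activity classifier generates labels for each time window, and those pseudo-labels then supervise a classifier on the Doppler data:

```python
# Minimal sketch of cross-domain labeling: an IMU-trained model provides
# activity labels for time-synchronized Doppler windows, which then train
# a Doppler-side classifier. Data shapes and model choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_doppler_from_imu(imu_model, imu_windows, doppler_windows):
    """imu_windows and doppler_windows cover the same time spans, in order."""
    pseudo_labels = imu_model.predict(imu_windows)       # labels come for free
    flat = np.asarray(doppler_windows).reshape(len(doppler_windows), -1)
    doppler_model = RandomForestClassifier(n_estimators=200)
    doppler_model.fit(flat, pseudo_labels)
    return doppler_model
```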

As promising as this sounds, at this point in time the number of activities that are recognized with reasonable accuracy (~70%) is limited to ten types. Depending on the intended application this may already be sufficient, though as the published paper notes, there is still a lot of room for growth.

Continue reading “Training Doppler Radar With Smart Watch IMU Data For Activity Recognition”

Machine Learning Helps You Get In Shape While Working A Desk Job

Humans weren’t made to sit in front of a computer all day, yet for many of us that’s how we spend a large part of our lives. Of course we all know that it’s important to get up and move around every now and then to stretch our muscles and get our blood flowing, but it’s easy to forget if you’re working towards a deadline. [Victor Sonck] thought he needed some reminders — as well as some not-so-gentle nudging — to get into the habit of doing a quick workout a few times a day.

To this end, he designed a piece of software that would lock his computer’s screen and only unlock it if he performed five push-ups. Locking the screen on his Linux box was as easy as sending a command through the network, but recognizing push-ups was a harder task for which [Victor] decided to employ machine learning. A Raspberry Pi with a webcam attached could do the trick, but the limited processing power of the Pi’s CPU might prove insufficient for processing lots of raw image data.

[Victor] therefore decided on using a Luxonis OAK-1, which is a 4K camera with a built-in machine-learning processor. It can run various kinds of image recognition systems including Blazepose, a pre-trained model that can recognize a person’s pose from an image. The OAK-1 uses this to send out a set of coordinates that describe the position of a person’s head, torso and limbs to the Raspberry Pi through a USB interface. A second machine-learning model running on the Pi then analyzes this dataset to recognize push-ups.
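[Victor] trained a classifier for that final step, but you can get a feel for the problem with a plain geometric heuristic: track the elbow angle from the pose keypoints and count down-then-up transitions. The sketch below is only a stand-in for his actual model, and the angle thresholds are guesses:

```python
# Count push-ups from pose keypoints by watching the elbow angle.
import math

def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow, in degrees, from three (x, y) keypoints."""
    a = (shoulder[0] - elbow[0], shoulder[1] - elbow[1])
    b = (wrist[0] - elbow[0], wrist[1] - elbow[1])
    cos = (a[0] * b[0] + a[1] * b[1]) / (math.hypot(*a) * math.hypot(*b) + 1e-9)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

class PushupCounter:
    def __init__(self):
        self.count, self.down = 0, False

    def update(self, shoulder, elbow, wrist):
        angle = elbow_angle(shoulder, elbow, wrist)
        if angle < 90:                        # arms bent: bottom of the rep
            self.down = True
        elif angle > 160 and self.down:       # arms straight again: rep complete
            self.down = False
            self.count += 1
        return self.count
```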

[Victor]’s video (embedded below) is an entertaining introduction to the world of machine-learning systems for video processing, as well as a good hands-on example of a project that results in a useful tool. If you’re interested in learning more about machine learning on small platforms, check out this 2020 Remoticon talk on machine learning on microcontrollers, or this 2019 Supercon talk about implementing machine vision on a Raspberry Pi.

Continue reading “Machine Learning Helps You Get In Shape While Working A Desk Job”