A face made up of 3 OLEDs

It’s Nice Having Someone To Talk To

We all get a bit lonely from time to time, and talking to other humans can be a challenge. With social robots still finding their way these days, [Markus] decided to build a cheap DIY alternative, resulting in the “Conversation Face.”

The build is pretty simple, really. Three OLED displays, two for the eyes and one for the mouth, show different graphics depending on the expression being displayed. A small electret microphone senses when you are speaking to the face. Finally, a simple face cutout covers the electronics and ties the aesthetic together.

The eyes are programmed identically since they move together for most expressions. [Markus] got a blinking animation by quickly sweeping a white circle vertically through the eye screens, and the results are pretty convincing. He also shifts the eyes around the displays to make the expressions seem more dynamic.
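The write-up doesn’t include the display code, but the trick is easy to approximate. Here’s a minimal sketch, assuming a Python host with the luma.oled library and an SSD1306 display over I2C (our assumptions, not necessarily [Markus]’s setup), that sweeps the iris down off-screen and back to fake a blink:

```python
# Hypothetical blink animation sketch -- assumes luma.oled + SSD1306 on I2C,
# which may differ from [Markus]'s actual hardware and code.
import time
from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import ssd1306

serial = i2c(port=1, address=0x3C)   # typical SSD1306 address (assumption)
eye = ssd1306(serial, width=128, height=64)

def draw_eye(y_center, radius=20):
    """Draw the iris as a filled white circle centered at y_center."""
    with canvas(eye) as draw:
        x = eye.width // 2
        draw.ellipse((x - radius, y_center - radius,
                      x + radius, y_center + radius),
                     outline="white", fill="white")

def blink():
    """Sweep the circle down off-screen and back to fake an eyelid."""
    mid = eye.height // 2
    path = list(range(mid, eye.height + 20, 10)) + \
           list(range(eye.height + 20, mid, -10))
    for y in path:
        draw_eye(y)
        time.sleep(0.02)

while True:
    draw_eye(eye.height // 2)   # eye open, resting position
    time.sleep(2.5)
    blink()
```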

There’s not much to the mouth. [Markus] only has mouth-open and mouth-closed animations: the mouth opens when it’s the face’s turn to talk and closes when the face should be listening, which is easily determined by measuring the output of the microphone. Interestingly enough, you can program the face to be quiet and attentive while it’s being spoken to, or quite chatty to show that it’s actively engaging in the conversation.
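Deciding whose turn it is can be as crude as watching the microphone’s level. Here’s a purely illustrative sketch; it assumes a desktop-style microphone and the sounddevice library rather than the electret mic circuit in the actual build:

```python
# Hypothetical "is someone talking?" sketch using the sounddevice library.
# [Markus]'s build reads an electret mic directly, so treat this purely as
# an illustration of the turn-taking idea.
import numpy as np
import sounddevice as sd

FS = 16000                 # sample rate (assumption)
WINDOW_S = 0.1             # check the mic level ten times a second
THRESHOLD = 0.02           # RMS level that counts as "speech" (tune this)

def show_mouth_closed_and_listen():
    print("listening (mouth closed)")

def show_mouth_open_and_talk():
    print("talking (mouth open)")

def someone_is_talking():
    """Record a short window and compare its RMS level to a threshold."""
    audio = sd.rec(int(WINDOW_S * FS), samplerate=FS, channels=1)
    sd.wait()
    return float(np.sqrt(np.mean(audio ** 2))) > THRESHOLD

while True:
    if someone_is_talking():
        show_mouth_closed_and_listen()
    else:
        show_mouth_open_and_talk()
```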

We can’t decide if the Conversation Face is more or less creepy than those social robots, but either way, we thought you would get a kick out of it. It also looks like a funny anime character, if you ask us.

Espresso maker with added nixie flair

AI Powered Coffee Maker Knows A Bit Too Much About You

People keep warning that Skynet and the great robot uprising are not that far away, what with all this AI and machine-learning malarkey getting so much attention lately. But going straight for a Terminator-style robot army is not a very smart approach, not least due to a lack of subtlety. We think it’s a much better bet to take over the world one home appliance at a time, and this AI-powered coffee maker might well be part of that master plan.

Raspberry Pi Zero sitting atop the custom nixie tube driver and PSU PCBs

[Mark Smith] has taken a standard semi-automatic espresso maker and jazzed it up a bit, with a sweet bar-graph nixie tube being the only obvious addition, at least from the front of the unit. Inside, a Raspberry Pi Zero sits atop his own nixie tube HAT and its associated power supply. The whole assembly is dropped into a 3D-printed case and lives snuggled up next to the water pump.

The Pi is running a web application written with the excellent Flask framework, plus an additional control application written in Python. This lets the user connect to the machine over Ethernet and check its status. The smarts come in the form of a simple self-grading machine learning algorithm that takes a time series as input (in this case, when you pull your shots of espresso) and, after a few weeks of data, is able to make a reasonable prediction of when you’ll want a coffee in the future. The machine then automatically heats up in time for your usual session and cools back down afterwards to save energy. No more pointless wandering over to see if the machine is hot enough yet; you can just check the web page from the comfort of your desk.
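We don’t know exactly how [Mark]’s self-grading algorithm is put together, but the core idea of learning your routine from shot timestamps can be sketched in a few lines of Python. Everything below, from the function names to the pre-heat window, is our own guess rather than code from the project:

```python
# Hypothetical sketch of usage-time prediction from shot history.
# Not [Mark]'s actual algorithm -- just the general idea of learning
# when the machine is normally used and pre-heating ahead of time.
from collections import Counter
from datetime import datetime, timedelta

PREHEAT_MINUTES = 20        # assumed warm-up time for the boiler
MIN_SHOTS_IN_SLOT = 3       # how many past shots make a slot "likely"

def likely_brew_hours(shot_times):
    """Return hours of the day in which shots are regularly pulled."""
    counts = Counter(t.hour for t in shot_times)
    return {hour for hour, n in counts.items() if n >= MIN_SHOTS_IN_SLOT}

def should_preheat(now, shot_times):
    """True if a regular brew slot starts within the pre-heat window."""
    upcoming = now + timedelta(minutes=PREHEAT_MINUTES)
    return upcoming.hour in likely_brew_hours(shot_times)

# Example: a few weeks of morning shots means the boiler switches on
# shortly before the usual time, and stays cold the rest of the day.
history = [datetime(2021, 9, d, 7, 45) for d in range(1, 15)]
print(should_preheat(datetime(2021, 9, 15, 7, 30), history))  # True
print(should_preheat(datetime(2021, 9, 15, 13, 0), history))  # False
```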

But that’s not all [Mark] has done. He also improved the temperature control of the water boiler and added an interlock that prevents the machine from pulling a shot until the water temperature is just so. Water level is indicated by the glorious bar-graph nixie tube, which also takes on a few other user-indication duties when appropriate. All in all, a pretty sweet build, but we do add a word of caution: if your toaster starts making an unreasonable number of offers of toasted teacakes, give it a wide berth.

GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future

In late June of 2021, GitHub launched a ‘technical preview’ of what they termed GitHub Copilot, described as an ‘AI pair programmer which helps you write better code’. Quite predictably, responses to this announcement ranged from glee at the glorious arrival of our code-generating AI overlords to dismay and predictions that, before long, companies would be firing software developers en masse.

As is usually the case with such controversial topics, neither of these extremes is even remotely close to the truth. In fact, the OpenAI Codex machine learning model which underlies GitHub’s Copilot is derived from OpenAI’s GPT-3 natural language model, and features many of the same stumbles and gaffes which GPT-3 has. So if Codex, and with it Copilot, isn’t everything it’s cracked up to be, what is the big deal, and why show it at all?

Continue reading “GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future”

Smart Camera Based On Google Coral

As machine learning and artificial intelligence become more widespread, so does the number of platforms available for anyone looking to experiment with the technology. Much like the single-board computer revolution of the last ten years, we’re seeing a similar explosion in machine learning hardware. One of those platforms is Google Coral, a set of hardware specifically designed to take advantage of this new technology. It doesn’t support every piece of hardware out of the box, though, so [Ricardo] set out to get one working with a Raspberry Pi Zero with this smart camera build based around Google Coral.

The project uses a Google Coral USB Accelerator, which hosts the Edge TPU, as the basis for the machine learning. A complete image for the Pi Zero is available which sets most of the system up right away, including headless operation, and bundles a host of machine learning software such as OpenCV and pytesseract. By pairing a camera with the Edge TPU and the Raspberry Pi, [Ricardo] demonstrates many of its machine learning capabilities with several example projects, such as an automatic license plate detector and even a mode that recognizes whether a face mask is being worn, and how correctly it is being worn.
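[Ricardo]’s image ships with the tooling already installed, but to give a flavor of what Edge TPU inference looks like from Python, here’s a minimal sketch using the pycoral library and a pre-compiled detection model. The model file and the way detections are handled are our assumptions, not code from the project:

```python
# Hypothetical Edge TPU inference sketch -- not [Ricardo]'s code.
# Assumes pycoral, OpenCV, and an Edge-TPU-compiled detection model.
import cv2
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

MODEL = "ssd_mobilenet_v2_edgetpu.tflite"   # assumed model file

interpreter = make_interpreter(MODEL)
interpreter.allocate_tensors()

cap = cv2.VideoCapture(0)                   # Pi camera or USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize the frame to the model's expected input and run inference.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    image = Image.fromarray(rgb)
    _, scale = common.set_resized_input(
        interpreter, image.size, lambda size: image.resize(size))
    interpreter.invoke()
    for obj in detect.get_objects(interpreter, score_threshold=0.5,
                                  image_scale=scale):
        print(obj.id, obj.score, obj.bbox)  # e.g. hand plate crops to OCR
```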

For those who want to get into machine learning and artificial intelligence, this is a great introductory project since the cost of entry is so low using these pieces of hardware. All of the project code and examples are available on [Ricardo]’s GitHub page, too. We could even imagine his license plate recognition software being used to augment this license plate reader, which uses a much more powerful camera.

Speech Recognition On An Arduino Nano?

Like most of us, [Peter] had a bit of extra time on his hands during quarantine and decided to take a look back at speech recognition technology in the 1970s. Quickly, he started thinking to himself, “Hmm…I wonder if I could do this with an Arduino Nano?” We’ve all probably had similar thoughts, but [Peter] really put his theory to the test.

The hardware itself is pretty straightforward: an Arduino Nano to run the speech recognition algorithm and a MAX9814 microphone amplifier to capture the voice commands. However, the beauty of [Peter’s] approach lies in his software implementation. There’s a bit of an interplay between a custom PC program he wrote and the Arduino Nano: the learning aspect of his algorithm is done on the PC, while recognition runs in real time on the Nano, a typical approach for just about any machine learning algorithm deployed on a microcontroller. To capture sample audio commands, or utterances, [Peter] first had to optimize the Nano’s ADC to get sufficient sample rates for speech processing. With a bit of low-level programming, he achieved a sample rate of 9 ksps, which is plenty fast for speech.

To analyze the utterances, he first divided each sample utterance into 50 ms segments. Think of dividing a single spoken word into its different syllables, like analyzing the “se-” in “seven” separately from the “-ven.” 50 ms might be too long or too short to capture each syllable cleanly, but hopefully that gives you a good mental picture of what [Peter’s] program is doing. He then calculated the energy in five different frequency bands for every segment of every utterance. Normally that’s done with a Fourier transform, but the Nano doesn’t have enough processing power to compute one in real time, so [Peter] tried a different approach: he implemented five digital bandpass filters, letting him compute the energy of the signal in each frequency band much more cheaply.
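[Peter’s] filters run in C on the Nano, but the feature extraction is easy to prototype on a PC first. Here’s a rough Python sketch using scipy; the band edges and filter order are our guesses, not [Peter’s] actual values:

```python
# Hypothetical sketch of the feature extraction: 50 ms segments, energy in
# five bands. Band edges and filter order are assumptions, not [Peter]'s.
import numpy as np
from scipy.signal import butter, lfilter

FS = 9000                      # sample rate matching the Nano's ~9 ksps ADC
SEG = int(0.050 * FS)          # 50 ms -> 450 samples per segment
BANDS = [(100, 400), (400, 800), (800, 1500), (1500, 2500), (2500, 4000)]

def band_energies(utterance):
    """Return a (num_segments x 5) array of per-band energies."""
    features = []
    for lo, hi in BANDS:
        b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        filtered = lfilter(b, a, utterance)
        # Sum of squares per 50 ms segment = energy in this band.
        n_seg = len(filtered) // SEG
        segs = filtered[:n_seg * SEG].reshape(n_seg, SEG)
        features.append((segs ** 2).sum(axis=1))
    return np.stack(features, axis=1)

# Example with half a second of noise standing in for a recorded utterance.
print(band_energies(np.random.randn(FS // 2)).shape)   # (10, 5)
```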

The energy of each frequency band for every segment is then sent to a PC, where a custom-written program creates “templates” from the sample utterances he records. The crux of the algorithm is comparing how close the band energies of an incoming utterance, segment by segment, are to each stored template. The PC program spits out a .h file that is compiled directly into the Nano firmware. He uses the example of recognizing the digits 0-9, but you could change those commands to “start” or “stop,” for example, if you would like.
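Again, the specifics live in [Peter’s] code, but the gist of template matching is simple: average the features of a few sample utterances into a template, then pick whichever template is closest to a new utterance. Here’s a minimal sketch; the distance metric is our assumption, and the features are the hypothetical band energies from the sketch above:

```python
# Hypothetical template matcher: pick the template with the smallest
# squared distance to the incoming utterance's band-energy features.
import numpy as np

def build_template(sample_feature_sets):
    """Average the (segments x bands) features of several sample utterances."""
    return np.mean(sample_feature_sets, axis=0)

def classify(features, templates):
    """Return the label of the closest template by summed squared error."""
    best_label, best_dist = None, float("inf")
    for label, template in templates.items():
        n = min(len(features), len(template))       # align segment counts
        dist = np.sum((features[:n] - template[:n]) ** 2)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Example: two samples of "seven" averaged into one template, then matched.
rng = np.random.default_rng(0)
templates = {"seven": build_template([rng.random((10, 5)) for _ in range(2)])}
print(classify(rng.random((10, 5)), templates))   # "seven"

# On the real build, the templates end up in a generated .h file so the
# same comparison can run on the Nano itself.
```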

[Peter] admits that you can’t implement the type of speech recognition on an Arduino Nano that we’ve come to expect from those covert listening devices, but he mentions that small, hands-free devices like a head-mounted multimeter could benefit from single-word or single-phrase voice commands. And maybe it could put your mind at ease knowing everything you say isn’t immediately getting beamed into the cloud and handed to our AI overlords. Or maybe we’re all starting to get used to this. Whatever your position on the current state of AI, hopefully you’ve gained some inspiration for your next project.

Self-Driving Or Mind Control? Which Do You Prefer?

We know you love a good biohack as much as we do, so we thought you would like [Tony’s] brainwave-controlled RC truck. Instead of building his own electroencephalogram (EEG), he opted to use NeuroSky’s MindWave. EEG signals are pretty complex, multi-frequency waves that require some fairly sophisticated circuitry and even more sophisticated signal processing to interpret. So [Tony] thought it would be nice to offload a bit of that heavy lifting, and luckily for him, the MindWave headset is fairly hacker-friendly.

EEGs are a very active area of research, so some of the finer details of the signal are still being debated. However, it appears that attention can be quantified by measuring alpha waves, the EEG content in the 8-10 Hz range, and it seems as though eye blinks can be picked out of the EEG as well. Conveniently, the MindWave exports these energy levels to an accompanying smartphone application, which [Tony] then links to his Arduino over Bluetooth using the ever-so-popular HC-05 module.

To control the truck, he utilized the existing remote control instead of making his own. Like most people, [Tony] first thought about wiring the Arduino pins to the buttons on the remote, thereby bypassing the physical buttons, but they were a bit smaller than he was comfortable soldering to and he didn’t want to risk damaging the circuit board. [Tony’s] RC truck has a pistol-grip transmitter, which inspired a slightly different approach: he mounted a servo onto the controller’s wheel mechanism, allowing him to steer the truck by rotating the wheel with the servo, and fitted another servo to the transmitter so it can depress the throttle as it rotates. We thought that was a pretty nifty workaround.

Cool project, [Tony]! We’ve seen some cool EEG Hackaday Prize entries before. Maybe this could be the next big one.

Continue reading “Self-Driving Or Mind Control? Which Do You Prefer?”

AI Makes Linux Do What You Mean, Not What You Say

We are always envious of the Star Trek Enterprise computers. You can just sort of ask them a hazy question and they will — usually — figure out what you want. Even the automatic doors seemed to know the difference between someone walking into a turbolift versus someone being thrown into the door during a fight. [River] decided to try his new API keys for the private beta of an AI service to generate Linux commands based on a description. How does it work? Watch the video below and find out.

Some examples work fairly well. In response to “email the Rickroll video to Jeff Bezos,” the system produced a curl command and an e-mail to what we assume is the right place. “Find all files in the current directory bigger than 1 GB” works, too.

Continue reading “AI Makes Linux Do What You Mean, Not What You Say”