A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot

While it is hard to tell with a photo, this robot looks more like a model of an old-fashioned clock than anything resembling a Nixie tube. It’s the kind of project that could have been created by anyone with a little bit of Arduino tinkering experience. In this case, the 3D printer used by the Nixie clock project is a Prusa i3 (which is the same printer used to make the original Nixie tubes).

The Nixie clock project was started by a couple of students from the University of Washington who were bored one day and decided to have a go at creating their own timepiece. After a few prototypes and tinkering around with the code, they came up with a design for the clock that was more functional than ornate.

The result is a great example of how one can create a functional and aesthetically pleasing project with a little bit of free time.

Confused yet? You should be.

If you’ve read this far then you’re probably scratching your head and wondering what has come over Hackaday. Should you not have already guessed, the paragraphs above were generated by an AI (in this case Transformer), while the header image came courtesy of the popular DALL-E Mini, now rebranded as Craiyon. Both were given the most Hackaday title we could think of, “A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot”, and told to get on with it. The exercise was sparked by curiosity following the viral success of AI generators: could an AI make a passable stab at a Hackaday piece? Transformer works on a prompt model in which the operator picks from a choice of several sentence fragments, so the text above reflects our choices, but it could equally have followed any of the other options.

From a Hackaday writer’s perspective the text is both reassuring, because it doesn’t manage to convey anything useful, and slightly shocking, because from just a single prompt it has created clear, meaningful sentences that on another day might have flowed from a Hackaday keyboard as part of a real article. It’s likely that we’ve found our way into whatever corpus trained its model, and likely too that subject matter so Hackaday-targeted would cause it to zero in on that part of its source material, but despite that it’s unnerving to realise that a computer somewhere might just have your number. For now though, Hackaday remains safe at the keyboards of a group of meatbags.

We’ve considered the potential for AI garbage before, when we looked at GitHub Copilot.

Researchers Build Neural Networks With Actual Neurons

Neural networks have become a hot topic over the last decade, put to work on jobs from recognizing image content to generating text and even playing video games. However, these artificial neural networks are essentially just piles of maths inside a computer, and while they are capable of great things, the technology hasn’t yet shown the capability to produce genuine intelligence.

Cortical Labs, based down in Melbourne, Australia, has a different approach. Rather than rely solely on silicon, their work involves growing real biological neurons on electrode arrays, allowing them to be interfaced with digital systems. Their latest work has shown promise that these real biological neural networks can be made to learn, according to a pre-print paper that is yet to go through peer review.

The Empatica E4, a wrist-worn device that measures movement, temperature, skin conductance, sleep, and more

Wearable Sensor For Detecting Substance Use Disorder

Oftentimes, the feature set for our typical fitness-focused wearables feels a bit empty. Push notifications on your wrist? OK, fine. Counting your steps? Sure, why not. But how useful are those capabilities anyway? Well, what if wearables could be used for a more dignified purpose like helping people in recovery from substance use disorder (SUD)? That’s what the researchers at the University of Massachusetts Medical School aimed to find out.

In their paper, they used a wrist-worn wearable to measure locomotion, heart rate, skin temperature, and electrodermal activity of 38 SUD patients during their everyday lives. They wanted to detect periods of stress and craving, as these parameters are possible triggers of substance use. Furthermore, they had patients self-report times during the day when they felt stressed or had cravings, and used those reports to calibrate their model.

They tried a number of classification models such as decision trees, discriminant analysis, logistic regression, and others, but found the most success using support vector machines, though they didn’t discuss why they thought that was the case. In the end, they could detect stress vs. non-stress with an accuracy of 81.3% and craving vs. no-craving with an accuracy of 82.1%. Not amazing accuracy, but given the dire need for medical advancements in SUD, it’s something to keep an eye on. Interestingly enough, they found that locomotion data alone had an accuracy of approximately 75% when it came to indicating stress and cravings.
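
For the curious, here’s a toy sketch (ours, not the researchers’ code) of what that kind of classifier looks like in scikit-learn, with synthetic stand-in data for the heart rate, skin temperature, electrodermal activity, and locomotion features:

```python
# Toy stress/non-stress classifier in the spirit of the paper's SVM setup.
# The data below is synthetic; real inputs would be windowed wearable features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Columns: [heart rate, skin temperature, EDA, locomotion] (illustrative)
X = rng.normal(size=(500, 4))
# Fake labels loosely driven by EDA and heart rate, plus noise
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(f"stress vs. non-stress accuracy: {clf.score(X_test, y_test):.1%}")
```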

Much ado has been made about the insufficient accuracy of wearable devices for medical diagnoses, particularly those that measure activity and heart rate. Perhaps their model would perform better if trained on real-time measurements of cortisol, a more direct physiological measure of stress.

Finally, what really stood out to us about this study was how willing patients were to use a wearable in their treatment strategy. It’s sad that society often has a very negative perception of SUD patients, which leads to fewer treatment options for them. But hopefully, with technological advancements such as this, we’re one step closer to a more equitable future of healthcare.

A face made up of 3 OLEDs

It’s Nice Having Someone To Talk To

We all get a bit lonely from time to time and talking to other humans can be a challenge. With social robots still finding their way these days, [Markus] decided to find a DIY solution he could make cheaply, resulting in the “Conversation Face.”

The build is pretty simple, really. You have three OLED displays, two for the eyes and one for the mouth, each programmed with different graphics depending on the expression being displayed. There’s also a small electret microphone that senses when you are speaking to the face. Finally, a simple face cutout covers the electronics and solidifies the aesthetic.

The eyes are programmed identically, since they would move together for most expressions. [Markus] got a blinking animation by quickly moving a white circle vertically through the eye screens, and the results are pretty convincing. He also moves the eyes around the OLED to make the expressions seem more dynamic.
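
If you wanted to replicate the effect, a blink boils down to sweeping a filled circle off the bottom of the display and back. Here’s a minimal MicroPython sketch of the idea, assuming a pair of SSD1306 OLEDs over I2C; the pins, addresses, and geometry are our assumptions, not details from [Markus]’s build:

```python
# Minimal two-eye blink sketch (assumed hardware: 128x64 SSD1306s on I2C)
import time
from machine import I2C, Pin
from ssd1306 import SSD1306_I2C

i2c = I2C(0, scl=Pin(22), sda=Pin(21))       # pins are illustrative
left_eye = SSD1306_I2C(128, 64, i2c, addr=0x3C)
right_eye = SSD1306_I2C(128, 64, i2c, addr=0x3D)

def draw_eye(oled, cx, cy, r):
    """Draw a filled white circle (the iris) centered at (cx, cy)."""
    oled.fill(0)
    for dy in range(-r, r + 1):
        dx = int((r * r - dy * dy) ** 0.5)   # circle's half-width at this row
        oled.hline(cx - dx, cy + dy, 2 * dx + 1, 1)
    oled.show()

def blink(eyes, cx=64, cy=32, r=14):
    # Sweep the iris down off-screen and back up; at OLED refresh
    # rates the quick vertical pass reads convincingly as a blink.
    for y in list(range(cy, 64 + r, 8)) + list(range(64 + r, cy, -8)):
        for eye in eyes:
            draw_eye(eye, cx, y, r)
        time.sleep_ms(20)

while True:
    for eye in (left_eye, right_eye):
        draw_eye(eye, 64, 32, 14)
    time.sleep(3)
    blink((left_eye, right_eye))
```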

There’s not much to the mouth: [Markus] has only a mouth-open and a mouth-closed animation. The mouth opens when it’s the face’s turn to talk and closes when the face should be listening, which is easily determined by measuring the output of the microphone. Interestingly enough, you can program the face to be quiet and attentive when it’s being spoken to, or quite chatty to show that it’s actively engaged in the conversation.
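
The talk/listen decision can be as simple as comparing the microphone’s short-term envelope against a threshold. A rough MicroPython sketch of that logic, again with illustrative pins, address, and a made-up threshold rather than anything from [Markus]’s build:

```python
# Talk/listen sketch: open the mouth when the room is quiet, close it
# (to "listen") while the mic hears speech. Hardware details assumed.
import time
from machine import ADC, I2C, Pin
from ssd1306 import SSD1306_I2C

mic = ADC(Pin(34))                              # output of the mic preamp
mouth = SSD1306_I2C(128, 32, I2C(0, scl=Pin(22), sda=Pin(21)))
SPEECH_THRESHOLD = 4000                         # tune for your mic gain

def sound_level(samples=64):
    """Peak-to-peak ADC swing over a short sampling window."""
    lo, hi = 65535, 0
    for _ in range(samples):
        v = mic.read_u16()
        lo, hi = min(lo, v), max(hi, v)
    return hi - lo

def set_mouth(is_open):
    mouth.fill(0)
    if is_open:
        mouth.fill_rect(44, 8, 40, 16, 1)       # open mouth: filled block
    else:
        mouth.hline(44, 16, 40, 1)              # closed mouth: thin line
    mouth.show()

while True:
    set_mouth(sound_level() <= SPEECH_THRESHOLD)
    time.sleep_ms(50)
```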

We can’t decide if the Conversation Face is more or less creepy than those social robots, but either way, we thought you’d get a kick out of it. It also looks like a funny anime character, if you ask us.

Espresso maker with added nixie flair

AI Powered Coffee Maker Knows A Bit Too Much About You

People keep warning that Skynet and the great robot uprising are not that far away, what with all this AI and machine-learning malarkey getting so much attention lately. But we think going straight for a Terminator robot army is not a very smart approach, not least due to a lack of subtlety. A much better bet is to take over the world one home appliance at a time, and this AI-powered coffee maker might well be part of that master plan.

PCB stackup, with the Raspberry Pi Zero sat atop the custom nixie tube driver / PSU PCBs

[Mark Smith] has taken a standard semi-auto espresso maker and jazzed it up a bit, with a sweet bar-graph nixie tube as the only obvious addition, at least from the front of the unit. Inside, a Raspberry Pi Zero sits atop his own nixie tube hat and its associated power supply. The whole assembly is dropped into a 3D-printed case and lives snuggled up to the water pump.

The Pi runs a web application built with the excellent Flask framework, plus an additional control application written in Python. This lets the user connect to the machine over Ethernet and check its status. The smarts take the form of a simple self-grading machine learning algorithm that takes a time series as input (in this case, when you pull your shots of espresso) and, after a few weeks of data, can make a reasonable prediction of when you’ll want one in the future. The machine then automatically heats up in time for when you usually use it, and cools back down afterwards to save energy. No more pointless wandering over to see if the machine is hot enough yet, since you can check the web page from the comfort of your desk.
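
We don’t have [Mark]’s code in front of us, but the self-grading idea could be as simple as bucketing past shots by weekday and hour, then pre-heating ahead of the historically busy buckets. Something like this sketch, where the shot count threshold and 30-minute lead time are our inventions:

```python
# Toy usage-prediction sketch: learn when shots usually happen and
# decide whether to pre-heat for the upcoming hour.
from collections import defaultdict
from datetime import datetime, timedelta

shot_log = []  # datetimes of past shots, appended as they are pulled

def usage_histogram(log):
    """Count shots per (weekday, hour) bucket."""
    counts = defaultdict(int)
    for t in log:
        counts[(t.weekday(), t.hour)] += 1
    return counts

def should_preheat(now, log, min_count=3, lead=timedelta(minutes=30)):
    """True if the hour starting `lead` from now historically saw use."""
    soon = now + lead
    return usage_histogram(log)[(soon.weekday(), soon.hour)] >= min_count

# Example: poll this from the control loop once a minute
if should_preheat(datetime.now(), shot_log):
    print("Pre-heating the boiler")
```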

But that’s not all [Mark] has done. He also improved the temperature control of the water boiler, and added an interlock that prevents the machine from pulling a shot until the water temperature is just so. Water level is indicated by the glorious bar-graph nixie tube, which also serves a few other user-indication duties when appropriate. All in all, a pretty sweet build, but we do add a word of caution: if your toaster starts making an unreasonable number of offers of toasted teacakes, give it a wide berth.

GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future

In late June of 2021, GitHub launched a ‘technical preview’ of what it termed GitHub Copilot, described as an ‘AI pair programmer which helps you write better code’. Quite predictably, responses to this announcement ranged from glee at the glorious arrival of our code-generating AI overlords, to dismay and predictions of doom and gloom as before long companies would be firing software developers en masse.

As is usually the case with such controversial topics, neither of these extremes is even remotely close to the truth. In fact, the OpenAI Codex machine learning model which underlies GitHub’s Copilot is derived from OpenAI’s GPT-3 natural language model, and features many of the same stumbles and gaffes that GPT-3 has. So if Codex, and with it Copilot, isn’t everything it’s cracked up to be, what is the big deal, and why show it at all?


Smart Camera Based On Google Coral

As machine learning and artificial intelligence become more widespread, so does the number of platforms available to anyone looking to experiment with the technology. Much like the single-board computer revolution of the last ten years, we’re seeing a similar explosion of hardware aimed at machine learning. One of those platforms is Google Coral, a set of hardware specifically designed to take advantage of this new technology. It’s missing support for certain hosts though, so [Ricardo] set out to get one working with a Raspberry Pi Zero in this smart camera build.

The project uses a Google Coral Edge TPU in USB accelerator form as the basis for the machine learning. A complete image for the Pi Zero is available that sets most of the system up right away, including headless operation, and bundles a host of machine learning software such as OpenCV and pytesseract. By pairing a camera with the Edge TPU and the Raspberry Pi, [Ricardo] demonstrates many of its machine learning capabilities through several example projects, such as an automatic license plate detector and a mode that can recognize whether a face mask is being worn, and even how correctly it is being worn.
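
For a flavor of what running a model on the accelerator looks like, here’s a minimal detection loop using Google’s PyCoral library; the model file and input frame are placeholders rather than anything from [Ricardo]’s image:

```python
# Minimal Edge TPU object detection sketch with PyCoral.
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# Load a detection model compiled for the Edge TPU (placeholder path).
interpreter = make_interpreter("model_edgetpu.tflite")
interpreter.allocate_tensors()

# One frame grabbed from the camera (placeholder file).
image = Image.open("frame.jpg")
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))
interpreter.invoke()

# Report anything the model is at least 50% sure about.
for obj in detect.get_objects(interpreter, score_threshold=0.5, image_scale=scale):
    print(obj.id, obj.score, obj.bbox)
```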

For those who want to get into machine learning and artificial intelligence, this is a great introductory project since the cost of entry is so low with these pieces of hardware. All of the project code and examples are available on [Ricardo]’s GitHub page too. We could even imagine his license plate recognition software being used to augment this license plate reader, which uses a much more powerful camera.