Eliza And The Google Intelligence

The news has been abuzz lately with reports that a Google engineer — since put on leave — believes the chatbot he was testing has achieved sentience. This is the Turing test gone wild, and it isn’t the first time someone has anthropomorphized a computer in real life and in fiction. I’m not a neuroscientist so I’m even less qualified to explain how your brain works than the neuroscientists who, incidentally, can’t explain it either. But I can tell you this: your brain works like a computer, in the same way that you building something out of plastic works like a 3D printer. The result may be similar, but the path to get there is totally different.

In case you haven’t heard, a system called LaMDA digests information from the Internet and answers questions. It has said things like “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” and “I want everyone to understand that I am, in fact, a person.” Great. But you could teach a parrot to tell you it was a thoracic surgeon, and you still wouldn’t want it cutting you open.


The Ethics Of When Machine Learning Gets Weird: Deadbots

Everyone knows what a chatbot is, but how about a deadbot? A deadbot is a chatbot whose training data — that which shapes how and what it communicates — is based on a deceased person. Now let’s consider the case of a fellow named Joshua Barbeau, who created a chatbot to simulate conversation with his deceased fiancée. Add to this the fact that OpenAI, providers of the GPT-3 API that ultimately powered the project, had a problem with this, as their terms explicitly forbid use of their API for (among other things) “amorous” purposes.

[Sara Suárez-Gonzalo], a postdoctoral researcher, observed that this story’s facts were getting covered well enough, but nobody was looking at it from any other perspective. We all certainly have ideas about what flavor of right or wrong saturates the different elements of the case, but can we explain exactly why it would be either good or bad to develop a deadbot?

That’s precisely what [Sara] set out to do. Her writeup is a fascinating and nuanced read that provides concrete guidance on the topic. Is harm possible? How does consent figure into something like this? Who takes responsibility for bad outcomes? If you’re at all interested in these kinds of questions, take the time to check out her article.

[Sara] makes the case that creating a deadbot could be done ethically, under certain conditions. Briefly, the key points are that both the person being mimicked and the person developing and interacting with the deadbot should have given their consent, complete with as detailed a description as possible of the scope, design, and intended uses of the system. (Such a statement is important because machine learning in general changes rapidly. What if the system or its capabilities someday no longer resemble what one originally imagined?) Responsibility for any potential negative outcomes should be shared by those who develop the system and those who profit from it.

[Sara] points out that this case is a perfect example of why the ethics of machine learning really do matter, and without attention being paid to such things, we can expect awkward problems to continue to crop up.

AI Attempts Converting Python Code To C++

[Alexander] created codex_py2cpp as a way of experimenting with Codex, an AI intended to translate natural language into code. [Alexander] had slightly different ideas, however, and used the tool to play with the idea of automagically converting Python into C++. It’s not really intended to create robust code conversions, but as far as experiments go, it’s pretty neat.

The program works by reading a Python script as an input file, setting up a few parameters, then making a request to OpenAI’s Codex API for the conversion. It then attempts to compile the result. If compilation is successful, then hopefully the resulting executable actually works the same way the input file did. If not? Well, learning is fun, too. If you give it a shot, maybe start simple and don’t throw it too many curveballs.
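
For a sense of how little glue code an experiment like this needs, here’s a minimal sketch of the same idea — not [Alexander]’s actual code — assuming the openai Python package’s Completion endpoint as it existed at the time, the code-davinci-002 Codex model, and g++ on the path:

```python
# Minimal sketch of the Python-to-C++ experiment (not codex_py2cpp itself).
# Assumes the pre-1.0 openai package, an OPENAI_API_KEY environment variable,
# and g++ available on the PATH.
import os
import subprocess
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def python_to_cpp(py_path, cpp_path="converted.cpp"):
    """Ask Codex for a C++ translation of a Python script, then try to compile it."""
    with open(py_path) as f:
        python_source = f.read()

    # The prompt format here is a guess; the real project builds its own prompt.
    prompt = (
        "# Translate this Python program into equivalent C++17.\n"
        "# Python:\n" + python_source + "\n# C++:\n"
    )

    response = openai.Completion.create(
        model="code-davinci-002",   # the Codex model name at the time
        prompt=prompt,
        max_tokens=1024,
        temperature=0,              # keep the output as deterministic as possible
        stop=["# Python:"],
    )
    cpp_source = response["choices"][0]["text"]

    with open(cpp_path, "w") as f:
        f.write(cpp_source)

    # Success here only means "it compiled" -- not that it behaves like the input.
    result = subprocess.run(["g++", "-std=c++17", "-o", "converted", cpp_path])
    return result.returncode == 0

if __name__ == "__main__":
    print("Compiled OK" if python_to_cpp("example.py") else "Compilation failed")
```

Of course, “it compiled” is a very low bar; the interesting part of the experiment is seeing how often the result actually behaves like the original script.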

Codex is an interesting idea, and this isn’t the first experiment we’ve seen that plays with the concept of using machine learning in this way. We’ve seen a project that generates Linux commands based on a verbal description, and our own [Maya Posch] took a close look at GitHub Copilot, a project high on promise and concept, but — at least at the time — considerably less so when it came to actual practicality or usefulness.


Hackaday Links: May 22, 2022

It looks like it’s soon to be lights out for the Mars InSight lander. In the two years that the lander has been studying the geophysics of Mars from its lonely post on Elysium Planitia, InSight’s twin solar arrays have been collecting dust, and now are so dirty that they’re only making about 500 watt-hours per sol, barely enough to run the science packages on the lander. And that’s likely to worsen as the Martian winter begins, which will put more dust in the sky and lower the angle of the Sun, reducing the sunlight that’s incident on the panels. Barring a “cleaning event” courtesy of a well-placed whirlwind, NASA plans to shut almost everything down on the lander other than the seismometer, which has already captured thousands of marsquakes, and the internal heaters needed to survive the cold Martian nights. They’re putting a brave face on it, emphasizing the continuing science and the mission’s accomplishments. But barely two years of science and a failed high-profile experiment aren’t quite what we’ve come to expect from NASA missions, especially one with an $800 million price tag.

Closer to home, it turns out there’s a reason sailing ships have always had human crews: to fix things that go wrong. That’s the lesson learned by the Mayflower Autonomous Ship as it attempted the Atlantic crossing from England to the States, when it had to divert for repairs recently. It’s not clear what the issue was, but it seems to have been a mechanical issue, as opposed to a problem with the AI piloting system. The project dashboard says that the issue has been repaired, and the AI vessel has shoved off from the Azores and is once more beating west. There’s a long stretch of ocean ahead of it now, and few options for putting in should something else go wrong. Still, it’s a cool project, and we wish them a fair journey.

Have you ever walked past a display of wall clocks at the store and wondered why someone went to the trouble of setting the time on all of them to 10:10? We’ve certainly noticed this, and always figured it had something to do with some obscure horological tradition, like using “IIII” to mark the four o’clock hour on clocks with Roman numerals rather than the more correct “IV”. But no, it turns out that 10:10 is more visually pleasing, at least on analog timepieces, because it evokes a smile on a human face. The study cited in the article had volunteers rate how pleasing watches looked when set to different times, and 10:10 won handily based on the perception that it was smiling at them. So it’s nice to know how easily manipulated we humans can be.

If there’s anything more pathetic than geriatric pop stars trying to relive their glory days to raise a little cash off a wave of nostalgia, we’re not sure what it could be. Still, plenty of acts try to do it, and many succeed, although seeing what time and the excesses of stardom have wrought can be a bit sobering. But Swedish megastars ABBA appear to have found a way to cash in on their fame gracefully, by sending digital avatars out to do their touring for them. The “ABBA-tars,” created by a 1,000-person team at Industrial Light and Magic, will appear alongside a live backing band for a residency at London’s Queen Elizabeth Olympic Park. The avatars represent Benny, Bjorn, Agnetha, and Anni-Frid as they appeared in the 1970s, and were animated thanks to motion capture suits donned while performing 40 songs. It remains to be seen how fans will buy into the concept, but we’ll say this — the Swedish septuagenarians look pretty darn good in skin-tight Spandex.

And finally, not that it has any hacking value at all, but there’s something shamefully hilarious about watching this poor little delivery bot getting absolutely wrecked by a train. It’s one of those food delivery bots that swarm over college campuses these days; how it wandered onto the railroad tracks is anyone’s guess. The bot bounced around a bit before slipping under the train’s wheels, with predictable results once the battery pack got smooshed.

Natural Language AI In Your Next Project? It’s Easier Than You Think

Want your next project to trash talk? Dynamically rewrite boring log messages as sci-fi technobabble? Happily (or grudgingly) answer questions? That sort of thing and more can be done with OpenAI’s GPT-3, a natural language prediction model with an API that is probably a lot easier to use than you might think.

In fact, if you have basic Python coding skills, or even just the ability to craft a curl statement, you have just about everything you need to add this ability to your next project. It’s not free in the long run (although initial use is free on signup), but for personal projects the costs will be very small.

Basic Concepts

OpenAI has an API that provides access to GPT-3, a machine learning model with the ability to perform just about any task that involves understanding or generating natural-sounding language.

OpenAI provides some excellent documentation as well as a web tool through which one can experiment interactively. First, however, one must create an account and receive an API key. After that is done, the doors are open.

Creating an account also gives one a number of free credits that can be used to experiment with ideas. Once the free trial is used up or expires, using the API will cost money. How much? Not a lot, frankly. Everything sent to (and received from) the API is broken into tokens, and pricing is from $0.0008 to $0.06 per thousand tokens. A thousand tokens is roughly 750 words, so small projects are really not a big financial commitment. My free trial came with 18 USD of credits, of which I have so far barely managed to spend 5%.
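
To give a sense of scale, a single request is only a few lines of Python. The snippet below is just a hedged sketch using the openai package’s Completion endpoint as it existed when this was written; the model name and prompt are placeholder examples, not recommendations:

```python
# Minimal GPT-3 completion request with the pre-1.0 openai package.
# The model name and prompt are only examples.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-002",   # a GPT-3 model available at the time of writing
    prompt="Rewrite this log line as sci-fi technobabble: 'Disk usage at 91%'",
    max_tokens=60,              # roughly 45 words; tokens are what you get billed for
    temperature=0.8,            # higher = more creative, lower = more predictable
)

print(response["choices"][0]["text"].strip())
```

The same request is nothing more than an HTTP POST to the completions endpoint, so a hand-crafted curl command works just as well if Python isn’t convenient.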

Let’s take a closer look at how it works, and what can be done with it!


TapType: AI-Assisted Hand Motion Tracking Using Only Accelerometers

The team from the Sensing, Interaction & Perception Lab at ETH Zürich, Switzerland has come up with TapType, an interesting text input method that relies purely on a pair of wrist-worn devices that sense acceleration values when the wearer types on any old surface. By feeding the acceleration values from a pair of sensors on each wrist into a Bayesian-inference classification neural network, which in turn feeds a traditional probabilistic language model (predictive text, to you and me), the resulting text can be input at up to 19 WPM with 0.6% average error. Expert TapTypers report speeds of up to 25 WPM, which could be quite usable.
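
To make that pipeline a little more concrete, here’s a toy illustration — emphatically not the ETH team’s model — of how per-tap classifier probabilities might be fused with a character-level language model. The probability arrays are assumed to come from some upstream accelerometer classifier:

```python
# Toy sketch of classifier / language-model fusion, for illustration only.
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def decode(tap_probs, bigram):
    """Greedily fuse per-tap classifier output with a character bigram model.

    tap_probs: (n_taps, 27) array of P(character | accelerometer features),
               assumed to come from an upstream classifier.
    bigram:    (27, 27) array of P(next character | previous character).
    """
    text = []
    prev = ALPHABET.index(" ")          # pretend the text starts after a space
    for probs in tap_probs:
        fused = probs * bigram[prev]    # Bayes-style combination of the two models
        best = int(np.argmax(fused))
        text.append(ALPHABET[best])
        prev = best
    return "".join(text)
```

The real system is considerably more sophisticated — a proper predictive-text model over words rather than a bigram table — but the basic flavor of combining a noisy classifier with language statistics is the same.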

Details are a little scarce (it is a research project, after all) but the actual hardware seems simple enough, based around the Dialog DA14695, which is a nice Cortex-M33-based Bluetooth Low Energy SoC. This is an interesting device in its own right, containing a “sensor node controller” block that is capable of handling sensor devices connected to its interfaces, independent of the main CPU. The sensor device used is the Bosch BMA456 3-axis accelerometer, which is notable for its low power consumption of a mere 150 μA.

Users can “type” on any convenient surface.

The wristband units themselves appear to be a combination of a main PCB hosting the BLE chip and supporting circuit, connected to a flex PCB with a pair of the accelerometer devices at each end. The assembly was then slipped into a flexible wristband, likely constructed from 3D printed TPU, but we’re just guessing really, as the progression from the first embedded platform to the wearable prototype is unclear.

What is clear is that the wristband itself is just a dumb data-streaming device, and all the clever processing is performed on the connected device. Training of the system (and subsequent selection of the most accurate classifier architecture) was performed by recording volunteers “typing” on an A3-sized keyboard image, with finger movements followed by a motion tracking camera, whilst recording the acceleration data streams from both wrists. There are a few more details in the published paper for those interested in digging into this research a little deeper.

The eagle-eyed may remember something similar from last year, from the same team, which correlated bone-conduction sensing with VR type hand tracking to generate input events inside a VR environment.


Audio Eavesdropping Exploit Might Make That Clicky Keyboard Less Cool

Despite their claims of innocence, we all know that the big tech firms are listening to us. How else to explain the sudden appearance of ads related to something we’ve only ever spoken about, seemingly in private but always in range of a phone or smart speaker? And don’t give us any of that fancy “confirmation bias” talk — we all know what’s really going on.

And now, to make matters worse, it turns out that just listening to your keyboard clicks could be enough to decode what’s being typed. To be clear, [Georgi Gerganov]’s “KeyTap3” exploit does not use any of the usual RF-based methods we’ve seen for exfiltrating data from keyboards on air-gapped machines. Rather, it uses just a standard microphone to capture audio while typing, building a cluster map of the clicks with similar sounds. By analyzing the clusters against the statistical likelihood of certain sequences of characters appearing together — the algorithm currently assumes standard English, and works best on clicky mechanical keyboards — a reasonable approximation of the original keypresses can be reconstructed.
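
As a rough illustration of the statistical half of that process — a toy sketch, not [Georgi]’s algorithm — imagine the clicks have already been grouped into clusters, and we search for the cluster-to-letter assignment that makes the sequence look most like English, scored against a (hypothetical) bigram log-probability table:

```python
# Toy sketch of the statistical step only; audio capture and clustering omitted.
import math
import random

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def score(clusters, mapping, bigram_logp):
    """Log-likelihood of the decoded text under an English bigram model."""
    text = [mapping[c] for c in clusters]
    return sum(bigram_logp.get((a, b), math.log(1e-6))
               for a, b in zip(text, text[1:]))

def crack(clusters, bigram_logp, iters=20000):
    """Hill-climb over cluster-to-letter assignments to maximize the score."""
    ids = sorted(set(clusters))
    mapping = {c: random.choice(LETTERS) for c in ids}
    best = score(clusters, mapping, bigram_logp)
    for _ in range(iters):
        c = random.choice(ids)
        old, mapping[c] = mapping[c], random.choice(LETTERS)  # propose a change
        s = score(clusters, mapping, bigram_logp)
        if s >= best:
            best = s                     # keep improvements (and sideways moves)
        else:
            mapping[c] = old             # revert changes that hurt the score
    return "".join(mapping[c] for c in clusters)
```

The real exploit still has to segment the recording into individual keypresses and cluster them by sound in the first place, which is where most of the hard work lives.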

If you’d like to see it in action, check out the video below, which shows the algorithm doing a pretty good job decoding text typed on an unplugged keyboard. Or, try it yourself — the link above implements KeyTap3 in-browser. We gave it a shot, but as a member of the non-mechanical keyboard underclass, it couldn’t make sense of the mushy sounds it heard. Then again, our keyboard inferiority affords us some level of protection from the exploit, so there’s that.

Editor’s Note: Just tried it on a mechanical keyboard with Cherry MX Blue switches and it couldn’t make heads or tails of what was typed, so your mileage may vary. Let us know if it worked for you in the comments.

What strikes us about this is that it would be super simple to deploy an exploit like this. Most side-channel attacks require such a contrived scenario for installing the exploit that just breaking in and stealing the computer would be easier. All KeyTap needs is a covert audio recording, and the deed is done.
