Ask Hackaday: The Turing Test Is Dead: Long Live The Turing Test!

Alan Turing proposed a test for machine intelligence that no longer works. The idea was to have people communicate over a terminal, with another real person and with a computer. If the computer is intelligent, Turing mused, most people will incorrectly identify the computer as a human. Clearly, with the advent of modern chatbots, that test is now broken. Despite the “AI” moniker, chatbots aren’t sentient or even pre-sentient, but they certainly seem that way. Mustafa Suleyman, CEO of an AI company, is proposing a new test: the AI has to take a $100,000 budget and turn it into $1,000,000.

We were a little bemused by this. By that measure, most of us aren’t intelligent, either, and it seems like a particularly capitalistic idea. We could probably write an Excel script that studies mutual fund performance and pulls off the same trick, given enough time for the investment to mature. Is that intelligent? No. Besides, even humans who have demonstrated they can make $1,000,000 often sell their companies and start new ones that fail. How often does the AI have to succeed before we grant it person status?
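
Just how long “enough time” is can be worked out on the back of an envelope. A quick Python sketch (with an entirely hypothetical 7% annual return standing in for our imagined mutual-fund-watching Excel script) makes the point:

```python
# Back-of-the-envelope only: years for a passive 10x at a
# hypothetical 7% annual return. No intelligence required.
from math import log

principal, target, rate = 100_000, 1_000_000, 0.07
years = log(target / principal) / log(1 + rate)
print(f"{years:.0f} years")  # roughly 34 years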

Continue reading “Ask Hackaday: The Turing Test Is Dead: Long Live The Turing Test!”

What Do You Want In A Programming Assistant?

The Propellerheads released a song in 1998 entitled “History Repeating.” If you don’t know it, the lyrics include: “They say the next big thing is here. That the revolution’s near. But to me, it seems quite clear. That it’s all just a little bit of history repeating.” The next big thing today seems to be AI chatbots. We’ve heard every opinion, from the “revolutionize everything” camp to the “destroy everything” camp. But, really, isn’t it a bit of history repeating itself? We get new tech. Some oversell it. Some fear it. Then, in the end, it becomes part of the ordinary landscape and seems unremarkable in the light of the new next big thing. Dynamite, the steam engine, cars, TV, and the Internet were all predicted to “ruin everything” at some point in the past.

History really does repeat itself. After all, when X-rays were discovered, they were claimed to cure pneumonia and other infections, among other miracle claims. Those didn’t pan out, but we still use X-rays for the things they are good at. Calculators were going to ruin math classes. There are plenty of other examples.

This came to mind because a recent post from ACM takes the contrary view that chatbots aren’t able to help real programmers. We’ve also seen that — maybe — they can, in limited ways. We suspect it is like getting a new, larger monitor. At first, it seems huge. But in a week, it is just the normal monitor, and your old one — which had been perfectly adequate — seems tiny.

But we think there’s a larger point here. Maybe the chatbots will help programmers. Maybe they won’t. But clearly, programmers want some kind of help. We just aren’t sure what kind of help that is. Do we really want Copilot to write our code for us? Do we want to ask Bard or ChatGPT/Bing for the best way to balance a B-tree? Asking an AI to do static code analysis seems to work pretty well.

So maybe your path to fame and maybe even riches is to figure out — AI-based or not — what people actually want in an automated programming assistant and build that. Home computers languished until someone figured out what people wanted to do with them. Video cassettes didn’t make it into the home until companies figured out what people most wanted to watch on them.

How much and what kind of help do you want when you program? Or design a circuit or PCB? Or even a 3D model? Maybe AI isn’t going to take your job; it will just make it easier. We doubt, though, that it can much improve on Dame Shirley Bassey’s history lesson.

Why Did The Home Assistant Future Not Quite Work The Way It Was Supposed To?

The future, as seen in the popular culture of half a century or more ago, was usually depicted as quite rosy. Technology would have rendered every possible convenience at our fingertips, and we’d all live in futuristic automated homes — no doubt while wearing silver clothing and dreaming about our next vacation on Mars.

Of course, it’s not quite worked out this way. A family from 1965 whisked here in a time machine would miss a few things such as a printed newspaper, the landline telephone, or receiving a handwritten letter; they would probably marvel at the possibilities of the Internet, but they’d recognise most of the familiar things around us. We still sit on a sofa in front of a television for relaxation even if the TV is now a large LCD that plays a streaming service, we still drive cars to the supermarket, and we still cook our food much the way they did. George Jetson has not yet even entered the building.

The Future Is Here, and It Responds to “Alexa”

An Amazon Echo Dot device
“Alexa, why haven’t you been a commercial success?” Gregory Varnum, CC BY-SA 4.0

There’s one aspect of the Jetsons future that has begun to happen, though. It’s not the futuristic automation of projects such as Disneyland’s Monsanto House of the Future, but instead our current stuttering home automation efforts. We don’t have domestic robots in pinnies handing us rolled-up newspapers, but we’re installing smart lightbulbs and thermostats, and we’re voice-controlling them through a variety of home hub devices. The future is here, and it responds to “Alexa”.

But for all the success that Alexa and other devices like it have had in conquering the living rooms of gadget fans, they’ve done a poor job of generating a profit. Alexa was supposed to be a gateway into Amazon services alongside the Fire devices, a convenient household companion that would help find all those little things for sale on Amazon’s website and, of course, enable you to buy them. Then, Alexa was supposed to move beyond your Echo and into other devices, as your appliances came pre-equipped with Alexa-on-a-chip. Your microwave oven would no longer have a dial on the front; instead, you would talk to it, it would recognise the food you’d bought from Amazon, and it would order more for you.

Instead of all that, Alexa has become an interface for connected home hardware: a way to turn on the lights, view your Ring doorbell on models with screens, catch the weather forecast, and listen to music. It’s a novelty timepiece with that pod bay doors joke built in, and worse than that for the retailer, it remains by its very nature unseen. Amazon have got their shopping cart into your living room, but you’re not using it, and it hardly reminds you that it’s part of the Amazon empire at all.

But it wasn’t supposed to be that way. The idea was that you might look up from your work and say “Alexa, order me a six-pack of beer!”, and while it might not come immediately, your six-pack would duly arrive. It was supposed to be a friendly gateway to commerce on the website that has everything, and now they can’t even persuade enough people to give it a celebrity voice for a few bucks.

The Gadget You Love to Hate

In the first few days after the Echo’s UK launch, a member of my hackerspace installed his in the space. He soon became exasperated as members learned that “Alexa, add butt plug to my wish list” would do just that. But it was in that joke that we could see the problem with the whole idea of Alexa as an interface for commerce. He had locked down all purchasing options but, as it turned out, many people in San Diego hadn’t done the same thing. As the stories rolled in of kids spending hundreds of their parents’ hard-earned money on toys, it would have been a foolhardy owner who left purchasing enabled. Worse still, while the public remained largely in ignorance, the potential of the device for data gathering and unauthorized access hadn’t escaped researchers. It’s fair to say that our community has loved the idea of a device like the Echo, but many of us wouldn’t let one into our own homes under any circumstances.

So Alexa hasn’t been a success as a money-maker, even though it has been a huge sales success in itself. The devices have sold like hot cakes, but since they’ve been sold at close to cost, they haven’t been the commercial bonanza Amazon might have hoped for. But what can be learned from this, other than that the world isn’t ready for a voice-activated shopping trolley?

Sadly, it seems that a device piping your actions back to a large company’s data centres is not enough of a concern for most Alexa users. It’s an easy prediction that Alexa and other services like it will continue to evolve, with the inevitable AI pixie dust sprinkled on them. A good bet for the killer app is not a personal assistant but a virtual friend with connections across a group of people, perhaps a family or a group of friends. In due course we’ll also see locally hosted and open-source equivalents appearing on yet-to-be-released hardware that will condense what takes a data centre of today’s GPUs into a single-board computer. It’s not often that our community rejoices in being late to a technological party, but I for one want an Alexa equivalent that I control, rather than one that invades my privacy for a third party.

C64 Gets ChatGPT Access Via BBS

ChatGPT, powered by GPT-3.5 and GPT-4, has become one of the most popular large language models (LLMs), due to its ability to hold passable conversations and generate large tracts of text. Now, that very tool is available on the Commodore 64 via the Internet.

Obviously, a 6502 CPU with just 64 kilobytes of RAM can barely remember a dictionary, let alone work with something as complicated as a modern large language model. Nor is the world’s best-selling computer well equipped to connect to modern online APIs. Instead, the C64 can access ChatGPT through the Retrocampus BBS, as demonstrated by [Retro Tech or Die].

For security reasons, the ChatGPT area of the BBS is only available to the board’s Patreon members. Once in, though, you’re granted a prompt with ChatGPT displayed in glorious PETSCII on the Commodore 64. It’s all handled via a computer acting as a go-between for the BBS clients and OpenAI’s ChatGPT service, set up by board manager [Francesco Sblendorio]. It’s particularly great to see ChatGPT spitting out C64-compatible BASIC.
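
As a rough illustration of how such a go-between can work (this is not [Francesco Sblendorio]’s actual code; the port number, model choice, and character handling here are all assumptions), a few dozen lines of Python can relay a terminal client’s input to the OpenAI API and send the reply back as something a PETSCII terminal can render:

```python
# A minimal sketch of a BBS-style go-between, assuming the "openai"
# package (pip install openai) and OPENAI_API_KEY in the environment.
import socket

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HOST, PORT = "0.0.0.0", 6400  # hypothetical port for the retro client

def terminal_friendly(text: str) -> bytes:
    # Keep printable ASCII only and use CR line endings, which is
    # closer to what a C64 terminal program expects.
    clean = "".join(c for c in text if 32 <= ord(c) < 127 or c == "\n")
    return clean.replace("\n", "\r").encode("ascii")

with socket.create_server((HOST, PORT)) as srv:
    conn, addr = srv.accept()
    with conn, conn.makefile("rb") as reader:
        conn.sendall(b"READY.\r")
        for raw in reader:  # one prompt per line from the client
            prompt = raw.decode("ascii", errors="ignore").strip()
            if not prompt:
                continue
            resp = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
            )
            conn.sendall(terminal_friendly(resp.choices[0].message.content))
            conn.sendall(b"\r")
```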

While this is a fun use of ChatGPT, be wary of using it for certain tasks in wider society. Video after the break.

Continue reading “C64 Gets ChatGPT Access Via BBS”

AI Camera Imagines A Photo Of What You Point It At

These days, every phone has a camera, and few of us are ever without one. [Bjørn Karmann] has built an altogether not-camera, though, in the form of the Paragraphica, powered by artificial intelligence.

The Paragraphica doesn’t actually take photographs at all. Instead, it uses GPS to determine the user’s current position. It then feeds the address, time of day, weather, and temperature into a paragraph which serves as a prompt for an AI image generator. It also uses data gathered from various APIs to determine points of interest in the immediate area, and feeds those into the prompt as well. It then generates an artificial image that is intended to bear some resemblance to the prompt, and ideally, the real-world scene. In place of a lens, it bears a 3D printed structure inspired by the star-nosed mole, which feels its way around in lieu of using its eyes.
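
That prompt-building step is, at heart, string assembly from API data. A hypothetical sketch in Python (the field names and template here are invented, not Paragraphica’s actual ones) might look like this:

```python
# Hypothetical prompt assembly; Paragraphica's real template differs.
def build_prompt(address, time_of_day, weather, temp_c, nearby):
    pois = ", ".join(nearby)
    return (
        f"A photo taken at {address} in the {time_of_day}, "
        f"in {weather} weather at {temp_c} degrees C. "
        f"Nearby there is {pois}."
    )

print(build_prompt("a street corner in Copenhagen", "early evening",
                   "overcast", 14, ["a public park", "a row of parked cars"]))
```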

Three dials on the Paragraphica control its action. The first dial controls the radius of the area which the prompt will gather data about; it’s akin to setting the focal length of the lens. The second dial provides a noise seed value for the AI image generator, and the third dial controls how closely the AI sticks to the generated textual prompt.
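
The second and third dials map neatly onto knobs that most diffusion-based image generators already expose. A minimal sketch, assuming the Hugging Face diffusers package rather than whatever generator Paragraphica actually calls:

```python
# Sketch only: mapping the camera's dials onto generation parameters.
# Dial 1 (data radius) shapes the prompt text itself, so it has no
# direct pipeline equivalent; dials 2 and 3 map to seed and guidance.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

seed = 42       # dial 2: noise seed for the generator
guidance = 7.5  # dial 3: how closely to follow the prompt

image = pipe(
    prompt="A photo taken at a street corner in Copenhagen, early evening",
    guidance_scale=guidance,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]
image.save("not_a_photo.png")
```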

The results are impressive, if completely false and generated from scratch. The Paragraphica generates semi-believable photos of a crowded alley, a public park, and a laneway full of parked cars. It’s akin to telling a friend where you are and what you’re seeing over the phone, and having them paint a picture based on that description.

Through their unique abilities and stolen data sets, AI image generators are proving controversial to say the least. As all good art does, Paragraphica explores this and raises new questions of its own.

Neural Network Helps With Radar Pipeline Diagnostics

Diagnosing pipeline problems is important in industry to avoid costly or dangerous failures from cracked, broken, or damaged pipes. [Kutluhan Aktar] has built a system that uses AI to assist in this difficult task.

The core of the system is an MR60BHA1 60 GHz mmWave radar module, which is most typically used for breathing and heart rate detection. Here, it’s repurposed to detect fluctuating vibrations as a sign that a pipeline may be cracked or damaged. It’s paired with an Arduino Nicla Vision module, with the smart camera able to run a neural network model on the captured radar data to flag potential pipe defects and photograph them. The various modules are assembled on a PCB resembling Dragonite, the Dragon/Flying-type Pokémon.
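
The underlying signal-processing intuition is easy to demonstrate. This toy sketch (ours, not [Kutluhan]’s; the 2 Hz cutoff and sample rate are arbitrary) scores how much of a vibration trace’s energy sits above a slow baseline, which is roughly the kind of distinction a trained network learns to pick out:

```python
# Toy illustration only: flag unusual vibration energy in a trace.
# The real project trains a neural network on labelled defect data.
import numpy as np

def defect_score(samples: np.ndarray, fs: float = 20.0) -> float:
    """Fraction of spectral energy above 2 Hz, where crack-induced
    fluctuation (hypothetically) shows up."""
    spectrum = np.abs(np.fft.rfft(samples - samples.mean())) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    return float(spectrum[freqs > 2.0].sum() / (spectrum.sum() + 1e-9))

rng = np.random.default_rng(0)
t = np.arange(200) / 20.0                            # 10 s at 20 Sa/s
healthy = np.sin(2 * np.pi * 0.5 * t)                # slow, steady sway
cracked = healthy + 0.5 * rng.standard_normal(200)   # erratic jitter

for name, sig in (("healthy", healthy), ("cracked", cracked)):
    print(name, round(defect_score(sig), 2))
```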

[Kutluhan] walks us through the whole development process, including the creation of a web interface for the system. Of particular interest is the way the neural network was trained on real defect models that [Kutluhan] built using PVC pipe. We’ve looked at industrial pipelines in detail before, too. Video after the break.

Continue reading “Neural Network Helps With Radar Pipeline Diagnostics”

Simple Cubes Show Off AI-Driven Runtime Changes In VR

AR and VR developer [Skarredghost] got pretty excited about a virtual blue cube, and for a very good reason. It marked a successful prototype of an augmented reality experience in which the logic underlying the cube as a virtual object was changed by AI in response to verbal direction by the user. Saying “make it blue” did indeed turn the cube blue! (After a little thinking time, of course.)

It didn’t stop there, of course, and the blue cube proof of concept led to a number of simple demos. The first shows off a row of cubes changing color from red to green in response to music volume; then a bundle of cubes changes size in response to microphone volume, and cubes even start moving around in space.

The program accepts spoken input from the user, converts it to text, and sends it to a natural language AI model, which then creates the necessary modifications and loads them into the environment to make runtime changes in Unity. The workflow is a bit cumbersome and highlights many of the challenges involved, but it works, and that’s pretty nifty.
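
The flow is easier to see in miniature. This condensed Python sketch (assuming the SpeechRecognition and openai packages; [Skarredghost]’s version lives inside Unity in C#, with an extra step to load the generated code at runtime) mirrors the same speech-to-text-to-model pipeline:

```python
# Condensed sketch of the voice -> text -> model loop; the system
# prompt below is invented, not the project's actual prompt.
import speech_recognition as sr
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def listen() -> str:
    r = sr.Recognizer()
    with sr.Microphone() as mic:
        audio = r.listen(mic)
    return r.recognize_google(audio)  # speech -> text

def ask_model(command: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Return only code that modifies a scene object."},
            {"role": "user", "content": command},
        ],
    )
    return resp.choices[0].message.content  # code to load at runtime

print(ask_model(listen()))  # e.g. say "make it blue"
```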

The GitHub repository is here and a good demonstration video is embedded just under the page break. There’s also a video with a much more in-depth discussion of what’s going on and a frank exploration of the technical challenges.

If you’re interested in this direction, it seems [Skarredghost] has rounded up the relevant details. And should you have a prototype idea that isn’t necessarily AR or VR but would benefit from locally-run, AI-assisted speech recognition, this project has what you need.

Continue reading “Simple Cubes Show Off AI-Driven Runtime Changes In VR”