The Ethics Of When Machine Learning Gets Weird: Deadbots

Everyone knows what a chatbot is, but how about a deadbot? A deadbot is a chatbot whose training data — that which shapes how and what it communicates — comes from a deceased person. Consider the case of a fellow named Joshua Barbeau, who created a chatbot to simulate conversation with his deceased fiancée. Add to this the fact that OpenAI, provider of the GPT-3 API that ultimately powered the project, had a problem with this: their terms explicitly forbid use of the API for (among other things) “amorous” purposes.

[Sara Suárez-Gonzalo], a postdoctoral researcher, observed that this story’s facts were getting covered well enough, but nobody was looking at it from any other perspective. We all certainly have ideas about what flavor of right or wrong saturates the different elements of the case, but can we explain exactly why it would be either good or bad to develop a deadbot?

That’s precisely what [Sara] set out to do. Her writeup is a fascinating and nuanced read that provides concrete guidance on the topic. Is harm possible? How does consent figure into something like this? Who takes responsibility for bad outcomes? If you’re at all interested in these kinds of questions, take the time to check out her article.

[Sara] makes the case that creating a deadbot could be done ethically, under certain conditions. Briefly, the key points are that both the person being mimicked and the one developing and interacting with the system should give their consent, complete with as detailed a description as possible of the scope, design, and intended uses of the system. (Such a statement is important because machine learning in general changes rapidly. What if the system or its capabilities someday no longer resemble what one originally imagined?) Responsibility for any potential negative outcomes should be shared by those who develop the system and those who profit from it.

[Sara] points out that this case is a perfect example of why the ethics of machine learning really do matter, and without attention being paid to such things, we can expect awkward problems to continue to crop up.

Hackaday Podcast 171: Rent The Apple Toolkit, DIY An Industrial CNC, Or Save The Birds With 3D Printing

Join Hackaday Editor-in-Chief Elliot Williams and Staff Writer Dan Maloney for a tour of the week’s best and brightest hacks. We begin with a call for point-of-sale diversity, because who wants to carry cash? We move on to discussing glass as a building material, which isn’t really easy, but at least it can be sintered with a DIY-grade laser. Want to make a call on a pay phone in New York City? Too late — the last one is gone, and we offer a qualified “good riddance.” We look at socially engineering birds to get them away from what they should be really afraid of, discuss Apple’s potential malicious compliance with right-to-repair, and get the skinny on an absolute unit of a CNC machine. Watching TV? That’s so 2000s, but streaming doesn’t feel quite right either. Then again, anything you watch on a mechanical color TV is pretty cool by definition.

Direct Download link

Check out the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!

Continue reading “Hackaday Podcast 171: Rent The Apple Toolkit, DIY An Industrial CNC, Or Save The Birds With 3D Printing”

AI Attempts Converting Python Code To C++

[Alexander] created codex_py2cpp as a way of experimenting with Codex, an AI intended to translate natural language into code. [Alexander] had slightly different ideas, however, and instead used Codex to play with automagically converting Python into C++. It’s not really intended to create robust code conversions, but as far as experiments go, it’s pretty neat.

The program works by reading a Python script as an input file, setting up a few parameters, then making a request to OpenAI’s Codex API for the conversion. It then attempts to compile the result. If compilation is successful, then hopefully the resulting executable actually works the same way the input file did. If not? Well, learning is fun, too. If you give it a shot, maybe start simple and don’t throw it too many curveballs.
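For flavor, here’s what the core loop of such an experiment might look like — a minimal sketch, not [Alexander]’s actual code, assuming the pre-1.0 openai Python package and the Codex `code-davinci-002` engine (which OpenAI has since deprecated):

```python
# Minimal sketch: ask Codex for a translation, then see if it compiles.
# Engine name, prompt framing, and file handling are assumptions here,
# not lifted from codex_py2cpp itself.
import subprocess
import openai

openai.api_key = "sk-..."  # your OpenAI API key

def py2cpp(py_path: str, cpp_path: str = "out.cpp") -> bool:
    with open(py_path) as f:
        source = f.read()

    # Frame the request as a translation; Codex continues from the C++ marker.
    prompt = f"# Python\n{source}\n\n// C++ translation of the above\n"
    response = openai.Completion.create(
        engine="code-davinci-002",  # Codex engine (since deprecated)
        prompt=prompt,
        max_tokens=1024,
        temperature=0,              # deterministic output for code
    )

    with open(cpp_path, "w") as f:
        f.write(response.choices[0].text)

    # The real test: does the result even compile?
    result = subprocess.run(["g++", "-o", "out", cpp_path])
    return result.returncode == 0

if __name__ == "__main__":
    print("Compiled OK" if py2cpp("input.py") else "Compilation failed")
```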

Codex is an interesting idea, and this isn’t the first experiment we’ve seen that plays with the concept of using machine learning in this way. We’ve seen a project that generates Linux commands based on a verbal description, and our own [Maya Posch] took a close look at GitHub Copilot, a project high on promise and concept, but — at least at the time — considerably less so when it came to actual practicality or usefulness.

Hackaday Podcast 170: Poop Shooting Laser, Positron Is A 3D Printer On Its Head, DIY Pulsar Capture, GPS’s Achilles Heel

Join Hackaday Editor-in-Chief Elliot Williams and Managing Editor Tom Nardi for a recap of all the best tips, hacks, and stories of the past week. We start things off with an update on Hackaday’s current slate of contests, followed by an exploration of the cutting edge in 3D printing and printables. Next up we’ll look at two achievements in detection, as commercial off-the-shelf hardware is pushed into service by unusually dedicated hackers to identify both dog poop and deep space pulsars (but not at the same time). We’ll also talk about fancy Samsung cables, homebrew soundcards, the surprising vulnerability of GPS, and the development of ratholes in your cat food.

Direct Download link

Check out the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!

Continue reading “Hackaday Podcast 170: Poop Shooting Laser, Positron Is A 3D Printer On Its Head, DIY Pulsar Capture, GPS’s Achilles Heel”

European Roads See First Zero-Occupancy Autonomous Journey

We write a lot about self-driving vehicles here at Hackaday, but it’s fair to say that most of the limelight has fallen upon large and well-known technology companies on the west coast of the USA. It’s worth drawing attention to other parts of the world where just as much research has gone into autonomous transport, and on that note there’s an interesting milestone from Europe. The British company Oxbotica has successfully made the first zero-occupancy on-road journey in Europe, on a public road in Oxford, UK.

The glossy promo video below the break shows the feat as the vehicle, with number plates signifying its on-road legality, drives round the relatively quiet roads of one of the city’s technology parks, and promises a bright future of local deliveries and urban transport. The vehicle itself is interesting: it’s a platform supplied by the Aussie outfit AppliedEV, an electric spaceframe vehicle designed to provide a versatile base for autonomous transport. As such, unlike so many of the aforementioned high-profile vehicles, it has no passenger cabin and no on-board driver to take the wheel in a calamity; instead it’s driven by Oxbotica’s technology and has their sensor pylon attached to its centre.

Continue reading “European Roads See First Zero-Occupancy Autonomous Journey”

Natural Language AI In Your Next Project? It’s Easier Than You Think

Want your next project to trash talk? Dynamically rewrite boring log messages as sci-fi technobabble? Happily (or grudgingly) answer questions? That sort of thing and more can be done with OpenAI’s GPT-3, a natural language prediction model with an API that is probably a lot easier to use than you might think.

In fact, if you have basic Python coding skills, or even just the ability to craft a curl statement, you have just about everything you need to add this ability to your next project. It’s not free in the long run (although initial use is free on signup), but for personal projects the costs will be very small.

Basic Concepts

OpenAI has an API that provides access to GPT-3, a machine learning model with the ability to perform just about any task that involves understanding or generating natural-sounding language.

OpenAI provides some excellent documentation as well as a web tool through which one can experiment interactively. First, however, one must create an account and receive an API key. After that is done, the doors are open.

Creating an account also gives one a number of free credits that can be used to experiment with ideas. Once the free trial is used up or expires, using the API will cost money. How much? Not a lot, frankly. Everything sent to (and received from) the API is broken into tokens, and pricing is from $0.0008 to $0.06 per thousand tokens. A thousand tokens is roughly 750 words, so small projects are really not a big financial commitment. My free trial came with 18 USD of credits, of which I have so far barely managed to spend 5%.
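For a sense of scale, a single request is only a few lines. Here’s a minimal sketch using the openai Python package of the era (pre-1.0) and the `text-davinci-002` engine; the prompt is just an example:

```python
# Minimal GPT-3 completion request (pip install openai).
# Engine name and prompt are examples, not requirements.
import openai

openai.api_key = "sk-..."  # from your OpenAI account dashboard

response = openai.Completion.create(
    engine="text-davinci-002",   # a general-purpose GPT-3 engine
    prompt="Rewrite as sci-fi technobabble: 'Disk usage is at 91%.'",
    max_tokens=60,               # cap the (billable) output length
    temperature=0.8,             # higher = more creative output
)

print(response.choices[0].text.strip())
```

Keep in mind that both the prompt and the completion count toward the token bill, which is one reason capping max_tokens is a good habit.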

Let’s take a closer look at how it works, and what can be done with it!

Continue reading “Natural Language AI In Your Next Project? It’s Easier Than You Think”

Need A Snack From Across Town? Send Spot!

[Dave Niewinski] clearly knows a thing or two about robots, judging from his YouTube channel. Usually the projects involve robot arms mounted on some sort of wheeled platform, but this time it’s to the tune of some pretty famous yellow robot legs, in the shape of Spot from Boston Dynamics. The premise is simple — tell the robot what snacks you want, entirely by voice command, and off he goes to fetch them. But, we’re not talking about navigating to the fridge in the same room. We’re talking about trotting out the front door, down the street, and crossing roads to visit a favorite restaurant. Spot will order the snacks and bring them back, fully autonomously.

[Image: Spot’s depth cameras provide localized navigation and object avoidance information, while a local AI vision system handles avoiding those pesky moving objects]

There are multiple things going on here, all of which are pretty big computational tasks. Firstly, there is no cloud-based voice control, à la Google Assistant or Alexa. The robot works on the premise of full autonomy, which means no internet connectivity for any aspect. All voice recognition, voice-to-text, and speech synthesis are performed locally using the NVIDIA Riva GPU-based AI speech SDK, running on the NVIDIA Jetson AGX Orin carried on Spot’s back. A front-facing webcam supplies the audio feed for this. The voice recognition application listens for the wake phrase, then turns the snack order into text for later replay when it gets to the destination.

Navigation is taken care of with a Microstrain RTK GNSS module, which has all the needed robustness, such as dual antennas and inertial fallback for those regions with a spotty signal. Navigation is no use out in the real world on its own, which is where Spot’s depth sensor cameras come in. These enable local obstacle avoidance, as per the usual Spot behavior we’ve all seen before. But what about crossing the road without getting tens of thousands of dollars of someone else’s hardware crushed by a passing truck? Spot’s onboard streaming cameras are fed into the NVIDIA DashCamNet AI model, which enables real-time recognition of moving obstacles such as cars, humans, and anything else that might be wandering around and get in the way.

All in all, a cool project showing the future potential of AI in robotics for important tasks, like fetching me a beer when I most need it, even if it comes from the local corner shop.
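For a sense of how those pieces might hang together, here’s a hedged, high-level sketch. Every function below is a hypothetical stub standing in for the real integrations described above (the Riva speech pipeline, GNSS waypoint navigation, DashCamNet detection); [Dave]’s actual code isn’t shown in this form:

```python
# High-level sketch of the fetch-a-snack flow, with every helper stubbed
# out so the structure is clear. None of these names come from the project.

def wait_for_wake_phrase():            # Riva ASR listening on the webcam mic
    pass

def transcribe_order() -> str:         # speech-to-text for the snack order
    return "one large coffee, please"

def synthesize_speech(text: str):      # Riva TTS, rendered up front for replay
    return text.encode()

def route_to(destination: str):        # RTK GNSS waypoints, inertial fallback
    return ["waypoint-1", "waypoint-2"]

def safe_to_proceed() -> bool:         # DashCamNet: any cars or people nearby?
    return True

def walk_to(waypoint: str):            # depth cameras handle local avoidance
    pass

def play_audio(audio: bytes):          # replay the order at the counter
    pass

def fetch_snack():
    wait_for_wake_phrase()
    order_audio = synthesize_speech(transcribe_order())
    for waypoint in route_to("favorite restaurant"):
        while not safe_to_proceed():   # wait out moving obstacles
            pass
        walk_to(waypoint)
    play_audio(order_audio)            # "speak" the order on arrival
    for waypoint in route_to("home"):
        walk_to(waypoint)

fetch_snack()
```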

We love robots around here. Robots can mow your lawn, navigate inside your house with a little help from invisible QR codes, and even help out with growing your food. The robot-assisted future, long promised, may now be looking more like the present.

Continue reading “Need A Snack From Across Town? Send Spot!”