The Ethics Of When Machine Learning Gets Weird: Deadbots

Everyone knows what a chatbot is, but how about a deadbot? A deadbot is a chatbot whose training data — that which shapes how and what it communicates — comes from a deceased person. Now let’s consider the case of a fellow named Joshua Barbeau, who created a chatbot to simulate conversation with his deceased fiancée. Add to this the fact that OpenAI, provider of the GPT-3 API that ultimately powered the project, had a problem with it: their terms explicitly forbid use of the API for (among other things) “amorous” purposes.
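
What’s striking is how little engineering a deadbot actually requires. Joshua Barbeau’s bot reportedly ran on nothing more than a persona-seeded prompt, with no fine-tuning at all. Here’s a minimal sketch of the idea, assuming the legacy openai Python library of the GPT-3 era (the persona text and names are invented for illustration, and current versions of the SDK use a different interface):

```python
# Hypothetical sketch of a persona-seeded chatbot. The seed text and
# names are invented; only the legacy Completion call reflects the
# real GPT-3-era openai library (newer SDK versions differ).
import openai

openai.api_key = "sk-..."  # your API key

# A short persona seed built from things the deceased actually wrote:
# old text messages, emails, social media posts.
PERSONA_SEED = (
    "The following is a conversation with Jane. She is warm, "
    "teasing, and loves astronomy.\n\n"
    "Jane: i missed you today\n"
    "You: I missed you too.\n"
)

def reply(history: str, user_line: str) -> str:
    """Append the user's line and let the model continue as the persona."""
    prompt = history + "You: " + user_line + "\nJane:"
    response = openai.Completion.create(
        engine="davinci",   # the base GPT-3 model of the era
        prompt=prompt,
        max_tokens=100,
        temperature=0.9,    # more personality, less predictability
        stop=["\n"],        # stop at the end of the persona's line
    )
    return response.choices[0].text.strip()

print(reply(PERSONA_SEED, "Where are you right now?"))
```

A few paragraphs of seed text are enough to steer the model, which is exactly why the ethical questions below can’t be waved off as a concern for some distant future.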

[Sara Suárez-Gonzalo], a postdoctoral researcher, observed that the facts of this story were being covered well enough, but nobody was examining it from an ethical perspective. We all certainly have ideas about what flavor of right or wrong saturates the different elements of the case, but can we explain exactly why it would be either good or bad to develop a deadbot?

That’s precisely what [Sara] set out to do. Her writeup is a fascinating and nuanced read that provides concrete guidance on the topic. Is harm possible? How does consent figure into something like this? Who takes responsibility for bad outcomes? If you’re at all interested in these kinds of questions, take the time to check out her article.

[Sara] makes the case that creating a deadbot could be done ethically, under certain conditions. Briefly, the key points are that both the person being mimicked and the person developing and interacting with the deadbot should have given their consent, complete with as detailed a description as possible of the scope, design, and intended uses of the system. (Such a statement is important because machine learning in general changes rapidly. What if the system or its capabilities someday no longer resemble what one originally imagined?) Responsibility for any potential negative outcomes should be shared by those who develop the system and those who profit from it.

[Sara] points out that this case is a perfect example of why the ethics of machine learning really do matter, and without attention being paid to such things, we can expect awkward problems to continue to crop up.

Kipp Bradford Discusses The Entanglement Of Politics And Technology

Kipp Bradford wrapped up his keynote talk at the Hackaday Remoticon with a small piece of advice: don’t build bridges in the middle of the ocean. The point is that a bridge must connect two pieces of land to be useful, and if technology isn’t useful to humanity, does it matter at all?

In reality we build bridges in the middle of the ocean all the time, as each of us finds nonsensical reasons to learn new skills and try things out. But when it comes time to sit down and set an organized end goal, Kipp wisely asks us to consider the impact we’d like that work to have on the world. Equally importantly, how will we make sure completed work actually gets used? This is where the idea of politics in technology comes into play, in the sense that politics is a major mechanism for collective decision-making within a society.

Currently the CTO of Treau and a lecturer and researcher at Yale, Kipp delivered this keynote live on November 7th. He was an expert judge for the Hackaday Prize in 2017 and 2018. The video of his talk, and a deeper look at its topics, can be found below.

Continue reading “Kipp Bradford Discusses The Entanglement Of Politics And Technology”

Ethics Whiplash As Sonos Tries Every Possible Wrong Way To Handle IoT Right

We’re trying to figure out whether Sonos was doing the right thing, and it’s getting to the point where we need pins, a corkboard, and string. Sonos had been steadily expanding the functionality of their products until they hit a technical wall: how would they keep the old speakers working with the new ones? The solution they settled on struck a lot of people as completely bizarre.

First, none of the old speakers would receive updates anymore. Sad, but not unheard of. Next, they announced that if you bought a new speaker and ran it on the same network as an old one, neither speaker would get updates. That came off as a little hostile, punishing users for upgrading to newer products.

The final bit of weirdness was their solution for encouraging users to ditch their old products. They called it “trading in for a 30% discount”, but it was something else entirely. If a user went into the system menu of an old device and put it into “Recycle Mode”, the discount would be activated on their account. Recycle Mode would then, within 30 days, brick the device. There was no way to cancel, and once the device was bricked it wouldn’t come back. The user was then instructed to take the Sonos to a recycling center where it would be scrapped. Pictures soon began to surface of piles of bricked Sonos speakers. There was no chance to sell, repair, or otherwise keep alive what was still a fully functioning premium speaker system.
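
It’s worth dwelling on the mechanism, because the design choice is the ethical problem. Sonos never published how Recycle Mode worked internally, so the following is a purely hypothetical sketch with every name invented; the thing to notice is what’s missing from it:

```python
# Purely hypothetical sketch of device-side Recycle Mode logic.
# Sonos never documented the internals; every name here is invented.
import time

THIRTY_DAYS = 30 * 24 * 60 * 60  # countdown period, in seconds

def wipe_firmware(nvram: dict) -> None:
    """Stand-in for erasing the boot partition; the device never recovers."""
    nvram["bricked"] = True

def recycle_mode_tick(nvram: dict) -> None:
    """Called periodically by the device once Recycle Mode is armed."""
    armed_at = nvram.get("recycle_armed_at")
    if armed_at is None:
        return  # Recycle Mode was never activated
    # Conspicuously absent: a disarm command, a grace-period extension,
    # or any check for a change of heart before the deadline hits.
    if time.time() - armed_at >= THIRTY_DAYS:
        wipe_firmware(nvram)

# Arming it is a one-way door:
nvram = {"recycle_armed_at": time.time() - THIRTY_DAYS}  # armed 30 days ago
recycle_mode_tick(nvram)
print(nvram["bricked"])  # True, permanently
```

Writing a disarm path would have been trivial; the absence of one was a deliberate product decision.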

Why would a company do this to their customers and to themselves? Join me below for a guided tour of how the downsides of the IoT ecosystem may have driven this choice.

Continue reading “Ethics Whiplash As Sonos Tries Every Possible Wrong Way To Handle IoT Right”

VW Engineer Pleads Guilty To Conspiracy

[James Liang], an engineer at Volkswagen for 33 years, pleaded guilty today to conspiracy. He was one of the engineers involved in bringing to market diesel vehicles that could detect an emissions-test scenario and perform differently from normal operation in order to pass US emissions standards.

A year ago we talked about the Ethics in Engineering surrounding this issue. At the time we wondered why any engineer would go along with a plan to defraud customers. We may get an answer to this after all. [Mr. Liang] will cooperate with authorities as the VW probe continues.

According to information in the indictment, none of this happened by mistake (as we suspected). After the company discovered the engine could not otherwise pass, a team was assigned to develop a mode that would detect a test and pass inspection. It’s not hard to see the motivation behind this — think of the sunk cost in developing an engine design. The team responsible for cheating the tests went so far as to push software updates in 2014 that made the cheat better, and to lie about the existence of these software “features” when questioned by authorities (again, according to the indictment).
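
For a sense of how little code a scheme like this requires, consider a hypothetical sketch. Reporting on the case indicates the real software inferred a dynamometer test from inputs such as steering angle, speed profile, and how long the engine had been running; the thresholds and names below are invented for illustration:

```python
# Hypothetical sketch of a "defeat device". Thresholds and names are
# invented; reporting suggests the real software used inputs like
# steering angle, speed profile, and engine run time.

def looks_like_emissions_test(speed_kmh: float,
                              steering_angle_deg: float,
                              seconds_running: float) -> bool:
    """On a dynamometer the drive wheels turn while the steering wheel
    stays dead straight for many minutes -- a signature that almost
    never occurs on a real road."""
    return (speed_kmh > 0
            and abs(steering_angle_deg) < 0.5
            and seconds_running < 1800)

def select_calibration(speed_kmh: float,
                       steering_angle_deg: float,
                       seconds_running: float) -> str:
    if looks_like_emissions_test(speed_kmh, steering_angle_deg,
                                 seconds_running):
        return "low_nox_mode"  # full exhaust treatment, passes the test
    return "road_mode"         # better performance, far higher NOx output
```

The chilling part is not the complexity but the intent: a handful of comparisons, deliberately written and deliberately concealed.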

Phoenix Perry: Forward Futures

There were a lot of very technical talks at Hackaday Belgrade. That’s no surprise; this is Hackaday, after all. But every once in a while it’s good to lift our heads up from the bench, blow away some of the solder smoke, and remind ourselves of the reason we’re working on the next cool project. Try to take in the big picture. Why are you hacking?

[Phoenix Perry] raised a lot of big-think points in her talk, and she’s definitely hacking in order to bring more women into the field and make the creation of technology more accessible to everyone. Lofty goals, and not a project that’s going to be finished up this weekend. But if you’re going to make a positive difference in the world through what you love to do, it’s good to dream big and keep the larger goal in mind.

[Phoenix] is an engineer by training, game-coder by avocation, and a teacher for all the right reasons. She’s led a number of great workshops at the intersection of art and technology: from physical controllers for self-coded games to interactive music synthesis devices disguised as room-sized geodesic domes. And she is the founder of the Code Liberation Foundation, which teaches women technology through game coding. On one hand she’s a hacker, but on the other she’s got her eyes on a larger social goal.

Continue reading “Phoenix Perry: Forward Futures”

No Sex Please, We’re Robots

There was a time when technology would advance first and the debates over the ethical concerns it raised would follow. Lately, however, it seems ethical debate is (I hope) running ahead of the actual technology. Maybe that’s a good thing.

Case in point: A paper at Ethicomp 2015 from De Montfort University warns that having sex with robots may have negative effects on par with prostitution. You might think that this is an isolated academic concept, but apparently there is a conference titled The International Congress on Love and Sex with Robots. There’s even a 2008 book titled Love and Sex with Robots that is neither science fiction nor pornography.

Second case: Softbank has created a robot called [Pepper] that supposedly can understand human emotions. You know those license agreements that come with everything you buy and that you never really read? Here’s a translation of part of the one that comes with [Pepper]: “…owner must not perform any sexual act or other indecent behavior.”

Continue reading “No Sex Please, We’re Robots”

Ethics In Engineering: Volkswagen’s Diesel Fiasco

Every so often – and usually not under the best of circumstances – the field of engineering as a whole is presented with a teaching moment. Volkswagen is currently embroiled in a huge scandal involving emissions testing of 11 million diesel cars sold in recent years. It’s a problem that could cost VW dearly, to the tune of eighteen billion dollars in the US alone, and will, without a doubt, end the careers of more than a few Volkswagen employees. In terms of automotive scandals, this is bigger than Unsafe at Any Speed. This is a bigger scandal than the Ford Pinto’s proclivity to explode. This is engineering history in the making, and an enormously teachable moment for ethics in engineering.

Continue reading “Ethics In Engineering: Volkswagen’s Diesel Fiasco”