The Ethics Of When Machine Learning Gets Weird: Deadbots

Everyone knows what a chatbot is, but how about a deadbot? A deadbot is a chatbot whose training data (the data that shapes how and what it communicates) comes from a deceased person. Now let’s consider the case of a fellow named Joshua Barbeau, who created a chatbot to simulate conversation with his deceased fiancée. Add to this the fact that OpenAI, provider of the GPT-3 API that ultimately powered the project, had a problem with this: their terms explicitly forbid use of their API for (among other things) “amorous” purposes.
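
For context on how such a bot actually works: rather than training a model from scratch, services of this kind typically prime a large language model with a persona description plus some sample dialogue, then let the model complete the deceased person's side of the conversation. Below is a minimal sketch of that idea against the GPT-3-era Completions API; the persona text, model choice, and parameters here are all illustrative assumptions, not the actual setup [Barbeau] used.

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# Hypothetical persona preamble -- illustrative only, not the real prompt.
# The "personality" lives entirely in this text: the model simply continues
# whatever conversational pattern it is shown.
prompt = (
    "The following is a conversation with Alex, who was warm, witty, "
    "and loved painting.\n"
    "Human: Hi, it's me. I've missed you.\n"
    "Alex:"
)

response = openai.Completion.create(
    engine="davinci",    # base GPT-3 model of the era
    prompt=prompt,
    max_tokens=100,
    temperature=0.9,     # higher temperature yields more varied replies
    stop=["Human:"],     # stop before the model writes the human's side too
)

print(response.choices[0].text.strip())
```

Swap in transcripts of a real person's messages as the preamble and the model will imitate their diction, which is exactly what makes the ethics here so thorny.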

[Sara Suárez-Gonzalo], a postdoctoral researcher, observed that this story’s facts were getting covered well enough, but nobody was looking at it from any other perspective. We all certainly have ideas about what flavor of right or wrong saturates the different elements of the case, but can we explain exactly why it would be either good or bad to develop a deadbot?

That’s precisely what [Sara] set out to do. Her writeup is a fascinating and nuanced read that provides concrete guidance on the topic. Is harm possible? How does consent figure into something like this? Who takes responsibility for bad outcomes? If you’re at all interested in these kinds of questions, take the time to check out her article.

[Sara] makes the case that creating a deadbot could be done ethically, under certain conditions. Briefly, the key points are that both the person being mimicked and the person developing and interacting with the bot should have given their consent, complete with as detailed a description as possible of the scope, design, and intended uses of the system. (Such a statement is important because machine learning in general changes rapidly. What if the system or its capabilities someday no longer resemble what one originally imagined?) Responsibility for any potential negative outcomes should be shared by those who develop the system and those who profit from it.

[Sara] points out that this case is a perfect example of why the ethics of machine learning really do matter, and without attention being paid to such things, we can expect awkward problems to continue to crop up.

41 thoughts on “The Ethics Of When Machine Learning Gets Weird: Deadbots”

  1. This would provide an excellent opportunity for product sales! The emulated loved one could hawk products, and because they were loved/trusted, a purchase might be more likely.

    1. (Cue dubstep mix of ST:TOS music combined with “Twilight Zone” intro, incorporating hits and samples from Prodigy’s “Out of Space” and 2Unlimited’s “Twilight Zone” with some scratching of “Time warp again” sample just to oversaturate it completely.)

  2. Combine this with the deepfake-style stuff they did to replace deceased actors in Star Wars, and the new hologram tech they used for Tupac, and you’ve got a very interesting project. NK’s current Kim need never die…

    1. “New hologram tech”
      It’s Pepper’s Ghost, a technique that predates electricity. No holography, just a semi-transparent mirror reflecting an entirely 2D screen.

  3. This makes the philosophical argument that a person can be reduced to a fixed behavioral pattern like a recording of sorts, even before we get to the point of asking whether “intelligence” can be. Many others would argue that a person taken “out of context” like that would not be anything resembling a person – just a more complex Turing Test. A person is not a talking dummy that repeats phrases when you pull the cord at the back – yet that is exactly what today’s AI essentially is. Whether by traditional programming or neural networks, it is a special system built to find and repeat a given pattern.

    If a pattern does not exist, one will be created by the process, because a pattern of some sort is necessary for the programmers to test against to know that the system is operating correctly. The “discovery” of a pattern is then mistaken to mean that the person was or is a pattern, and therefore that the person has been successfully simulated. The programmers fall victim to their own Turing Test and begin to believe that they’ve actually constructed the real deal.

    Therefore the question becomes: is it ethical to construct an automated answering machine that impersonates any person, dead or alive, existing or imaginary? Already we have such things as robocallers that fool people into listening to advertisements by pretending to be human, or robots calling restaurants to make reservations.

    The present use and point of such machine agents is to mislead people.

  4. Deadbots are nothing; what you should be worried about is Meta et al. mining your interactions with all of your “friends” so as to build composite personalities that are better able to manipulate you, for a fee. What rights do you have over your own personality now? None. Currently your only protection is to avoid such social media environments.

  5. Morality in recent years appears to have been thrown out the window – any call to a definition of right and wrong has been thoroughly terminated. So what’s the problem?

    1. And there were the first few episodes of season 6 of “Buffy the Vampire Slayer”, where they had the Buffybot stand in for Buffy, who was dead at the time.

      The trope is quite a bit older than this, though, with stories of necromanced lost loves who return soulless as automatons or are otherwise not the same any more.

  6. If one never exits the grieving aspect of ‘healing’, no real healing occurs, imo. Not sure how that can be considered helpful. Seems like feeding some sort of delusion, no offense meant. It is my opinion that we are already good at fooling ourselves on various items (a serious gloss), and I’m not sure it would be overall helpful to perfect that even more; I suspect a 180-degree turn in the opposite direction would be vastly more helpful. Throw someone profiting on top, and I don’t think it’s even a question.

  7. I do remember a Star Trek TNG episode where one Reginald Barclay almost completely eschewed reality for a holodeck fantasy. The danger I see in this is people creating bots based on living people. The girl you wanted in high school and never dated, or being with a celebrity. At what point does it cross the line?

        1. Well then there’s the “roll your own reality” crowd, creating Tulpas of their favorite MLPs… there’s a rabbit hole you really don’t want to explore without a pocket pack of brain wipes and a spray bottle of eye bleach.

  8. Imagine such a construct being used to administer or further clarify a person’s last will and testament. What about having an AI copy of your personality make life decisions for you when you are in a coma?

    1. Let’s re-frame that slightly:

      One day you wake up and find that you’re actually an artificial copy whose continued existence depends on the original person remaining comatose in a hospital, or dead. If they wake up, you will be shut down. What would you do?

      1. Well, if it was “me”, I’d have to decide whether to enrich my re-awakened self with the knowledge (and cash) I made while they were out, or to just have it erased.

  9. In the movie “Her”, a deadbot is made by the OS, using the known recordings and papers of [Alan Watts?], who had died years before.
