Ask Hackaday: Using CoPilot? Are You Entertained?

There’s a great debate these days about what the current crop of AI chatbots should and shouldn’t do for you. We aren’t wise enough to know the answer, but we were interested in hearing what is, apparently, Microsoft’s take on it. Looking at their terms of service for Copilot, we read in the original bold:

Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.

While that’s good advice, we are pretty sure we’ve seen people use LLMs, including Copilot, for decidedly non-entertaining tasks. But, at least for now, if you are using Copilot for non-entertainment purposes, you are violating the terms of service.

Legal

While we know how it is when lawyers get involved in anything, we can’t help but think this is simply a hedge so that when Copilot gives you the wrong directions or a recipe for cake that uses bleach, they can say, “We told you not to use this for anything.”

It reminds us of the Prohibition-era product called a grape block. It featured a stern warning on the label that said: “Warning. Do not place product in one quart of water in a cool, dark place for more than two weeks, or else an illegal alcoholic beverage will result.” That doesn’t fool anyone.

We get it. They are just covering their… bases. When you do something stupid based on output from Copilot, they can say, “Oh, yeah, that was just for entertainment.” But they know what you are doing, and they even encourage it. Heck, they’re doing it themselves. Would it stand up in court? We don’t know.

Others

Now it is true that probably everyone will give you a similar warning. OpenAI, for example, has this to say:

  • Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.
  • You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.
  • You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
  • Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.

Notice that it doesn’t pretend you are only using it for a chuckle. Anthropic has even more wording, but still stops short of pretending to be a party game. Copilot, on the other hand, is for fun.

Your Turn

How about you? Do you use any of the LLMs for anything other than “entertainment?” If you do, how do you validate the responses you get?

When things do go wrong, who should be liable? There have been court cases where LLM companies have been sued over everything from users' suicides to defamation. Are the companies behind these tools responsible? Should they be?

Let us know what you think in the comments.

6 thoughts on "Ask Hackaday: Using CoPilot? Are You Entertained?"

  1. This is a good portrait of sober AI use, sans evangelism and utopian idealism. I use Google AI Studio, which has a beautiful web interface. Something like Gemini but more IDE-esque, imo. Gemini is not so useful; collating my prompt history data and plopping it arbitrarily into any old prompt is useless to me. I mainly use LLMs for entertainment, and I have them write scripts for me when I get bored. 5% of these scripts hit a margin of quality I am pleased with; the other 95% of the time I am reprompting to inch closer to the desired output.

  2. Not for anything production. Not for code.
    Just for faster research, like clicking through wikipedia (18 years ago).

    On topics that I actually am proficient in, the results vary between too much babble and false information. So I assume it is just as wrong on any other topic.

    The only benefit is that you can ask questions in natural language and can get some inspiration in new directions. Anything else needs proper sources.

  3. This is for the "free" version, by the way. Whatever the heck they call the one intended for "professional" use lacks this warning.

  4. AI is very good at summarizing console logs from my macOS machine. It's kind of amazing how well it works for this. It probably doesn't hurt that, if it's wrong, I am unlikely to notice the error…

  5. As more and more AI “slop” lies around on the internet, I wonder what is going to happen with future AIs that are trained on that slop. Could be a very sloppy future.
