Getty Images Is Suing An AI Image Generator For Using Its Images

As per the Getty Images legal complaint, the Stable Diffusion AI seems to reproduce gooey versions of the Getty Images watermark in some of its output. Credit: Getty Images

Many AI systems require huge training datasets in order to achieve their impressive feats. This applies whether you’re talking about an AI that works with images, natural language, or just about anything else. AI developers are starting to come under scrutiny over where they source their datasets. Unsurprisingly, stock photo site Getty Images is at the forefront of this scrutiny, and is now suing the creators of Stable Diffusion over the matter, as reported by The Verge.

Stability AI, the company behind Stable Diffusion, is the target of the lawsuit for one good reason: there’s compelling evidence the company used Getty Images content without permission. The Stable Diffusion AI has been seen to generate output images that include blurry approximations of the Getty Images watermark. That’s something of a smoking gun, suggesting that Stability AI may have scraped Getty Images content for use as training material.

The copyright implications are unclear, but using any imagery from a stock photo database without permission is always asking for trouble. Various arguments will likely play out in court. Stability AI will likely claim that its activity falls under fair use, while Getty Images may argue that the distorted versions of its watermark showing up in generated images break trademark rules. The lawsuit could have serious implications for AI image generators worldwide, and is sure to be watched closely by the nascent AI industry. As with any legal matter, just don’t expect a quick answer from the courts.

[Thanks to Dan for the tip!]

Does Programming A Robot With ChatGPT Work At All?

ChatGPT has been put to all manner of silly uses since it first became available online. [Engineering After Hours] decided to see if its coding skills were any chop, and put it to work programming a circular saw. Pun intended.

The aim was to build a line following robot armed with a circular saw to handle lawn edging tasks.  The circular saw itself consists of a motor with a blade on it, and precisely no safety features. It’s mounted on the front of a small RC car with a rack and pinion to control its position. [Engineering After Hours] has some sage advice in this area: don’t try this at home.

ChatGPT not only gave advice on what parts to use, it told [Engineering After Hours] how to hook everything up to an Arduino and even wrote the code. The language model went so far as to recommend a PID loop to control the position of the circular saw. Initial tests were messy, but some refinement got things impressively functional.
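
The post doesn’t reproduce the generated code, but the PID suggestion is simple enough to sketch. Here’s a minimal, hypothetical example of the kind of loop ChatGPT might have proposed, written in Python for clarity rather than as Arduino code; the gains, sensor, and actuator functions are placeholders, not [Engineering After Hours]’s actual implementation:

```python
# Hypothetical PID sketch for positioning the saw carriage on its rack and
# pinion. Gains, sensor, and actuator functions are placeholders, not the
# code ChatGPT actually produced for this project.
import time

KP, KI, KD = 1.2, 0.05, 0.3  # made-up tuning values


def read_position_error():
    """Return target position minus measured carriage position (placeholder)."""
    return 0.0


def drive_carriage(output):
    """Command the rack-and-pinion motor (placeholder)."""
    pass


def pid_loop():
    integral = 0.0
    last_error = 0.0
    last_time = time.monotonic()
    while True:
        now = time.monotonic()
        dt = now - last_time
        error = read_position_error()
        integral += error * dt
        derivative = (error - last_error) / dt if dt > 0 else 0.0
        drive_carriage(KP * error + KI * integral + KD * derivative)
        last_error, last_time = error, now
        time.sleep(0.01)  # roughly a 100 Hz control loop
```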

As a line following robot, the performance is pretty crummy. However, as a robot programmed by an AI, it does pretty okay. Obviously, it’s hard to say how much help the AI had, and how many corrections [Engineering After Hours] had to make to the code to get everything working. But the fact that this kind of project is even possible shows us just how far AI has really come.

Continue reading “Does Programming A Robot With ChatGPT Work At All?”

With ChatGPT, Game NPCs Get A Lot More Interesting

Not only is AI-driven natural language processing a thing now, but you can even select from a number of different offerings, each optimized for different tasks. It took very little time for [Bloc] to mod a computer game to allow the player to converse naturally with non-player characters (NPCs) by hooking it into ChatGPT, a large language model AI optimized for conversational communication.

If you can look past the painfully long loading times, even buying grain (7:36) gains a new layer of interactivity.

[Bloc] modified the game Mount & Blade II: Bannerlord to reject traditional dialogue trees and instead accept free-form text inputs, using ChatGPT on the back end to create more natural dialogue interactions with NPCs. This is a refinement of an earlier mod [Bloc] made and shared, so what you see in the video below is quite a bit more than a proof of concept. The NPCs communicate as though they are aware of surrounding events and conditions in the game world, are generally less forthcoming when talking to strangers, and the new system can interact with game mechanics and elements such as money, quests, and hirelings.
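
The post doesn’t detail the mod’s internals, but the general pattern of bolting a large language model onto an NPC is straightforward to outline. The sketch below is our own rough illustration, not [Bloc]’s code: it assumes the OpenAI chat API and stuffs a hypothetical slice of game state into the system prompt so replies can reference gold, quests, and nearby events.

```python
# Rough illustration of an LLM-backed NPC; this is not [Bloc]'s mod code.
# Game state goes into the system prompt so the character can refer to
# money, quests, and local events. All names and fields are hypothetical.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def npc_reply(npc, game_state, history, player_line):
    system = (
        f"You are {npc['name']}, a {npc['role']} in Mount & Blade II: Bannerlord. "
        f"You are wary of strangers. Player gold: {game_state['gold']}. "
        f"Active quests: {', '.join(game_state['quests']) or 'none'}. "
        f"Nearby events: {game_state['events']}. Stay in character and keep it brief."
    )
    messages = [{"role": "system", "content": system}]
    messages += history  # prior lines exchanged with this NPC
    messages.append({"role": "user", "content": player_line})
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message.content


# Example: asking a peasant about the local bandit problem.
peasant = {"name": "Aldric", "role": "peasant farmer"}
state = {
    "gold": 240,
    "quests": ["Deal with the bandits"],
    "events": "bandits raiding outlying farms",
}
print(npc_reply(peasant, state, [], "Tell me about the bandits troubling your village."))
```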

Starting around 1:08 into the video, [Bloc] talks to a peasant about some bandits harassing the community, and from there demonstrates hiring some locals and haggling over prices before heading out to deal with the bandits.

The downside is that ChatGPT is currently amazingly popular. As a result, [Bloc]’s mod is stuck using an overloaded service, which means some painfully long load times between each exchange. But if you can look past that, it’s a pretty fascinating demonstration of what’s possible by gluing two systems together with a mod and some clever coding.

Take a few minutes to check out the video, embedded below. And if you’re more of a tabletop gamer? Let us remind you that it might be fun to try replacing your DM with ChatGPT.

Continue reading “With ChatGPT, Game NPCs Get A Lot More Interesting”

Now ChatGPT Can Make Breakfast For Me

The world is abuzz with tales of the ChatGPT AI chatbot, and how it can do everything, except perhaps make the tea. It seems it can write code, which is pretty cool, so if it can’t make the tea as such, can it make the things I need to make some tea? I woke up this morning, and after lying in bed checking Hackaday I wandered downstairs to find some breakfast. But disaster! Some burglars had broken in and stolen all my kitchen utensils! All I have is my 3D printer and laptop, which curiously have little value to thieves compared to a set of slightly chipped crockery. What am I to do!

Never Come Between A Hackaday Writer And Her Breakfast!

OK Jenny, think rationally. They’ve taken the kettle, but I’ve got OpenSCAD and ChatGPT. Those dastardly miscreants won’t come between me and my breakfast, I’m made of sterner stuff! Into the prompt goes the following query:

"Can you write me OpenSCAD code to create a model of a kettle?" Continue reading “Now ChatGPT Can Make Breakfast For Me”

Detecting Machine-Generated Content: An Easier Task For Machine Or Human?

In today’s world we are surrounded by various sources of written information, information which we generally assume to have been written by other humans. Whether this is in the form of books, blogs, news articles, forum posts, feedback on a product page or the discussions on social media and in comment sections, the assumption is that the text we’re reading has been written by another person. However, over the years this assumption has become ever more likely to be false, most recently due to large language models (LLMs) such as GPT-2 and GPT-3 that can churn out plausible paragraphs on just about any topic when requested.

This raises the question of whether we are about to reach a point where we can no longer be reasonably certain that an online comment, a news article, or even entire books and film scripts weren’t churned out by an algorithm, or perhaps even where an online chat with a new sizzling match turns out to be just you getting it on with an unfeeling collection of code that was trained and tweaked for maximum engagement with customers. (Editor’s note: no, we’re not playing that game here.)

As such machine-generated content and interactions begin to play an ever bigger role, two questions arise: how do you detect this generated content, and does it matter that it was produced by an algorithm rather than a human being?
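
The full article digs into the detection side after the break, but one widely used heuristic, which may or may not be the approach discussed there, is to score text with a language model and treat suspiciously low perplexity as a hint of machine authorship, since LLM output tends to be statistically unsurprising. Here’s a minimal sketch using GPT-2 via the Hugging Face transformers library, with an entirely arbitrary threshold:

```python
# Perplexity-based heuristic for flagging possibly machine-generated text.
# This is a simplified illustration, not a reliable detector; the threshold
# is arbitrary and real tools combine many more signals.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()


sample = "The quick brown fox jumps over the lazy dog near the riverbank."
ppl = perplexity(sample)
print(f"Perplexity: {ppl:.1f}")
print("Possibly machine-generated" if ppl < 30 else "Looks more human-written")
```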

Continue reading “Detecting Machine-Generated Content: An Easier Task For Machine Or Human?”

The Voice Of ChatGPT Is Now On The Air

AIs can now apparently carry on a passable conversation, depending on what you classify as passable conversation. The quality of your local pub’s banter aside, an AI stuck in a text box doesn’t have much of a living, human quality. An AI that holds a conversation aloud, though, is another thing entirely. [William Franzin] has whipped up just that on amateur radio. (Video, embedded below.)

The concept is straightforward, if convoluted. A DSTAR digital voice transmission is received and transcoded to regular digital audio. That audio then goes through a voice recognition engine, and the transcribed text is used as a prompt for ChatGPT. The AI’s reply is fed to a text-to-speech engine, which speaks it back over the airwaves in its own voice.
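
[William]’s exact implementation isn’t spelled out here, but the chain of off-the-shelf pieces is easy to picture. Below is a rough, hypothetical sketch of that speech-to-text, ChatGPT, text-to-speech pipeline; the library choices and file name are our own assumptions, not necessarily what [William] used, and the radio hardware side is left out entirely.

```python
# Illustrative speech-in, speech-out ChatGPT pipeline, not [William Franzin]'s
# actual DSTAR setup. Assumes the received transmission has already been
# transcoded to a plain WAV file on disk.
import speech_recognition as sr
import pyttsx3
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def answer_transmission(wav_path: str) -> str:
    # 1. Speech to text on the received audio
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    question = recognizer.recognize_google(audio)

    # 2. Ask ChatGPT the transcribed question
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    answer = resp.choices[0].message.content

    # 3. Text to speech; in the real setup this audio would be keyed
    #    back out over the transmitter
    tts = pyttsx3.init()
    tts.say(answer)
    tts.runAndWait()
    return answer


# Hypothetical recording of a received question
print(answer_transmission("received_transmission.wav"))
```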

[William] demonstrates the system, keying up a transmitter to ask the AI how to get an amateur radio licence. He gets a pretty comprehensive reply in return.

The result is that radio amateurs can call in to ChatGPT with questions, and receive actual spoken responses from the AI. We can imagine that within the next month, AIs will be chatting it up all over the airwaves with similar setups. After all, a few robots could only add more diversity to the already rich and varied ham radio community. Video after the break.

Continue reading “The Voice Of ChatGPT Is Now On The Air”


Hackaday Links: January 22, 2023

The media got their collective knickers in a twist this week with the news that Wyoming is banning the sale of electric vehicles in the state. Headlines like that certainly raise eyebrows, which is the intention, of course, but even a quick glance at the proposed legislation might have revealed that the “ban” was nothing more than a non-binding resolution, making this little more than a political stunt. The bill, which would only “encourage” the phase-out of EV sales in the state by 2035, is essentially meaningless, especially since it died in committee before ever coming close to a vote. But it does present a somewhat lengthy list of the authors’ beefs with EVs, which mainly focus on the importance of the fossil fuel industry in Wyoming. It’s all pretty boneheaded, but then again, outright bans on ICE vehicle sales by some arbitrary and unrealistically soon deadline don’t seem too smart either. Couldn’t people just decide what car works best for them?

Speaking of which, a man in neighboring Colorado might have had some buyer’s regret when he learned that it would take five days to fully charge his brand-new electric Hummer at home. Granted, he bought the biggest battery pack possible (250 kWh) and is using a standard 120-volt wall outlet and the stock Hummer charging dongle, which adds one mile (1.6 km) to the vehicle’s range every hour. The owner doesn’t actually seem all that surprised by the results, nor does he seem particularly upset by it; he appears to know enough about the realities of EVs to recognize the need for a Level 2 charger. That entails extra expense, of course, both to procure the charger and to run the 240-volt circuit needed to power it, not to mention paying for the electricity. It’s a problem that will only get worse as more chargers are added to our creaky grid; we’re not sure what the solution is, but we’re pretty sure it’ll be found closer to the engineering end of the spectrum than the political end.
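
For a sense of scale, here’s a quick back-of-the-envelope check using our own rough assumptions (a household outlet sustaining about 12 A continuously), not figures from the original story:

```python
# Rough Level 1 charging estimate; the 12 A continuous draw is our own
# assumption, and real-world charging losses would stretch this further.
pack_kwh = 250                  # quoted battery capacity
level1_kw = 120 * 12 / 1000     # ~1.44 kW from a standard outlet
hours = pack_kwh / level1_kw
print(f"About {hours:.0f} hours, or {hours / 24:.1f} days, from completely empty")
```

That works out to roughly a week from a completely flat pack, so a multi-day top-up from a partial state of charge is entirely believable.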

Continue reading “Hackaday Links: January 22, 2023”