A couple weeks ago, we had a kerfuffle here on Hackaday: A writer put out a piece with AI-generated headline art. It was, honestly, pretty good, but it was also subject to all of the usual horrors that get generated along the way. If you have played around with any of the image generators you know the AI-art uncanny style, where it looks good enough at first glance, but then you notice limbs in the wrong place if you look hard enough. We replaced it shortly after an editor noticed.
The story is that the writer couldn’t find any nice visuals to go with the blog post, which was about encoding data in QR codes and printing them out for storage. This is a problem we have frequently here, actually. When people write up a code hack, for instance, there’s usually just no good image to go along with it. Our writers have to get creative. In this case, he tossed it off to Stable Diffusion.
Some commenters were afraid that this meant that we were outsourcing work from our fantastic, and very human, art director Joe Kim, whose trademark style you’ve seen on many of our longer-form original articles. Of course we’re not! He’s a genius, and when we tell him we need some art about topics ranging from refining cobalt to Wimshurst machines to generate static electricity, he comes through. I think that all of us probably have wanted to make a poster out of one or more of his headline art pieces. Joe is a treasure.
But for our daily blog posts, which cover your works, we usually just use a picture of the project. We can’t ask Joe to make ten pieces of art per day, and we never have. At least as far as Hackaday is concerned, AI-generated art is just as good as finding some cleared-for-use clip art out there, right?
Except it’s not. There is a lot of uncertainty about the data that the algorithms are trained on, whether the copyright of the original artists was respected or needed to be, ethically or legally. Some people even worry that the whole thing is going to bring about the end of Art. (They worried about this at the introduction of the camera as well.) But then there’s also the extra limbs, and AI-generated art’s cliche styles, which we fear will get old and boring after we’re all saturated with them.
So we’re not using AI-generated art as a policy for now, but that’s not to say that we don’t see both the benefits and the risks. We’re not Luddites, after all, but we are also in favor of artists getting paid for their work, and of respect for the commons when people copyleft license their images. We’re very interested to see how this all plays out in the future, but for now, we’re sitting on the sidelines. Sorry if that means more headlines with colorful code!
Clearly labeled, and the courts sort out the copyright issue (it can be trained on legally acquired art).
Concur. Any license issue/court fight should be directed at the company that created and trained the AI, not the end user that wrote a prompt.
HAD’s stance is very conservative and nice to artists though.
If you know it’s dicey, using it is just as ethically dicey as making it.
Cool, so you don’t buy any clothes, shoes or electronics, right? Because those are made by exploiting 3rd world or Chinese labor.
You know that’s dicey, so of course you don’t do it…. Right?
Pfft joke’s on you… I buy completely unnecessary phones just to stick it to those Congolese kids. Fetch me more cobalt
The trouble is the lawyers (and lawyer-bots) don’t discriminate and as the owner of a website or business it’s not a fun (or zero-cost) process to receive legal threats or demands for money with menaces for using a picture that you believed in good faith was perfectly clear to use.
No. It has to be trained on properly licensed art.
I can’t just hand out copies of a movie I buy. Anyone that’s claiming the models are “transformative” is just nuts. It’s just as transformative as encrypting it with “password”, then distributing it and saying “golly gee, someone figured out the password!”
We call it “training” and “model” but that’s anthropomorphizing it. Entropically, the data’s there.
“I cant just hand out copies of a movie”
But that’s exactly what many people did in the 80s with the VHS.
And the copyright wasn’t doing the majority of citizens a favor, but criminalizing them.
“But that’s exactly what many people did in the 80s with the VHS.”
OpenAI isn’t ‘people,’ it’s a commercial company.
“OpenAI isn’t ‘people,’ it’s a commercial company.”
Hm. And where does the majority of the source material, the training material, come from?
What large language models produce is the end result of what people post on the internet.
So it literally is the embodiment of people worldwide.
And the algorithm itself doesn’t care under which license it is being operated.
Things posted online are still copyrighted. Sharing something doesn’t mean I give up the rights to it.
What???
I should have been copyrighting all my witty, insightful comments on Hackaday all these years???
“I should have been copyrighting all my witty, insightful comments on Hackaday all these years???”
They are copyrighted. Copyright is the default.
Never met an artist that didn’t steal.
They steal inspiration.
They steal style.
They steal ______ to make ______.
Of course this is an exaggeration, except when it isn’t.
People quite literally don’t know where they get things from, and then when they analyze it they start getting offended with AI getting things in the same manner.
I’ve got so many things I’d like to attach some art to, and I don’t have the thousands of dollars it takes to pay an artist to do it. And, most artists suck at contract work.
“Good artists copy, great artists steal.” – Pablo Picasso
I for one am very comfortable using AI tools to create content, just as long as it’s proofed and meets some standards. I get the hate, but it’s not going away as a tool, and I embrace it fully; I use AI in a lot of my toolchains these days.
I do however have a strong distaste when clearly hastily slapped-together content from poorly trained AIs is published on sites without human oversight and proofreading/editing to improve it. It’s just laziness, and it makes the case against AI so much easier for the haters.
That quote might be nonsense, though, despite being from a popular person.
Stealing means taking something away. Like taking an edible fruit from a fruit basket.
Copying is more like searching the garbage can for a rotten fruit’s seeds and then using them to grow your own fruit tree.
Really, just because some insane artist from centuries ago made a statement doesn’t mean it’s true.
Also, as an artist (drawer, painter etc), you can’t really steal instead of copying.
It’s always some sort of copying, of varying degree. Or rather, deriving artistic elements.
It’s as if we would say that children “steal” their parent’s language when they are learning words.
Or let’s take culture. People learn from other groups.
But if they try to integrate, then they’re accused of cultural appropriation? 😂
Oh, wait. Some people really think that. 😑
Anyway, the whole copyright concept is an awkward construct.
In nature, something like this doesn’t even exist.
It’s manmade nonsense, rather. No wonder machines have issues getting it right. Even we have issues with it.
“Stealing means taking something away. Like taking an edible fruit from a fruit basket.
Copying is more like searching the garbage can for a rotten fruit’s seeds and then using them to grow your own fruit tree.”
What?!
If I spend hundreds of hours writing a book (not to mention the dollar investment in my time, materials, printing, paper, advertising, and distribution) part of the justification in risking that time and those dollars is the expectation that subsequent sales will at least offset my up-front losses.
Now suppose you come along and scan my copyrighted book to pdf and make it available for download, on the Internet.
Every unauthorized copy downloaded represents a lost sale for me and actionable damages that I could seek from you. It’s theft, even if it’s not you who pockets the revenue stolen from me. That’s the reality of things… not seeds of fruit abandoned in a garbage can.
Most books contain legal verbiage in the front to the effect of “this work is copyrighted and shall not be reproduced through any means without prior agreement from the publisher.”
That covers the case of the illegal pdf, but it should also cover the case of an AI being trained on my work. The disclaimer says “shall not be reproduced by ANY means”. The fact that the resulting copy is not human-readable but instead has been encoded into node weights (and mixed in with 1000 other books) does not change the fact that my work was still “reproduced” without authorization.
I speak to IP in the form of books because that’s a topic I know something about, but I think the same basic argument could be made for copyrighted music, performances, paintings, sculptures, or other copyrighted works used, without payment or authorization, for AI training purposes.
No it doesn’t. An unauthorized copy is essentially a sale at zero price, and demand responds to price. You cannot equate free copies with lost sales, because asking any price above zero would reduce the number of copies sold – potentially to zero. The question is whether the work is worth more than the asking price to the customer.
The second point is that the true value of your work is in the effort you put into making it – what it takes to sustain you as its creator – not in whatever money you could possibly extract out of it through legal fiction such as copyright. You’re not really entitled to “infinite” money for doing something once.
The concept of copyright was invented as a means for publishers and distributors, not authors, to establish inventories of works that they could use as a monopoly and profit forever. It hasn’t got anything to do with any notion of “fair compensation” towards creators.
In other words, if the value of something was determined by how much you could sell it for, then it would make sense to sell you one dollar for two dollars, because that act would instantly re-define a dollar to be worth two. That would be a nice deal for both of us: you’d have a dollar less but it would be worth twice as much so you lose nothing, while I would have twice the dollars at quadruple the value so I would win as well. That is, if value was defined by money.
In reality, it would be a trick trade, because real value in the economy isn’t defined by money but by the real consumable materials and services available that the money can buy – so selling one dollar for two would simply shift some of your purchasing power to me, which should be obvious. That’s what people are doing when they’re gambling with investments and expecting interest, or when they’re making “art” and expecting people to keep paying for it regardless of what value was put into creating it.
The fundamental question is what are you really selling? Are you selling your work as a service of labor, or are you selling “copies”, which is to say, are you selling something real that you’re actually doing, or are you just attempting to trick everyone to give you money for nothing?
“Copyright” is merely the legal ability to do the latter instead of the former, so I wouldn’t be too proud of claiming that right.
Picasso was referring to ideas, not the actual implementation. For his time, it was a given that an artist had the actual practical skills to take someone else’s concepts and making them, like forging a painting and claiming it as their own. “Stealing” meant taking someone else’s creative idea and pretending it was novel by you.
It’s quite different to AI art which is literally just replicating and combining existing data without replicating the process that created it, or having any idea or creativity in the first place. The AI art is a sort of weighted average between given examples, which is only technically not a direct replication.
““Stealing” meant taking someone else’s creative idea and pretending it was novel by you.”
No, that’s plagiarism. Stealing is when you take my book–with or without my name on the cover– and then distribute copies without my permission.
It’s very evident by your remarks that you’ve never created any intellectual property… at least worth stealing.
Yes, but that’s what Picasso meant by “stealing” in the context of the quote.
Picasso gets to say that, but 99.99% of the people who say that don’t deserve to say that
Thousands of dollars?
I do patent illustration and only make $50-100 for most images and I get undercut by foreign artists all the time.
My daughter does children’s books, graphic novels and comic work; she does full pages for $100-200 and similarly struggles to book work, with other artists from lower-COL nations bidding a fraction of what she charges.
I won’t argue against “most artists suck at contract work”; there are a TON of artists who struggle to do anything but “their style of art, at the pace of their own inspiration”. There are, however, a TON of professional artists who have learned to do the work, please the client, get paid, and move on to their next assignment.
Perhaps AI will finally be the thing that convinces kids not to take out a six figure loan for a degree that nets you thirty bucks for dozens of hours of work
AI poses as real a threat to a number of other fields as it does to artists, Programmers have as much or more to worry about.
Thirty bucks for dozens of hours of work doesn’t reflect the reality of being a working artist/illustrator. While I don’t land every job I bid on, I rarely accept jobs that pay me less than $25-50/hr. I still make a decent living with plenty of time free to work on personal projects.
PS my daughter has $0 in student debt. She lived at home and went to a state school that only cost her $10k a year.
“Never met an artist that didn’t steal.”
Oh, please. This is just wordplay. AI is absolutely not “getting things” in the same manner. I mean, in the absolute most basic sense, it’s not getting anything, it’s an algorithm being handed a curated set of data. But humans don’t store data. We don’t store images. We store things relationally.
The overall question is simple, but of course, lawyers and lawmakers make it complicated. With copyrighted materials, there are certain “base rights” which are granted, which get lumped into “fair use.”
Collecting the data into a dataset and selling it for commercial gain so that people can replicate similar outputs to the input is nowhere near what anyone has ever believed is fair use. The only way you could twist your morals enough to argue it is is by pointlessly anthropomorphizing the hell out of things. That didn’t work with Napster, and it shouldn’t work now.
The irony is that this debate goes back to the earliest days of computing.
What exactly consciousness is and how to prove it. Whether or not a machine can dream.
Considering how we can’t rationally explain human consciousness, emotions and the soul at this point, it may take a long time, still.
What’s very human and very thoughtful, however, is to assume that other “living” beings possess the same abilities as us.
It might be ethically more correct, in case of doubt, to simply assume that a lower life form, such as a plant, has feelings, too.
If we make this assumption and treat it with the same respect as us, this thoughtfulness will bounce back to us humans and we will develop a greater understanding of the universe surrounding us.
It doesn’t, really.
Humans don’t copy directly. We have to learn to replicate, which is to say to re-create the process that created an image or a piece of text etc. whereas the AI just mashes existing datasets together. The only reason the AI isn’t outright copy/paste is because the algorithm is kinda-sorta interpolating between the data to create something that isn’t a direct copy.
The AI has to “learn” too. Prove the human synthesis process is fundamentally different
“The AI has to “learn” too. Prove the human synthesis process is fundamentally different”
Okay.
Humans learn in the actual Universe in real physical time and get feedback from their actions from the actual Universe.
We can have this conversation when LLMs are actually doing the same. They’re not. The only feedback they get is curated by humans. The only “learning” that’s actually happening is the people creating the training set.
Haiku, rhetoric, hyperbole, dismissive. Your art with AI images attached has become half spam, diminished in quality. The messaging in your creative medium is stunted. Commercially inert, as you’ve zero autonomy for copyright, trademark or correct attribution. Forgettable ultimately, like the output of a creepy crawler toy oven, an inverted Marxist rumination on consumption and the means of production. Capitalist propaganda at its least articulate.
BEEP BOOP GET_A_JOB PINKO
Correct. And before artists get pissed off, do you know why artists suck at contract work?
Because contract work isn’t art. It’s work. Producing content isn’t what’s in your soul. You didn’t spend hours honing your skills because you wanted to sell your labor making the meaningless drivel that is demanded by most contract work.
You wanted to make art.
You may find a way to put part of yourself into a product, but the product is a cage you are kept in.
The good ending would look like artists making whatever they wanted because they don’t need to work, whilst the computers could digest all of our beautiful tasty art into the poop that is demanded by the machine. You don’t get a good ending without fighting for it though.
Instead the machine will just steal artists’ labor and never update the poop machine with fresh organic culture, so the poop just continues to get even more stanky, and the “starving artist” continues to be the norm.
A contract artist is called an “illustrator”. There are many people who are happy to learn the trade and work that part, building up expertise and skills to render whatever the customer desires.
An “artist” is a person who refuses to be told what to do, and suffers poor pay as a result.
Art is the most human of endeavours, and traditionally, arts and artists were supported by patrons in order to produce and maintain culture. It is only in very recent memory that the arts have become a gateway to poverty.
Art has always been a superfluous activity, practiced by people in their free time for their own amusement and on their own time and money. Patrons supporting arts were a rare exception rather than the rule in terms of the entire mass of people doing art, and very few people ever became famous as artists. “Culture” as you speak of was defined by a very narrow set of people with enough wealth to dictate what passes as art, and a handful of “superstar” artists who pleased the elites enough to get paid.
It’s only in relatively recent times that an “artist” has become a career in demand, with the rise of a middle class with enough surplus income to buy or invest into art and artists, and then with the demand for idle content for corporate media.
“An “artist” is a person who refuses to be told what to do, and suffers poor pay as a result.”
Andy Warhol had a net worth of 220 million, adjusted for inflation, when he died.
Sometimes the artist’s whims meet the public demand, most other times not. Point being, if the supply doesn’t even attempt to follow the demand, then you’re unlikely to find success on the market.
Then there’s also the conspiracy theory that people like Warhol were artificially propped up by the US government to troll the USSR.
He’d have lived longer if he hadn’t eaten off plates with lead glaze.
“Andy Warhol had a net worth of 220 million, adjusted for inflation, when he died.”
Warhol was more entrepreneur than artist.
https://www.artdex.com/andy-warhols-legacy-business-art/
Being in the UK I think we may have stumbled into “two countries divided by a common language” territory here!
Being in the US, it took me a moment before I got it, and now I can’t stop laughing. Thank you!
I’m not “getting” it, but haven’t given up on figuring it out either.
But this Winamp llama thing is beyond me.
Hah! We are indeed.
Nothing was mentioned about the great damage AI does to the environment as far as energy consumption and electronic waste go.
Also, I don’t understand this “both sides have equal weight” conclusion. Do the morals of using AI outweigh the little economic benefit? Do we suddenly have a shortage of artists and a huge need for cheap art, for this technology to be justified?
And consider all the horses put out of work by the automobile!
No, they’re still there in the horsepower ratings!
This is exactly how people talked about bitcoin, something being new doesn’t mean it’s The Future. Even if the bubble doesn’t burst in like three months your cheap luddite jab sidesteps any discussion of what AI’s role should be. There’s no reason it has to seek totality in the way it has–cars did not replace walking.
“Nothing was mentioned about the great damage AI does on the environment as energy consumption and electronic waste goes.”
That might be right. But how does this differ from gamers with their power-hungry beasts of PCs?
With their huge heatsinks on the CPUs and their gigantic, power-hungry graphics cards that almost need their own PSU and a three-phase connection?
I’m all in for energy saving (if it makes sense and doesn’t make hardware age), but then all power consumers must suffer equally.
Just like operators of both automobiles and airplanes must be held equally responsible for air pollution.
Guess what kind of hardware I use to run Stable Diffusion…
An automobile?
I think that it’s wrong for companies to use software to steal even a small amount from everyone, in an effort to put more people out of work. AI is after your dream jobs.
Data Salami method
I appreciate HaD’s transparency on this… issue (for some). I am justifiably offended by AI’s statistical fuzziness, and its potential for bullying, harassment and dishonesty. Further, it strangles an already difficult job landscape for many practical and creative professionals. Finally, the corporations championing this new fuzz-mind have no regard for environmental concerns, legal liability, and copyright infringement. If we consume and grow these companies by feeding on their AI drip, they will inevitably enshittify the product and leave a collapsed market in the wake of their upheavals.
Imagine if the typewriter was invented today.
Even if you are correct that AI is this useful advancement, the companies which built and own the infrastructure to provide AI to us will run their ships ashore, thus leaving you, dependent upon AI to live your daily life, up the creek without a paddle. Look not to Uber or Meta, but to AOL or Myspace. If this is indeed a golden age of AI, assuredly poor stewardship will tank this emergent market. Any value to you as a customer is merely transient. So perhaps it is just my leftist ramblings, but ChatGPT is no different than a self-driving Tesla: it is operated by humans making interventions to provide the semblance of automation. Snake oil and subterfuge; your culty gushing does not assuage my contempt for dehumanization via LLM.
” I am justifiably offended by AI’s statistical fuzziness, and its potential for bullying, harrasment and dishonesty.”
If you’re going to make such accusations against Hackaday writer Al Williams, you’d better be able to prove it!
B^)
Hi, I’m the rant guy you were expecting, here we go!:
I’ve made my living as a graphic designer for decades and I’m now studying art. A lot of my pals are on those kinds of jobs and yes, we’re worried about our income, and sad for the youngsters who are already having it far worse than we did (those entry-level jobs that are needed to gain experience and get hired? those are gone)
But we’re not so self-centered; this is far worse than our field getting screwed, it’s the whole AI deal: natural resources, bias automation. Those two problems alone should be enough to scare all of us to death; the ecological and social consequences will be… not very fun. And for what? So some venture capitalists can get richer? Just use free stock images or give a chance to that nephew who tinkers with GIMP; nobody expects or needs you to hire pros for every internet thumbnail. You guys have a great blog already, just keep doing what you’re doing.
This isn’t just a tool, this is rooting for a hard kick on our planet’s balls, just when we need our tech to do the exact opposite.
This reminds me a bit of the advent of the camera, which caused a lot of portrait painters and landscape painters to go out of business.
That’s when abstract art was born as an art form, I vaguely remember.
Artists focused on things a camera can’t see.
OMG, always the same talking points. Just google it if you want, that “point” has been made and thoroughly dismantled again and again.
I don’t need Google, we had this topic in school, in art class.
And here I trust my old schoolbooks more than a reddit post or a Wikipedia page.
Sorry, I didn’t know you were that young. You’re right to trust your books over reddit bullshit or me.
And what you said about cameras and abstract art is true, but comparing that to what’s happening now… that’s the kind of bad analogy you’d find on reddit, written by adults that never got past school-level and think themselves smart and informed.
Keep learning and good luck.
“Sorry, I didn’t know you were that young. ”
The topic isn’t exactly new, I think.
It’s been discussed in school every few years for about 100 years now or so? 🤷
“[..] written by adults that never got past school-level and think themselves smart and informed.”
I know that we know nothing.
“Keep learning and good luck.”
Thanks. Life is a never ending learning process.
“This reminds me a bit of the advent of the camera,”
Were the photographers taking pictures of the portraits that the painters made and selling them?
Photography is independent of painting: it’s a competitive relationship.
Using an AI model to generate images is not independent of people who actually made the image. It’s not a competitive relationship. It’s a parasitic one. Using an AI model to generate images still needs the people who created the images. They just don’t want to pay for it.
It’s all relative, really. We could provide arguments and counter-arguments all day long.
Problem is the vague definition, also. Everything can be art. Paintings can consist of a blank canvas, even.
Or let’s take abstract art. A few chickens with paint on their feet make art.
Let them run over a canvas and it’s art. Or let a horse paint with a brush in its mouth.
Do these animals get their recognition as artists? No. Is it fair? No.
Seriously, artists should finally stop complaining all day long. Life simply isn’t fair.
It’s the same as with musicians. Sounds of nature (rain and thunderstorms) sometimes make better music than the average “musician” on the radio.
“Using an AI model to generate images is not independent of people who actually made the image.”
Can you please tell me something that truly is independent of something else? 🤔
Because I can’t think of something right now that hasn’t been influenced by something else.
LLMs are not “influenced” by their training set. They are their training set.
If I got rid of portrait painters, nothing happens to photographers.
If I get rid of new inputs for an LLM, it breaks down. We know this. It’s been demonstrated a million times.
What about the very real technological advancements in medicine, technology, biochemistry or electrical engineering, to name a few fields that have seen a benefit from the use of AI? We’re seeing the first steps of what could be an incredibly advantageous technology. Sure, the environmental impact is high, right now. But we’ve already seen concepts for improving the efficiency of our current power grids by a not insignificant percentage with minimal modifications, and that’s because of AI. If we shut it down now we may very well stall as a species, and knowing how unwilling most of us seem to be to take the steps necessary, we may not have a choice in the matter in regards to the environment without some pretty significant intervention.
Define “real advancement” and “AI” – since you’re lumping in a whole category of stuff that doesn’t necessarily have anything to do with anything. There’s many different sorts of AI, depending on who’s defining it, and there’s “advancement” which is debatable.
Also, I’m the real slim shady here.
This is all no different than the “training” an “artists” mind obtains over years of observing other works. Until we agree on that point I won’t entertain any other kvetching about this.
Exactly.
There will always be special pleading of the sort you mention until we map the entire human brain and discover in detail what the mechanics of human synthesis and creativity are. In other words, we will have to hear about this slapfight for the rest of our damn lives
I personally see the current state of generative/large language model “AI” as a significant, and potentially permanent devaluation of all information humans have access to.
We’re generating more “content” that RESEMBLES information than has ever been recorded by humans. It’s pollution. It’s the burning of the library of Alexandria in a way that many people cannot perceive as destruction. And so far, it offers the most benefits to some of the worst people and worst use cases on Earth.
I also roll my eyes every time I see the comments in any LLM debate that are effectively:
“Oh, it’s ok to pick up and keep a cool rock you found at the beach? But somehow if a megacorporation uses a Bagger 293 to do the exact same thing to the whole coastline, that’s bad? Checkmate, luddite.”
“I personally see the current state of generative/large language model “AI” as a significant, and potentially permanent devaluation of all information humans have access to.”
Agree
Good point: information pollution, i.e., the disgorgement of AI regurgitation back into the human environment.
We need ppm standards: so many pieces of allowable AI datablend per actual human creation.
I’m reminded of that guy who just got arrested for fraud for creating “hundreds of thousands” of AI songs and then getting royalties for them from the listenbots he also created. https://www.justice.gov/usao-sdny/pr/north-carolina-musician-charged-music-streaming-fraud-aided-artificial-intelligence
I must admit I am curious to hear the AI songs. Were any of them good? memorable? not derivations of human creations?
Stop, you’re making Nick Land do that cartoonish evil wizard laugh he always does when you talk about this subject
While AI art is a potential quality issue, a blanket ban on it aligns you with an AstroTurf copyright-maximalist movement through which disingenuous moral entrepreneurs are trying to fabricate entirely novel and unbalanced negative rights for a tiny minority.
I think there should be common-sense exceptions to this rule, but only if all the smart people in the room agree that it’s the right thing to do. One obvious situation is when the topic of the post is a specific piece of AI-generated art, and you are using that art editorially, not illustratively. This isn’t something that’s likely to happen here at Hackaday, but you never know, it might.
Who decides which ones are the smart people? Because some of these manager types are hella dumb
I’m the creative director for Ars Technica. I feel the pain of “we need an image for this code exploit”, and running Yet Another Skull Over 10101010101 image.
It’s challenging sometimes with these abstract things when you don’t have much to work with and you’re short on time.
At Ars we’ve adopted a policy of only using AI art when it’s specifically germane to the story. A piece about Stable Diffusion will probably have examples a writer generated using it to demonstrate it.
I’m glad to see Hackaday adopt something similar. I think it’s the right call. I will gladly take “more colorful code” in exchange.
How about, if you don’t have a good illustration, don’t add one?
It’s like, if you don’t have anything to say, don’t say it.
Our layout really relies on the headline/featured image, though. It just looks “empty” without it.
95% of the time (made-up number) we can put something informative about the project up there. Maybe 5% of the time, it’s just eye-candy, but we still like to have the site look good.
Hi Aurich, nice to meet you!
We have a similar problem with black-hat hacks: the image of the balaclava at the laptop is just so cliche, but it’s cliche for a reason — finding better art is hard.
Agreed also about running AI art when it’s germane. If that’s what the article is about, it just makes sense.
More colorful code it is, then!
As an artist and a former computer programmer, I am very sorry to say that there currently is no ethically generated AI art. I can absolutely understand, particularly in a blog format, the desire for quick, cheap, on-demand art. Sadly, the creators of AI literally scraped millions of works of art without the artists’ permission to create AI art. Had it been done with artists’ permission or compensation, I would have no ethical quandaries with the use of AI art… but that is not what they did. Multi-billion-dollar companies did an end-run around the law by forming a non-profit, OpenAI, to scour the web downloading millions of artists’ works without their knowledge or approval… chopped them up into mathematical code, which has no copyright… then let those same billion-dollar companies who funded them use that code for their AI. Just because it is more clever than ripping a painting off a gallery wall to mix with other torn-up works of art doesn’t make it any more ethical or legal. Want art? Artists are all starving… they work cheap… pay someone.
Use AI art whenever you want to – why limit yourselves? The wilder the better. Accreditation to the AI source optional. Live free, breathe free.
And to think that I caused this. I didn’t even mean to cause a commotion, just offer my thoughts. The comment I posted was
The part where the comment instantly got removed is not part of the comment
You’re really doing the “we’re not Luddites thing”? The Luddites didn’t object to automation, they objected to specifically their shitty bosses replacing them with machines making shittier versions of what they produced, after being horrifically exploited. Like if you can’t see the similarities you’re blind.
People like to forget about that aspect of the Luddites. However, they lost. And will lose again. The way out can’t be the same as the way in; history doesn’t work that way.
Joe Kim is indeed a treasure! I’ve turned quite a few Hackaday illustrations into wallpapers!
What’s so bad about a colorful snippet of code when the context is code?
I find the moral high horse justification of this new policy to be incongruous with the overall spirit of using technology to hack together clever end results. Explicitly publishing this article comes across as virtue signaling to me.
No virtue, no signalling. Just thinking through a complicated subject in public.
We absolutely see pros and cons on all sides here, and as I said, we’re being conservative at the moment, but we’re still flexible.
For instance, you could take away the legal/moral issues by using something like Adobe’s image generators, which they claim to have properly sourced all of the training data for. But it would still have the “AI-art” look to it, which I’m willing to bet isn’t going to wear well. (But could totally be wrong!)
It seems obvious that the way an AI remixes other works, potentially including copyrighted works, is wrong.
Then I consider the fact that our human brains do that as well! What is our creativity but a remixing of all the things we have seen regarding a subject including copyrighted information? Do any of us really invent anything totally new and ex-nihilo? And if so then how?
So, I think as long as it is modified enough from any original training material that it is unrecognizable, it should be fine.
In my, definitely not a lawyer, definitely not legal advice opinion.
No doubt the courts will reach the opposite conclusion and stop legal AI dead sooner or later.
All morality, ethics, ecology, and economics talk aside: generative images continue to look like dog turds and should not be used on that ground alone.