In the world of digital art, distinguishing between AI-generated and human-made creations has become a significant challenge. Almost overnight, tools for generating AI artworks became widely available to the public, and suddenly, every digital art competition had to contend with potential AI-generated submissions. Some have welcomed AI, while others demand competitors create artworks by their own hand and no other.
The problem facing artists and judges alike is just how to determine whether an artwork was created by a human or an AI. So what can be done?
Put It To The Test
First of all, it’s crucial to understand what AI art generators can and cannot do. These algorithms, often trained on vast datasets of human-made art, excel in pattern recognition and replication. However, they typically struggle with conceptual depth and the nuanced, often irregular, elements that human creativity can produce. They’re great at mashing up weird combinations – a cartoon picture of cats surfing off the coast of Neo-Hawaii, for example.
They’re less good at refining a singular style, and many AI image generators also struggle with nuance and detail. For example, they may generate humans with too many teeth, or weird hands, or cars with door shut lines that make no sense and tail lights beyond human comprehension. Regardless, these image generators are still capable, and the best can create images that are very difficult to catch out as non-human in origin.
Herein lies the problem. Just looking at an artwork may not be enough to determine whether it was created by a human or an AI. There may be clues of course, but they could also be misleading. For example, was this drawing of a construction worker created by an AI, because of the weird hands, or was that a stylistic choice by the artist? It can be impossible to say with certainty one way or the other.
Ultimately, documentation of creation may be key for artists to prove they really created their own works. AI image generators tend to spit out a finished image without taking any intermediary steps. By contrast, a human drawing an artwork on a tablet, for example, will have made thousands upon thousands of strokes, created layers, applied effects, and so on. Capturing the creative process, or even just capturing snapshots of the art in progress, is the perfect way to prove a piece was created by a human.
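As a rough illustration of what such process capture could look like, here’s a minimal sketch (all names illustrative, not from any real tool) of a hash-chained log of work-in-progress snapshots. Each entry commits to the hash of the entry before it, so records can’t quietly be reordered or back-dated after the fact:

```python
import hashlib
import json
import time

def append_snapshot(log, snapshot_bytes):
    """Append a work-in-progress snapshot to a hash-chained log.

    Each entry commits to the previous entry's hash, so the log
    can't later be reordered or back-dated without detection."""
    prev = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "timestamp": time.time(),
        "snapshot_sha256": hashlib.sha256(snapshot_bytes).hexdigest(),
        "prev_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every link; any tampering breaks the chain."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

A scheme like this only proves the snapshots existed in order on the artist’s machine; pairing it with a third-party timestamping service would make it considerably harder to fake after the fact.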
Of course, even this is an imperfect science. AIs aren’t just limited to producing still art anymore, for example. It’s plausible that an AI could be created to generate images that appear as progress shots of a final artwork; it could even generate a fake screen-captured video. Even if this would be difficult today, it’s well within the realms of possibility given what we’ve already seen AI tools to be capable of. That might push artists into recording themselves sitting down as they create their art from start to finish just to have proof that their work is their own.
This is all well and good for digital drawing or painting disciplines, but it can fall apart beyond that. Let’s say you’re a photographer. How do you prove that an image you submit is your own? Footage of you holding a camera and pressing the shutter doesn’t really go a long way in that regard. For matters like these, more advanced techniques may be required. Tools could theoretically be developed to look for telltale signatures at the pixel level that reveal a particular AI image generator was used, but by that point, you’re getting way off into the weeds. Suddenly a black box is in charge of determining whose images are legitimate, and whose aren’t, and there’s always the potential for false positives or false negatives to ruin somebody’s day.
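To give a flavour of what “telltale signatures at the pixel level” means, here’s a deliberately crude, toy illustration. Real forensic detectors use trained models and full spectral analysis; this sketch only shows the kind of raw signal they examine – how much energy an image carries in abrupt pixel-to-pixel transitions:

```python
def high_freq_ratio(img):
    """Crude pixel-statistics probe: energy in adjacent-pixel
    differences relative to total energy. A toy stand-in for the
    spectral checks real forensic tools perform.

    img: 2D list of grayscale values."""
    total = sum(p * p for row in img for p in row) or 1
    diff = sum((row[i + 1] - row[i]) ** 2
               for row in img for i in range(len(row) - 1))
    return diff / total

# A smooth gradient has little high-frequency energy...
smooth = [[x for x in range(16)] for _ in range(16)]
# ...while a harsh checkerboard has a lot.
checker = [[(x + y) % 2 * 100 for x in range(16)] for y in range(16)]
```

Statistics like this can differ systematically between camera sensors, paint programs, and generator upsampling stages – which is exactly why a black-box verdict built on them can misfire on unusual but perfectly human art.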
Case Study
For digital artist [Mizkai], this problem has already become very real. [Mizkai] entered an illustration contest in October, penning a Halloween scene with a young witch. After submission though, things went awry. “At first they said they suspected I had traced AI work as my style was inconsistent,” [Mizkai] told Hackaday. Trying to rectify the issue in good faith, [Mizkai] elected to try and sort the problem out with the competition organizers. “I said I’m happy to cooperate and provide them with evidence,” says [Mizkai], noting that she provided an original Photoshop file with layers intact, indicating she’d created the piece from scratch. When that wasn’t enough, [Mizkai] provided a range of other artworks including pencil drawings and inked pieces to bear out her case that she was indeed a real artist. When that wasn’t enough, she stepped up to providing time lapse videos of character sketches to show her technique.
After that, [Mizkai] says the panel allowed her entry to proceed with the voting process, only to backflip a short time later. “Only after the voting had closed, they decided to contact me again to say that they were disqualifying my entry as there was insufficient evidence,” says [Mizkai]. The experience left her sour on the whole competition. “Honestly it has left me feeling deflated that despite jumping through all their hoops and cooperating with them completely they would still arbitrarily decide that my work is fraudulent,” says [Mizkai], adding “I honestly don’t want to compete in competitions in future and have been feeling like it’s just far too mentally draining to have to prove myself multiple times.”
It’s something that competitions will have to get serious about, and quickly. There must be hard and fast requirements for proof of creation if it’s deemed so important, and they must be presented up front. It’s no good challenging an artist’s creation afterwards, when they haven’t previously been instructed to record their process during the actual creation of a piece. It seems likely many artists will begin recording their work just in case. Regardless, it’s only fair to state the rules up front such that all competitors can compete on an even playing field without having their art unduly called into question.
For now, most art competitions will rely on competitors to play by the rules and only submit their own creations, come what may. Despite this, high-profile competitions have already publicly fallen victim to AI submissions, even handing over prizes in some cases. It’s hard to know how to put Pandora back in the box when AI image generators are getting so good at mimicking real human art. It may ultimately be a battle the humans are going to lose.
Use an ai to check for ai
Call it Ainception
That’s one way to test the halting problem
But can a quantum computer solve the halting problem?
The only way to know for sure is to run the halting solver against itself.
Nope, not going to happen, unless we are living in a simulation.
I threw the images from this article into “AI or Not” and it worked, it called out the AI and recognized Mizkai’s art as human. Another called “Is It AI” also worked.
the contest holders were kinda lazy
I do not think I would trust black-box “AI detectors” to work.
I work in a University and the question of AI and assignments is a bit of a nightmare. I was suspicious of one paragraph a student claimed to write, so I chucked it through five AI detectors. Two said it was probably AI generated, two said it probably wasn’t, and one said it couldn’t tell.
Granted, it’s not like I thoroughly researched these to make sure I was using the best detectors, but it still makes me wary. The company which makes our plagiarism detection software initially said they were close to having a detection tool and then backtracked and said they won’t have a solution for this. What we’re really settling on is A) heavier weighting on in-person assessments where they don’t have a chance to use AI tools, and B) if your essay question can be easily answered by ChatGPT then it probably didn’t involve enough thought, synthesis, and judgement anyway. That’s hard for first years who are still finding their feet, but assessments later in the degree have a far heavier weighting anyway, so our final year questions are getting a lot more difficult.
Well, write on ‘paper’ and in a classroom. That ‘should’ solve part of the AI problem :) … And prove the students are actually learning something rather than copying something.
I’ve spoken to a couple of educators who are also concerned. Of course plagiarism has always been a thing; but AI makes it trivial to do. Maybe it’s time to reduce dependence on handed-in solo assignments. Or maybe set tougher assignments, and allow use of AI as a research or organizational tool. A tough problem causes current AI to equivocate or spout reams of inconclusive BS, which is easier to tell from a reasoned, evidence-based conclusion.
For now I am relying on the fact that AIs so far aren’t good at logic. My philosophy undergrads are required to center their papers on a step-by-step numbered logically valid argument.
When I just asked AIs to give me a valid argument that the sky is blue, they produced things that are not good, but look similar to some student errors I’ve seen this term. Hmm.
Years ago, I read that if you return an essay to a student with every 10th word replaced with a blank, the student (if they truly wrote it) will be capable of reinserting the missing words with around 90 percent accuracy.
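That blank-every-tenth-word check is easy to mechanize. A minimal sketch (function name and the 90-percent folklore figure are from the comment above, not a validated method):

```python
def make_cloze(text, every=10):
    """Blank out every `every`-th word of an essay. An author who
    really wrote the text should be able to restore most blanks."""
    words = text.split()
    removed = []
    for i in range(every - 1, len(words), every):
        removed.append(words[i])
        words[i] = "_____"
    return " ".join(words), removed
```

The grader hands the student the blanked text and compares their answers against the `removed` list.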
AI writes crappy, lazy prose. It’s _literally_ average.
And while teachers (and editors) can’t tell the difference between mediocre human output and mediocre AI output, we _can_ differentiate the insightful and well-written versus the just-turning-the-crank. That’s your case B, above.
But I totally agree that to teach people how to write well, you have to walk them through the average stage, and I believe the student has to do the work themselves. So how does the teacher know? I think this is unsolvable, and that’s lousy b/c it becomes harder to give early feedback.
I find, more often than not, that the act of writing helps me crystalize ideas and come up with new ones. Once the words are on the screen, I engage differently with them from when they were in my head. It’s easier to riff on ideas once they’re written down, but also easier to be critical.
Does the “have GPT write it, and I’ll fix it in post” rob the writer of the above? Or does it push it all into the prompt generation phase? Is that easier or harder?
Responding to KenN:
Can’t reply to comments deeper than this. The comment plugin is getting worse?
In my experience equivocal reams of inconclusive BS has been the standard way to get a passing grade in college/university courses for quite a while. I’m sure it depends on the prof and area of study but it’s pretty rampant.
Responding to Elliot:
LLMs can write fairly convincingly in the styles of specific people if prompted to. If unprompted, the output is likely to be fairly average.
I’m an instructor at an art college. My college decided to immediately jump on the AI bandwagon because they thought they’d get ahead with marketing and enrollment last year. They have since walked back their announcement of an AI/Concept Art minor.
That being said, it’s not the tool I worry about – it’s how society views the worth of artists. For context, I work heavily with technology.
I tell my students every semester that they can take the easy route all they want, but it won’t magically grant them technique or design understanding. I also tell them to be transparent about when and where AI tools are being utilized in their process, just like any of their other sources, references, and inspirations.
It looks to me like they just didn’t want Mizkai’s artwork in the competition and used the AI clause as an excuse.
That was my exact thought.
What if it gets a false positive or a false negative?
Should people have their art kicked out of a competition because the AI says so?
We’ve already seen that this kind of thing can disproportionately affect some groups, like when AI detectors flagged non-native English speakers’ papers as AI-generated because they had worse grammar.
In my experience of senior high school this is not the case. If a real student wrote the work, the grammar will be all over the place. AI work was easier to spot because the grammar would be correct, but “forced” or “canned.”
I’d argue that the bigger problem was autotranslate. Is it OK to translate a question into “your language”, then answer the question, then autotranslate it back into the original language? For professional work, I’d argue that it’s probably OK, as long as you were open about it. But in an educational setting, I’d probably say no. Part of getting a certification from an institution (whether Boston, Berlin or Beijing) is the implication that you are capable of interacting with the people there.
Oddly, this approach failed when trying to identify AI generated written work. I’m betting it would be more effective with images, but we won’t know until we look into the current efforts.
This setup is called an “adversarial network” and is a well-known technique for making networks better at deception.
You intentionally set up an arms race between generators and detectors, and the end state is always a much better generator and a detector which is 50/50 accurate.
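The dynamic is easy to see even in one dimension. This toy sketch (not a real GAN – those use neural networks and gradient descent; this just shows the shape of the arms race) pits a threshold detector against a generator that keeps shifting toward the real distribution. Detector accuracy starts near perfect and decays toward coin-flip territory:

```python
import random

random.seed(42)

REAL_MEAN = 0.0     # "human" samples cluster here
gen_mean = 5.0      # the generator starts out easy to catch

def detector_accuracy(gen_mean, n=2000):
    # The best threshold between two unit-variance gaussians is the
    # midpoint of their means; that midpoint is our "detector".
    threshold = (REAL_MEAN + gen_mean) / 2
    correct = 0
    for _ in range(n):
        real = random.gauss(REAL_MEAN, 1.0)
        fake = random.gauss(gen_mean, 1.0)
        correct += real < threshold    # low values judged "real"
        correct += fake >= threshold   # high values judged "fake"
    return correct / (2 * n)

history = []
for _ in range(50):
    acc = detector_accuracy(gen_mean)
    history.append(acc)
    # Generator update: move toward the real distribution in
    # proportion to how often the detector is catching it.
    gen_mean -= 0.5 * (acc - 0.5) * gen_mean

print(f"round 1 detector accuracy:  {history[0]:.2f}")
print(f"round 50 detector accuracy: {history[-1]:.2f}")
```

Once the generator’s outputs overlap the real distribution, no threshold choice can do better than chance – which is the 50/50 end state described above.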
Look at the hands – even people who can’t do hands can do a better job than AI.
The only answer is to *allow* AI entries. If an AI can make something better than a person, tough luck. It’s basically the same as film vs digital cameras – there was never any way to prove who took a digital image, or that it hadn’t been altered, etc – and now we embrace edited photos all day long. Yes, lots of people have invested many hours developing the skills needed to create art manually. And they tend to be very online people. But if someone produces something you enjoy using AI, cool! If you love tangibility and want to pay someone for an original, physical artwork – awesome! But all AI has done is to empower creators, as all technology has always done.
Lots of unexamined priors
Crazy how you can read my mind and know which assumptions I have and haven’t examined. Is there one in particular you’d like to share with the class?
I have read your mind and determined you are big mad
It hasn’t though. You sound like this one idiot on my Twitter timeline who plugged some words into an AI and started talking about how AI is “empowering creators”. You’re not a creator for issuing prompts to what amounts to a Google-enabled image blender. A lot of seriously great artists are currently under-selling themselves right now specifically because everyone wants to feel special by having custom art, but nobody wants to pony up the dough for the human touch.
Your comment “A lot of seriously great artists are currently under-selling themselves right now” is wrong. They are not “under-selling themselves”, they are adjusting to market demand. The value of their work is whatever people are willing to pay for it, so the only way they can under-sell themselves is to sell below what people are offering. If they cannot compete with AI, then they are probably not as great as you imagine.
What advantage does the “human touch” provide if the person is perfectly happy with an A.I. generated result, be it an image, text or software code? If the A.I. generated output is good-enough and is cheaper than what a human can do, then that is the smart way to go.
The problem with AI is that it doesn’t really give you what you want. It’s close, 90% close, but it’s not intelligent and it’s very difficult to “explain” to it exactly what you need. That’s the realm of prompt engineering, manual touch-up, etc. and that’s not something just any customer can do.
So what happens is, when real artists are forced to compete with this 90% offering, they can’t. They just decide it’s not worth it and quit – people will never put the time in to train to become artists. That also means any manual touch-up and mixed processes go out the window. If the AI doesn’t spit it out, it’s a no-can-do. You get this gap where you cannot buy, even with cash in hand, the service you really want.
It’s the same transition as what I am starting to see with 3D printers. There’s now people who are “makers” but they don’t make stuff. The printer makes it, and if the printer doesn’t make it then they have no idea what to do. They never even tried, they just skipped ahead to the point of “press print”, and if that doesn’t work then it’s no-can-do.
“There’s now people who are “makers” but they don’t make stuff. The printer makes it, and if the printer doesn’t make it then they have no idea what to do. They never even tried, they just skipped ahead to the point of “press print”, and if that doesn’t work then it’s no-can-do.”
That doesn’t make them “NOT makers”, it just makes them shitty makers. They probably wouldn’t even have been shitty makers if it wasn’t for 3d printers. Bad example.
If that is so, everyone is a “maker” – it’s just a matter of degree.
It’s like the paradox of the heap – when you remove grains of sand from a heap, at some point it just doesn’t make a heap any longer.
>They probably wouldn’t even have been shitty makers if it wasn’t for 3d printers.
That’s a good argument about why the person shouldn’t be considered a “maker”. The 3D printer in this case is like trying to hold two grains of sand on top of each other and calling that a heap; take the support away and your “heap” collapses into a “not heap”.
I guess I wouldn’t call someone a maker if they just print designs other people make. But if they make new designs or modify others’ designs, I’d call them makers even if 3D printing is the only maker skillset they have.
In that case I would compare the difference to a composer and a player. You don’t have to know how to play a violin to compose music for a violin, but that doesn’t make you a violin player – someone else has to play your music.
“The value of their work is whatever people are willing to pay for it.”
Only under capitalist ideology. If you want to leave the value of your artwork up to the whims of the masses that’s your prerogative, but don’t pretend it’s some sort of natural law.
That’s not even “capitalist ideology”.
“Capital” is productive assets, which have the power to renew and increase themselves through the workings of the economy. In other words, “capital” refers to the means of production, and capitalists are those who seek to own it.
A painting doesn’t help you to make a new painting, or anything else for that matter. If you judge art as capital, it has no value or negative value because making and consuming it consumes other productive assets without renewing them.
“Whatever people are willing to pay” is a blind neoliberal version of supply and demand that doesn’t differentiate between productive and consumptive activities, or a leftist caricature of the workings of “capitalism” that intentionally ignores parts of it.
It seems like artists often want to work within the capitalist system when it suits them. I don’t see a consistent message from the art community that capitalism should be dismantled or that we should be doing art for art’s sake. Most artists seem to want stable work under capitalism, not to disrupt the system. If you accede to the principles of capitalism willingly then you’re going to be subject to them, and that includes valuation being up to the whims of the masses. The problem isn’t fundamentally about AI, it’s that capitalism always exploits artists.
Came here to post this. If your art is worse than what a computer can do, you deserve to lose. All the kerfuffle about AI is just lazy artists getting their comeuppance.
It’s doubly ironic that the artist in question here is drawing in a “manga” style, which is a genre that is basically founded on cutting corners by imitating, simplifying, and being generic overall to speed up work.
There was no way to prove the photo wasn’t altered because unless you took it RAW it is always altered by the camera when it is compressed. Technically what we see is mostly generated by our own image processor anyway. So we have settled on AI image processing inside our ‘cameras’*. *Everyone is using a phone to take pictures now anyway, outside of professionals and enthusiasts.
In the hands of most prompters, AI art has a very easily identifiable aesthetic style. Which sucks for whoever’s artwork had the major influences on that style, since now their own art is irrevocably “in the style of” AI art.
Also if you simply don’t use square aspect then you will avoid being associated with 95%+ of prompters
Square aspect is not a good solution. Auto cropped photos will fix that no problem. I guess the AI checker will start looking for obviously cropped photos.
“Dodgy text is a sure-fire way to detect an AI image, but generators are getting better at avoiding these mistakes all the time. This by DALL-E.”
Yep. Better, but not perfect, unless you intentionally wanted that magazine to be from the 198*8*0s, or about 18,000 years into the future.
Did we start referring to AI as a thing? “An AI” versus just “AI”
It’s like how some car commercials refer to some cars without an article e.g. “Mustang is here, new for 2024” versus “The new mustang..”.
For my bit- let’s not personify inanimate objects anymore. Thanks.
-an Human
We have been personifying objects for as long as we have had the concept of personification. Sorry.
Basically for as long as we have been talking about objects.
“AI” when referring to the concept, “an AI” when referring to a particular computer program.
a* human, not ‘an’ surely.
This topic reminds me of something:
The other day I had the thought that if AI is trained by watching humans online then at some point I’ll likely see an AI use ‘would of’ in a sentence, and I was wondering if I/we at that time should abandon technology and go dwell in the woods :)
I’m not saying ‘a’ vs ‘an’ is the same mind you, I just remembered my thought.
“Capturing the creative process, or even just capturing snapshots of the art in progress, is the perfect way to prove a piece was created by a human.”
No.
This will flag as “potential AI art” any art where the artist chooses not to, or cannot, provide intermediate steps.
Imposing an attitude of “you must provide…” or your work will be labeled “AI” isn’t an acceptable solution.
I don’t think you’re going to find much art that stands any chance of finding itself in the situation of warranting verification–winning contests, serving as coursework, noteworthy to academia, etc–that won’t have artifacts preceding the final work. That may take the form of preparatory sketches, reference photos, a photoshop file with the work broken into layers, or even just opening said photoshop file and hitting undo a bunch. I don’t think it’s *too* unreasonable for any sort of class or contest to have rules that say “you need to be prepared to cough this sort of thing up.” Obviously, there’s levels of proof, but I don’t think anyone reasonable is shooting for metaphysical certainty here.
Now, the techbros enabling the “typing a few words in the input box and using software to leverage stolen work makes me a cReAtOr” crowd will likely try to adapt their models to provide these kinds of artifacts. But there isn’t really a magic spell to stuff this genie back into the bottle.
I think the only way the Undo button would work is if you never closed Photoshop with the finished piece. When you open the file in a fresh Photoshop session, Undo is grayed out, because it doesn’t have a point to go back to.
Many programs actually save this information, sometimes in a separate file that sits next to the image.
What was the DALL-E prompt that produced that COMPUTER magazine image? I’ve been trying to get that sort of aesthetic out of DALL-E, with no luck
check the file name, it has the first chunk of the prompt
“A 1980s style computer magazine advertisement featuring a cat sitting beside a small computer printer. The image should have a vintage aesthetic” …
works decently in Bing Image Creator, although it’s not quite as “painted”
Try “in the style of Robert Tinney”. That image immediately made me think of Tinney’s iconic Byte magazine covers.
To comment on [Mizkai]’s experience, always ask for a commitment with responsibility.
The first round of supplying the original photoshop stack should be done in good faith. Done and done.
When they asked for more, the response should be: “If I give you this, will it be sufficient?”, and “so *you* are saying that this will satisfy the requirements?”, and know who the “*you*” is you’re speaking to.
This leverages a quirk in psychology. Getting people to say clearly that they will do something encourages them to actually do it. In society we keep track of how reliable people are, and so people unconsciously shy away from being an unreliable person.
By setting clear guidelines of what will be sufficient, you can avoid the dribbling-on. If they come back with more requirements, you can turn it around and claim that they are not doing what they said they would, that they said the 2nd round would be sufficient, and so on.
(Works really well for job-interview call-backs. Your time is not worthless.)
Secondly, people like to avoid responsibility, so get a responsibility commitment whenever possible. If the person won’t commit to being the responsible party, that’s a red flag and you should move to getting responsibility from a higher-up. In this instance, the contact person probably wouldn’t commit to being the responsible person, so [Mizkai] should have then asked for a ruling from the judges that the next round of evidence would be sufficient.
And if they can’t/won’t do this, then [Mizkai] can complain that their rules are not well defined. It puts the burden on the judges to be clear and consistent, which is what they should be from the outset.
(Anecdote: There was a sparking wire on the pole on the street near my house, and I recommended that the cop put out cones and shut down the block in case the wire came down and hit a car. The cop replied that he didn’t think the situation warranted this, so I said “Good. I’ll let the neighbors know that Officer Simpson decided not to shut down the block and everyone is safe” and walked away. Ten minutes later, the block was closed off with cones.)
Being nice and cooperative should clearly be your first response, but you always have to be on the lookout for problems.
Hope for the best, but assume the worst and figure out a way to avoid the worst ahead of time.
The problem with your approach with the policeman is that it’s so abusable – it’s essentially what shut down Gatwick over non-existent drones. Even though the claims had no evidence, it was above everyone’s pay grade to say so.
> It seems likely many artists will begin recording their work just in case.
I fear this will only move the goalpost until new MLAs are trained on said recordings.
-> you can get the image and the proof from the same MLA.
Sure they will be relatively easy to spot at first but even then it will take a lot more time to check every submission…
Yeah, that famous stock photo about how NOT to hold a soldering iron. :-)
I would say there’s still a reality problem with the “soldering guy” illustration in this post. Who solders one-handed with the other hand doing squat? If it isn’t holding solder, my other hand is holding tweezers or something to secure a part or wire, or holding and moving the circuit board. I’ve also gotten adept at holding a wire or cable AND solder in one hand, to tin the wire’s stripped ends.
As I soldered yesterday, I reflowed a few hole-thru joints with one hand. So it happens.
Sure, but it’s rare, right? Even in that case, I’d still have solder ready to add, or some wick to clean up with.
Yep… Rare. I agree :) . Just after I’ve soldered up a bunch of pins, I normally take a ‘close’ look at the ‘job’. If looks not quite right, I’ll touch with iron to ‘reflow’ that pin. Usually have plenty of solder there, so no need for more.
haha. Good point, I always need helping hands. Hand generation got fixed and now “people using tools” and “tools usable by people” is the next weak spot. I saw another fake recently, an “arms cache” in the Gaza conflict, and in the pic there was a double-sided AK with barrels on both ends!
Well, paint it manually on a ‘canvas’ then take a photo of it for the public’s enjoyment. Save the ‘originals’. No more proof problem.
I wouldn’t be so sure that it’s harder to verify whether a (modern) photographer really took a certain image than it is with drawn art. The final image will still usually be an edited version of an original generated by the camera, so the equivalent of showing the photoshop file with layers is to show the original file and the sequence of edits. Then you can scrutinize the original file, which has metadata indicating what camera was used with what lens and at what settings, focal length, focus distance, etc (Unless they used vintage lenses without electronics, in which case you lack the lens data). If you notice that the field of focus is wrong for the settings that you see, or that they claim to have used a slow shutter speed but there’s a flying bird in the background which isn’t blurry, or that there appear to be strange artifacts not consistent with what’s normal for the lens – a random lens flare or bokeh shape with the wrong number of sides for the number of aperture blades, etc – then you’ve caught them doctoring things, whether with or without AI.
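As a sketch of what those consistency checks might look like, here’s a toy plausibility checker over an already-parsed metadata dict. To be clear: the dict and its field names are hypothetical stand-ins (real code would pull the actual EXIF tags with a library such as Pillow or exifread), and these three rules are illustrative, not exhaustive:

```python
def metadata_red_flags(meta):
    """Flag internally inconsistent claims in parsed photo metadata.

    `meta` is a plain dict standing in for parsed EXIF; the keys
    here are illustrative names, not real EXIF tag names."""
    flags = []
    # The aperture used can't be wider than the lens's maximum.
    if "FNumber" in meta and "MaxApertureFNumber" in meta:
        if meta["FNumber"] < meta["MaxApertureFNumber"]:
            flags.append("aperture wider than lens maximum")
    # The focal length should fall inside the lens's zoom range.
    if "FocalLength" in meta and "LensFocalRange" in meta:
        lo, hi = meta["LensFocalRange"]
        if not lo <= meta["FocalLength"] <= hi:
            flags.append("focal length outside lens range")
    # Rule of thumb: handheld shots need roughly 1/focal-length
    # shutter speed; claiming far slower than that is suspect.
    if "ExposureTime" in meta and "FocalLength" in meta:
        if meta["ExposureTime"] > 10.0 / meta["FocalLength"]:
            flags.append("implausibly slow shutter for focal length")
    return flags
```

None of these checks prove a photo is genuine; they can only catch a faker who failed to make the claimed settings hang together, which is exactly the kind of detail slip described above.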
Photography has _always_ had the “reality” problem plaguing it as an art form.
On the one hand you get folks who say it’s just pointing a lens — that it’s too literal — and on the other that it’s all Photoshop (or trick photography back in the day) — that it’s too fake.
Oh, absolutely – but fake photos get exposed (hah!) when the faker fails to get some details right. And that’s even just from eyeballing the published result, to say nothing of a detailed analysis on the original raw file or film. If an AI could get most kinds of photo perfectly right all the way back to the raw file, including the sensor and lens non-idealities, weird optical effects, and edge case upon edge case for every camera and lens combination ever… what would be the point of owning a camera? Just take a snapshot with your phone, and use that as a prompt to tell it to simulate how it’d look if you had used a completely different setup. CSI “Enhance”!
Of course, there’s times where optical physics and simple logic say you can’t do that. And you don’t need to fake every camera, just whichever one it’s easiest to claim you used to take your picture of bigfoot. Plus, most of the time you won’t need an amazingly good fake to be believed. You can photoshop an original into something almost completely different, but if anyone looks at the original they’ll see that. And if you are trying to get what could be the picture of the year published, going backwards from a final output to a very detailed raw that fits your story seems a lot trickier than having an AI give you images in a predefined style, and then tracing parts of them a bit at a time so that it looks like you drew something from scratch. Maybe I’m biased?
Right. Like the Cottingley fairies that so fascinated Sir Arthur Conan Doyle (of Sherlock Holmes fame.)
https://en.wikipedia.org/wiki/Cottingley_Fairies
Really interesting comments here. I have mixed feelings on AI myself and of course it’s a quagmire of issues.
In proving art is made by a human, what about art made from collage of images or 3D models? Those may use computers exclusively already.
There are all levels of human made art out there from every era. Some are masterpieces, some are commercial graphics, some weak attempts at masterpieces and some are just mediocre or even bad. Also with photography, computer animation and other illustration done with computer software, AI assistance has existed for a while. For photography, the quality of the equipment can matter a lot, especially for more inexperienced photographers.
I think the need to go to extreme measures to prove a human made art in a contest is misguided. Simple proof should be enough. Also to me, splitting hairs between Photoshop-made art and art using AI is misguided, especially considering the new Adobe Firefly.
AI art is only going to grow. One solution could be to require AI art generators to embed something showing their output was made by AI. The problem is it will get easier and easier to make AI art generators, and bad actors won’t label their art while everyone else has to.
I am both unnerved and excited by AI art and AI in general. I think any artist that uses technology at all needs to embrace AI somewhat and those that feel they can judge art should do so as well. Then there will likely be a counter movement celebrating traditional arts.
I do think that requiring artists to go to great lengths to prove no AI was used is not really appropriate. Maybe in certain competitions proof should be required. In general, art contests should be more open and AI should be allowed.
Worth noting that writing a good prompt to create interesting art is not that simple and AI can be a great starting point for human created art – like starting with a photo.
Also, there was a time when photography required darkroom skills. Now digital cameras handle a lot. However, I don’t notice any photographers giving their camera credit for the photo. It’s worth considering that AI is a tool just like a digital camera.
All of this will take time to sort out. Most important, however, is to encourage human creativity and creating art – with and without AI assistance.
Artists are now required to use Photoshop?
Show the process…
https://youtu.be/3uzxcl_d8uk?si=J6JOgF-oPVIyBsje
There have long existed cameras that cryptographically sign a photo. This is required for use in a court of law or similar, to prove that the photo hasn’t been doctored. Nothing is stopping camera manufacturers from making this standard on all pro and semi-pro cameras. I’m not sure hobbyists want this feature, since it can cut the other way too – proving that a youth really was out doing bad things, and that the published photos were taken by them.
So for a photographer, it would help to show both the post-processed photo and a raw, signed photo proving that the photographer did use camera body X to take it. This obviously won’t tell you whether the photographer used AI during the editing process. And tools like Adobe Firefly will be used more and more in the creative process.
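The scheme the commenter describes can be sketched in a few lines of Python. One big simplification to flag: this toy uses a shared HMAC secret as a stand-in for the camera’s key, whereas real in-camera signing (e.g. C2PA-style content credentials) uses an asymmetric private key held in secure hardware, so anyone can verify but only the camera can sign. All names here are illustrative.

```python
import hashlib
import hmac

# Stand-in for a per-camera secret; a real camera would hold an
# asymmetric private key in secure hardware instead.
CAMERA_KEY = b"example-camera-secret"

def sign_raw(raw_bytes: bytes) -> str:
    """Produce a signature over the raw file as it leaves the sensor."""
    return hmac.new(CAMERA_KEY, raw_bytes, hashlib.sha256).hexdigest()

def verify_raw(raw_bytes: bytes, signature: str) -> bool:
    """Check the raw file is byte-for-byte what the camera signed."""
    return hmac.compare_digest(sign_raw(raw_bytes), signature)
```

Changing even one byte of the raw makes verification fail; as the comment notes, though, this proves nothing about what happened after the raw stage.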
I spent hours laboriously up-scaling the title image to 4k https://gitlab.com/hoolio/i3config/-/raw/bc32bbb8428825f82c928e6fe5f8e023b63dd79b/wallpapers/DALL-E.Hackaday.Cyberpunk.webp
Thanks!
I love the AI-wrencher, BTW. It’s got crossed bones underneath, but the vestigial other bones for ears. It’s the equivalent of the six-fingered hand.
That said, I’ll put the creativity of Joe Kim, certified real human being, up against AI any day.
Wasn’t this called a GAN in the distant past? Just compare the images, retrain generator/discriminator, and repeat. The resistance of those puny humans is futile…
How do I prove an AI didn’t make my art? Easy, my art sucks, I feel sorry for artists though.
So a few years ago I was involved in a project called Artivity, which tracked the development of digital artwork by artists as part of documenting art practice, providing some really interesting insights. The code is open source, although not actively maintained: https://artivity.io/ and https://www.ligatus.org.uk/project/artivity
That isn’t “Art”, it is just illustration or design work, and in the commercial world the method does not matter; only the end result and the time/material costs are relevant to a business. Furthermore, real Art does not concern itself with the medium (unless the message is about the medium); otherwise it is all about the message, regardless of how it is communicated.
Most people who use AI to produce media are not skilled enough to transcend the “default settings” local minima, so their work is confined to an increasingly crowded space of obviously AI-produced work that all seems to come from the same “strange land”, i.e. a finite and distinct part of latent space, which is a tiny part of all possible latent spaces since the AI is limited to its training set.
Try creating an image of an octopus with the right number of legs, all properly contained inside a space suit, and you will soon learn how limited current AI systems are due to the above factors. There are a few artists who transcend the limits of the “default settings” trap, but they are rare and I celebrate them when I find them.
My philosophic question is… does it matter? If the competition is one entry per person, best drawing wins, does it matter whether the person drew it in Photoshop or generated it with a series of prompts?
Depends on the definition of the competition. If there is a contest for the best from-scratch car, driving in with a store-bought model you did close to nothing to would rightfully be seen as cheating.
Which I believe is the dilemma at play here. These contests rapidly exclude AI generation and treat it as cheating, because the artist does close to nothing but type in a description with maybe “in the style of artist X”, but… how can you tell that’s the case in the face of ever-improving AI?
I think people just need a better understanding. Left and right I see discussions about how AI generation is ‘unfair’ and ‘takes away jobs of real artists’ etc., but at the same time, in 95% of cases we can just tell whether an image was AI-generated, because the image will have some weird issues, even if some take a second look to spot.
Add to that the fact that “AI” has been part of artists’ tool sets for years (just look at all those ‘content aware’ things in Photoshop etc.), and I kind of wonder why some people appear to be so upset, for as far as I can tell nobody lost their jobs over those things either?
People need to accept that things change and not sell themselves so short. All these AI image generators are still not ‘perfect’, and imo any artist that’s worried should just learn to use these generators as part of their process, so they become aware of their shortcomings (and can use that to justify ‘you should hire me because’ to a customer). AI image generation is a TOOL, not a replacement for an artist by any means.
If you use an image generation tool and then apply your artist skillset, you end up with something amazing in half the time compared to making the entire image yourself. Try and see the value in that.
Eventually (still years from now, imo) they will reach the point where they are ‘perfect’, but that’s just natural progression. Sadly, not all jobs exist forever; if they did, I’d still be making Flash applications 🤷♂️ Doesn’t mean I don’t have a job today, and the same goes for artists in the future: people will still want to hire actual artists. The work might change a bit, but that’s just how things go.
There are two problems. One is that, as with any craft, take away too much of the process and it stops being enjoyable. That is why AI tools are OK, but generative gets a hard line drawn against it: it takes away too much and makes the process dreadfully boring and impersonal. You ain’t an AI-assisted artist, but an editor for an AI.
The second is the question: what value is there in getting “amazing done quick” for the artist? Non-artists salivate at the idea of “more for LESS”, which brings with it the nasty reality that the creators get LESS for more…
It is easy to say progress stops for no one, but that is false. History has countless examples of progress grinding to a halt when people just don’t want it, and so far the case for artists to want this is very lackluster. It’s mostly just “but think about how much you can create FOR US”, which isn’t a concern for art.
In the case of Mizkai, the onus is on the panel judges to prove their claim that it was made with AI, not on the artist to defend that it wasn’t, DESPITE her having done more than enough to do so. This is not the future we artists want. But since big money is backing the tech, I fear there’s little we will be able to do.
So a banana stuck on a wall with duct tape is art, but what an AI makes is not.
Much of what folks count as art, I would see in the same vein as the AI crap – just that, crap.
A banana duct taped to the wall could be art – if the artist has a goal and a meaning behind the construction. Far too many wannabe artists put no more meaning behind their works than AI does.
The output from an AI has no goal and no meaning. The programs have no understanding, no standpoint to express. The output is no more art than making a collage by allowing a machine to randomly place random pictures on a wall is art.
Some artists are too lazy to learn proper art. Drawing, painting – not stupid social commentary garbage.
The banana is more of an artist than the one who stuck it to the wall.
She proved it wasn’t AI. She should sue. Maybe they’d get their act together then. I hate contests anyway.
“generate cars with door shut lines that make no sense”
Teslas are perfectly fine cars. Stop the negativity.
Overall, I am not worried about AI image generators. Their output is often boring, derivative mimicry, as it quickly turns out the age-old tool problem still exists: giving an amateur the greatest tool in the world doesn’t make them a master. Most prompters don’t bother, and most artists don’t like to use it because it just takes too much fun out of the craft.
But in contests it does become a big problem, as it is one of the few instances where people are incentivized to use AI seriously to get an “easy” win at the cost of those who work hard. Contests seriously need to adapt and take this seriously if they want to remain contests about artists creating stuff, not editors cleaning up AI output.
Maybe Mizkai got detected as AI because her name ends with “ai”?
I think the simple answer to the headline question is: AI-generated art is shit. There’s a certain “aesthetic” to it that just makes it stand out…
metadata leaves AI markers
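True in some cases: some generator front-ends write the prompt into the file itself – common Stable Diffusion UIs, for instance, store a “parameters” tEXt chunk in their PNG output. A stdlib-only sketch that lists a PNG’s tEXt chunks and flags generator-looking keywords (the keyword list is illustrative, and since metadata is trivially stripped, a clean file proves nothing):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"
# Illustrative keyword list; "parameters" is what common Stable
# Diffusion front-ends write, the others are plausible variants.
AI_KEYS = {"parameters", "prompt", "Dream"}

def text_chunks(png: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    if not png.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            key, _, text = png[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

def ai_markers(png: bytes) -> dict:
    """Subset of text chunks whose keywords look generator-written."""
    return {k: v for k, v in text_chunks(png).items() if k in AI_KEYS}
```

The absence of markers means nothing, but their presence is about as close to a smoking gun as metadata gets.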