These criticisms of AI are always amusing to me when you consider the sheer scale of human-created garbage on the internet, long before LLMs arrived on the scene, due to people parroting the bad information they heard or read from others.
so now it'll be done exponentially faster with absolutely no opportunity for critical insight because machines are fundamentally incapable of cognitive function? i can develop the skills to dissect what i consume and discern its value, but what capacity do LLMs have to do the same? they don't think.
I see it more like this: If you had to develop the critical-thinking skills to discern what's true or not in a pre-LLM internet age that was absolutely full of human-generated garbage, you'll still need those skills in a post-LLM age. Whether the garbage will be accelerated is yet to be seen.
not if we're priming people to become dependent on the technology because we're biased to believe machines are infallible when it comes to objectivity. we've already seen the problems in existing technology (people refusing to acknowledge bias in analytical ai)
if LLMs can't even be trusted to not realize when they're making up sources, why should we be asking them for answers to anything?
It's still early days for the technology so it's hard to say how error-prone it's going to be. If the next-gen LLMs are still hallucinating sources then yeah people should be made aware the LLMs are not infallible. But ultimately it's a critical thinking / media literacy issue like in the past.
How do you know what's on Wikipedia is true or not? You shouldn't treat it as infallible, but using it as a starting point and considering what it has to say, and going from there is a reasonable thing to do. Same kind of thing IMO.
we were told from the get-go to approach wikipedia w caution and scrutiny, and there are transparent threads to follow for the info there, but that's not how LLMs are, and the resource cost of having them unreliably perform what we already have tools for is disproportionate to their usefulness
also wikipedia not only allows for legitimacy through sourcing, but credit as well
My point was that, even considering what you said, you can't treat Wikipedia as infallible, and there have been many controversies in the past with unreliable/biased information published there. You have to develop your own critical thinking skills to interpret the information. Same with AI.
Many Wikipedia editors have pretty blatant biases so they'll go find a credible-seeming cherry-picked source for what they want to have published, but if you really scrutinize their contributions to an article it doesn't hold up to Wikipedia's stated ideal of neutrality.
(chuckle) fair point. And that's a genuine problem, for sure. But this LLM problem feels serious too.
It definitely is and I don't deny that. Hopefully we can steer it in a good direction.
It’s sad to see an “artist” who’s pro AI. I bet if you actually worked hard you would be able to actually produce art instead of having a mindless algorithm producing worthless slop for you
Oh I see, you saw "artist" and you saw a comment that's not kneejerk anti-AI (therefore I must be pro-AI) so you assumed my art is AI-generated. That's adorable.
That is how most folk will view it. Anti-AI is the only pro-art perspective. There is no middle ground here
Recently attended a session at GDC where the art director at Rovio (creators of Angry Birds) said they utilize AI to help them with generating background art. They then showed their practice of generating backgrounds they felt fit the target they were looking for. You can attack them and say that
Angry Birds isn't really art 🤷‍♂️. But this is a case where the 'artists' wanted to focus on something they felt was more important / more enjoyable, which to them was rendering their key characters.
and their work is fundamentally diminished because they chose laziness over human expression. That is again, at its core, anti art
Their work 'diminishing' is your opinion to hold. The artists over there would argue that it let them express the parts they cared to focus on. Their approach to background rendering frees them up to create better character expression.
If you see somebody who just renders out a character and no background, would you call them lazy or devoid of expression? That is not anti-art.
Again the use of generative AI to cobble together slop trained off the stolen work of actual artists is inherently anti-art. The folk that use these programs to make things for them aren’t artists, they just typed prompts into a theft machine
Is your problem the use of other artist's work? If team Rovio or any other team for that matter trained the models they used to generate whatever assets they liked on completely in-house material, would you still have a problem with it?
I'm hearing similar kinds of stories. Artists as a whole aren't as anti-AI as some who are very vocal online want to make it seem. There's a whole spectrum from outright refuse to use it, to dabble with it and find it interesting, to use it in an auxiliary manner, to full-fledged "AI artists".
No such thing as an AI artist
Among the artists I respect there's many who are anti-AI and there's many who think it's interesting and might play around with it even if it doesn't end up in their professional work. I'd say there is a middle ground. AI has many problems but it's here to stay. It can't be shamed out of existence.
The technology is in its infancy. Honestly, it's irrelevant whether artists (or anyone else) are pro-AI or anti; it will eventually replace artists, writers, composers, etc. Human beings will still engage in creative expression, but it won't be possible to make a living from it anymore.
There's going to be huge growing pains for the arts but which professions go away will be very granular. For example, with photographers who shoot in their local community, how would AI replace that unless it's literally autonomous robots physically shooting those photos in those communities?
Generative AI is fundamentally anti-art. It is built on theft. Like NFTs we can absolutely shame away AI. It starts by making it clear that AI and artistry are inherently opposed. There can be no middle ground between artistic expression and algorithmically generated, stolen slop
Gen AI is worthless to professionals: “The production pipeline animation doesn’t fit [with AI]. There’s no way to actually utilise it. All it will do is confuse things.”
The Simpsons writers say AI is “stupid” and explain why it couldn’t help the show (Cameron Frew). www.dexerto.com/tv-movies/th...
Just because one TV show doesn't use gen-AI, which makes sense because it's a very long-running show and they have all their creative processes highly developed and locked down already, doesn't mean you can extrapolate to it being "worthless for professionals".
The anti-AI artists are treating it like NFTs 2.0 but those were a novelty, whereas ChatGPT was the most popular app in the world days after its launch. This tech is widely used. It makes more sense to try to steer it in a good direction than to try to shame it out of existence.
Also it's only a matter of time until an image-generation model is trained only on the 500+ million Creative Commons and pre-LLM public domain images available, so there wouldn't be any copyright infringement there. Will the AI art it outputs still be condemned as "built on theft" then?
In the imaginary world where they train them on non-stolen art they actually properly licensed then yeah we’d only have the fact that it is a waste of resources and creatively dead as critiques to fall back on
Resisting the urge to train a solar-powered ARM-based array of computers on the 500+ million CC and pre-LLM public domain images so that AI haters could only fall back on being the art police, "Well it's not Real Art, because I say so..." 😹
Btw when you say "imaginary world" the reason this hasn't been done is because the for-profit major players in AI are all prioritizing scale and quantity. But it's something a computer science post-doc would be able to create in a month or two as a side project.