My problem with GenAI

My principles and concerns regarding generative AI.

After years of reading discussions on Reddit, Bsky, and X, seeing the good and bad impact of generative AI on the world, and using and reflecting on it myself, I want to lay out a bunch of common arguments and my responses to them. These are, in a sense, my “ethics”: the principles and reasoning behind why I find some uses of GenAI good and beneficial, and others (most of them) bad or detrimental for humanity.

I should point out that I’m still unsure about certain topics, and I don’t feel comfortable “taking one side forever” when some situations contradict one side or the other. But I’m honestly more against GenAI now, as I’m seeing the technology evolve in the wrong direction and end up in the hands of the wrong people.

Of course, I’m open to honest discussions and to change my mind as I’ve already done multiple times on these points in search of the “best perspective” on these advancements.
I’m a curious person who is into both the technical/programming side and the art world, so my points refer to both of those fields, even though some apply more strongly to one than the other.

I already talked about this matter in 2022, right when ChatGPT came out, when I was 17. I encourage you to look into it, but my views have changed and become clearer as I’ve grown up both physically and mentally ;)
I also wrote another post on AI and creativity if you are interested.
I’ll link some videos/resources at the bottom with interesting points of view on GenAI use for art and code.

Note: when I say “AI” in this post, I’m only referring to generative AI, and will sometimes use just “AI” for brevity.
The other types of beneficial AI we all know about (spam filters, fraud detection systems, AlphaFold, …) generally don’t have as many ethical problems and aren’t as divisive, so I won’t cover them in this post.
I’m aware that some of those AIs, like recommendation engines, search ranking algorithms, and more, still have their problems, especially if abused by the monopolies that control them.

Points

  1. Historically, new tools (digital painting, photography / computers, internet) shifted art / tech jobs. It’s just the same with AI now
    • I don’t believe it’s the same. GenAI is seen not just as the automation of simple, predictable processes, but as a complete (if imperfect) substitute for human intellect and creativity, the very thing that brought us here in the first place through years of development and discovery, and that should never be offloaded to anything other than our own minds.
  2. AI now is the worst it’s ever been
    • I agree with that. Indeed, I generally try not to push back on “AI slop” by pointing at its visual or technical imperfections, as the rapid advancement of GenAI has demonstrated multiple times that those problems mostly go away within a few months. But there are still fundamental, inherent problems with AI: hallucinations (yes, I don’t find that humanizing term appropriate either), training bias, black-box and non-deterministic behaviour, and the legal gray area of ownership and attribution ambiguities on which all the most popular AI models are currently built under the justification of “fair use”.
  3. AI is improving the availability of online content
    • AI is oversaturating and polluting the internet with mass-produced content which is, more often than not, low quality. Oversaturation already existed before AI, due to both bad actors and the increased accessibility of digital tools, but unwanted content was still easy to identify and avoid. The rate and ever-increasing believability of AI content (often appealing at first glance, but “broken” or “weird” once you actually look closely) makes it nearly impossible to keep the web browsable by a human. I’m not saying that all content on the internet should be high quality, because it was never meant to be that way, but we should avoid, whenever possible, making it harder to find good, useful, correct content, and AI doesn’t help with that. There’s also the problem of AI feeding on itself once it starts training on internet content with its own hallucinations inside.
  4. AI creates original content
    • A GenAI model, at its core, is a statistical machine that exists only thanks to the data on which it’s trained, producing predictions guided by a prompt. It’s impossible to make it generate something completely new that never came before, as it relies on the patterns in the data from which it is derived. A human, with sufficient knowledge, experience and maybe even luck, can do that.
  5. The flooding of AI content will make human work more valuable (not just in a monetary sense)
    • I think that’s very true, but sadly, corporate types and the ignorant would disagree. Once you get into the weeds of a certain skill, you start to appreciate the little details and the immense effort that went into the final work. You admire it and try to understand how it was done. Many people just aren’t interested and don’t understand or value this. AI, by giving the illusion of replacing an expert with something complete in a few prompts, will only reinforce that mentality, which is so detrimental to human workers.
  6. Vibe coding is fine
    • Yes, vibe coding is fine, as long as you don’t care: you don’t care about the project you are creating, you don’t care about future problems when maintaining it, and the people using it don’t care whether it works correctly and never have to deal with the code. I think my view on this is similar to Linus Torvalds’: https://www.theregister.com/2025/11/18/linus_torvalds_vibe_coding/
  7. AI art and human-made are just the same
    • Art’s value is very subjective. The term can also refer to basically anything, but I’d like to avoid getting into semantics. One can be fully satisfied having something visually appealing to look at, and that’s fine. Speaking for myself, I surely enjoy visually appealing works, but I also believe that art, at its deepest, is about connection with the creator, the journey that led to the final work, and the appreciation of their effort (which you understand better if you’ve tried it yourself). So I can only appreciate an AI work superficially and can’t really give it more value than that.
    • AI code is pleasing to non-programmers, AI music is pleasing to non-musicians, AI art is pleasing to non-artists. But generally, humans make works that “hit harder” than AI ones and can stand out from what came before. And I hope that this won’t be recognized only by people “in the weeds”.
  8. AI doesn’t steal from artists / programmers, it behaves just like a human artist looking at references or a programmer using StackOverflow, and creates new content based on what it has seen
    • Aside from the scale mismatch between the millions to billions of content pieces in an AI’s training data and the limited experience and knowledge a human can acquire in a lifetime, and the unmatchable speed at which AI produces content, I would normally agree that AI doesn’t “steal”. But its training requires the systematic ingestion of a massive amount of copyrighted work, work for which the original authors receive no compensation. As long as we live in a capitalistic society that rewards labor and the creation of value, that use, even if “fair”, is not actually fair unless everybody involved is given appropriate credit and compensation. If the owners of these commercial models earn money with them, why shouldn’t content creators be compensated too? It is currently almost impossible to do, both economically and technically (it’s often hard to trace the sources of an AI output, for both art and code), which means that, functionally, today’s AI systems are appropriating value from creators.
    • The “stealing” sentiment is especially strong with art & music online, while for code I see less anger among programmers, and I’m personally more forgiving on this, though I can’t really explain why (maybe I value code less than art? But it depends on so many things…). Probably because code snippets, especially common ones, are less copyrightable than artistic works; they are just instructions to the computer. But a whole program, and/or the way it’s written, can still be considered art. Maybe I’m biased: this tweet by Andrew Price makes an interesting observation that I agree with:
      https://x.com/andrewpprice/status/1994103016308261192?s=20
  9. AI democratizes creation / AI increases accessibility to create
    • Democratizing creation doesn’t mean offloading the work to someone or something else to do it for you based on a few instructions. That’s more like delegation. I think true democratization of creativity comes from providing open and accessible tools that let individuals express their unique perspectives and ideas in an easier, more frictionless way.
    • Even before AI, there had never been a better time to become an artist, musician, programmer or whatever else, thanks to both the internet and the increased availability and reduced cost of the tools for those jobs (this post by Sophie Cat Blake, which I first saw on X, is a great example of that). The only real barrier left is the time and willingness to learn a skill, and AI substitutes for exactly that. Is that bad? I think so, because everybody learning from and relying entirely on AI would be empowered in the short term but limited by it. Users of GenAI wouldn’t develop their own personality or style (artistic or technical), they wouldn’t reason through their mistakes and learn from them, and they wouldn’t put emotion or intention into what they do beyond its functional purpose (they can’t, as prompts to an AI can’t reflect our internal human emotions). They would not enjoy the challenges of problem-solving and the satisfaction of seeing their own improvement in the skill they are practicing, but instead become annoyed every time the AI makes a mistake or gives a bad output. Not to mention, they would be clueless about what to do without AI’s support. Yes, you could use AI to sharpen your critical thinking and force yourself to learn a skill, but, as I’ve experienced myself, having an all-knowing, immediate assistant always at your fingertips is a huge temptation to skip straight to the results.
    • I’d argue AI actually makes it harder to create, and especially to learn the skills required to create better and better things. As many artists and programmers have pointed out, it’s becoming more and more difficult to find entry-level jobs, as they are either automated by AI or absorbed by senior workers working with AI’s help. And in general, AI already produces results good enough to surpass, superficially, what many artists or programmers can do right now. It can be extremely demotivating for aspiring new generations to see that they will struggle to get a job or to reach the level of senior positions, as AI demolishes entry-level opportunities and rapidly makes better content, while those were exactly the positions a junior worker needed in order to improve while sustaining themselves economically. AI can’t replace those who have deep knowledge, but one needs the opportunity to acquire it while being financially sustained, as a junior position would allow.
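To make point 4 above concrete, here’s a toy sketch of what “a statistical machine that only recombines its training data” means. This is my own illustration, vastly simpler than a real transformer (a bigram model with a made-up corpus), but the underlying principle is the same: by construction, it can only ever emit words it saw during training, recombined according to observed patterns.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record which word follows which in the training text."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 8, seed: int = 0) -> str:
    """Sample a continuation; every emitted word comes from the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = train(corpus)
text = generate(model, "the")

# The model cannot invent a word it never saw:
assert set(text.split()) <= set(corpus.split())
```

A real LLM predicts over tokens with a learned neural network instead of raw counts, and its recombinations are far richer, but the “novelty” still comes from interpolating patterns in the training distribution, which is the point the argument above relies on.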

In the end, the deepest problem AI creates (in any field) is its erosion of human intellectual autonomy. That’s the issue beneath all the others, and the one that makes me reluctant to accept its spread and feel bad every time I use it.

“I have found that the reason a lot of people are interested in artificial intelligence is the same reason a lot of people are interested in artificial limbs: they are missing one.” — David Parnas

AI is a nice shortcut. It offers convenience, and that convenience has many advantages initially, but it also comes at the cost of knowledge, control, and ultimately the ceiling of what a person can achieve.

AI makes doing something easier and more accessible, sure, but using it comes at the cost of never being able to push human knowledge, skill, creativity or expression a notch further in whatever niche you end up in.

As GenAI can only remix what it was trained on, if everyone comes to depend on it by default, the best-case scenario is stagnation. There will be no one able to produce the kind of crazy, obsessive, unpredictable breakthroughs that determined, expert humans have produced throughout history.

Finally, with AI simply existing, younger generations (who have no guarantee that they’ll make breakthroughs of their own) will feel the pressure to rely on it every day, and resisting that default requires an unusual level of determination and autonomy (not the level it took before).

Plus, they will feel even more behind in every field they want to study, a feeling that wasn’t as strong before AI, since people weren’t being outpaced by pseudo-intelligent machines that could produce results faster and at scale, creating a seemingly insurmountable gap and making the learning process less rewarding in the short term. At least, that’s what I’ve been experiencing since 2022.

Videos/resources:

AI Slop Is Destroying The Internet
Generative AI is a Parasitic Cancer
AI is ruining the internet
The AI art situation
"Generative AI" is not what you think it is
Why is Everyone So Wrong About AI Water Use??

