My problem with GenAI (updated)
AI, Life-lessons, Motivation · 22 mins read
My principles and concerns regarding generative AI.
I will update this post from time to time whenever there’s something relevant I want to add and/or when/if I change my opinion on some points (or even just clarify/refine existing ones) as I mature a more complete mentality on this morally and ethically complex topic. The updated parts will probably be at the bottom and not always labeled as new.
After years of reading discussions on Reddit, Bsky, X, seeing the good and bad impact of generative AI in the world, using and reflecting on it, I want to lay out a bunch of common arguments and my responses & principles based on how I currently feel about GenAI, sort of my “ethics” and reasonings behind why I find some uses of GenAI good and beneficial, and others (most of them) bad or detrimental for humanity.
I should point out that I’m still unsure about certain topics and I don’t feel comfortable “taking one side forever” when some situations contradict one side or the other. But I’m honestly more against GenAI now as I’m seeing the technology evolving in the wrong direction and in the hands of the wrong people.
Of course, I’m open to honest discussions and to change my mind as I’ve already done multiple times on these points in search of the “best perspective” on these advancements.
I’m a curious person who is into both the technical/programming side and the art world, so my points refer to both of those fields, even though sometimes they are stronger for one field and less in the other.
I’ve already talked about this matter in 2022, just when ChatGPT came out, while I was 17. I encourage you to look into it, but my views have changed and become clearer as I’ve grown up both physically and mentally ;)
I also wrote another post on AI and creativity if you are interested.
I’ll link some videos/resources at the bottom with interesting points of view on GenAI use for art and code.
Note: when I say “AI”, for this post, I’m only referring to generative AI, and will sometimes use just “AI” for brevity.
The other types of beneficial AIs we all know about (spam filters, fraud detection systems, AlphaFold, …) don’t generally raise as many ethical problems and aren’t as divisive, so I won’t cover them in this post.
I’m aware that some of those AIs, like recommendation engines, search ranking algorithms, and more, still have their problems, especially if abused by the monopolies that control them.
There’s a spectrum of opinions on things you can do with generative AI (even when it seems like it’s not involved) that can be considered more or less acceptable depending on context, use and personal beliefs. For example, generative AI used to describe an image for visually impaired users is generally considered acceptable, but it also replaces a human labeler and can hallucinate details in the description, so it suddenly becomes less acceptable. Grammar and spell check tools seem fine, but the more advanced ones can completely change the original text. And so on.
Usually, the more advanced the generation and the further the output drifts from the input, the less that work is considered human-made. But there isn’t always a clear line separating the two, and the more society accepts GenAI intervention in human work, the further the boundary will shift along the spectrum towards greater acceptance of AI involvement.
Points
- “Historically, new tools (digital painting, photography / computers, internet) shifted art / tech jobs. It’s just the same with AI now”
- I don’t believe it’s the same: GenAI is seen not as mere automation of simple, predictable processes but as a complete (if imperfect) substitute for human intellect and creativity, which is what brought us here in the first place through years of developments and discoveries, and should never be offloaded to anything other than our own minds.
- “AI now is the worst it’s ever been”
- I agree with that. And indeed, I generally try not to fight off “AI slop” by pointing at its visual or technical imperfections, as the rapid advancement of GenAI has shown repeatedly that those problems mostly disappear within a few months. But there are still some fundamental, inherent problems with AI, like hallucinations (no, I don’t find that humanizing term appropriate either), training bias, black-box and non-deterministic behaviour, and the legal gray area of ownership and attribution on which the most popular AI models are currently built under the justification of “fair use”.
- “AI is improving the availability of online content”
- AI is oversaturating and polluting the internet with mass-produced content which is, more often than not, low quality. Oversaturation already existed before AI, due to both bad actors and the increased accessibility of digital tools, but unwanted content was still easy to identify and avoid, while the rate and ever-increasing believability of AI content (often appealing at first glance, yet “broken” or “weird” when you actually look into it) makes it impossible to keep the web browsable by a human. I’m not saying that all content on the internet should be high quality, because it was never meant to be that way, but making it harder to find good, useful, correct content should be prevented whenever possible, and AI doesn’t help with that. There’s also the problem of AI feeding on itself if it starts training on internet content with its own hallucinations inside.
- “AI creates original content”
- A GenAI model, at its core, is a statistical machine that exists only thanks to the data on which it’s trained, producing predictions guided by a prompt. It’s impossible to make it generate something completely new that has never come before, as it relies on the patterns in the data from which it is derived. A human, with sufficient knowledge, experience and maybe even luck, can do that.
- “The flooding of AI content will make human work more valuable (not just in a monetary sense)”
- I think that’s very true, but sadly, corporate people and the uninformed would disagree. Once you get into the weeds of a certain skill, you start to appreciate the little details and the immense effort that went into the final work. You admire it and try to understand how it was done. Many people just aren’t interested and don’t understand or value this. AI, by giving the illusion of producing something complete in a few prompts and replacing an expert, will only reinforce that mentality, so detrimental to human workers.
- “Vibe coding is fine”
- Yes, vibe coding is fine, as long as you don’t care: you don’t care about the project you are creating, you don’t care about future maintenance problems, and nobody cares whether it works or has to deal with the code. I think my view on this is similar to Linus Torvalds’: https://www.theregister.com/2025/11/18/linus_torvalds_vibe_coding/
- “AI art and human-made are just the same”
- Art’s value is very subjective. The term can refer to basically anything, but I’d like to avoid getting into semantics. One can feel fully satisfied simply by engaging with something visually appealing, and there is nothing inherently wrong with that. I, too, appreciate aesthetically pleasing works. However, I see that kind of appreciation as only one layer of what art can offer, and such a superficial perspective can, in the broader scope, damage the relationship between humans. I believe art at its deepest meaning is an act of human connection, whether among audiences who share in the experience, or between the audience and the creator. It involves engaging with the journey that led to the final work and recognizing all the effort behind it (which you only understand properly once you’ve tried doing something related yourself, or seen it done up close). Because of this, with an AI work my appreciation tends to remain at the surface level: I can appreciate it superficially but not give it more value than that. Besides, if nobody put effort into it, why would I put my human effort into caring about it?
- AI code looks pleasing to non-programmers, AI music sounds pleasing to non-musicians, AI art looks pleasing to non-artists. But generally, it’s humans who make the works that “hit harder” than AI ones, that stand out from what came before, and that are actually shaped by human emotions meant to be shared with other people. And I hope that this difference in value won’t be recognized only by people “in the weeds”.
- “AI doesn’t steal from artists / programmers, it behaves just like a human artist looking at references or a programmer using StackOverflow, and creates new content based on what it has seen”
- Aside from the scale mismatch between the millions to billions of content pieces in an AI’s memory and a human’s limited lifetime of experience and acquired knowledge, and the unmatchable speed of generation, I would normally agree that AI doesn’t “steal”. But its training requires the systematic ingestion of a massive amount of copyrighted work, work for which the original authors receive no compensation. As long as we live in a capitalistic society that rewards labor and the creation of value, that use, even if “fair”, is not actually fair unless everybody involved is given appropriate credit and compensation. If the owners of these commercial models earn money with them, why shouldn’t content creators be compensated too? It is currently almost impossible to do, both economically and technically (it’s often hard to trace the sources of an AI output, for both art and code), which means that, functionally, today’s AI systems are appropriating value from creators.
- The “stealing” sentiment is especially strong with art and music online, while for code I see less anger among programmers, and I’m personally more lenient on this, though I can’t really explain why (maybe I value code less than art? But it depends on so many things…). Probably because code snippets, especially the common ones, are less copyrightable than artistic works; they are just instructions to the computer. But a whole program, and/or the way it’s written, can still be considered art. Maybe I’m biased: this tweet by Andrew Price makes an interesting observation that I agree with:
You're protective of the skills that you've acquired, and less of those you haven't. It's why writers like image generators but hate LLMs, and artists like LLMs but hate image generators.
— Andrew Price (@andrewpprice) November 27, 2025
- “AI democratizes creation / AI increases accessibility to create”
- Democratizing creation doesn’t mean offloading the work to someone or something else to do it for you based on a few instructions. That’s more like delegation. I think true democratization of creativity comes from providing open and accessible tools that allow individuals to express their unique perspectives and ideas in an easier, frictionless way.
- Even before AI, there had never been a better time to become an artist, musician, programmer or anything else, thanks to both the internet and the increased availability and reduced cost of the tools for those jobs (this post by Sophie Cat Blake, which I first saw on X, is a great example of that). The only real barrier left was the time and willingness to learn a skill, and AI substitutes even that. Is that bad? I think so, because anyone learning from and relying entirely on AI becomes empowered in the short term but limited by it. Users of GenAI don’t develop their own personality or style (artistic or technical), they don’t reason through their mistakes and learn from them, they don’t put emotion or intention into what they do beyond its functional purpose (they can’t, since prompts to an AI cannot convey our internal human emotions). They don’t enjoy the challenge of problem-solving or the satisfaction of seeing their own improvement in the skill they are practicing; instead they get annoyed every time AI makes a mistake or gives a bad output. Not to mention, they would be clueless about what to do without the support of AI. Yes, you could use AI to sharpen your critical thinking and force yourself to learn a skill, but, as I’ve experienced myself, having an all-knowing, immediate assistant always at your fingertips is a huge temptation to skip straight to the results.
- I’d argue AI actually makes it harder to create, and especially to learn the skills required to create better and better things. As many artists and programmers have pointed out, it’s becoming more and more difficult to find entry-level jobs, as they are either automated by AI or absorbed by senior workers using AI. And in general, AI already produces results good enough to surpass, superficially, what many artists or programmers can do now. It can be extremely demotivating for new generations aspiring to a certain job to see that they will struggle to get it, or to reach a senior level, as AI demolishes entry-level opportunities and rapidly produces better content, even though those were exactly the positions that let a junior worker improve while earning a living. AI can’t replace those with deep knowledge, but one needs the opportunity to acquire it while being financially sustained, as a junior position would allow.
In the end, the deepest problem AI creates (in any field) is its erosion of human autonomy in intelligence. That’s the issue beneath all the others, and the one that makes me reluctant to accept its spreading and feel bad every time I use it.
“I have found that the reason a lot of people are interested in artificial intelligence is the same reason a lot of people are interested in artificial limbs: they are missing one.” — David Parnas
AI is a nice shortcut. It offers convenience, and that convenience has many advantages initially, but it also comes at the cost of knowledge, control, and ultimately the ceiling of what a person can achieve.
AI makes doing something more accessible and easy, sure, but using it will come at the cost of never being able to push human knowledge, skill, creativity or expression a notch further in whatever niche you’ll end up in.
As GenAI can only remix what it was trained on, if everyone comes to depend on it by default, the best-case scenario is stagnation. There will be no one able to produce the kind of crazy, obsessive, unpredictable breakthroughs that determined, expert humans have produced throughout history.
Finally, with AI simply existing, younger generations (who have no guarantee that they’ll make breakthroughs of their own) will feel the pressure to rely on it every day, and resisting that default requires an unusual level of determination and autonomy (far beyond what it took before).
Plus, they will feel even further behind in every field they want to study, a feeling that wasn’t as strong before AI, since people weren’t being outpaced by pseudointelligent machines producing results faster and at scale, creating a seemingly insurmountable gap and making the learning process less rewarding in the short term. At least, that’s what I’ve been experiencing since 2022.
Highly recommended videos
I recently came across this debate+commentary on GenAI on Freya’s YouTube channel (I’d never heard of the other interlocutor, but it doesn’t really matter), and there are many interesting points of reflection, both on the practical consequences of AI usage and on more philosophical questions concerning meaning, creativity, and human purpose.
Many counterarguments are also brought up, which help to understand the nuances of this topic and test the limits of one’s opinion (or rather, where the line is crossed when GenAI is being used, and how it contrasts with traditional automation tools).
Here are some additional notes from me on the video:
- The critique of Adam Neely’s video (see above) mirrors the reaction I had when I first watched it, particularly regarding the fact that the argument “Music in the future might shift from something mostly shared online to something mostly performed live (which is more valuable and human) as AI takes the online space” does not generalize well to many other industries that are inherently digital, such as game development. It sounds dystopian to imagine humans forced to resort to board/physical games as AI takes over the creation of digital ones. That said, I want to reinterpret (or slightly reframe) Neely’s point: rather than a migration from digital to physical, what if generative AI pushed creators to show more of the process behind making their work, as a way to demonstrate its authenticity and the time, effort and choices put into it? For example, record live music production sessions or make tutorials about them, show the timelapse of a drawing, share unused concepts of an artistic creation, document iterative design decisions and so on. Emphasizing process would highlight human intentionality, labor, and creative judgment, the complete opposite of AI outputs. One obvious limitation is work that must remain confidential until release, and honestly I don’t know a solution for that. But for the rest, I think that if this were the widespread reaction to generative AI, it wouldn’t be as harmful for creators. It could actually encourage “consumers” (the audience) to be more involved, and perhaps even more inclined to act and less prone to defaulting to AI to create something.
- Many people exclusively value the end result, attributing little to no value to the process that led to it. I tended to do this too in the past, especially for things very distant from my knowledge and experience, but the more you experiment and attempt to create something (whether art, music, code, architectural design, or even cooking), the more you come to appreciate the effort, intentionality, and craft that someone has put into making it. Unfortunately, most people never encounter enough variety to appreciate all human creations, though it’s also true that appreciation doesn’t always depend entirely on direct experience. There’s also the bias to value most what one personally practices or understands over the rest (generally, artists defend art, programmers defend problem-solving, etc., particularly in contrast with AI). In any case, I believe all societies should do a better job of teaching younger generations in school (starting as early as possible) the importance of recognizing artistic value and actively practicing creativity, especially as AI-generated content begins to saturate many of our cultural and informational channels. In my experience, schools try to convey this message primarily within humanistic subjects (never STEM), and often in a manner that feels prescriptive and forced. This approach tends to produce disengagement: students begrudgingly comply academically without really internalizing the underlying principle, let alone being interested or captivated by it. It just becomes an abstract message, disconnected from their interests, that is immediately lost as they mostly continue on with their lives with a superficial consumer perspective. Good teachers and programs can make a difference here, but something more structural would be even better.
Also, training students specifically in creativity and problem-solving skills would make them more immune to job replacement (AI mostly replaces junior/entry-level positions).
- Yes, there’s something lost when embracing speed and efficiency for every aspect of one’s life, and the downsides might not be immediately obvious.
- Yes, transparency and honesty around AI usage are appreciated, but they don’t resolve concerns about the erosion of creativity and trust that GenAI causes.
- Freya argues that AI is best used as a better search engine, and also that the prompt (the originating idea) is the only genuine creative input from the user, but the output is too emergent and far removed from the input to be attributed to the person who wrote the prompt. This framing led me to a somewhat unusual but probably accurate analogy for GenAI: querying an AI model (say, a multimodal LLM) is fundamentally like searching for, and picking, the best match for that query among the trillions upon trillions of configurations in the model’s “latent space” (with some noise added to avoid deterministic outputs). In that sense, an LLM resembles a search engine over itself, as it allows one to pinpoint specific “configurations” of its training data. And just as it sounds like nonsense to Google “for a picture of a bird and then taking that picture and saying that you made that picture”, maybe it’s the same with prompting an AI. You could argue that it’s the same as using arpeggiators (as they mention in the video) or pre-made MIDI patterns or long samples when making music in a DAW. There is some overlap there: in both cases the creator is outsourcing creative decisions, and that can “degrade the value” of the creative work. The difference, though, is in how much agency the creator retains: if you use an arpeggiator, you still have to decide what instrument it’s arpeggiating, in what key, with what rhythmic pattern and so on, and of course figure out how to fit that sound into the track you are creating; if you use AI, you decide none of that, and you lose all agency over the creative choices that make a work truly yours and unique.
- The creative process leading to the final result can (sometimes inevitably) feel unexciting and frustrating, but it’s necessary for the sense of accomplishment you’ll feel later, and it beats letting someone or something else do it for you if you want to improve yourself.