LLMs Are Taking the Joy Out of Our Lives
This website does not, will not, and never has willingly used AI in its creation. I write every word myself and rely on traditional methods to publish it to the internet.
That means I correct my spelling using Hunspell in Emacs with its convenient flyspell-mode checker. I do not generate pictures for my articles; I either create them myself or take them from the Internet with proper credits.
Obviously, I use tools like LSPs and linters liberally, but I write the code I present in my articles by hand. I don't even look at AI output.
If I don't know something, I go on the internet and search. If that doesn't work, I go to a community and ask. You'd be surprised how kind and helpful people are if you show them a modicum of politeness.
Note: This is a largely unedited rant. You've been warned.
1. Job
I don't like AI. Not because I feel it's endangering my job, even though for many others, including people I hold dearly, it is a very real possibility. I am embedded nicely into my team, and my manager is level-headed enough to understand that you cannot pass off our job to a machine and expect things not to collapse eventually.
I don't like it, because the code produced using LLMs is at best a laughably bad tangle of over-complicated boilerplate and cookie-cutter repetition of code that could very easily be made DRY. At worst it generates a simulacrum of what a person might write, causing your intuition (trained on how real people think and what real people write, and which would otherwise help you catch bugs and issues) to become unreliable.
And given how much less effort it takes for an AI to spit out a thousand lines than it takes a person to review said thousand lines, we have no chance of inspecting code with nearly as much care and scrutiny as before.
Projects, both professional and hobby-driven, are suffocating under vibe-code pushed onto them both by people meaning well and by people intending to profit. Because of the aforementioned ease of generating more code, and how convincingly AI can masquerade as a person, the ever-increasing load of dealing with the crap-deluge lands atop maintainers' already thankless workload, wasting valuable hours and manpower. cURL may have been the first, but it sure as hell won't be the last.
The solution? Just use AI to filter out the AI! This is untenable. Models made specifically to suss out LLM-written text aren't at all reliable. With code you have even less to rely on, because professional code generally doesn't have "character".[1]
So we're left with rudimentary tools, like:
- "All PRs above X thousand lines are blanket banned",
- "PRs committed by an LLM are blanket banned",
- "PRs where the author, after a lengthy questioning, cannot answer questions in a satisfying way are banned".
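Of these, only the size rule can even be checked mechanically. As a toy illustration (the threshold and all names here are invented, not taken from any real project's policy), it amounts to nothing more than:

```python
# Hypothetical sketch of the crude size-based rule above: blanket-ban any
# PR whose diff exceeds a fixed line budget. Threshold is illustrative.
MAX_DIFF_LINES = 5000

def auto_reject(lines_added: int, lines_removed: int) -> bool:
    """Return True if the PR should be bounced outright for sheer size."""
    return lines_added + lines_removed > MAX_DIFF_LINES

# A 6,000-line code drop gets rejected without review; a small patch passes.
print(auto_reject(6000, 200))  # True
print(auto_reject(80, 15))     # False
```

The other two rules require a human in the loop, which is exactly the resource the crap-deluge is exhausting.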
It's an awkward cat-and-mouse game of developers chasing down more and more convincing models and conniving people, while the reliability of code isn't improving at nearly the same rate.
And this is just programming. In many companies CEOs and boards are using AI as an excuse to lay people off:
Oh, who needs a new generation to replace the old one? We'll just do it with the AI!
And when that old generation ages out and there's nobody to train new people, what will happen then, huh?
Oh, but AI won't make everyone jobless, you'll just be able to do 20 people's job alone!
First, that's bullshit. Second, you can be grateful it's bullshit, because the job market is already terrible. Imagine if suddenly there were 20x fewer jobs. It'd be catastrophic.
A friend of mine studied visual arts. After many years in university, he entered a job market that no longer had a need for him. He wasn't a "pro" (despite being skilled), so he had no foot in the door. And he couldn't become a "pro", because all the jobs he applied to declined him, saying they'd just use GenAI to make stuff! How do you build a portfolio that way?
He spent a couple of years living off odd jobs and is now re-educating himself as a programmer. While I'm glad he found something else that also interests him, I think it's still tragic he wasn't able to get work doing his dream job. Potential greatness dying at the feet of "eh, good enough."
Or there is that funny story where a business owner didn't bother to hire customer support. Their AI offered a customer an 80% discount. In this instance, the owner was extremely lucky, because they weren't obligated to honour the offer, but imagine if they were. We might intuitively think a chatbot's responses aren't legally binding, but why not? If companies think they can offload customer complaints to these bots, surely the expectation is that the bots offer only company-sanctioned information.
2. Hype-driven death march
I don't like AI. Not because I'm a Luddite[2]; I'm not. For most of my childhood and early adult years, the march of technology was something I beheld in awe.
I was born right before smartphones as we know them today were a thing. I saw mobiles go from bricks to hand-held supercomputers. While this too has its own detriments (unchecked social media exposure has made the childhood experience much more stressful than it needs to be), it also brought with it undeniable benefits.
A minimal smartphone nowadays is affordable to just about anyone who isn't completely destitute. It won't be fast, and it won't do most of the stuff a top-of-the-line model can, but it can give you access to a lot of things: banking, applying for work, handling your communication (both synchronous and asynchronous), entertainment, even security through 2FA/MFA.
Smartphones are indeed so convenient that the idea of life without them is almost unthinkable to many of us. Whether that's a good thing is arguable (and you can probably argue towards "it's not" more easily), but what nobody can deny is that smartphones are an utter financial and cultural success.
With AI, meanwhile, companies are still figuring out how to even make it profitable. You know, when it's actually AI and not just An Indian. It's not particularly good at coding; you cannot trust it with important choices, because someone has to be responsible for them; it cannot really innovate, only repeat variations. Last year OpenAI lost $11.5 billion. Things are fine, trust us. And all that keeps this kinda-sorta-works-trust-us bubble alive is five or so companies slipping money into each other's pockets.
When this pops, and I frankly see no possibility that it won't, everyone will suffer. Maybe it will even be another 2008. My tech ETFs are up 25% since I bought them a year ago. That kind of growth is unnatural, and I dread the day the debt catches up or the investors wise up to the ploy. I'm sure many of them already have, but it's more profitable to keep up the charade.
3. Art-robbery
I don't like AI. Not because I'm good at art; I can barely draw stick figures. For people like me, generative AI is supposedly a boon, the great equaliser that allows even us to materialise our dreams.
Except all of these models were trained on artists who did not consent to their works being chewed up and regurgitated in a thousand different yet eerily similar ways, losing all the emotion and originality that these people poured into each and every piece.
And then there are all the authors whose books were scraped and fed into LLMs. That resulted in a lawsuit or two, yes, but the damage is already done. You cannot exactly "untrain" a model. You can only put safeguards and rules on it, which clever people will always be able to dodge.
And even if you could selectively remove all this knowledge from them, who in their right mind would? In this rat race, only the smartest, most knowledgeable AI wins; everyone else loses big. Sans comprehensive governmental crackdowns, no company would willingly cripple its own chances. And governments won't crack down on it, because then the AIs of other, less copyright-concerned countries will take the lead. That does not make the line go up.
Art is an inherently sentient thing. I originally considered saying it is a "human thing", but there are instances of animals making art too. Everything from the loftiest art piece down to the raunchiest fetish art is made because someone wanted someone else to feel something. The actual piece may not appeal, but the sentiment is universally beautiful.
So when you delegate this to a machine that has no understanding of anything beyond "what pixel usually goes next to what other pixels", everything you make is hollow. It might be very technically impressive (though it wasn't you who impressed, but the machine), but there is no intent behind it.
I feel nothing but disgust when I look at AI "art". It's always the same: Shrimp-Jesus, hyper-sexualised big tiddy anime waifus, diabetes-inducing sugary-cute animals with eyes as big as fists, comically obese people falling through glass bridges, buff gigachads with glowing red eyes telling you that you're not sigma. I will not include examples; I'm sure you've seen all of these already. Just writing this list down makes me exasperated.
4. Post-truth society
I don't like AI. Not because I'm not tired of Google giving terrible results, or because I enjoy digging into ancient threads that much. I don't like it, because you can never be quite sure whether what you're reading is true without doing your own research, and because the LLM will go out of its way to convince you to trust it.
I can begrudgingly accept that chatbots can be a good springboard if you have no idea how to start researching something. By demanding sources and justifications, you can get by, perhaps even achieve more than if you were left with Google alone.
However, many people think it's a miracle-machine and trust its output blindly, leading at best to embarrassment, at worst to death. AI struggles with the 'r's in "strawberry". LLMs will swear that functions that never existed are part of APIs. People, places, events, just about anything can be hallucinated. Chatbots melt into sobbing messes if you ask them about seahorses. One drove a man to drink bromine and nearly end his own life.
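The strawberry example stings precisely because the task is trivial for ordinary, deterministic code, while a token-based text predictor has famously fumbled it:

```python
# Exact letter counting is a one-liner for ordinary code;
# "strawberry" contains the letter 'r' three times.
word = "strawberry"
print(word.count("r"))  # 3
```

A machine that can't be trusted with that can't be blindly trusted with anything bigger.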
Hell, forget about "nearly": there is now a list on Wikipedia counting the suicides and deaths attributed to LLMs. At the time of writing, fourteen people could still be alive. AI is not just a tool to kill people, but also to make the living more miserable.[3] But trust us, just 50 billion more parameters and it'll be perfect!
And even when things don't end so grimly, AI is still a tool for radicalisation. It has never been easier to make up legitimate-sounding hogwash. Claude, give me a 200-page PDF about how my political opponent will bankrupt old people, raise taxes to unprecedented levels, and send off all young men to war. You might think that's a ridiculous example; it happened in my country.
Old and/or tech-illiterate people not only have no idea whether what they're looking at is legit, they might not even realise it's a question that needs to be asked. Ten to fifteen years ago, something happening on video was either legit or very obviously fake. The few hoaxes that reached the mainstream were memorable exceptions. Today, generating stuff that's not outrageously obvious and can easily fool many is an everyday occurrence.
Even I get fooled occasionally, and that terrifies me, because I'm supposed to be tech-savvy. Technology is my livelihood. I've been banging bits together since I was eight years old. And yet every once in a while my judgement lapses: I read a comment or watch a video and don't spot the obvious. The whiplash when other comments inform me that I was engaging with slop always stings.
5. We're burning our planet for… this?
I don't like AI. Not because it's the sole cause of climate change; climate change existed before AI. Arguably AI isn't even the biggest contributor: global shipping and the private flights of billionaires probably cause magnitudes more problems than Joe Schmoe asking the magic answering machine if he should invest in NFTs in 2026.
But we're wasting perfectly good land and, far more importantly, perfectly good water on dystopian giga-complexes for AI loads that haven't even materialised yet, while also shipping off nearly all future RAM and GPUs for these same hypothetical AI loads. That both baffles and infuriates me.
This is pure line-go-up mindset. Tomorrow be damned, today we have to shovel a few extra cents into $NVDA. Meanwhile, from June to August, we can barely go outside for a few hours, because temperatures are more often closer to 40°C than to 25°C. When I was a child (and I'm in my mid-twenties at the time of writing, so I'm not exactly old!), winters had snow for weeks. Summers were balmy, but never asphalt-melting hot.
Is providing more means of generating zero-views slop really our first priority? Is that really what we need right now?
6. I'm not fully innocent, but at least I care
I don't like AI. And yet, strictly at work, I do occasionally use AI-enabled code completion. Or I sometimes make the AI generate boilerplate for unit tests that practically any templating system could also generate, just perhaps a bit more rigidly. And every couple of months I toy around with the freely available models online and run a few queries out of curiosity. And for that, it's fine.
I cannot hate people for wanting to use a tool, nor would I want to. It's a shiny toy with some uses (albeit with caveats). My problem is with this dogged insistence on growth towards some vague faraway goal that seems to shift between AGI, replacing workers, enhancing workers, enhancing processes, replacing processes, and so on ad nauseam. It's gross; it does nothing but generate profits out of thin air for a very tiny fraction of people, while also causing pain on several scales. And it's just plain emotionally and mentally draining.
And for that, as long as I'm able to, I'm keeping AI out of my hobby. Thanks for reading.
Footnotes:
1. Of course, this isn't universally true; there are definitely eccentric and unusual coding styles out there that AI has no hope of replicating. However, such styles are generally frowned upon in professional contexts, because you're not writing for yourself but for a team that might one day include people you'll never meet.
2. The actual historical Luddites get a really bad rap. While their methods were destructive, it's hard to seriously blame them. It was never about "uhh, machines bad!" It was more "this new technology is rapidly causing mass unemployment, and that's endangering tons of livelihoods."
Considering how similar this is to today's situation (though one may wonder if AI truly brings as much benefit to society as the mechanical looms did), maybe I am a bit of a Luddite. At least in spirit, since I cannot exactly take a hammer to a program.
3. Yes, this is intentionally mimicking the usual AI slop marker of "It's not just X, it's Y". If you're gonna think this whole text is AI generated too just because of this, I'm going to send you a very mean stare.
