Groundhog Day: AI Art Edition

Tech Talks

Published on 31 March 2025

CJ meme about the OpenAI fiasco.

A rant on GenAI and OpenAI's IP theft.

Am I crazy, or did this exact same rodeo happen, like, two years ago? The internet exploding with AI-generated images in a beloved, distinct art style (this time it's Studio Ghibli's turn again), the immediate and fierce backlash from artists and fans, the debates about copyright and ethics... it feels like I'm losing my mind, because we've been here before.

Remember when people rightly pointed to Hayao Miyazaki's famous hatred of AI art (he called it an "insult to life itself")? It felt disrespectful, cheapening decades of human craft. Even Zelda Williams called it "technological piracy," which hits the nail on the head for a lot of folks.

But here's where the déjà vu really kicks in. This isn't the first time AI has spat out Ghibli-esque stuff. Back around 2022-2023, tools like Midjourney, Stable Diffusion, and earlier OpenAI models (DALL-E 2 and 3) were doing it too. People were experimenting, posting results, and yes, the same ethical alarm bells were ringing then. It just wasn't quite this... viral. This time felt different because the tool was better, easier to use, and pushed by the biggest name in the game, OpenAI.

Okay, Now Let's Talk About Leadership and That Tweet...

This is where the déjà vu gets really strong, especially regarding the captain of the ship, Sam Altman. Because yes, he absolutely did wade into the Ghibli craze, changed his profile pic, and then dropped this gem on Twitter on March 26th:

>be me
>grind for a decade trying to help make superintelligence to cure cancer or whatever
>mostly no one cares for first 7.5 years, then for 2.5 years everyone hates you for everything
>wake up one day to hundreds of messages: "look i made you into a twink ghibli style haha"

So, yeah. While the internet is freaking out about the ethics of his company's tool effectively strip-mining a beloved art style, and artists are feeling understandably pissed off, the response is... this? A sort of self-pitying, greentext-style post that juxtaposes the "serious business" of building world-saving AI with the perceived silliness of being turned into a Ghibli character.

Let's be brutally honest: This isn't just tone-deaf; it's a masterclass in manipulative deflection. It's an attempt to shield OpenAI from criticism by invoking grandiose, unrelated goals. The casual "or whatever" doesn't make it humble; it makes the supposed life-saving mission sound like a convenient shield he pulls out when the heat is on. It centers his struggle, his decade-long grind, over the immediate, tangible concerns about artistic integrity, copyright, and the actual use of the technology he's unleashed.

And this gets to the heart of the rot, not just with image generators, but with the entire Generative AI hype machine, especially large language models (LLMs) like ChatGPT. Altman, and the whole industry chorus, keeps dangling these world-changing carrots: curing diseases, solving climate change, 10x workforce efficiency, achieving AGI nirvana. But let's call bullshit.

ChatGPT is not going to cure cancer. No current LLM is. Why? Because fundamentally, these models are incredibly sophisticated mimics: pattern-matchers operating on probabilities derived from gargantuan datasets of existing human text and images (the majority of which were obtained through questionable means). They are autocomplete on steroids. They can generate plausible-sounding text, code, or images based on patterns they've seen, but they don't understand context, truth, or causality in the way required for genuine scientific discovery. They don't reason or form novel hypotheses based on evidence; they regurgitate and remix what they were fed. Worse, they "hallucinate": they confidently spew convincing-sounding nonsense, which is the polar opposite of the meticulous, verifiable work needed to, say, develop a new cancer therapy.
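To make the "autocomplete on steroids" jab concrete, here's a toy sketch (mine, obviously, not OpenAI's code): a bigram model that "writes" by sampling whichever word tended to follow the previous one in its training text. Real LLMs are transformers over subword tokens at an unimaginably larger scale, but the core loop, predicting the next token from statistical patterns in the data, is the same.

```python
import random
from collections import Counter, defaultdict

# Toy "training corpus" standing in for the internet-scale scrape.
corpus = (
    "the model predicts the next word the model does not understand "
    "the word it predicts the model just repeats patterns in the data"
).split()

# Count which word follows which: the entire "knowledge" of a bigram model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    options = follows[prev]
    if not options:                       # dead end: no observed continuation
        return random.choice(corpus)
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

# Generate text by repeatedly asking "what usually comes next?"
word, output = "the", ["the"]
for _ in range(10):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # fluent-ish remix of the training data, zero understanding
```

Scale that up by a few trillion tokens and swap the lookup table for a transformer, and you get fluent remixing, not a scientist.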

Comparing these LLMs to something like DeepMind's AlphaFold is almost insulting to the focused scientific effort involved in the latter. AlphaFold was meticulously designed to tackle one specific, complex scientific problem (protein folding) and has yielded verifiable insights. LLMs are being marketed as near-sentient polymaths while struggling with basic consistency and truthfulness.

And what's the cost of this hype train? It's not just ethical hand-wringing.

Monumental Resource Drain: Training these giant models consumes obscene amounts of electricity, enough to power small countries. Data centers guzzle millions of liters of fresh water for cooling. We're talking about a significant, measurable environmental footprint for... what, exactly? Better chatbots and image filters?
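For a rough sense of scale, here's a back-of-envelope sketch. The widely cited outside estimate for a single GPT-3 training run is about 1,287 MWh (Patterson et al., 2021); that's an academic estimate, not OpenAI's figure, and frontier models since are generally believed to cost far more.

```python
# Back-of-envelope: training energy vs. household consumption.
# Both figures are rough outside estimates, not official numbers.

gpt3_training_mwh = 1_287          # est. energy for one GPT-3 run (Patterson et al., 2021)
us_household_mwh_per_year = 10.6   # approx. average annual US household use (EIA)

households_for_a_year = gpt3_training_mwh / us_household_mwh_per_year
print(f"~{households_for_a_year:.0f} US households powered for a year")  # ~121
```

And that's one training run of a 2020-era model, before counting inference, retraining, or the water.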

Unsustainable Economics: The computational cost isn't just for training; running these models (inference) is also incredibly expensive. Companies are burning through billions in venture capital and cloud credits. Is there a sustainable business model here beyond licensing tech to giants like Microsoft to slightly improve search results or Office features, while hoping for some future AGI jackpot? The cracks are showing: constant quests for more funding, questions about profitability, and sheer cost limiting widespread, complex use.
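To see why the inference bill gets scary, here's a sketch with entirely hypothetical placeholder numbers; the assumed GPU price, throughput, and traffic below are illustrative guesses, not real OpenAI or cloud-provider figures.

```python
# Hypothetical inference economics; every number is a made-up placeholder.

gpu_cost_per_hour = 2.00        # assumed blended $/GPU-hour
tokens_per_gpu_hour = 50_000    # assumed serving throughput for a large model
tokens_per_response = 500       # assumed average answer length
daily_queries = 1_000_000_000   # assumed ChatGPT-scale traffic

cost_per_response = gpu_cost_per_hour * tokens_per_response / tokens_per_gpu_hour
print(f"${cost_per_response:.3f} per response")              # $0.020
print(f"${cost_per_response * daily_queries:,.0f} per day")  # $20,000,000
```

Even at these charitable made-up rates, the daily bill runs to eight figures, which is why the "burn billions, hope for AGI" model keeps needing fresh funding rounds.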

Foundation on Shaky Ground: The entire edifice is built on training data scraped from the internet, often without permission, leading to a growing number of massive copyright lawsuits from authors, artists, and news organizations. If the data foundations are proven illegal or unethical, the whole house of cards could wobble.

So, is this the GenAI playbook now? Unleash tech with obvious ethical tripwires, watch the drama explode, deflect with grand pronouncements about saving humanity, and maybe issue a non-apology about melting GPUs? It certainly feels calculated. You have to wonder if stirring the pot is part of the strategy; after all, any publicity keeps the hype train rolling, right? Especially when maybe, just maybe, the truly groundbreaking leaps are getting harder to come by. Remember GPT-4.5?

And the whole "we're building AGI to cure cancer or whatever" schtick while navigating self-inflicted PR dumpster fires? Altman's performance, the lofty mission statements mixed with Silicon Valley power plays, doesn't just rhyme with Elon Musk's; it's practically the same tune played in a slightly different key. Maybe they bonded over it back in the day? You know, before the lawsuits.

It's almost comical. One minute they're co-founders dreaming of altruistic AI, the next Musk is suing Altman, accusing OpenAI of betraying humanity (and their founding agreement) for sweet, sweet Microsoft cash. Talk about founder drama! It really puts the "saving humanity" claims from both sides into perspective, doesn't it? Who knew world salvation involved so much litigation between billionaires?

Maybe there's a secret playbook they both got: Chapter 1: Announce plan to save the world. Chapter 2: Generate chaos. Chapter 3: Tweet cryptically. Chapter 4: Blame critics/media. Chapter 5: ??? Chapter 6: Profit (or sue former friends). Chapter 7: Get directly involved in the government. It's less about curing cancer or colonizing Mars, and more about building personality cults fueled by impossible promises and the sheer gravitational pull of capital.

Ultimately, this Ghibli mess feels less like an isolated oopsie and more like a symptom of an industry high on its own supply, burning cash and goodwill chasing god-like ambitions while delivering tools that mostly just amplify existing human creativity... and controversy.

CJ, say the line again!
