Tuesday, January 7, 2025

Reel Talk: The AI Story I'm Sick Of

AfrAId

Movies about AI have irritated me to no end. Here's why.

Listen at the podcast providers of your choice.


One of the most irritating things about Hollywood movie making is the delay between something being relevant and the movies being made about it. As an example, by the time a movie with named actors was made about the GameStop meme stock craze, 2023's Dumb Money, there had already been multiple documentaries made and countless hours of YouTube video commentary (some good, most bad). You can also tell that said films were made without a firm grasp on what the story says about us as people, often because the story isn't "over" as the film is being written and produced.

This is why a bunch of films from 2023 into this year made casual references to bitcoin, cryptocurrencies, and NFTs or made them integral plot points. Even if the general public wasn't that enthusiastic about it, investors were. Now we're at the point where the Hawk Tuah gal is being publicly shamed for running an NFT scam. If you want to make a movie about a hot-button technology topic, you should either be prescient or just wait to let everything play out so you can make something good and insightful.

Which is why the surge of AI-related movies coming out in 2024 has irritated me to no end, including two trashy ones that ended up on Netflix: AfrAId (see what they did there) starring John Cho and Subservience starring Megan Fox. Why do they irritate me so much? For a lot more reasons than you might think. But let's start with the easiest one. The one story every single one of these movies tells...

Self-Aware Technology

Back when I was working in legal marketing, I attended multiple conferences and panels discussing the use of machine learning or AI tools in the legal profession. Something you should know right out of the gate is that the legal profession, by and large, is suspicious of incorporating technology into its work.

There's a concern that technology or tools will "replace" attorneys, or that making your clients' jobs easier will mean fewer billable hours, which means less money.

And while they certainly have a vested interest in promoting technology's safe use (the folks on these panels were almost always involved in the tech industry), they were quick to point out that the best uses of technology were to ease up tasks like form writing or research so attorneys could dedicate more time to developing relationships or working their case and coming up with new ideas.

They were sick to death of the image of Terminators coming to destroy everyone's job and life on earth.

Unfortunately, the Terminator model is all studios seem interested in. AfrAId and Subservience are both "technology goes awry" movies involving a new form of AI being integrated into a family's home and causing different kinds of strife, despite some initial appeal. In both cases, the AI becomes sentient and begins to attack our nice family, who either made the mistake of trusting the tech or of having sex with an AI-driven robot that looks like Megan Fox (Fatal Attraction, but with AI).

The arc for these movies is exactly the same. New tech comes on board. Initial skepticism. Things go well for a bit. AI does something, or multiple things, out of pocket. Things get messy and there are bodies left in the wake.

These are simple, straightforward moral stories about the human experience being better than technology, so beware.

And this fixation on self-aware AI irritates the crap out of me.

Because those aren't the main issues with AI as we know it.

The Limitations and Hidden Costs of AI

I'm a big fan of the CoolZoneMedia podcast Better Offline, where our host Ed Zitron dives into new developments, problems and bastards of the modern tech world. Zitron is an experienced tech journalist who, unlike a lot of other outlets' writers, views the entire industry with a very skeptical eye...and AI in particular.

As portrayed by Zitron, AI isn't just a modern buzzword that companies like to use to sell themselves and their products to people, it's also a rapidly growing bubble that is completely incapable of delivering on its promises. 

The way AI is marketed is as this life-changing technology that will make every aspect of your life easier. "When you're fully integrated with our AI, you can just say you're going to make a pizza and an Uber Eats driver will arrive in 20 minutes with all the ingredients you need from a recipe we know you'll like, and we'll start up all the devices you'll need to make said pizza in your house...for just $200 a month."

In reality, most of AI's usefulness is in two categories. Clearing up busy work (aka transferring items from forms you've seen over and over again) or creating imitations of work that's been created before. 

In a vacuum, that's fine. But it's not in a vacuum.

First and foremost, these simple functions are not what AI products/producers are actually selling. They're selling the "runs your entire house without you doing anything" fantasy. And people are dumping millions in venture capital into projects where the most useful thing they can do is order a pizza less effectively than you could either by going online or making a phone call. Which in turn means a bunch of more deserving and actually innovative projects aren't getting funding.

Likewise, a number of the issues AI can solve are more of a problem because of AI. Phishing scams are at all-time highs thanks to AI-driven messages, and now there's the terrifying prospect of being scammed by an AI replica of a loved one's voice claiming they've been kidnapped. Yes, you might want/need a piece of technology to prevent that, but this wouldn't be an issue if the tech didn't exist in the first place.

And now we get to the costs.

The reality is that almost every massive machine learning or processing hub, whether it's for AI or for mining Bitcoin, requires an unseemly amount of electrical energy, just as we need to be cutting down on CO2 emissions to prevent an even worse climate catastrophe than the one we're already careening towards.

ChatGPT is the best example, since a single query takes up nearly 10 times the energy of a single Google search. Extrapolate that out, and the energy used to fuel ChatGPT is enough to run an entire small country for a day.

The other, perhaps even more demoralizing aspect of AI, is the information AI tools ingest to create writing, images, audio, and video.

The fuel behind every AI tool is human creation: years upon years of works that required a ton of human effort to make. If you're talking about a government form that you don't want to fill out again, the tool is appealing. I don't need creative answers right now; I need the info to go into the form so I can send it out and go from there.

But as soon as AI is "creating" something, it's pulling from every bit of information it has at its disposal...with or without permission.

AI tools that "create" images or text are built on information, and that information is every bit of writing, art, film and music they can get their hands on. The more information a tool has, the more effective it is at mimicking styles, sounds and visuals from a certain perspective.

But what about the artists that created those materials? Are they being compensated for their contributions? More often than not, no.

Case in point: OpenAI being threatened with legal action by Scarlett Johansson for creating a voice that sounded exactly like hers, or the dozens of YouTubers finding out their work has been fed into AI tools by Google. That second one really stands out, since YouTube will almost always side with "copyright" holders when it comes to fair use on their platform, but saw no need to protect the people who bring folks to the site and earn it money.

Artists in these systems are not viewed as artists; they are viewed as material for the content mine. Which brings me to the studios.

AI and Hollywood

During the recent Writers' and Actors' strikes, a lot of the emphasis was put on compensation in the new TV and movie economies, namely money from streaming services. As actors like Sean Gunn pointed out, despite Gilmore Girls being one of the most-watched programs on Netflix for years, he personally hadn't seen any compensation from it. Definitely something that should've been rectified so that actors, writers and directors can get money for their work as it continues to make money for other people.

Less commonly discussed was a series of studio proposals for AI-related applications. These included scanning background extras (who are paid a day rate at best) into a system to be used in perpetuity after one day of paid work, and bringing writers in not to write scripts, but to touch up scripts written by AI tools. Because this would cut costs for their nearly bankrupt...oh I'm sorry, incredibly profitable businesses.

Thankfully, most of these were rejected outright.

However, the idea lingers and seems to appeal to the uncreative folks who just want to regurgitate the same five stories and franchises over and over again.

As much as I think it's completely possible to make great art in the current studio system, almost every studio views "IP" or intellectual property, aka IDEAS, as its most valued resource. Look at the top-grossing movies of the year and it's clear that studios did not want to venture into any new or unproven ground. There are plenty of audience members and studios who just want to tell the same story with the same characters, over and over again. Without change. Forever.

And not only that, but they want to use the creativity of artists to fuel these simple, unengaging "creations" that don't know why their characters talk like that, or why the director chose that shot, or why that actor made that choice, or why the Wilhelm Scream is so damn funny.

If mindless sequel making is soulless cash-grabbing, then AI art is counterfeiting. You're stealing from artists to create an imitation of what they do, and then selling it to people for real money...people who will probably wonder why the 12th Star Wars movie they've seen that includes a line a lot like "No, I am your father" doesn't hit right.

The core of why these AI movies irritate me is that the majority are disingenuous scare-mongering about the terrifying abuses of AI by itself. And not the reality, which is AI being used, by people, to abuse people.

Because right now, every misuse of AI has a person on the other end, gleefully using that tool to steal, to scare or to cheat. And so many of the people who would be that person on the other end want us to blame the tool, not the user.
