AI isn’t what they pretend it is

Do what thou wilt shall be the whole of the law.

The ridiculous ways tech companies have been talking about using AI would be laughable if they weren’t so destructive to real human jobs and to workers’ ability to make a livelihood to support themselves and their families. In short, AI isn’t what they want you to think it is.

AI uses probability to decide what comes next in its output. That’s why it’s able to fake human speech and art. However, that technology already exists under a different name: it’s called autocorrect, and pretty much everyone knows how bad that is. All these tech companies hyping AI are essentially talking about shoving a souped-up version of autocorrect into everything, and they’re placing real human workers’ jobs in jeopardy while they beta test this disaster waiting to happen by unleashing it on the general population. That’s why AI art is as wonky as it is, why it can only output text based on other text, and why it can only output art based on other art. It can’t create anything wholly unique beyond something akin to the uniqueness your kid exhibits placing letter magnets on the fridge before they learn to read. Sure, what they line up looks like a word you might put on the fridge, but “cafatw” isn’t an actual word.
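To make that concrete, here’s a minimal sketch of the underlying idea, a toy “most likely next word” generator written purely for illustration (it’s my own toy, not anything taken from an actual AI product). It just counts which word tends to follow which in a scrap of training text and chains those guesses together; the real systems are enormously bigger and fancier, but the core move is the same probability game:

```python
# Toy next-word predictor: the same basic trick as autocomplete/autocorrect,
# just without the billions of parameters. Purely illustrative.
import random
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
)

# Count which word follows which (a simple "bigram" table).
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Repeatedly pick the next word in proportion to how often it
    followed the previous word in the training text."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # never saw this word followed by anything; nothing to predict
        choices, counts = zip(*options.items())
        out.append(random.choices(choices, weights=counts, k=1)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug the dog ate"
```

Run it and you’ll get strings of real words in a plausible-looking order, but there’s no understanding behind them, only statistics about what usually comes next.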

When I tried putting in a description of the art I wanted it to generate, I couldn’t get it to actually output my request*. It was a simple enough request that any human artist could have understood and created without issue, but the AI failed despite multiple attempts on my part to rewrite the original request in an effort to get it to generate what I was actually asking for. I tried several different AI art portals with the same disappointing result. In fact, the image it generated that came closest to my query was grossly malformed.

The problem is, if it doesn’t have something similar in its database to plagiarize, it just selects elements it does have that it determines are the most probable next elements, without any of the nuance or context that an actual human can grasp with relative ease. The researchers call this phenomenon “hallucinating,” even though it has nothing to do with the kind of hallucinations a real human could experience. They’re trying very hard to humanize the thing in order to sell it to human investors, but it is fundamentally not human.

This capacity for conceptual understanding is what our brains seem to do best and what other species of animals seem to struggle with. It’s that ability that gave rise to the development of language, technology, writing, poetry, mythology, ritual and beyond. Without that capability, our society would never have been built. Sure, other species can communicate with each other through various means, but it never seems to be anywhere close to the kind of communication humans are capable of. After all, it’s why our species came to dominate the world rather than the myriad other hominid species that once roamed the Earth. It’s what gave us the advantage and the capability to survive in an ever-changing, mostly hostile world. Without the power of language, it’s hard to coordinate anything at all. Imagine a group of dogs trying to build a skyscraper.

The human brain is a complex organ that evolved to be very good at conceptual understanding. That’s not something a machine running probability algorithms to predict which words or images to use next is ever likely to be good at, and unless something fundamental changes dramatically in the way this sort of technology is built and functions, it almost certainly never will be. Human biology isn’t even fully understood by humans in the first place, and neuroscience is still in its relative infancy. (Though it’s amazing what human scientists and doctors can already do with what we do know.) Given that, and the complexity of such a task, how do you expect a bunch of IT nerds to be able to replicate that capacity in its entirety using machines? Why do we assume it’s even possible to create a functioning synthetic brain capable of the kinds of tasks humans do with relative ease? And beyond that, why even try to do that at all?

Human artists and writers aren’t going away anytime soon. If paid well and properly valued as people, human creatives are capable of producing some of the best things this life has to offer, the kind of things that make life hold meaning. Why would we even want to hand such an important task to unthinking and unfeeling machines in the first place? Even if they could make mathematical computation processors behave in a convincing enough way to trick humans, we’d be robbing creative work of its inherent beauty and the wonder of the accomplishment.

Love is the law, love under will.

With a toast to human creatives,
Vanessa

*I was trying to get it to generate an image of a woman in a full suit of armor emblazoned on the front with a unicursal hexagram, holding a lance and the Holy Grail, while the sun shines brightly behind her. Some of the best results are shown above. Some are unquestionably beautiful images, sure, and if they had been made by a human artist rather than plagiarized from one, I’d be impressed, but they weren’t, and they aren’t what I asked for. The one that comes closest to what I actually described is horribly malformed. It’s interesting that despite my using several different AI art portals, most of them came up with images that seem on the surface to be of the same person, until you start looking closer at the details.

PS: If you want to know more about how AI works, put in layman’s terms, check out this page: ChatGPT Explained: A Normie’s Guide To How It Works

Inspired by A.I. is B.S.