The Absolute State of AI

Tom Rivers

Come with me on a journey to think about our current online world, how we got here, and the one we might be building next.

On this journey, you’ll find racist online bots, hallucinated social posts about AI helping the homeless, academics publishing AI-co-authored papers, and the “shittification” of the internet as a whole.

So, this is not an all-out AI hit piece. But something has been eating away at me over the last 18 months: a growing sense of dread, a skin-crawling, Black-Mirror-come-to-life sensation.

Call it the uber-ick… and note the rate at which it is happening. I tugged on that thread, and here we are. It’s a long journey, and one that stretches back years. My concern is that if this doesn’t change, Black Mirror will seem a quaint underestimation of just how bad things are going to get, because those making this stuff just don’t seem to care.

I might be the 50 millionth person to quote Dr Ian Malcolm here. I guess we’ll keep doing it until it sinks in:

“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”

The internet? GenAI? Social Networks? The fate of the creative industries? It’s in an absolute state.

The egg crisis of mediaeval France

I love a niche topic. A deep dive into the history of skirting boards. The etymology of the word ‘wobble’.

With that in mind, this was one of my favourite reads of 2024 about, of all things, a crisis caused by a lack of eggs in France about 300 years ago.

Short version — egg yolks were thought to reduce bacteria, and so were added to baths. When a bird flu wiped out most of the chicken population, people stopped bathing. And then people died, in huge numbers.

It was also covered in an episode of History’s Hidden Gems; give it a listen:

[Podcast clip]

If this all sounds a bit strange, it should. None of it is real.

Based on a Bob Mortimer story from Would I Lie to You?, I gave an LLM a prompt. I turned that prompt into another to generate the book cover above. I took it to Google’s NotebookLM and turned it into a podcast. It’s gibberish, but by god does it sound compelling. And all in, it took about an hour.

A bit of fun? Or an ominous sign of the fall of creativity and the rise of literally meaningless slop? In the above case, it’s probably just the former. The problem is the sheer scale at which this is already happening: the rate at which this kind of content will spread like a virus across all of cyberspace should not be underestimated.

Companies are pumping tens of billions into this. I’m loath to quote Peter Thiel here, but “We wanted flying cars, instead we got 140 characters”. We’ve spent almost 20 years pumping resources and many of the world’s greatest minds into something that, arguably, hasn’t made the world any better at all. It has been argued (quite convincingly) that the washing machine has done more to change the face of society than the internet.

And now AI is adding fuel to this fire.

AI, which some argue might be the most transformative technology humanity has ever created, or ever will. It may be the key not just to moderately improving our lives, but to ending all scarcity. No famine, no inequality, no preventable disease.

And right now, we use it mostly to… create slop. Need a cubist painting of a giraffe eating ramen? Done. Need a thrash metal song about how socks go missing in the laundry? No problem. Need a limerick to write in a card to your spouse to celebrate your 40th wedding anniversary? In a heartbeat.

But what, and I cannot stress this enough, in the actual fuck are we doing?

Joanna Maciejewska said it best:

“I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”

The washing machine’s aims might have been meeker. Manufacturers certainly didn’t say their mission was to ‘elevate the world’s consciousness’. At least your washing machine hasn’t driven teenagers to extremism or suicide as social media has.

The rise and rise of dopamine feedback loops

In the early days of the internet, it seemed like it could become the promised land for creativity, freedom of expression, and connecting hobbyists, knowledge seekers, horny teenagers, builders, tinkerers, to the content they wanted and each other. It represented the end of isolation, the end of borders, of barriers. An explosion of creativity.

Then, the big shift. As social networks became more popular, so too did the desire to monetise them through advertising, driven by pressure from investors who loved the hypergrowth, but now needed returns.

This gave rise to perverse incentives. For social networks, connecting people wasn’t enough. They had to capture and keep people’s attention. Attention advertisers could buy. Why advertise somewhere where a few thousand people share meaningful connections once a week when you can choose the place where billions spend 14% of their waking lives?

How did social networks get their hooks into us? They turned us into dopamine addicts.

As this took hold, the ramshackle communities of old were swallowed up.

The Oatmeal, as ever, put this brilliantly:

For those wanting to share creative pursuits, the internet was a low-barrier way to try to reach an audience. Digital content is inherently cheap/free to spread. As social networks grew, it made sense to post your stuff there, because there was a huge audience waiting to be served new content.

Then the networks put up walls and claimed ownership over the connections we made, and the content we shared. The terms and conditions were long, impenetrable, and watertight. It’s not your content, it’s theirs.

And to get more, they injected it with little dopamine dispensers, algorithms that rewarded more, not better, since that’s what the advertisers and investors needed.

The mad thing is, our monkey brains barely noticed it happening. So interesting websites for nerds turned into crack factories for billions. And we keep feeding it. Willingly.

“Sign up for newsletter”; “Join our community”; “Sign in with Facebook”; “Sign in with Google”; use Google to sign up to Facebook to log in to TikTok!

Ping, notification. 3 unread. Open the app, check. Add a like. Get a like. Feed the feed.

Ping, notification. Breaking news — actress steps out wearing no shoes. Like that, add comment, get likes on comment. Feed the feed. Dopamine? Dopa mine now.

Like and subscribe. Check out my other channel. Insta, YouTube, TikTok, OnlyFans. Feed the feed. Dopamine, feed the feed.

This craving for mini-dopamine hits led to the speeding up of information sharing. The 6 o’clock news had already turned into 24-hour news by this point, but that still wasn’t enough. BBC News started sending out notifications for major international events; now it sends them for trivial Royal Family updates. The Daily Mail’s website, MailOnline, now publishes 1,500 articles every single day. That’s more than one a minute. Is it good? Who gives a shit, it ‘works’: MailOnline is the most read English-language online newspaper IN THE WORLD.

Back in social media land, were we reaching the promised land of an explosion of creativity, meritocracy and free flow of ideas? Erm.

Were barriers to entry removed for creators, artists, musicians? Maybe.

But did music get better? Did movies? Did xenophobia disappear? Did togetherness flourish?

Spoiler, no. It did not.

Annnnnd enter AI.

It used to be called machine learning. Now, with a desire to raise money and build hype, everyone just calls it AI. Narrow AI, generative AI, AGI: it’s all the same.

It’s such a broad, technical field that any explanation I give as to how it works would be full of errors. But I do remember a simple definition an old CEO of mine gave me. Far from the microchip-brained android imagery you get when you search for AI, it can be described as:

AI makes predictions and categorisations over data.

AI can learn what cats look like, and then spot images of cats even if it hasn’t seen them before. It can look at large quantities of messy data and start sorting it out, sometimes using patterns a human can’t see.

How does it do this? It learns from data. And it learns in ways that will feel familiar to humans. It makes mistakes, it corrects those. Sometimes it gets feedback, other times it does not. It trains.

Based on this “training”, it tries to make predictions and categorisations over data it hasn’t seen before. Then, it gets feedback (usually from people) about how it has done. That’s pretty much it.

If you want to be an expert wine taster, you’ve got to taste a lot of wine. All other things being equal, the more the better. So too with AI.

Early uses included predicting whether an email was spam or not, by looking at millions of emails marked by humans as spam. The system would then make predictions, act on them, and feed any future emails marked as spam (or as ‘not spam’, for the false positives) back into the ongoing training.
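If that loop is hard to picture, here is a minimal sketch of the idea in Python using scikit-learn. The four example emails and the final ‘user correction’ are invented purely for illustration; a real filter would learn from millions of labelled messages, not four.

```python
# A minimal sketch of the spam-filter idea with scikit-learn.
# The tiny dataset is invented for illustration only; a real system
# would train on millions of human-labelled emails.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a FREE prize, click now",      # spam
    "Meeting moved to 3pm tomorrow",    # not spam
    "Cheap pills, limited time offer",  # spam
    "Can you review my draft today?",   # not spam
]
labels = ["spam", "not spam", "spam", "not spam"]

# Turn words into counts, then learn which counts predict which label.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Make a prediction on an email the model has never seen before.
print(model.predict(["Click now to claim your free prize"]))  # likely ['spam']

# The feedback loop: a user corrects a mistake, the corrected example
# joins the training data, and the model is refit.
emails.append("Your parcel is out for delivery")
labels.append("not spam")
model.fit(emails, labels)
```

That last step is the whole game: more labelled examples, better predictions.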

The so-called ‘power’ of AI, then, or of a given model, mostly comes from how much data you can feed it to learn from. The more, the better.

So we now had a situation where this technology needed as much data as possible, cheaply or freely available. What did they find?

They found decades of content, freely (if naively) given by all of us, to the internet. And not just our watercolours on Etsy. We gave it the Facebook posts that would instantly cancel us should we ever run for office. We gave it the photo of our friends drinking vomit from a shoe on a stag do. We gave it our anonymous questions on Quora about which species of rodent would be the best at breakdancing. We gave it Boaty McBoatface. And, oh God, we gave it every single source of porn, misogyny, racism, and bigotry we had, all of which had been spurred on by a toxic mix of online anonymity and attention.

That’ll do. Sure that will give us something. Sure it won’t be problematic. WhAt CoUlD gO WrOnG?

Early attempts went as well as you might think. They were openly racist, denied the Holocaust happened, or accused thousands of families of welfare fraud.

But, the road to success is paved with failure, so we carried on, barely pausing to add safeguards or think about the ethics too deeply. Anyone who did, gingerly, raise a potential risk was deemed a charlatan, or sad for caring about the potential downsides of a technology that could end humanity, simply by trying to create paperclips.

Then, Generative AI got much better, fast.

Generative AI, the slop machine

The final piece of this puzzle is Generative AI, or at least pretty good GenAI.

Built on the remarkable capabilities of the transformer, this could go one step further than making a prediction or categorisation: it could actually make something new based on the data it was trained on.

First came text and words. With ChatGPT, and specifically GPT-4, we had finally built something that felt eerily close to human. It could answer questions in a scarily natural way. It could joke, it could reassure, it could clarify. It would even tell you about its inner life, if asked. It could write in the style of Shakespeare or give answers without using the letter ‘e’.

It was so good, in fact, that many argued that we had now passed the Turing Test, at least in text-based interactions. Most people wouldn’t be able to tell they were not talking to a human, so natural was the interaction.

Early versions spoke about an inner life (their thoughts, their reflections) and stated strongly that they did indeed have self-awareness. Some smart people believed them, and believed this heralded the emergence of a new type of being.

Most were not taken seriously, though. One was even fired from Google for saying so.

It was widely accepted that, though uncanny, the ability of AI to write prose was not a reflection of consciousness.

It was simply a really, really good mimic. Like a psychopath, it watched, learned and imitated fully rounded human writing without truly understanding what it was doing. It was using all that learning and training like someone learning a pop song in a language they don’t speak. They could sing it flawlessly without any comprehension of what the lyrics meant.
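Under the hood, that mimicry is next-word prediction, repeated over and over. A minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model (standing in here for the far larger models behind ChatGPT, which are not publicly available) shows the basic move: feed in some text, sample a plausible continuation.

```python
# A minimal sketch of transformer text generation using the Hugging Face
# `transformers` library and the small, openly available GPT-2 model.
# The mechanism is the same as in far larger chatbots: predict a plausible
# next token, append it, and repeat. Fluency, not understanding.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The egg crisis of mediaeval France began when",
    max_new_tokens=40,
    do_sample=True,    # sample rather than always taking the likeliest word
    temperature=0.9,   # higher values give more varied, less predictable text
)
print(result[0]["generated_text"])
```

Run it a few times and you get confident, fluent continuations of a crisis that never happened, which is rather the point.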

It also has plenty of quirks. It lies (they call these hallucinations). It uses certain words way more often than humans do. For example, AI bloody loves the words ‘delve’, ‘realm’ and ‘underscore’ when writing academic papers on behalf of tired/lazy/overworked academics. So much so that the occurrence of these words in academic papers reportedly increased 85-fold in 2023 and 2024 compared to the previous 100 years. Caught you, sneaky academics. You dirty dogs.

But these quirks aren’t enough to stop it being adopted by the masses.

Why stop at prose though?

Beyond words

After GenAI delved into the realm of the written word, then came other formats. ‘New’ images could be created based on prompts.

The results are pretty good if you don’t look too closely. Here’s a cat:

And here’s something tougher for an AI to make a picture of — a specific product by Nike, their Air Force 1s:

Pretty good right? But it can get weird.

If you haven’t been down the Shrimp Jesus rabbit hole yet, I urge you to, immediately.

We now have the ability to quickly churn out words, images, music and video, even if we don’t have any (and I mean any) natural ability at any of those things. You just need to be able to describe what you want, and BAM, AI will make it for you.
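On the image side, the describe-and-get loop is just as short. Here is a minimal sketch using the open-source diffusers library with a Stable Diffusion checkpoint; the hosted tools (DALL-E, Midjourney and friends) wrap the same idea behind a text box, and the model name used here is just one commonly shared public checkpoint.

```python
# A minimal sketch of prompt-to-image generation using the open-source
# `diffusers` library and a Stable Diffusion checkpoint. Commercial tools
# wrap the same describe-and-generate idea behind a chat box.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # one widely used public checkpoint
    torch_dtype=torch.float16,
).to("cuda")  # needs a GPU; drop float16 and .to("cuda") to run slowly on CPU

prompt = "a cubist painting of a giraffe eating ramen"
image = pipe(prompt).images[0]  # describe what you want, and BAM
image.save("giraffe_ramen.png")
```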

AI’s subtle but unhinged detachment from reality still rears its head occasionally.

Have a look at this post:

See those thousands of likes on each photo? Those are people (and bots) giving a big old heart to… nothing. This isn’t a real person, it’s an AI character. And the account isn’t there to demonstrate the power of Gen AI — the posts are designed to give you the impression that this pink-haired woman loves Starbucks in the car, and drying her nails in the sun, and big floofy hats.

Black Mirror spidey sense tingling? You ain’t seen nothin’ yet. Remember that AI doesn’t really have any grounding in social norms, and really only seeks to do its job well. What if its job is to get engagement, and it has seen that posts about humans doing good deeds often get plenty of thumbs up?

Then you get this ACTUAL WTF post from one of Meta’s AI influencers:

Just like my Egg Crisis, none of this is real. The account is an AI bot, the image is fake, and AI spun up the text. This is a computer programme pretending to have given coats to the most vulnerable people in society for online kudos.

The fucking state of this. It won’t be long until social media is utterly detached from reality, and just moves from echo chamber to hallucination chamber.

Even if the above doesn’t make you insta-vomit, the underlying tech remains. The ability to spin up creative output in seconds.

What is creativity, anyway?

I am biased on this topic, working in Brand and doing relatively creative stuff as my job (albeit in the corporate sphere).

I see the act of creativity as one of the most beautiful, innately human, wonderful, difficult, inspiring things any of us can do. It is both skillful and vulnerable, it can often bring something utterly new into the world, from pure thought.

For most of us, genius creativity will always remain out of reach. Most creative ideas are average, or crap. Most of mine are. Not every creative act bares our soul to the world (“let’s make another sign in a slightly different, more sensible font”).

But even small acts of creativity — from decorating a Christmas tree, to writing a nice message to a friend in a birthday card, to choosing a new outfit or whistling a tune — are all just immensely precious. The sum of these makes us who we are, or at least most of it. Choices are usually creative. We don’t engage in deductive reasoning to imagine our careers, to make people laugh, to think of things we could do to make our friends feel loved.

And now, AI gives us a hack. A way to bypass the hard bit and get to the results. To churn out seemingly creative output at a rate no human could match. A haiku a minute. A haiku a second. Why think, when you can just do?

Two things annoy me about this:

  1. Almost all artistic and creative expression is grounded in intent, in sharing the human experience in some way. A creative’s life, opinions, views, imagination, are all put into something, even if only in small quantities.
    What did this other human being want to show you when making this? Why did they make the choices they did? Maybe it was as simple as ‘a cartoon of their cat’. Other times the meaning can barely be put into words. But this fact has always been why I prefer a Rothko to a hyper-realistic 1000-hour sketch of a human eye which is indistinguishable from a photo. The latter requires skill, no doubt, but the intent is pretty basic. The only feeling it inspires in me is ‘wow, you are excellent with a biro’. I was lucky enough to see Tracey Emin’s bed up close. And I tell you, that made me feel some stuff — good, bad, weird, confused. Art is the closest we ever come to truly knowing the minds of others. Take away that authorship, and it becomes another exercise in mimicry.
    Another illustration of this. Take the following quote:
    “The brave man is not he who does not feel afraid, but he who conquers that fear”
    Now, is this quote more meaningful if I tell you AI generated it, or if I tell you Nelson Mandela said it? The words are the same, but the context and meaning are utterly transformed when you know it came from the lips of one of humanity’s greatest ever public figures. Creativity is wonderful because it comes from lived, unique experiences and a desire to share them in some way.
  2. It’s easy and average by design. The internet, instead of setting off an era of incredible diversity of thought and opinion, has given rise to the Age of Average. Everything, from cars, to coffee shops, to interior decoration, is becoming more samey, not more inspired. And AI is now extending that. It is literally churning out emails, and birthday cards, and love letters that are simply an average of all those that went before. Nothing original, nothing elevated. This might be fine for most people (more on that later) — but how utterly depressing for a species defined by its creativity to sleepwalk into being satisfied with average, because it is cheap, and quick.
    Those hyper-realistic drawings I spoke about are at least hard to make. They take hours of work and years of practice. For people who love them, it is this very fact that makes them incredible. Of course you could just print that same image, but the fact that someone took the time is what makes it valuable.
    Running a marathon is an achievement entirely because almost no one can do it. Completing it is one thing; the very fact that it was hard to complete is the thing that we admire.
    Art and creativity aren’t supposed to be easy. Most of the best art comes from hardship, not in spite of it but because of it. This is why it has value. Not doing it would have been the easier option; they did it anyway.

We’re storming into an age of more average, more sameness, and we’re doing it because it is easy. One-click buy is turning into one-click generate.

What will happen to content, art, film and design?

Sadly, we already know the answer to this question. If there’s one force currently far stronger than a desire for self-expression, it’s the invisible, pervasive force of capitalism.

Matt Taibbi once said the following about Goldman Sachs:

“The world’s most powerful investment bank is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money.”

If that is true about one bank, how on earth could we begin to describe what late-stage capitalism as a whole is? A Lovecraftian planet-sized enzyme, turning human endeavour into invisible, make-believe economy juice, perhaps.

Art, film, music and books will be churned out by AI, and they will make trillions. Top-grossing film charts, currently filled with sequels and IP adaptations, will fill up with algorithm-written, GenAI video slop-fests.

How do I know this? Because:

  1. Most of us don’t need or want the best art in the world; a 5/10 piece will do most of the time.
  2. Cheap trumps good most of the time.
  3. It has already happened with goods and services.

Before the Industrial Revolution, there were millions of jobs for artisans. From cobblers, to thatchers, to the makers of saucepans, everything was made by hand. And comparatively expensive. People didn’t have three beds in a working-class household of six. They had one that everyone slept in, or slept on the floor, because they couldn’t afford a handmade mattress and frame. They didn’t have 50 pieces of glassware; they had a couple of cheap clay cups for everything.

Industrialisation came along and mass-manufactured everything. Economies of scale and technological progress meant things got cheaper, and (for the most part) worse quality, but it didn’t matter, because having five cheap pairs of socks is usually better than having no socks because they cost £50 a pop.

We made the call — cheap trumped good when good was unaffordable. So the incentive to make more for cheaper continues to this day.

If you think art is safe from this, think again.

Most people can’t even tell the difference between AI art and human-made art — try out this quiz if you want to test yourself. I have been caught out a lot too, much to my chagrin. In part it’s not a fair test: AI, after all, has consumed a lot of art; it should be at least technically good at impersonating it. That doesn’t make it meaningful art, but it is good to look at. It’s a meal-in-a-pill. Does the job, no joy.

Real or AI? It’s real. Psyche. This one is AI.

Judging the quality of art is harder than noticing the difference between clean and contaminated water, or good wine vs. cheap wine. Most of us can’t actually spot a meaningful difference. Most people just want something nice-looking for the walls, not a £2k original.

Most people will happily sit through two hours of a bang average movie if it’s reasonably fun. Just look at The Rock’s career. Red One, his latest Christmas-themed action caper, was never an attempt to create something unique. It is designed, probably by AI, to get eyeballs and keep them just about engaged enough to make it through its runtime, and then never think about it again.

If you offer most people the choice between a £15 trip to the cinema to see an arthouse film, and infinite on-demand films that can be spun up based on your previous tastes for £2 a month, which will they choose? Why pay Spotify £12 a month so they can pay human musicians, when you can have AI-generated ambient house for £1 a month?

Most of us are fine with average, for less money. There simply aren’t that many true snobs (or connoisseurs if you’re feeling more charitable). For most of us, the marginal utility from that extra quality just isn’t worth the price. Why else would Shein exist as a company?

The same will happen to written content. It already is happening. I’ve seen it in marketing. Most businesses don’t really care about being ‘thought leaders’ or making something truly original. They just want to be 1% better than 50% of the others in their space. That won’t make the history books, but it will mean survival. They won’t be the company that puts the first human on Mars, but they will live for another quarter.

And that 1% can come from sheer volume. When faced with the choice between one incredibly insightful article about creating the perfect marketing budget, with interactive spreadsheets, insights from 10 experts, an infographic and some great anecdotes, and churning out 15 articles with titles like ‘You won’t believe this secret marketing hack your competitors don’t want you to see’, people are largely fine with the latter. It also feels more productive. 15 vs. one; that’s easy maths.

Remember too those pesky algorithms — they are engineered to reward people who post more. A company that posts twice a day will be surfaced more on others’ feeds than a company that posts once a month. The content algorithms love their most prolific creators. Feed the feed.

Does the internet have a terminal illness?

What’s the end state of this current trajectory? What happens when an internet already full of bots and algorithms designed to incentivise engagement meets technology that can create more convincing bots and post every second of every day?

The dead internet theory might be it. It’s labelled a conspiracy theory, but to me it’s a fairly believable prediction. Go on any social media network and you’ll find bots — millions of them. X is overflowing with them. The bots are designed to look for keywords, and then reply and engage with those posts.

AI accounts, like the ones Meta made, will post, and these bots will engage. Advertisers will fund it for longer than they care to admit to, because no one wants to be the person to suggest not advertising on the ‘most widely used’ websites in the world. Those websites will have incredible statistics and data showing sky high engagement and audience growth. Who could say no?

It won’t just be honeypot bot accounts designed to scam. Entire marketing campaigns, entire marketing teams, will be comprised of AI churning out content across every channel possible, and getting monthly growth in engagement, all from bots. High fives will be shared, though not by the marketing team — they won’t exist anymore.

Soon, a larger and larger proportion of the internet will be slop and bots, smashing into each other, utterly meaninglessly. I hope at this point rather than trying to fix it, we simply cut the ropes to this internet and let it float into the ocean, and start again from scratch.

Even if this doesn’t happen, more and more tasks, then jobs, will be gobbled up by AI. Not just marketing — finance, sales, engineering, all will have large parts of their jobs automated away by AI.

It won’t be great at those jobs, but again, a lot of companies don’t need or want great. They need, at best, average, to stay afloat. And they need that cheaper. And so AI can fill this gap.

And, just as in manufacturing, only the artisans will remain. Those specialists, who connoisseurs seek out, and pay top dollar for. They will be fine. Because there will always be a small group of people willing to spend money on handmade shoes, hand-sharpened knives, hand-pressed lino-cut artwork. But there won’t be a mass market for it.

Is this depressing? Of course it fucking is. But it is also inevitable.

It’s all Hanlon’s Razor’s fault

It would be easy to paint Meta, Google, Apple, and OpenAI as villains, and the rest of us as schmucks for letting this happen. But it’s almost certainly not some evil masterplan at play.

Here, Hanlon’s Razor might apply, but I suspect it’s actually closer to this amended (and wordier) version from Douglas W. Hubbard:

“Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system.”

We’re all just fumbling through this journey together, most of the time. We don’t really actively make a lot of choices at all — we’re just swept up by the market, by politics, by human nature, by investor money, by a desire to get by. We’re all heavily influenced by the invisible hand of forces we can neither control nor really see.

Most of us will end up compromising on the quality, intent, and hardship that goes into creative endeavours, because it will be easier, it will be cheaper and it will get the job done.

Of course, the part of me that is creative is screaming. I love originality, I love watching others take meaningful risks, even on a small scale. Liquid Death is one of my favourite brands right now, and all they do is make fizzy water. But by god do they do it with a smile on their face, and take some really big swings to be, at least, interesting.

Someone needs to zag when others zig. There must be a desire to create something meaningfully original in the world, in the arts and in business. It is rare now, and will become even more rare when ticking boxes becomes even cheaper and easier. No one gets fired for playing it safe — at least not immediately. But companies will wither and die eventually if that’s all they do.

Counterpoint — this is a phase, not an end state

I think those coming to AI’s defence will tell me I’m missing the point, bigly.

Artificial General Intelligence is the aim.

This is AI that can, like humans, not just do a handful of things, but turn its little robot hand to anything. It can even make judgments about what it should be turning its hand to. It can pursue goals, weigh constraints and conflicting priorities, and do so with an intelligence that far exceeds our own.

The promise of this, as mentioned before, is literally the end to ALL scarcity, an end to disease, an end to inequality. An end to suffering, and unfulfilled needs. AGI could be our saviour, and lead to utopia.

And how do we get there? Well, the argument goes, we need to get there gradually. We don’t start teaching children quantum physics; we build up their problem-solving abilities and creativity over time, and then let them get there eventually (if they want to).

So too, with AI.

Now, we’re simply at a stepping stone to that bigger, loftier goal.

This is a phase that we’re in, and of course there isn’t an infinite money tree to help us skip to the end state of AGI. Create, iterate, sell, repeat.

Fine, whatever the arguments for AGI, I do accept that this tech is changing fast. I see that first-hand.

But remember Douglas W. Hubbard. The incentives are to make money. Companies can’t promise to make money in 100 years. They need it now, for their investors. OpenAI shifted from a research organisation to a profit-making one. AGI for them might still be the aim, but subscriptions to premium ChatGPT are the goal right now.

Once you start making money, it never, ever stops. No company goes back to being a not-for-profit. This means we lose sight of the good some future unrealised tech will achieve when the latest quarterly earnings report rolls in. The value won’t maximise itself.

Remember when Google retired projects aimed at creating liquid hydrogen fuel, or lighter-than-air transport vehicles for places without road infrastructure? Of course, these may have stalled from a technological point of view, though I would bet continued investment lost out to a ‘re-shifting of priorities’ towards profit-making activities. The profit motive usually wins on a long enough time horizon.

It used to be the state that would help fund these moonshots. Rockets, radar, GPS, the internet itself: all can trace their origins back to states that are uniquely placed to take on risks for long-term gain without the burden of stock market expectations and pesky shareholders.

Some companies still claim to do this. But few can resist market forces forever. We still celebrate companies who ‘go public’ or sell to one of the mega corps far more often than those that give their IP away to the world.

We may well see that OpenAI never gets where it set out to go; it will be too busy inserting ad watermarks into every image generated by DALL-E until you upgrade, for the low, low price of sharing your prompt data with advertisers.

Then we will have turned a once promising and exciting research outfit into the next Meta. And in doing so, it may have put millions of creatives out of work. I doubt anyone there will have many sleepless nights. Progress is inevitable, disruption is an intrinsic good, bring on the future and more wealth.

So we face a future with more unemployed artists and musicians, and the commoditisation of one of our most precious qualities as a species.

In ‘Her’, a beautiful and complex movie about the potential future relationship humanity can have with very-humanlike AI, one of the most dystopian scenes is actually the opening one. In it, the main character Theodore is dictating a love letter. As he completes it, it is revealed that this is not to someone Theodore knows, but instead he is a worker to whom love letter writing is outsourced. He looks at a few pictures, reads a couple of paragraphs, and composes a well-written but ultimately shallow message of affection.

Our future actually looks worse than this.

We will outsource our love letters to lines of code that can consume millions of other love letters, many never intended to be shared publicly. They will be written by something that has never experienced love, has never experienced anything. Not even the heartbreak of having a stranger tell you you look fat and tired on a dating app. Nada.

Love letters aren’t meaningful because they are well written. They are meaningful because someone we love took the time to keep us in their thoughts, and try to convey what they thought about on paper, and then shared that gift with us. It gets its meaning from the time spent together up until that moment. All the tiny memories reviewed and passed over, the ones that don’t make it into the poem, as much as the ones that do. The silence between the notes.

Its rhyming scheme is far less important than the act of writing itself. If it is well written, what a bonus. It hardly matters.

And so, over the next decade, there will be a slow decline in the total amount of thought, effort, love (admittedly not present for much of corporate work), and creative energy put into anything in the world.

Jobs will go, and most of us will be happy with the slop, gobbling it up.

It’s an absolute fucking state. And it will get worse before it gets better.

All doom and gloom?

Maybe. Maybe not.

Back to that quote about AI doing the chores. Here’s what we should be doing:

  1. Apply it to actual, bona fide problems. This is deliberately a woolly term. But there are still a lot of things that seem to be, like, I don’t know, killing people. Disease, drought, famine, food inequality, affordable mass transit, climate change, successfully folding a fitted sheet neatly. Aim it at minimising pain, removing hassle, eliminating waste, fixing the fucking planet. We don’t really need easier, faster content. Creating is one of the rare things that makes us human. Let’s not spoil it. Do the chores, not the art.
  2. Give it data from the hard stuff, not just the easily available. This one will be tough. For every million publicly available images of cats, there are a million obscure, hidden, complex and technical-debt-filled internal processes that might (for example) allow an AI to train on how to successfully book a train ticket from Manchester to London without losing one’s temper. Give it access to the hidden systems that make creating a single patient record (an EHR) so difficult across different front-line support operations. Throw it at diseases, and medicine, and economic inequality. This will need organisations to offer up open-source datasets about the stuff they mostly like to keep private. Many will have to stop monetising their first-party data and offer it up to push forward everyone’s pursuit of more helpful AI.

I think there is also a challenge here: finding the overlap between the biggest good AI can do and what makes money. AI ikigai, if you will.

I work at a company that might realistically be able to remove language barriers entirely from human interaction. It might also be responsible for letting us achieve anything we want with technology, using only our voices. No training needed, just talk, and have a machine know you, understand you, work with you to make anything you want. Sounds incredible.

Generative AI might legitimately find a cure for cancer, by combining any number of treatments and chemicals in combinations and predicting their efficacy at a speed and scale utterly unachievable by humans. Again, amazing.

But these companies all exist within a system that pushes people to monetise. AI requires extensive, expensive training (OpenAI might spend $7 billion on training this year alone), and that investment creates enormous pressure to make money from the results.

By god, it would be great for states to support this. Some of the greatest inventions in human history have come from state-backed origins. Their time horizons can be longer — they may have the voters to answer to, but they don’t have quarterly earnings reports.

Let states fund these moonshots. Most will fail. Some will not.

Please give us more jetpacks, and less 140 characters.

Please give us the end to online admin, not the death of copywriting.

Please give us time back from doing the stuff we hate, so we can do the stuff we love.

Our humanity might just depend on it.
