
The Rise and Fall of the AI Apocalypse

It's not all sunshine and roses
An AI-generated cartoon robot sitting at a desk stacked with piles of paper and several coffee mugs. The robot is saying "When the AI takes all the coffee breaks"

A little over a year ago (let's be honest, I'm lazy and it's been quite a bit more than a year), I wrote about the dangers of AI. It wasn't the danger that AI posed directly, but the danger that an unequal distribution of its utility would pose to an unprepared economy. The tl;dr is that those currently in power would control the AI, the AI would replace human white-collar workers, and the rest of humanity would be consigned to the complex low-wage physical labor that our robots cannot yet do. And I predicted that this could happen within a year.

Well, it's been (more than) a year. What happened? Why didn't the economy collapse and drive us into a warped Twilight Zone remake of a poorly written DUNE prequel?

Conventional wisdom among the pundits is that AI simply isn't as good as it was made out to be, that LLMs are not yet capable of the complex reasoning required to fully replace human workers.

That's bullshit.

Where I work, we've been implementing AI since long before ChatGPT blew up the internet. We integrated a GPT-3 model into our workflows to analyze and stage incoming customer tickets, performing the work of a human dispatcher. We also have it doing some basic technical work, though the integrations simply aren't there yet for it to close more than a handful of tickets. We've recently upgraded to a GPT-4 model, and its capabilities have only grown. What's holding us back is not the AI, but the lack of support for it in our RMM and PSA tools.
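
To make that concrete, here's a minimal sketch of what LLM-based ticket staging can look like, assuming the OpenAI Python SDK; the category list and the `triage_ticket` helper are hypothetical illustrations, not our production pipeline:

```python
# Minimal sketch of LLM-based ticket triage (hypothetical categories).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["password-reset", "hardware-failure", "network-outage", "billing", "other"]

def triage_ticket(subject: str, body: str) -> str:
    """Ask the model to stage an incoming ticket into one dispatch category."""
    prompt = (
        "You are a service-desk dispatcher. Classify the ticket below into "
        f"exactly one of these categories: {', '.join(CATEGORIES)}.\n\n"
        f"Subject: {subject}\nBody: {body}\n\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep routing decisions as consistent as possible
    )
    answer = response.choices[0].message.content.strip().lower()
    return answer if answer in CATEGORIES else "other"  # fall back on anything unexpected

print(triage_ticket("Can't log in", "My password expired and I'm locked out of my laptop."))
```

The model call is the easy part; the hard part is everything around it - pulling enough context out of the PSA to classify correctly, and pushing the answer back through APIs that were never designed for this.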

No, AI definitely *is* capable of replacing most of the human workforce. (We haven't replaced anybody with it, but instead are using it to augment our team and improve our productivity. This does raise the minimum skill floor for the people we hire, but we're still hiring.)

So why hasn't AI replaced us all?

Cost. LLMs are expensive. The token cost for GPT-4 is insane, so much so that we've built our own middleware database to minimize token usage (which also reduces the flexibility and effectiveness of the AI). Even with AI-assisted coding, it takes hundreds or thousands of person-hours to write new integrations, and those integrations are limited by the utility of the APIs we have access to.
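
For a flavor of what that kind of middleware does, here's a minimal sketch of a prompt cache, assuming a SQLite store; the schema and the `cached_completion` helper are hypothetical simplifications, not our actual system:

```python
# Minimal sketch of a prompt-caching middleware layer to cut token spend.
import hashlib
import sqlite3

from openai import OpenAI

client = OpenAI()
db = sqlite3.connect("llm_cache.db")
db.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, answer TEXT)")

def cached_completion(prompt: str, model: str = "gpt-4") -> str:
    """Return a stored answer when we've already paid for this exact prompt."""
    key = hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
    row = db.execute("SELECT answer FROM cache WHERE key = ?", (key,)).fetchone()
    if row:
        return row[0]  # cache hit: zero tokens spent
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    db.execute("INSERT INTO cache VALUES (?, ?)", (key, answer))
    db.commit()
    return answer
```

The trade-off is exactly the one mentioned above: a cache only pays off for repeated prompts, and serving stored answers makes the system less flexible than asking the model fresh every time.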

The cost of integrating an LLM is so high that instead of building complex "GPT-4-like" AIs to replace accountants and IT technicians, we've instead used last-generation AI for barely-better customer service chatbots and dank memes.

Image source: a Stanford study on RNNs and meme generation. https://arxiv.org/pdf/1806.04510

Though, to be fair, the memes are pretty dank.

Oh, and disinformation. AI is used for lots and lots of disinformation. Estimates in 2018-2019 put the number of bots on social media platforms at around 15-20% of all users. Metrics for the post-GPT surge are still hard to come by, in part because those bots are so much harder to detect, but some preliminary estimates put it close to 50% of all social media traffic. See an aggravating political comment on Facebook? Probably AI. (Disinformation bots are perhaps a bigger direct AI threat to our society than any other, and that probably deserves attention of its own... but other people with much wider audiences and better editors are already giving it that attention.)

On that note, there's a seedy underbelly of the internet that thinks it's far worse than even that. If you're not familiar with the Dead Internet Theory, you should be; bots make up 99% of my readership. Of course I don't subscribe to conspiracy theories, especially this one. But there's a substantial amount of quantifiable data supporting some of the basic premises of the Dead Internet, even if its subsequent conclusions are fallacious bullshit.

So is that it? Crisis averted? Have we saved the world from an AI apocalypse by the Power of Dankmeme?

Not quite.

We're still seeing AI move in on knowledge workers around the world. It's just moving slower, taking longer. This could be good or bad.

The good? If it takes just long enough but not too long, it gives us time to prepare. To pass legislation regulating the use of AI, or maybe build social safety nets to cushion the fall. It gives people time to retrain for other jobs that AI can't (yet) do. If it's just slow enough, AI taking our jobs could finally be the push we need toward prioritizing wellness over GDP.

The bad? If it takes too long, if AI is too slow to take over our economy, we'll lose sight of the problem. We'll forget that it's a threat looming in the background. We'll become complacent and we won't build those safety nets and then it'll be too late.

It's not all doom and gloom, however. To be clear, there are a lot of dark clouds on the horizon; automation has always been a driver of unemployment and the accompanying social and civil unrest, and AI definitely accelerates that to a whole new level, threatening jobs we all previously thought were secure. But what comes after those clouds, that's perhaps not so dark. Automation, unemployment, and civil unrest have also always been catalysts for change - and with a strong enough catalyst, maybe we can make some big and long-overdue changes.

And despite these doomy gloomy diatribes, AI isn't the problem. I love AI and embrace it wholeheartedly. The problem is how we, as a society, choose to use it. Dank memes and frustrating customer service chatbots are okay, but if we're really clever, AI could do so much more - for all of us.

Coming up: Are we in the Star Trek timeline?
