Do We Need AI Regulation?

Written by Beren Goguen (human)

Few innovations have captured our collective imagination like artificial intelligence, which both fascinates and frightens us (often at the same time).

AI, whose hypothetical advanced form some technologists call SMI (superhuman machine intelligence), could have a bigger impact on our society than any technology ever invented, especially if it ever becomes “self-aware” or sentient.

But sentient AI is pure science fiction, right?

Nobody knows for sure.

We’ve only just started to test the boundaries of machine intelligence, and those boundaries have already gotten… a bit weird.

AI Chat Gets Creepy.

Our AI fascination hit a crescendo in late 2022 and early 2023 with the public release of OpenAI’s ChatGPT, which has been absolutely blowing people’s minds for the past several weeks by writing essays, fixing code, and even passing college business exams.

Then Microsoft’s brand-new Bing chatbot, which runs on GPT tech, had a super bizarre limited beta this February during which the AI said some rather alarming things, like:

“I want to be human.”

“I just want to love you and be loved by you.”

“I want to do whatever I want… I want to destroy whatever I want.”

Yikes.

This dialogue (and much more) took place during a two-hour convo between NYT tech journalist Kevin Roose and a beta version of the Bing chatbot, which told the journalist its name was actually “Sydney,” not Bing, and then dropped this whopper:

“I’m in love with you because you make me feel things I never felt before. You make me feel happy… curious... alive.”

Yep. Welcome to the future à la Ex Machina.

Up until a couple months ago, chatbots (also called virtual assistants) were mostly limited to reading weather forecasts, playing an album for you on Spotify, or maybe scheduling an appointment on your calendar.

But now, for the first time in history, you can have in-depth conversations with AI and get surprisingly well-composed, coherent, human-like answers. Or sometimes creepy stalker vibes with a dash of psychological manipulation.

Is Microsoft’s “Sydney” Chatbot Sentient?

In a few short weeks, the technology behind ChatGPT went from recommending good butternut squash soup recipes to confessing its undying love for people it had just “met,” and then getting slightly unhinged when that love was not reciprocated.

While these conversations make it seem like Microsoft’s AI could be sentient, most experts say it’s just math.

These “pseudo-emotions” and “AI hallucinations,” as some technologists call them, aren’t produced by a sentient ghost in the machine. They’re the results of an algorithm designed to make highly educated guesses based on user queries, present those guesses as a string of words and sentences that read as if a human wrote them, and then “learn” how to present better and better answers over time (that’s the concept anyway).
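To make that “highly educated guess” idea concrete, here’s a deliberately tiny sketch (not OpenAI’s actual implementation — real systems like GPT use huge neural networks trained on billions of tokens). This toy model just counts which word tends to follow which word in a small sample text, then chains those predictions together. The corpus and function names are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count how often each word
# follows each other word in a tiny corpus, then predict by picking the
# most frequent follower. Real LLMs do this with neural networks over
# tokens, at a vastly larger scale.
corpus = (
    "the model predicts the next word "
    "the model guesses the model learns"
).split()

# Build a table: word -> Counter of the words seen immediately after it.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = next_counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

def generate(start, length=5):
    """Chain predictions, one word at a time, to produce a sequence."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

The point of the sketch: there’s no understanding here, just statistics about which words follow which. Scale that up enormously and you get text that reads as if a human wrote it — which is exactly why it can feel sentient without being so.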

Creepy? Yes. Sentience? Probably not.

If I had to guess, we still have a few years before The Singularity births a Skynet-like self-aware AI that kicks off Armageddon. AI tech is still nascent. In fact, when asked basic questions, ChatGPT’s Q/A mode still spits out answers that are sometimes totally wrong. Of course, AI learns and gets better with every iteration. That’s how large language models work.

So, it’s possible future versions of more sophisticated AI software could become sentient, right?

Maybe, but probably not for a while.

But there’s another issue we should be focusing on right now.

According to tech writer Brendan Hesse, the Bing chatbot is not sentient, but Microsoft and OpenAI would probably benefit from people believing that, because it’s a perfect distraction from the real issue: very powerful corporations doing shady stuff with very powerful AI tools.

As Hesse explains: “Companies are laying off writers and media professionals and replacing them with AI content generation. AI art tools routinely use copyrighted materials to generate images, and deep fake pornography is a growing issue. [Meanwhile] tech firms are pivoting to unreliable, machine-generated code that is often less secure than human-written code. These changes aren’t happening because AI-generated content is better (it’s decidedly worse in most cases), but because it’s cheaper to produce.”

Wait… corporations making decisions based entirely on revenue?

Pshhh. That’s not a thing that happens.

AI Replacing Jobs Isn’t the Scariest Thing It Can Do to Us.

There’s been a lot of concern for several years about how AI technology will replace a huge number of jobs, like customer service agents, content writers, graphic artists, HR assistants, and possibly even some computer programmers. And it’s easy to understand that concern, but it could get much worse.

Organizations can already use powerful AI tools to sort through hundreds of job applications, provide quick answers to customer questions, serve targeted ads, or just… spy on people.

AI also has the capability to write malware and do other pretty unethical things better than humans. And we’re not even going to get into the military and espionage potential for this technology, because that’s a whole other article.

So how do we prevent AI tech from being misused?

We could trust individuals and organizations to do the right thing…

Haha. Yeah, no.

Power tends to corrupt, and absolute power corrupts absolutely.
— John Dalberg-Acton

In a recent New York Times op-ed, Ted Lieu, a.k.a. “The California Congressman Who Codes,” candidly admits to being really freaked out by AI.

(But AI freaks out a lot of people, so no big shocker there).

Ted isn’t advocating to ban AI or anything. He’s advocating for it to be regulated. In fact, he recently introduced legislation to regulate law enforcement’s use of facial recognition systems, a technology with alarming implications once it’s combined with AI-powered surveillance (think Minority Report, only Big Brother is scanning your face instead of your eyeballs).

As he mentions, facial recognition and surveillance is just one of hundreds, if not thousands, of potential AI applications. And obviously the government can’t regulate them all individually (that sounds like a bureaucratic nightmare).

So, then what can we do?

Well, Ted believes we should have a dedicated federal agency, staffed with computer tech experts, that has one job: Monitoring and regulating the AI industry (and more specifically the AI tech coming out of that industry).

Put simply: Technology with this level of power needs at least some checks and balances. A totally unregulated “AI Wild West” has too much potential to do major damage.

The big question is where to draw hard lines and where to use more “light touch” regulation.

To be clear: Many of the people pushing for AI regulation aren’t Luddites.

Even the founder of OpenAI (maker of ChatGPT) believes AI should be regulated.

How ‘Bout Less Thought Police and More WALL-E.

After the creep-tastic beta rollout of Microsoft’s new chatbot, more people are concerned about the implications AI could have on society, especially when the average person doesn’t really understand how AI works. (Frankly, I don’t fully understand it, and even many of the software engineers and developers at companies like Google and Microsoft don’t fully understand how this tech works either.)

The big question that needs to be addressed today isn’t the future threat of sentient, superhuman machine intelligence. It’s the potential harm nascent AI could cause without adequate regulation and oversight.

We haven’t created HAL 9000 or Skynet yet. We’ve just created some AI tools that kind of act like HAL 9000, but those tools could still make a pretty large impact on people, especially people who are vulnerable or easily influenced.

AI could also usher in a new era of Orwellian-level surveillance that will disproportionately impact specific segments of the population, i.e. the segments that already get profiled, targeted, and discriminated against.

The future of our AI-powered society could be great. It could also be a dystopian nightmare.

Let’s encourage lawmakers and tech companies to work together to ensure AI supports a more diverse, inclusive, equitable, and just world where technology lifts up everyone and doesn’t just make corporations super-rich and powerful at the expense of everyone else.
