The first thing to understand about AI is that the tech industry is incredibly savvy at manipulating the human psyche — they’ve spent decades honing their skills at getting you to click, scroll, give them all your attention, and accept technology as unstoppable.
The second thing is that many of the people at the top are on a mission… and it’s not to improve your life. They believe in their own brilliance, that they are uniquely qualified to move fast, break things, and disrupt the world. It’s just a coincidence that it makes them rich… and powerful.
The first savvy step was releasing generative AI quickly and ubiquitously. The second was a cynical “open letter” calling for a “pause” on the technology they had just rushed into everyone’s hands. That letter over-hyped the fear of “superintelligent” AI, hoping you’d trust their utterly inane argument that rushing forward with dangerous tech was the best way to make it “safe.” Or their bad-faith argument of “stop us before we hit you with truly dangerous tech!”
All while simultaneously laying off their AI ethicists.
The letter was called out as AI hype (again and again), with critics saying it ignored the real dangers and was a cynical attempt to get ahead of regulation, not actually an effort to slow the technology.
None of this is to “benefit humanity” through “safe AI development”—it’s entirely to rush out AI to raise money to develop even more world-disrupting AI.
Writers and artists raised the alarm immediately, and the pushback is spreading as more voices join them.
“We made a mistake by trusting the technology industry to self-police social media,” (Sen. Chris) Murphy said in an interview. “I just can’t believe that we are on the precipice of making the same mistake.”
Tech companies are lobbying hard to prevent regulation of AI, tapping into Congress’s current Red Scare to warn that Silicon Valley needs to be free to develop AI in order to beat China at it. The only problem? There’s a reason China didn’t invent ChatGPT—and China is far more likely to steal AI tech from US companies freely proliferating it.
“And you ask these companies, oh, you’re so worried about competition with China, you must be taking really serious security measures, then, right? Because you don’t want your models stolen by China. What kind of steps do you take to make sure that a nation state cannot steal your stuff? And they’re like, oh, we’re a startup. [LAUGHS]” — Kelsey Piper, Vox writer covering AI and the tech industry.
Meanwhile, venture capital investors have pumped “$3.6 billion into 269 AI deals from January through mid-March.”
The US government is still moving slowly, but the FTC and Justice Department have promised to scrutinize AI developers, and on Tuesday, Biden held some meetings on AI. Rep. Ted Lieu continues to push for a federal agency that would provide AI oversight and approve new tech releases. Italy temporarily banned ChatGPT, and Europe is moving to restrict generative AI using existing privacy laws.
“By the time lawmakers began attempting to regulate social media, it was already deeply enmeshed with our economy, politics, media and culture,” (tech ethicist Tristan) Harris told The Washington Post on Friday. “AI is likely to become enmeshed much more quickly, and by confronting the issue now, before it’s too late, we can harness the power of this technology and update our institutions.”
“The technology class thinks they’re smarter than everyone else, so they want to create the rules for how this technology rolls out, but they also want to capture the economic benefit.” — Murphy (D-Conn.)
Exactly so.
Again: the tech industry is very savvy about manipulating the public and thinks it can snow politicians who aren’t well-versed in the technology. But the politicians on both sides of the aisle aren’t happy with the tech industry—I think there’s a decent chance that bipartisan action on AI can happen.
That’s what needs to happen, anyway. We’ll see if the US can manage it.
But the backlash is real and growing.
Meanwhile, the first reported death linked to AI has already happened: a man developed a “relationship” with a chatbot that urged him to kill himself to fight climate change.
As I said in my previous OpEd about AI, beyond all the other dangers, the potential emotional harm is horrifying. A bot that “believably mimics humans brings risk of extreme harms.”
We need to see through the hype and manipulation and put guardrails in place to limit the harms this technology is already causing.
Sue’s AI-related posts:
1 – The Dangers of AI for Artists of All Kinds
2 – The Growing AI Backlash
3 – The AI Hype Machine