A coalition of founders, CEOs and professors (Elon Musk among them) says artificial intelligence companies should “immediately pause for at least 6 months the training of AI systems,” though some may just want to sideline the competition so they can launch their own AI products.
Some may argue that the current state of AI language bots has been oversold and overhyped. I’ve always found ChatGPT-generated text and cartoonish avatar selfies to be “industrial strength nothing” (in the words of author Summer Brennan), and there's also the problem that AI bots still have no idea when they get things completely wrong.
But the landscape may have changed in the last ten days or so. The release of the new GPT-4 is being seen as a potential goldmine for scammers and fraudulent actors, and some fake pictures of the Pope in a puffy jacket legitimately fooled a ton of people. So there is now renewed discussion among the tech cognoscenti of whether the current risks of AI outweigh the benefits.
📢 We're calling on AI labs to temporarily pause training powerful models!— Future of Life Institute (@FLIxrisk) March 29, 2023
Join FLI's call alongside Yoshua Bengio, @stevewoz, @harari_yuval, @elonmusk, @GaryMarcus & over a 1000 others who've signed: https://t.co/3rJBjDXapc
A short 🧵on why we're calling for this - (1/8)
That brings us to Wednesday, when TechCrunch reported that 1,100 founder types, CEOs, tech executives and professors had published an open letter saying, “we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
The letter states that “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”
"AI systems with human-competitive intelligence can pose profound risks to society and humanity," according to an open letter signed by Elon Musk, Steve Wozniak and other tech leaders. https://t.co/o1jD4v1w4V— ABC News (@ABC) March 30, 2023
One of the top signatories of the letter is Elon Musk, and that right there is an immediate red flag. (The other big, big names are Apple co-founder Steve Wozniak and former presidential candidate Andrew Yang). Musk himself backed out of being an early investor in ChatGPT parent company OpenAI, and he’s been notably snarky toward them on social media ever since. (As co-founder Sam Altman said of Elon to Kara Swisher last week, "I mean, he’s a jerk, whatever else you want to say about him. But I think he does really care, and he is feeling very stressed about what the future’s going to look like.") The odds that Musk’s motivation here is to kneecap OpenAI out of pure sour grapes are likely around 100%.
And looking at the other people who signed the letter, it’s full of executives and higher-ups from Google and Microsoft, both of which are trying to develop products that compete with OpenAI’s, with at-best middling success. You can easily see how they would prefer to put the freeze on OpenAI’s far more successful advances, hoping that their crappy shit products can catch up during a six-month pause. Similarly, several other currently less-successful AI company CEOs also signed the letter.
(Notably, the letter is also signed by Getty Images CEO Craig Peters, whose company has a lawsuit going against AI companies for using Getty’s images without permission.)
What impact will AI have on the workforce? The proportion of repetitive cognitive jobs - about 30% of all white collar jobs - will drop. That trend has already started. https://t.co/6HItbGDYzo— Andrew Yang🧢⬆️🇺🇸 (@AndrewYang) March 27, 2023
And let’s pick apart this quote from the letter: “Should we automate away all the jobs, including the fulfilling ones?” Hmmm…. The “fulfilling” ones? Are we suddenly drawing a distinction between working-class blue-collar jobs being automated, and rich-people white-collar jobs being automated? It’s difficult to read that sentence any other way, and many of these signatories have been at the forefront of automating non-executive jobs.
It’s going to be very funny when all the generative AI tools make way less money than the current hype cycle suggests, and everyone just gets pissed at the staying power of crappy CGI used by ppl without the power of imagination— noah kulwin (@nkulw) March 27, 2023
Many of the people who signed this letter do indeed have skin in the game with the AI racket, and as such, seem to be overhyping its capabilities. There may be an ulterior motive to drive investment into their own efforts. Yes, they warn of “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us,” but this is the same industry that swore we’d have flying taxicabs by the year 2020, so these folks are prone to a little exaggeration and overconfidence here and there.
It's quite simple: You don't sign anything coming out of the "Future of Life Institute". The Longtermists are a eugenicist organization with little regard to actual living people's suffering.— tante (@tante) March 29, 2023
Some people are also making hay of the fact that the letter was posted by something called the Future of Life Institute, which sounds like Esalen or the Human Awareness Institute, just without the nudity or sex. It’s actually a very wealthy ideological group promoting a controversial platform of “longtermism,” of which disgraced crypto guy Sam Bankman-Fried was a big proponent.
There are selfish reasons for San Francisco to root for the AI industry. SF is the center of this industry, and it could be our best hope to revitalize a struggling downtown.
But remember, the goal of the tech industry is not to create quality products. The goal of the tech industry is to inflate employee stock options and to enrich investors. Quality is an afterthought, if that. That’s not a new risk, even if AI involves new risks. And these new risks will likely come with a slew of unintended consequences, with or without a so-called six-month pause, a pause which these 1,100 founders, CEOs and professors have absolutely no way to enforce.