Tech Leaders Warn Against Unregulated AI in Open Letter, Citing “Profound Risks to Humanity”
Elon Musk calls for pause on AI development | Photo Credit: Carina Johansen/NTB via Reuters
An open letter published on Mar. 22 by the Future of Life Institute, signed by tech leaders including Twitter and Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calls on “all AI [developers] to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
The letter currently has 27,565 reviewed signatures, with more than 30,000 additional signatories still awaiting vetting by the institute before their names are displayed alongside the letter on its website. The letter was prompted by the sudden rise of AI, which has caught many in the production, education and technology industries off guard.
By calling for a temporary pause in AI development, the signatories hope to give tech companies time to test their software without the pressure of shipping products for a rapidly growing market. The pause would allow each AI system to be tested for ethical failures and biases.
Elon Musk, who has publicly described artificial intelligence as “humanity’s biggest existential threat,” serves as an external advisor to the Future of Life Institute. Max Tegmark, the institute’s co-founder and president, likewise believes that “the future of human civilization very well may be at stake over this very question of the role of artificial intelligence in our society.”
While the letter acknowledges that AI will bring “a profound change in the history of life on Earth,” it argues that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Stuart Russell, Professor of Computer Science at UC Berkeley, who signed the open letter, weighs the stakes of artificial intelligence: “Humanity has much to gain from AI, but also everything to lose.”
This call to action has been criticized by Sam Altman, CEO of OpenAI, the company behind ChatGPT, who told The Wall Street Journal that the letter was “in some sense… preaching to the choir. We have, I think, been talking about these issues the loudest, with the most intensity, for the longest.”
According to Cisco’s 2022 Consumer Privacy report, consumers feel in the dark due to the lack of transparency in this new field. The report found that approximately two-thirds of participants wished for opt-out options and more accountability from AI software and AI developers.
Gary Marcus, a psychology professor and founder of the machine learning company Geometric Intelligence, who also signed the open letter, agrees that AI companies need to become more transparent. “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns,” he told The New York Times in an interview. For his part, Altman has said that OpenAI spent six months training and safety-testing GPT-4 and, as of the end of March, had not begun training GPT-5.
In an interview with CBS News, Microsoft CEO Satya Nadella said that artificial intelligence is “a new race in the most important software category.” After partnering with OpenAI on an updated version of the Bing search engine, which includes a ChatGPT-like feature that can provide in-depth explanations and follow-up conversation, Microsoft hopes the partnership will let it “guide the industry toward more responsible outcomes.”
An article in the Harvard Business Review asks, “Should we require—and can we even expect—AI to explain its decisions?” Since its launch, ChatGPT has produced a number of instances of harmful and unethical content, including a works-cited list containing fabricated citations and links, and a false accusation of sexual harassment against a law professor that cited a nonexistent Washington Post article.
As the open letter explores, AI cannot understand its own output and will remain unable to do so without proper training. Despite its role as an education and language tool, ChatGPT continues to spread misinformation to its global audience of over 100 million users. The letter asks whether we should let “machines flood our information channels with propaganda and untruth” and concludes that this cannot go unchecked.
However, some experts believe the letter overlooks current problems and contributes to “AI hype” instead of proposing viable solutions.
Emily Bender, a Professor of Linguistics at the University of Washington, said in a tweet on Mar. 28 that the letter misrepresents her research, which pointed out that “this head-long rush to ever larger language models without considering risks was a bad thing.” She tweeted that the letter was “just dripping with #AIhype,” though she noted a few points she did agree with.
Arvind Narayanan, an Associate Professor of Computer Science at Princeton, told VICE News that the open letter distracts from addressing the real harms associated with AI, focusing on long-term concerns rather than current problems. In a tweet a week after the letter’s release, he said these concerns intentionally “divert attention from present harms, including very real information security and safety risks.”