Elon Musk, the owner of Twitter, and Steve Wozniak, the co-founder of Apple, are among the well-known figures who have signed an open letter urging a pause in the development of AI systems more powerful than the one behind ChatGPT.
The website of the Future of Life Institute (FLI) featured an open letter last Wednesday titled "Pause Giant AI Experiments: An Open Letter". It says: "we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
What concerns have been raised here, and what solutions have been proposed, even as AI advances towards more capable and reliable models? We explain.
What Does The Open Letter Say About ChatGPT And What Does It Mean?
In the past few years, chatbots, computer programmes that can converse with people, have proliferated on websites. Chatbots help customers submit requests for returns and refunds on websites like Amazon and Flipkart. But a much more sophisticated version arrived when OpenAI, a US-based AI research firm, released the chatbot ChatGPT last year.
According to OpenAI’s description, ChatGPT can answer "follow-up questions" and "admit its mistakes, challenge incorrect premises, and reject inappropriate requests". It is built on the company’s GPT-3.5 series of large language models (LLMs). Generative Pre-trained Transformer (GPT) is a language model that uses deep learning techniques to generate human-like text in response to inputs.
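For a sense of how developers interact with such a model, here is a minimal sketch using the openai Python package's chat interface as it existed around the time of writing (the pre-1.0 client). The model name, message format, and placeholder API key are illustrative assumptions, not a prescription for any particular setup.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

# Each conversational turn is a message; follow-up questions reuse the
# same message list, which is how the model keeps conversational context.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Explain what a language model is."},
    ],
)

print(response["choices"][0]["message"]["content"])
```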
In artificial intelligence (AI), large amounts of data are fed into a system instead of specific instructions written by human programmers. The system then uses this data to train itself to make sense of information.
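To make the idea of "training itself on data" concrete, here is a deliberately tiny, self-contained sketch: a bigram model that learns word-to-word statistics from a toy corpus and uses them to continue a prompt. Real systems like GPT use deep neural networks trained on vastly more text, but the underlying principle, patterns learned from data rather than rules written by hand, is the same.

```python
import random
from collections import defaultdict

# Toy "training data" -- real models ingest hundreds of billions of words.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# "Training": tally which word tends to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Generation": starting from a prompt word, repeatedly sample a
# plausible next word from the learned statistics.
word = "the"
output = [word]
for _ in range(6):
    candidates = follows[word]
    if not candidates:  # no learned continuation for this word
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```

Nothing in this sketch was told what English looks like; whatever fluency the output has comes entirely from counts gathered over the corpus, which is the essence of the data-driven approach described above.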
Similar to how search engines like Google revolutionised the way people seek out information, the introduction of ChatGPT has been hailed as a new stage of technological growth.
Yet, the technical efficacy of ChatGPT has not been praised universally: there is no agreement on how genuine the claims made about it are, or whether it is everything it is claimed to be right now. It has, however, passed exams at prestigious institutions like the Wharton School of Business, as well as the US bar exam (which is required to become a lawyer in the country).
Why Has The Criticism Been Raised?
The letter was published on the website of FLI, which lists its activities as grantmaking, policy research, and advocacy in fields like AI. Scholars, tech executives, and other prominent figures have signed it, including Yuval Noah Harari, author and professor at the Hebrew University of Jerusalem; Jaan Tallinn, co-founder of Skype; and Craig Peters, CEO of Getty Images.
The letter points out that modern AI systems are becoming intelligent enough to compete with humans. According to the authors, this "may constitute a major transition in the history of life on Earth" and should be planned for and managed with commensurate care and resources. They assert that this level of planning and management is not happening; instead, AI labs are locked in an "out-of-control race" to create new "digital minds" that not even their creators can understand or predict.
The letter also poses several questions: Should we permit machines to saturate our communication channels with misinformation and propaganda? Should all occupations, even those that are fulfilling, be automated away? Should we create non-human minds that could one day outnumber, outwit, and replace us? Should we risk losing control of our civilisation? It further states that such decisions should not be left to "unelected tech leaders".
What Do They Suggest?
The letter advises a six-month pause on developing systems more powerful than GPT-4, ChatGPT's most recent upgrade, which can also handle image-based queries. This pause, it says, should be public and verifiable, and include all key players. If such a pause cannot be enacted quickly, the authors argue, governments should step in and impose a moratorium.
According to them, AI labs need to create shared safety protocols for advanced AI design and development, which independent outside experts can audit.
Overall, the letter suggests a sound governance framework with a clear legal foundation, along with watermarking technologies "to help distinguish real from synthetic", liability for harm caused by AI, substantial public funding for technical AI safety research, and so on.
Have Any AI Labs Responded To The Letter?
OpenAI has already spoken cautiously about AI and its effects. A 2023 post stated: "We want to deploy them and acquire experience with running them in the real world as we construct increasingly powerful systems. We think this is the most careful way to introduce AGI [Artificial General Intelligence] into the world; a slow transition is preferable to a quick one. We believe it is best to adapt to this gradually, since we anticipate that the world will advance much faster thanks to sophisticated AI."
Musk, for his part, took some time to respond to the letter. Importantly, he contributed to OpenAI's early funding in 2015 but ultimately withdrew his support, citing a conflict of interest arising from Tesla's own interest in AI.
A letter warning of potential risks may be a good idea. Still, James Grimmelmann, a professor of digital and information law at Cornell University, told the AP that "Elon Musk's participation is also deeply hypocritical given how Tesla has fought against accountability for the flawed AI in its self-driving cars."