Elon Musk, the owner of Twitter and CEO of Tesla, and Steve Wozniak, the co-founder of Apple, are among the well-known figures who have signed an open letter urging a pause on the development of AI systems more powerful than GPT-4, the model behind the latest version of ChatGPT.
An open letter titled "Pause Giant AI Experiments: An Open Letter" appeared on the Future of Life Institute's (FLI) website last Wednesday. It reads: "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
What concerns does the letter raise, and what solutions does it propose, even as AI advances towards more capable and reliable models? We explain.
What Does The Open Letter Say About ChatGPT And What Does It Mean?
In the past few years, chatbots, computer programmes that can converse with people, have proliferated on websites. They help customers submit requests for returns and refunds on sites like Amazon and Flipkart. But a far more sophisticated version arrived when OpenAI, a US-based AI research firm, released the chatbot ChatGPT last year.
According to OpenAI’s description, ChatGPT can answer "follow-up questions" and "admit its mistakes, challenge incorrect premises, and reject inappropriate requests." It is built on the company's GPT-3.5 series of large language models (LLMs). GPT, short for Generative Pre-trained Transformer, is a language model that uses deep-learning techniques to generate human-like text in response to prompts.
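For readers curious what the "follow-up questions" behaviour looks like in practice, here is a minimal sketch in Python of a two-turn exchange through OpenAI's chat API as it existed at the time of writing; the API key placeholder and the prompt text are illustrative, not taken from any real deployment.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

    # The full conversation history is resent with every request; this is
    # how the model "remembers" earlier turns and can answer follow-ups.
    messages = [{"role": "user", "content": "Explain what a language model is."}]
    first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    messages.append({"role": "assistant",
                     "content": first.choices[0].message["content"]})

    # A follow-up question that only makes sense given the earlier answer.
    messages.append({"role": "user", "content": "Summarise that in one sentence."})
    second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(second.choices[0].message["content"])

The second request succeeds as a "follow-up" only because the earlier question and answer are included in it; the model itself is stateless between calls.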
In this approach to artificial intelligence (AI), a system is fed large amounts of data rather than explicit instructions from human programmers. The system then uses that data to train itself to process information meaningfully.
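As a toy illustration of learning from data rather than from hand-written rules, the Python sketch below counts which character tends to follow which in a snippet of text, then generates new text from those statistics. It is vastly simpler than GPT, and the corpus and function names are invented for the example, but the broad idea of behaviour learned from examples is the same.

    from collections import defaultdict
    import random

    # Toy "training": learn from example text which character tends to
    # follow which, instead of hand-coding any rules about English.
    corpus = "the cat sat on the mat. the cat ate the rat."
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(corpus, corpus[1:]):
        counts[current][nxt] += 1

    # Toy "generation": repeatedly sample the next character in proportion
    # to how often it followed the current one in the training text.
    def generate(start="t", length=30):
        text = start
        for _ in range(length):
            followers = counts[text[-1]]
            if not followers:
                break
            chars, weights = zip(*followers.items())
            text += random.choices(chars, weights=weights)[0]
        return text

    print(generate())

A real LLM predicts whole tokens with a neural network trained on billions of documents, but the underlying recipe, statistics extracted from data standing in for programmed rules, is the one this miniature version makes visible.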
The introduction of ChatGPT has been hailed as a new stage of technological progress, much as search engines like Google revolutionised how people sought information.
The technical prowess of ChatGPT has been praised almost universally, though it is unclear whether all such claims hold up, or will continue to hold as the technology evolves. The chatbot has managed to pass exams at prestigious institutions such as the Wharton School of Business, as well as the US bar exam (which is required to practise law in the country).
What Criticism Has Been Raised?
The letter was published on FLI’s website, which lists the organisation's activities as grantmaking, policy research, and advocacy in fields like AI. Scholars, tech executives, and other prominent figures have signed it, including Yuval Noah Harari, author and professor at the Hebrew University of Jerusalem; Jaan Tallinn, co-founder of Skype; and Craig Peters, CEO of Getty Images.
The letter points out that modern AI systems are becoming competitive with humans at general tasks. The authors say this should be planned for and managed carefully, as it "may constitute a major transition in the history of life on Earth." They assert that this degree of planning and oversight is not taking place; instead, there is an "out-of-control race" to create new "digital minds" that not even their creators can understand or predict.
The letter also poses a series of questions: Should we allow machines to flood our communication channels with false information and propaganda? Should we automate away all occupations, even those that are fulfilling? Should we create non-human minds that could one day outnumber, outwit, and replace us? Should we risk losing control of our civilisation? It adds that such decisions should not be left to "unelected tech leaders."
What Do They Suggest?
The letter calls for a six-month pause on developing systems more powerful than GPT-4, ChatGPT’s most recent upgrade, which can also handle image-based queries. This halt should be transparent and verifiable, and all significant stakeholders should take part in it. If such a pause cannot be enacted quickly, the authors argue, governments should step in and impose a moratorium.
According to them, AI labs must develop shared safety protocols for advanced AI design and development, audited and overseen by independent outside experts.
Overall, the letter recommends robust AI governance: a sound legal and regulatory framework, watermarking systems "to help distinguish real from synthetic," liability for harm caused by AI, substantial public funding for technical AI safety research, and so on.
Have Any AI Labs Responded To The Letter?
OpenAI has itself spoken cautiously about AI and its effects. A 2023 post stated: "We want to deploy them and gain experience operating them in the real world as we build increasingly powerful systems. This is the most careful way to usher AGI [Artificial General Intelligence] into the world; a gradual transition is preferable to a sudden one. We believe it is best to adapt to this incrementally, since we anticipate sophisticated AI will make the world advance much faster."
Musk’s signature on the letter has drawn particular attention. Notably, he contributed to OpenAI’s early funding in 2015 but later withdrew his support, citing a conflict of interest with Tesla’s own work on AI.
A letter warning of potential risks may be a good idea, but, as James Grimmelmann, a professor of digital and information law at Cornell University, told the AP, "Elon Musk’s participation is also deeply hypocritical given how Tesla has fought against accountability for the flawed AI in its self-driving cars."