Elon Musk, the Tesla CEO and owner of Twitter, and Steve Wozniak, the co-founder of Apple, are among the well-known figures who have signed an open letter urging a pause on the development of AI systems more powerful than the one behind ChatGPT.
The Future of Life Institute (FLI) published an open letter on its website last Wednesday, titled “Pause Giant AI Experiments: An Open Letter”. “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” it said.
What concerns have been raised here, and what solutions have been proposed, even as AI continues to advance towards more capable and error-free models? We explain.
What Does The Open Letter Say About ChatGPT And What Does It Mean?
In the past few years, chatbots, computer programmes that can converse with people, have proliferated on websites. Chatbots help customers submit requests for returns and refunds on websites like Amazon and Flipkart. But when OpenAI, a US-based AI research firm, released the chatbot ChatGPT last year, a much more sophisticated version of the technology was born.
ChatGPT can respond to “follow-up queries” and “admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” according to OpenAI’s description. It is built on the company’s GPT-3.5 series of large language models (LLMs). Generative Pre-trained Transformer (GPT) is a type of language model that uses deep learning techniques to generate human-like text from the inputs it is given.
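To show what handling “follow-up queries” means in practice, here is a minimal sketch of a two-turn exchange with ChatGPT, assuming the openai Python package (its pre-1.0 interface) and an API key set in the environment; the model name and the questions are illustrative only, not taken from the letter or from OpenAI’s documentation.

```python
# Minimal sketch of a multi-turn ChatGPT exchange (openai package, pre-1.0 API).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The messages list carries the whole conversation; follow-up questions work
# because every earlier turn is sent back along with the new one.
messages = [{"role": "user", "content": "Who wrote Pride and Prejudice?"}]
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = reply["choices"][0]["message"]["content"]
messages.append({"role": "assistant", "content": answer})

# A follow-up that only makes sense given the earlier turn.
messages.append({"role": "user", "content": "What else did she write?"})
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(reply["choices"][0]["message"]["content"])
```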
In Artificial Intelligence, or AI, large amounts of data are fed into a system instead of explicit instructions being written by human programmers. The system then uses this data to train itself to process information in a meaningful way.
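To make the idea of a system “training itself on data” concrete, here is a deliberately tiny, hypothetical Python illustration (the corpus and function are invented for this example): a bigram model that learns which word tends to follow which purely by counting, with no rules hand-written by a programmer. Real LLMs like GPT do this at vastly larger scale with neural networks, but the principle is the same.

```python
from collections import Counter, defaultdict

# A tiny "training corpus"; real models ingest billions of words.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased a mouse ."
).split()

# "Training": tally how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the data."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat": learned from the data, not programmed
print(predict_next("sat"))  # -> "on"
```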
Just as search engines like Google once revolutionised how people sought out information, the introduction of ChatGPT has been hailed as a new stage of technological growth.
Yet, the technical efficacy of ChatGPT has not been praised universally. There is no agreement on how genuine these claims will turn out to be in the future, or whether it is everything that it is claimed to be right now. It has, however, succeeded in passing entrance exams to prestigious schools like the Wharton School of Business, as well as the US bar exam (which is required to become a lawyer in the country).
Why Has The Criticism Been Raised?
The letter was published on the website of FLI, which lists its activities as grantmaking, policy research, and advocacy in fields like AI. Scholars, tech executives, and other prominent people have signed the letter, including Yuval Noah Harari, author and professor at the Hebrew University of Jerusalem; Jaan Tallinn, co-founder of Skype; and Craig Peters, CEO of Getty Images.
The letter makes the point that modern AI systems are now intelligent enough to compete with humans at general tasks. This “may constitute a major transition in the history of life on Earth,” the authors say, and should be planned for and managed with commensurate care and resources. They assert that this level of planning and management is not taking place, even amid an “out-of-control race” to create new “digital minds” that not even their creators can understand or predict.
The letter also poses a series of questions: “Should we permit machines to saturate our communication channels with misinformation and propaganda? Should all occupations, even those that are fulfilling, be automated away? Should we create non-human minds that could one day outnumber, outwit, replace, and supersede us? Should we risk losing control of our civilisation?” Such decisions, the letter adds, should not be delegated to “unelected tech leaders.”
What Do They Suggest?
The letter advises a six-month pause in the development of systems more powerful than GPT-4, the most recent model behind ChatGPT, which can now handle image-based queries as well. This pause should be public and verifiable, and include all key players. If such a pause cannot be enacted quickly, they argue, governments should step in and institute a moratorium.
According to them, AI labs should use this period to jointly develop a set of shared safety protocols for advanced AI design and development, which can be audited by independent outside experts.
All in all, the letter calls for a robust governance framework with a clear legal foundation: watermarking tools “to help distinguish real from synthetic” content, liability for harm caused by AI, substantial public funding for technical AI safety research, and so on.
Have Any AI Labs Responded To The Letter?
OpenAI itself has previously spoken about AI and its effects in a cautious manner. In a post from 2023, it stated: “We want to deploy them and gain experience with operating them in the real world as we build increasingly powerful systems. We believe this is the most careful way to steward AGI [Artificial General Intelligence] into existence; a gradual transition is better than a sudden one. We expect powerful AI to make the world’s rate of progress much faster, and we think it is best to adjust to this incrementally.”
However, it had yet to respond to the letter itself at the time of writing. Notably, Musk was among OpenAI’s early funders in 2015, but later withdrew his support, citing a potential conflict of interest with Tesla’s work on AI.
James Grimmelmann, a professor of digital and information law at Cornell University, told the AP that a letter warning of potential risks may be a good idea, but that “Elon Musk’s participation is also deeply hypocritical given how Tesla has fought against accountability for the flawed AI in its self-driving cars.”