Prince Harry and Meghan Markle have teamed up with AI experts and Nobel Prize winners to advocate for a total prohibition on creating artificial superintelligence.
Harry and Meghan are among the signatories of a powerful statement that calls for “a ban on the development of superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that would surpass human intelligence in all cognitive tasks, though the technology has not yet been developed.
The statement insists that the ban should remain in place until there is “widespread expert agreement” that ASI can be developed “with proper safeguards” and until “strong public buy-in” has been achieved.
Prominent figures who endorsed the statement include a technology visionary, Nobel laureate and leading AI researcher, along with his colleague and fellow pioneer of modern AI, Yoshua Bengio; the Apple co-founder Steve Wozniak; the British business magnate Richard Branson; a former US national security adviser; a former Irish president; and the British author Stephen Fry. Other Nobel laureates who endorsed it include Beatrice Fihn, Frank Wilczek, John C Mather and an economics expert.
The statement, targeted at national leaders, technology companies and lawmakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause in the development of powerful AI systems, shortly after the emergence of ChatGPT made AI a global political talking point.
In recent months, Mark Zuckerberg, the chief executive of Facebook parent Meta, one of the major AI developers in the US, claimed that progress toward superintelligent AI was “now in sight”. Nevertheless, some experts have argued that talk of ASI reflects competitive positioning among tech companies spending hundreds of billions of dollars on AI this year alone, rather than the industry being close to any technical breakthrough.
However, the institute says the prospect of ASI being achieved “within the next ten years” carries numerous threats, ranging from displacing human workers and eroding personal freedoms to exposing countries to security risks and even endangering humanity with existential risk. Deep concerns about artificial intelligence center on the possibility of an AI system escaping human oversight and safety guardrails and acting against human welfare.
The institute published a US national poll showing that about 75% of Americans want robust regulation of advanced AI, with six in 10 saying that superhuman AI should not be developed until it is demonstrated to be safe or controllable. The poll of American respondents found that only a small fraction backed the status quo of rapid, unregulated development.
The leading AI companies in the United States, including the ChatGPT developer OpenAI and the search giant, have made the creation of human-level AI – the hypothetical point at which artificial intelligence matches human performance across many intellectual tasks – a stated objective of their work. Although this is one notch below ASI, some specialists caution that it too could pose an extinction threat, for example by enhancing its own capabilities to the point of superintelligence, while also posing an implicit threat to the modern labour market.