The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety, following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.
“Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves; they will have it done to them. So it is very much in their interest to have the countries that are going to build it talk to one another.”
The countries thought most likely to build AGI are, of course, the US and China, and yet those nations seem more intent on outmaneuvering each other than working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and said the US needed to be “laser-focused on competing to win.”
The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.
The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.
Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.
“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.
The development of increasingly capable AI models, some of which have surprising abilities, has prompted researchers to worry about a range of risks. While some focus on near-term harms, including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes called “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.
The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is viewed in policy circles as essential to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.