At the summit where China promoted its AI agenda to the world


Just three days after the Trump administration released its much-anticipated AI Action Plan, the Chinese government announced its own AI policy blueprint. Was the timing a coincidence? I doubt it.

China’s Global AI Governance Action Plan was released on July 26, the first day of the World Artificial Intelligence Conference (WAIC), the country’s largest annual AI event. Geoffrey Hinton and Eric Schmidt were among the many Western tech industry figures who attended the Shanghai gathering. Our WIRED colleague Will Knight was also on the scene.

The atmosphere at WAIC was the polar opposite of Trump’s America First, regulation-light vision for AI, Will tells me. In his opening speech, Chinese premier Li Qiang made a measured case for the importance of global cooperation on AI. He was followed by a series of prominent Chinese AI researchers, who gave technical talks highlighting urgent questions that the Trump administration appears to be largely brushing aside.

Zhou Bowen, leader of Shanghai AI Lab, one of China’s top AI research institutions, promoted his team’s work on AI safety at WAIC. He also suggested that the government could play a role in monitoring commercial AI models for vulnerabilities.

In an interview with WIRED, a professor at the Chinese Academy of Sciences and one of the country’s leading voices on AI said he hopes that AI safety organizations around the world will find ways to collaborate. “It would be best if the UK, the US, China, Singapore, and other institutes came together,” he said.

The conference also included closed-door meetings on AI safety policy issues. Paul Triolo, a partner at the advisory firm DGA-Albright Stonebridge Group, who attended one such session, told WIRED that the discussions were productive despite the conspicuous absence of American leadership. With the US out of the picture, “a coalition of major AI safety players, co-led by China, Singapore, the UK, and the EU, will now drive efforts to build guardrails around frontier AI model development,” Triolo told WIRED. He added that the US government was not the only party missing: of all the American AI labs, only Elon Musk’s xAI sent employees to the WAIC forum.

Many Western visitors were surprised to learn how much of the conversation about AI in China revolves around safety regulations. “Over the past seven days, I have literally been attending nonstop AI safety events. That was not the case at some of the other global AI summits,” said Brian Tse, founder of the Beijing-based AI safety research institute Concordia AI. Earlier this week, Concordia AI held a day-long safety forum in Shanghai with well-known AI researchers such as Stuart Russell and Yoshua Bengio.

Switching positions

Comparing China’s AI blueprint with Trump’s action plan, it appears the two countries have switched positions. When Chinese companies first began developing advanced AI models, many observers expected they would be held back by the censorship requirements the government imposed. Now, American leaders want to ensure that homegrown AI models “pursue objective truths,” an effort that, as I wrote in last week’s Back Channel newsletter, is “a blatant move of top-down ideological bias.” China’s AI Action Plan, meanwhile, reads like a globalist manifesto: it recommends that the United Nations help lead international AI efforts and suggests that governments have an important role to play in regulating the technology.

Their governments are very different, but when it comes to AI safety, people in China and the US are worried about many of the same things: model hallucinations, discrimination, existential risks, and cybersecurity vulnerabilities. Academic research on AI safety is also converging in the two countries, in areas including scalable oversight (humans monitoring AI models with the help of other AI models) and the development of interoperable safety-testing standards.
