Anthropic CEO Dario Amodei escalates a war of words with Jensen Huang, calling out "outrageous lies" and getting emotional about his father's death



Doomers and optimists. Techno-optimists and accelerationists. The Nvidia camp and the Anthropic camp. And of course there is OpenAI, which opened Pandora's artificial-intelligence box in the first place.

The AI space is riven by debate over whether the technology is an existential threat, a gateway to a future of abundance, or a rerun of the dot-com bubble of the early 2000s. Anthropic CEO Dario Amodei is frank about AI's risks and is perhaps best known for predicting it could wipe out half of all entry-level white-collar jobs, a far more pessimistic outlook than the optimism offered in the past by OpenAI's Sam Altman or Nvidia's Jensen Huang. But Amodei has rarely laid it all out the way he did with technology journalist Alex Kantrowitz on the Big Technology Podcast on July 30.

In a candid and emotionally charged interview, Amodei escalated his war of words with Nvidia CEO Jensen Huang, vehemently denied accusations that he is trying to control the AI industry, and expressed deep anger at being labeled a "doomer." Amodei's passionate defense was rooted in a deeply personal revelation about his father's death, which he says drives both his urgent pursuit of beneficial AI and his warnings about its risks, including his belief in strong regulation.

Amodei confronted the criticism directly. "I get very angry when people call me a doomer," he said, dismissing the notion, which he said stems from the likes of Jensen Huang: "When someone says, 'This guy is a doomer, he wants to slow things down,' I've heard that." He insisted he never said anything of the kind.

Amodei explained that his strong reaction stems from a deeply personal experience: his father's death in 2006 from an illness whose cure rate jumped from 50% to about 95% just three or four years later. The tragedy gave him a profound sense of the "urgency of solving the relevant problems" and a powerful "humanitarian sense of the benefits of this technology." He believes AI is the only way to tackle problems as complex as biology, which he feels are "beyond human scale." He went on to explain that he is in fact a true optimist about AI, despite his own dire warnings about its future impacts.

Who is the real optimist?

Amodei argued that he appreciates the benefits of AI more than those who call themselves optimists do. "I actually think that I and Anthropic have done a better job of articulating the benefits of AI than some of the people who call themselves optimists and accelerationists," he argued.

In invoking "optimists" and "accelerationists," Amodei was referring to two Silicon Valley camps, even movements, with venture capital billionaire Marc Andreessen close to the center of each. The Andreessen Horowitz co-founder has embraced both, publishing "The Techno-Optimist Manifesto" in 2023 and frequently tweeting "e/acc," shorthand for effective accelerationism.

Both terms date back to the mid-20th century: techno-optimism emerged in the years immediately after World War II, while accelerationism traces to Roger Zelazny's classic 1967 science-fiction novel Lord of Light. Andreessen popularized and mainstreamed these beliefs, broadening them into the comprehensive conviction that technology can solve all problems.

Amodei argued that he is "one of the most bullish about AI capabilities improving very quickly," repeatedly emphasizing the exponential nature of AI progress. That rapid progress, in his view, means issues such as national security and economic impacts are fast approaching. His sense of urgency grows as AI's risks draw nearer, and he worries that society's ability to handle those risks has not kept pace with the speed of technological advancement.

To mitigate these risks, Amodei champions regulation and "responsible scaling policies," advocating a "race to the top" in which companies compete to build the safest systems, rather than a "race to the bottom" in which they compete to release products as quickly as possible without regard to risk. He notes that Anthropic was the first to publish such a responsible scaling policy, setting an example it hopes will encourage others to follow suit. Anthropic openly shares its safety research, including its interpretability work and constitutional AI, treating it as a public good.

Amodei also tackled the "open source" argument championed by Nvidia and Jensen Huang. He called it a "red herring," arguing that because large language models are fundamentally opaque, there is no such thing as truly open-source development for the AI technology currently being built.

An Nvidia spokesperson, providing a statement similar to one given to Kantrowitz, told Fortune that the company supports "safe, responsible, and transparent AI." Nvidia said the thousands of startups and developers in its ecosystem and the open-source community are making AI more secure. The company then criticized Amodei's stance calling for increased AI regulation: "Lobbying for regulatory capture against open source will curb innovation and make AI less safe and less democratic."

Anthropic reiterated its support for a recently submitted public comment backing investment in infrastructure to secure America's lead, along with strong and balanced export controls to ensure that the values of freedom and democracy shape the future of AI. The company previously told Fortune: "As the public record shows, Dario has never argued that 'only Anthropic' can build safe and powerful AI. Dario advocates for national transparency standards for AI developers (including Anthropic) so that the public and policymakers are aware of the models' capabilities and risks and can prepare accordingly."

Kantrowitz also asked Amodei about his departure from OpenAI, years before the boardroom drama in which Sam Altman was fired over ethical concerns, only to return after several chaotic days.

Amodei did not mention Altman directly, but said his decision to co-found Anthropic was spurred by a perceived lack of sincerity and trustworthiness at his rival company regarding its stated mission. For safety efforts to succeed, he stressed, "the leaders of the company have to be... trustworthy people, they have to be people whose motivations are sincere." He continued: "If you're working for someone who isn't sincere, who doesn't really want to make the world better, then your work isn't helping; you're just contributing to something bad."

Amodei also voiced frustration with the extremes of the AI debate, calling such positions "intellectually and morally unserious." He dismissed as "nonsense" the argument from certain "doomers" that AI can never be built safely. He called for more thoughtfulness, more sincerity, and "more people willing to go against their interests."

For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.