A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn

A group of industry leaders warned on Tuesday that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on a par with pandemics and nuclear wars.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization. The open letter was signed by more than 350 executives, researchers and engineers working in A.I.

Executives from three of the leading A.I. companies, including Sam Altman, chief executive of OpenAI, signed an open letter warning of the risks of artificial intelligence.

The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I. research efforts, had not signed as of Tuesday.)

The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models — the type of A.I. system used by ChatGPT and other chatbots — have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.

These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.

This month, Mr. Altman, Mr. Hassabis and Mr. Amodei met with President Biden and Vice President Kamala Harris to talk about A.I. regulation. In a Senate testimony after the meeting, Mr. Altman warned that the risks of advanced A.I. systems were serious enough to warrant government intervention and called for regulation of A.I. for its potential harms.

Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing.

“There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Mr. Hendrycks said. “But, in fact, many people privately would express concerns about these things.”

Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.

But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and that it will soon surpass it in others. They say the technology has shown signs of advanced abilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.

In a blog post last week, Mr. Altman and two other OpenAI executives proposed several ways that powerful A.I. systems could be responsibly managed. They called for cooperation among the leading A.I. makers, more technical research into large language models and the formation of an international A.I. safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.

Mr. Altman has also expressed support for rules that would require makers of large, cutting-edge A.I. models to register for a government-issued license.

In March, more than 1,000 technologists and researchers signed another open letter calling for a six-month pause on the development of the largest A.I. models, citing concerns about “an out-of-control race to develop and deploy ever more powerful digital minds.”

That letter, which was organized by another A.I.-focused nonprofit, the Future of Life Institute, was signed by Elon Musk and other well-known tech leaders, but it did not have many signatures from the leading A.I. labs.

The brevity of the new statement from the Center for AI Safety — just 22 words in all — was meant to unite A.I. experts who might disagree about the nature of specific risks or steps to prevent those risks from occurring, but who share general concerns about powerful A.I. systems, Mr. Hendrycks said.

“We didn’t want to push for a very large menu of 30 potential interventions,” he said. “When that happens, it dilutes the message.”

The statement was initially shared with a few high-profile A.I. experts, including Mr. Hinton, who quit his job at Google this month so that he could speak more freely, he said, about the potential harms of artificial intelligence. From there, it made its way to several of the major A.I. labs, where some employees then signed on.

The urgency of A.I. leaders’ warnings has increased as millions of people have turned to A.I. chatbots for entertainment, companionship and increased productivity, and as the underlying technology improves at a rapid clip.

“I think if this technology goes wrong, it can go quite wrong,” Mr. Altman told the Senate subcommittee. “We want to work with the government to prevent that from happening.”
