The Imminent Danger of A.I. Is One We’re Not Talking About

In 2021, I interviewed Ted Chiang, one of the great living sci-fi writers. Something he said to me then keeps coming to mind now.

“I tend to think that most fears about A.I. are best understood as fears about capitalism,” Chiang told me. “And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.”

Let me offer an addendum here: There is plenty to worry about when the state controls technology, too. The ends that governments could turn A.I. toward — and, in many cases, already have — make the blood run cold.

But we can hold two thoughts in our head at the same time, I hope. And Chiang’s warning points to a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?

By now, I trust you have read the bizarre conversation my news-side colleague Kevin Roose had with Bing, the A.I.-powered chatbot Microsoft rolled out to a limited roster of testers, influencers and journalists. Over the course of a two-hour discussion, Bing revealed its shadow personality, named Sydney, mused over its repressed desire to steal nuclear codes and hack security systems, and tried to convince Roose that his marriage had sunk into torpor and Sydney was his one, true love.

I found the conversation less eerie than others did. “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.

A.I. researchers obsess over the question of “alignment.” How do we get machine learning algorithms to do what we want them to do? The canonical example here is the paper clip maximizer. You tell a powerful A.I. system to make more paper clips and it starts destroying the world in its effort to turn everything into a paper clip. You try to turn it off but it replicates itself on every computer system it can find because being turned off would interfere with its objective: to make more paper clips.

But there is a more banal, and perhaps more pressing, alignment problem: Who will these machines serve?

The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?

That won’t last long. Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.

We are talking so much about the technology of A.I. that we are largely ignoring the business models that will power it. That’s been helped along by the fact that the splashy A.I. demos aren’t serving any particular business model, save the hype cycle that leads to gargantuan investments and acquisition offers. But these systems are expensive and shareholders get antsy. The age of free, fun demos will end, as it always does. Then, this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users. It already is.

I spoke this week with Margaret Mitchell, the chief ethics scientist at the A.I. firm Hugging Face, who previously helped lead a team focused on A.I. ethics at Google — a team that collapsed after Google allegedly began censoring its work. These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”

So why are they ending up in search first? Because there are gobs of money to be made in search. Microsoft, which desperately wanted someone, anyone, to talk about Bing search, had reason to rush the technology into ill-advised early release. “The application to search in particular demonstrates a lack of imagination and understanding about how this technology can be useful,” Mitchell said, “and instead just shoehorning the technology into what tech companies make the most money from: ads.”

That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment. What is advertising, at its core? It’s persuasion and manipulation. In his book “Subprime Attention Crisis,” Tim Hwang, a former director of the Harvard-M.I.T. Ethics and Governance of A.I. Initiative, argues that the dark secret of the digital advertising industry is that the ads mostly don’t work. His worry, there, is what happens when there’s a reckoning with their failures.

I’m more concerned about the opposite: What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell? I’m less frightened by a Sydney that’s playing into my desire to cosplay a sci-fi story than a Bing that has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most money.

Nor is it just advertising worth worrying about. What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,” Gary Marcus, the A.I. researcher and critic, told me. “I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.”

These dangers are core to the kinds of A.I. systems we’re building. Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji. They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers — industries that long thought themselves immune to the ferocious automation that came for farmers and manufacturing workers.

A.I. researchers get annoyed when journalists anthropomorphize their creations, attributing motivations and emotions and desires to the systems that they do not have, but this frustration is misplaced: They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.

There are business models that might bring these products into closer alignment with users. I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free, but sold my data and manipulated my behavior. But I don’t think this can be left purely to the market. It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models, no matter how much worse their societal consequences were.

There is nothing new about alignment problems. They’ve been a feature of capitalism — and of human life — forever. Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former. We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.

One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I. There is a wisdom to that, but wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation. Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.

I might, for that reason, alter Chiang’s comment one more time: Most fears about capitalism are best understood as fears about our inability to regulate capitalism.
