Artificial intelligence: the ever-impartial border control officer
International travel is increasing at a rapid rate. In 2017 a record 1.4 billion tourists visited other countries, and that number is expected to reach 1.8 billion by 2030.
This swelling number of globetrotters also means growing queues at passport control. The vast majority of people who are detained by border agents don’t present a threat, which slows down the already often lengthy process of crossing an international border. Border crossing agents have a tough job. They have to make hundreds of judgement calls every hour about whether someone should be allowed to enter a country. With the looming threat of terrorist attacks, people trafficking and smuggling, there is a lot of pressure to get it right.
Although they may have some additional intelligence on their computer system, when border guards examine most travellers, they’re relying on their own hunches and experience. And for many border control officers, that experience may not amount to much – it’s a position with a high turnover rate; border guards in the US quit at double the rate of other law enforcement positions.
Anyone who has been stopped from entering a country at immigration, even briefly, will know what an upsetting and stressful experience it can be. Staring into the hard eyes of a border guard as they examine your passport is always a nerve-wracking experience.
But there could soon be another, unseen border agent with a hand in these decisions – one that cannot be reasoned with or softened with a smile.
Border officers are rarely known for their sympathy, but a greater reliance on technology could strip the immigration process of the human touch altogether.
Governments around the world are developing artificial intelligence systems to help assess incoming travellers.
One of these is being developed by US technology firm Unisys, a company that began working with US Customs and Border Protection after the 9/11 terrorist attacks in 2001 to develop technology for identifying dangerous passengers long before they board a flight. Their threat assessment system, called LineSight, slurps up data about travellers from different government agencies and other sources to give them a mathematical risk evaluation.
They have since expanded its capability to look for other types of traveller or cargo that might be of concern to border officials. John Kendall, director of the border and national security program at Unisys, uses an example of two fictional travellers to illustrate how LineSight works.
Romain and Sandra are ticketed passengers who have valid passports and valid visas. They would pass through most security systems unquestioned, but LineSight’s algorithm picks up something fishy about Romain’s travel patterns – she’s visited the country several times over the past few years with a number of children who had different last names, something predictive analytics associates with human trafficking.
“Romain also purchased her ticket using a credit card from a bank associated with a sex trafficking ring in Eastern Europe,” says Kendall. LineSight is able to obtain this information from the airline Romain is flying with and cross check it with law enforcement databases.
“All of this information can be gathered and sent to a customs official before Romain and Sandra check in for their flight,” adds Kendall. “We collect data from multiple sources. Different governments collect different information, whether it’s from their own databases, from travel agencies. It’s not neat.”
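Unisys has not published how LineSight actually represents or combines this data, so the sketch below is only a minimal illustration in Python of the general idea: pulling signals from several hypothetical sources (booking details, travel history, watchlist checks) into a single risk figure. All field names, thresholds and weights are invented for the example.

```python
# Illustrative sketch only: LineSight's real data model and scoring are not public.
# All field names, sources, thresholds and weights below are hypothetical.
from dataclasses import dataclass, field


@dataclass
class TravellerRecord:
    """Data pulled together from several hypothetical sources before check-in."""
    passport_valid: bool
    visa_valid: bool
    airline_booking: dict = field(default_factory=dict)   # e.g. payment details
    travel_history: list = field(default_factory=list)    # past trips on record
    watchlist_hits: list = field(default_factory=list)    # law-enforcement flags


def risk_score(record: TravellerRecord) -> float:
    """Combine signals from different sources into a single 0-1 risk estimate."""
    score = 0.0
    if not (record.passport_valid and record.visa_valid):
        score += 0.5
    # Pattern mentioned in the article: repeated trips with differently named minors.
    child_surnames = {
        companion["surname"]
        for trip in record.travel_history
        for companion in trip.get("minors", [])
    }
    if len(child_surnames) >= 3:
        score += 0.3
    # Payment instrument linked to a flagged institution.
    if record.airline_booking.get("card_issuer") == "FLAGGED_BANK":
        score += 0.3
    score += 0.4 * len(record.watchlist_hits)
    return min(score, 1.0)


if __name__ == "__main__":
    romain = TravellerRecord(
        passport_valid=True,
        visa_valid=True,
        airline_booking={"card_issuer": "FLAGGED_BANK"},
        travel_history=[
            {"minors": [{"surname": "A"}, {"surname": "B"}]},
            {"minors": [{"surname": "C"}]},
        ],
    )
    print(f"risk = {risk_score(romain):.2f}")  # forwarded to the border officer
```

In a real deployment the weights would come from a trained model rather than being set by hand; the point of the sketch is only that the assessment is assembled automatically from many independent records.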
The system can take a similar approach to analysing cargo shipments, helping to pull together relevant information that might identify potential cases of smuggling.
The power of Unisys’s AI approach is the ability to ingest and assess a huge amount of data in a very short amount of time – it takes just two seconds for LineSight to process all of the relevant data and complete a threat assessment.
But there are concerns about using AI to analyse data in this way. Algorithms trained to recognise patterns or behaviour with historic data sets can reflect the biases that exist in that information. Algorithms trained on data from the US legal system, for example, were found to replicate an unfair bias against black defendants, who were incorrectly identified as likely to reoffend at nearly twice the rate of white defendants. The algorithm was replicating the human bias that existed in the US justice system.
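Disparities like this are usually measured by comparing false positive rates between groups: among people who in fact did not reoffend, how often did the model still label them high risk. A toy calculation, with invented numbers rather than the actual study data, makes the "nearly twice the rate" claim concrete:

```python
# Toy numbers only, to illustrate the kind of disparity described above; they
# are not the actual figures from any study of the US justice system.
def false_positive_rate(records):
    """Among people who did NOT reoffend, the share the model flagged as high risk."""
    negatives = [r for r in records if not r["reoffended"]]
    flagged = [r for r in negatives if r["predicted_high_risk"]]
    return len(flagged) / len(negatives) if negatives else 0.0


# Two hypothetical groups scored by the same model trained on historical data.
group_a = [{"reoffended": False, "predicted_high_risk": i < 45} for i in range(100)]
group_b = [{"reoffended": False, "predicted_high_risk": i < 23} for i in range(100)]

print(false_positive_rate(group_a))  # 0.45: wrongly flagged almost twice as often
print(false_positive_rate(group_b))  # 0.23
```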
Erica Posey of the Brennan Center for Justice fears similar biases could creep into algorithms used to make immigration decisions.
“Any predictive algorithm trained on existing data sets about who has been prevented from travelling in the past will almost certainly rely heavily on proxies to replicate past patterns,” she says.
According to Kendall, Unisys hopes to deal with this by allowing its algorithm to learn from its mistakes.
“If they stop somebody, and it turns out there was nothing wrong, that automatically updates the algorithm,” he says. “So every time we do an assessment the algorithm gets smarter. It’s not based on intuition, it’s not based on my bias – it’s based on the full population of travellers that come through.”
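Unisys has not described the mechanics of this feedback loop, but the idea Kendall sketches resembles standard online learning: every resolved stop becomes a labelled example that adjusts the model. A minimal sketch, under the assumption that the risk model were a simple logistic regression updated one traveller at a time:

```python
# Assumption for illustration: the risk model is a logistic regression updated
# online from resolved stops. Unisys has not published LineSight's actual method.
import math


class OnlineRiskModel:
    def __init__(self, n_features: int, learning_rate: float = 0.05):
        self.weights = [0.0] * n_features
        self.lr = learning_rate

    def predict(self, features: list[float]) -> float:
        """Probability that the traveller is a risk."""
        z = sum(w * x for w, x in zip(self.weights, features))
        return 1.0 / (1.0 + math.exp(-z))

    def record_outcome(self, features: list[float], was_threat: bool) -> None:
        """Called once the stop is resolved; one stochastic gradient step."""
        error = self.predict(features) - (1.0 if was_threat else 0.0)
        self.weights = [w - self.lr * error * x for w, x in zip(self.weights, features)]


model = OnlineRiskModel(n_features=3)
traveller = [1.0, 0.0, 1.0]                        # hypothetical encoded signals
print(model.predict(traveller))                    # initial assessment
model.record_outcome(traveller, was_threat=False)  # stop turned out to be nothing
print(model.predict(traveller))                    # similar travellers now score lower
```

In this toy version a clean stop nudges the weights down for that combination of signals, so a similar traveller scores slightly lower the next time the assessment runs, which is the behaviour Kendall describes.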
The company also says LineSight doesn’t assign one piece of data more weight than another, instead presenting all the relevant information to the border and customs officers.
But there are other teams that are looking to go even further by allowing machines to make judgements about whether travellers can be trusted. Human border officers make decisions about this based on a person’s body language and the way they answer their questions. There are some who hope that artificial intelligence might be better at picking up signs of deception.
Aaron Elkins, a computer scientist at San Diego State University, points out that humans are typically only able to spot deception in other people 54% of the time. By comparison, AI-powered machine vision systems have been able to achieve an accuracy of over 80% in multiple studies. Infrared cameras that can pick up on changes in blood flow and pattern recognition systems capable of detecting subtle tics have all been used.
Elkins himself is one of the inventors behind Avatar (Automated Virtual Agent for Truth Assessments in Real Time), a screening system that could soon be working with real-life border agents. Avatar uses a display that features a virtual border agent that asks travellers questions while the machine scrutinises the subject’s posture, eye movements, and changes in their voice.
After experiments in which tens of thousands of subjects lied in a laboratory setting, the Avatar team believes they have managed to teach the system to pick up on the physical manifestations of deception.
Another system, called iBorder Ctrl, is to be tested at three land border crossings in Hungary, Greece and Latvia. It too features an automated interviewer that will interrogate travellers and has been trained on videos of people either telling the truth or lying.
Keeley Crockett, an expert in computational intelligence at Manchester Metropolitan University in the UK, who is one of those developing iBorder Ctrl, says the system looks for micro-gestures – subtle nonverbal facial cues that include blushing as well as slight backward and forward movement. Crockett has high hopes for this first phase of field tests, saying the team are hoping the system will “obtain 85% accuracy” in the field.
“Until we have completed this [phase of testing], we will not know for sure,” she cautions.
But there is an ongoing debate about whether such AI “lie detectors” actually work.
Vera Wilde, a lie detection researcher and vocal critic of the iBorder Ctrl technology, points out that science has yet to prove a definitive link between our outward behaviour and deception, which is precisely why polygraph tests are not admissible in court.
“There is no unique ‘lie response’ to detect,” she says.
Even if such a link could achieve scientific certainty, the use of such technology at a border crossing raises tricky legal questions. Judith Edersheim, co-director of the Massachusetts General Hospital Center for Law, Brain and Behavior (CLBB), has suggested that lie-detection technology could constitute an illegal search and seizure.
“Compulsory screening is a seizure of your thoughts, a search of your mind,” she says. This would require a warrant in the US. And there could be similar problems in Europe too. Article 22 of the General Data Protection Regulation protects EU citizens against profiling. Can iBorder Ctrl ever be transparent enough to prove it hasn’t used some element of profiling?
It’s important to note that at this stage, travellers testing out iBorder Ctrl will be volunteers and will still face a human border agent before they enter the countries where it is being tested. The system will give the human border officers a risk assessment score determined by the iBorder Ctrl’s AI.
And it seems likely that AI will never completely replace humans when it comes to border control. The Unisys, Avatar, and iBorder Ctrl teams all agree that no matter how sophisticated the technology becomes, they’ll still rely heavily on humans to interpret the information.
But a reliance on machines to make judgements about a traveller’s right to enter a country still raises significant concerns among human rights and privacy advocates. If a traveller is determined to be a high risk, will a border patrol agency provide them information about why?
“We need transparency as to how the algorithm itself is developed and implemented, how different types of data will be weighted in algorithmic calculations, how human decision-makers are trained to interpret AI conclusions, and how the system is audited,” says Posey. “And fundamentally, we also need transparency as to the impact on individuals and the system as a whole.”
Kendall, however, believes AI may be an essential tool in dealing with the challenges facing international borders.
“It’s a complex set of threats,” he says. “The threats we face today will be different from the threats in a couple of years’ time.”
The success of AI border guards will depend not only on their ability to stay one step ahead of those who pose these threats, but also on whether they can make travelling easier for the 1.8 billion of us who want to see a bit more of the world.