Artificial intelligence won’t run ICE’s ‘extreme vetting’
Software lacking, agency drops plan to enhance checks
Washington Post

WASHINGTON — Immigration officials have abandoned their pursuit of a controversial machine-learning technology that was a pillar of the Trump administration’s “extreme vetting” of foreign visitors, dealing a reality check to the goal of using artificial intelligence to predict human behavior.

Immigration and Customs Enforcement officials told tech-industry contractors last summer that they wanted a system for their “extreme vetting initiative” that could automatically mine Facebook, Twitter, and the broader Internet to determine whether a visitor might commit criminal or terrorist acts or was a “positively contributing member of society.”

But ICE quietly dropped the machine-learning requirement from its request in recent months, opting instead to hire a contractor that can provide training, management, and human personnel who can do the job.

Federal documents say the contract is expected to cost more than $100 million and be awarded by the end of the year.

After gathering “information from industry professionals and other government agencies on current technological capabilities,” ICE spokeswoman Carissa Cutrell said, the focus of what the agency now calls Visa Lifecycle Vetting “shifted from a technology-based contract to a labor contract.”

An ICE official briefed on the decision-making process said the agency found there was no “out-of-the-box” software that could deliver the quality of monitoring it wanted.

That artificial-intelligence system, which followed Trump’s January executive order that called for strict screening rules and slashed travel from six majority-Muslim countries, was expected to flag thousands of people a year for deportation investigations and visa denials, government filings show.

Such an application would have to be custom-designed, at significant cost, and be subject to a cumbersome internal review to ensure it would not trigger privacy or other legal violations, the official said.

Civil rights, immigration, and privacy groups have criticized the contract as a “digital Muslim ban” that would subject visitors to an invasive level of personal surveillance just for entering the country. Others questioned whether the agency was looking for an impossible technology: an AI sharp enough to predict human intentions based on an Internet search.

Some legal experts said they welcomed the change but worried that ICE, having signaled its interest to contractors, might pursue similar automatic-screening technology later.

“Have they realized only that it doesn’t exist now, which is important in its own right, or have they also recognized that this really was an idea that was built on a complete fantasy?” said Rachel Levinson-Waldman, senior counsel at the Brennan Center for Justice, a left-leaning policy institute. “That you can somehow take these hugely disparate sources of information, in lots of different languages, and make a prediction about what somebody’s value and worth is?”

The proposed technology was a key element of the initiative Trump said would increase Americans’ safety. After eight people were killed in a New York truck attack in October, Trump tweeted he “ordered Homeland Security to step up our already Extreme Vetting Program. Being politically correct is fine, but not for this!”

ICE’s Counterterrorism and Criminal Exploitation Unit, Cutrell said, receives 1.2 million “investigative leads” per year and prioritizes them by potential threat. The agency, Cutrell said, believed an automated system would provide a more effective way to continuously monitor the 10,000 people determined to pose the greatest potential risk. The monitoring would look only at public social-media posts, ICE said, and would stop once the person under review was granted legal residency in the United States.

But critics said social-media scanning would chill free speech and remains unproven as a way to forecast a terrorist attack.