Can a stealth T-shirt foil digital spies?
A Northeastern University engineer has designed clothing that makes it hard for AI systems to recognize the wearer as human
Xue Lin, an assistant professor, said her ultimate goal is to help improve surveillance systems by finding their vulnerabilities.
By Hiawatha Bray, Globe Staff

It’s one of the gaudiest T-shirts you’ll ever see — a riotous, pixelated blend of blue, yellow, violet, and green. But if you’re wearing it, you’ll be invisible — not to the human eye, but to computers.

It is a kind of stealth clothing, intended to confuse video surveillance systems, designed by Northeastern University computer engineer Xue Lin in cooperation with researchers at IBM Corp. and the Massachusetts Institute of Technology.

Lin calls it an “adversarial T-shirt,” because that’s the technical name for data designed to trick systems that use artificial intelligence to swiftly detect patterns in large data sets, such as someone wearing a red hat in a crowd of people.

In this case, the adversarial data are the patterns on the shirt. Some AI systems find these patterns so confusing that they can’t recognize the wearer as a human being.

The shirt itself won’t protect the wearer from facial-recognition software, such as Amazon’s controversial Rekognition system. But it could provide a defense against other types of surveillance, such as object-recognition systems that identify people based on clothing and body shape.

Such programs are already on the market. The video surveillance system used by the New Bedford Housing Authority, for instance, includes object-recognition software from Avigilon, a Canadian company owned by Motorola Solutions. The software can be told to scan hours of recorded video looking for a human being wearing, say, a red shirt and black pants.

But a person wearing Lin’s T-shirt might not be recognized as human at all. “When a person is wearing it, it will just disappear,” Lin said. Which means that the human wearing the shirt will disappear, as well.

The shirt works because AI vision systems don’t see things the way that humans do. All they see are pixels: dots of light they’ve been trained to interpret as cars or mailboxes or human bodies. Add enough unexpected pixels to the mix and the machines no longer know what they’re looking at.
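To make that concrete, here is a minimal sketch of the idea in Python. It is not the researchers’ code: it uses torchvision’s pretrained Faster R-CNN as a stand-in detector (the study used Yolo), and the file names, patch image, and patch position are invented for illustration.

```python
# A hedged sketch, not the researchers' code: paste a hypothetical
# adversarial patch onto a photo and ask a pretrained detector whether
# it still sees a person. torchvision's Faster R-CNN stands in for
# Yolo; file names and coordinates are invented.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

PERSON = 1  # class index for "person" in the COCO label map this model uses

def detects_person(image, threshold=0.5):
    """Return True if the detector reports a person with score >= threshold."""
    with torch.no_grad():
        out = detector([to_tensor(image)])[0]
    return any(label == PERSON and score >= threshold
               for label, score in zip(out["labels"].tolist(),
                                       out["scores"].tolist()))

photo = Image.open("person.jpg")             # hypothetical test photo
patch = Image.open("adversarial_patch.png")  # hypothetical patch image

print("clean photo:", detects_person(photo))
photo.paste(patch, (100, 150))               # overlay the patch on the torso
print("patched photo:", detects_person(photo))
```

In a real attack the patch pixels would be optimized against the detector rather than chosen by hand; the overlay step above only shows where such a pattern would sit and how its effect would be checked.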

Lin and a Northeastern graduate student, Kaidi Xu, designed the shirt to deceive a common object-recognition program called Yolo (short for “you only look once”), open-source software that anyone can download and test. The researchers found that when shown images of people wearing the shirt, Yolo would fail to identify the wearers as people 63 percent of the time. Lin hasn’t tested her shirt against other object-recognition programs, such as the Avigilon system. But since all such programs work on the same general principles, it’s likely that they too will have trouble seeing the shirt — and the person wearing it.
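An evaluation in the spirit of that 63 percent figure could be as simple as the following sketch, which reuses the detects_person helper defined above; the folder of test photos is hypothetical.

```python
# Continuing the sketch above: estimate how often the detector misses
# a person across a folder of test photos. The folder name is invented;
# detects_person is the helper defined in the previous snippet.
from pathlib import Path
from PIL import Image

images = sorted(Path("shirt_photos").glob("*.jpg"))  # hypothetical test set
misses = sum(not detects_person(Image.open(p)) for p in images)
print(f"evasion rate: {misses / len(images):.0%}")   # the study reported 63%
```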

“One of the things the adversarial fashions reveal is that these technologies are not perfect, that there are problems with them,” said Dave Maass, a senior investigative researcher for the Electronic Frontier Foundation, a civil liberties group.

Maass warned that even when the software works, it raises the threat of abuse.

Imagine that the police are looking for a suspect who’s wearing a red T-shirt. A smart-camera system might spot a dozen such people on a public street. Should the police put them all under surveillance?

Maass said that police need to set rules of engagement before such technology is widely deployed.

“I think we are hitting the precipice and we’re going to fall off the cliff if we don’t do something about it,” Maass said.

Lin isn’t the only one trying to deceive AI surveillance systems. In April, researchers at the Belgian university KU Leuven showed how a specially computed pattern of colors printed on a large card could defeat Yolo. And a Chicago startup called Reflectacles said that it plans to start selling eyeglasses next year that are designed to defeat facial-recognition programs like the Face ID system in newer Apple smartphones, by blocking the infrared light beams that these systems emit.

Pro-democracy protesters in Hong Kong are up against a police force that routinely uses facial-recognition software to identify activists. The protesters are fighting back with crude but sometimes effective methods, including face paint and full-face masks. These tactics worked well enough that the city government announced a ban on the public wearing of masks and face paint during protest rallies. But a Hong Kong court overturned the ban as a violation of the city’s constitution.

But unlike those protesters and companies such as Reflectacles, Lin isn’t trying to help defeat these systems; she’s trying to make them even better, until they’re nearly impossible to fool. “We try to explore the vulnerability of these neural networks,” she said, “and hopefully, we can fix this problem.”

Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him on Twitter @GlobeTechLab.