Event examines computer models in the justice system
By Michael Levenson
Globe Staff

Should sophisticated computer models help judges predict which defendants are safe enough to release before trial? Or should judges rely on their own wisdom, discretion, and experience to make those decisions?

Speaking Tuesday at an event at Harvard Law School, two computer scientists and two legal scholars agreed that judges, like all people, have biases that inform their decisions. But are computer models really any better?

Jonathan L. Zittrain, a Harvard law professor, pointed out that computerized risk scores assigned to criminal defendants could be based on data that is biased because it comes from a criminal justice system in which people of color are disproportionately stopped and arrested.

Because of the biases inherent in such data, Cynthia Dwork, a Harvard computer scientist, said she held out little hope that a fair and just computer algorithm could be designed for the criminal justice system.

“Generally, I would say: garbage in, garbage out,” she said.

But Christopher L. Griffin, Jr., research director at Harvard Law’s Access to Justice Lab, said predictive models could be helpful in guiding judges by adding to the range of data available to them when they decide whether to jail or release defendants before trial.

“We like to think of these tools as not necessarily de-biasing mechanisms, but information-enhancing ones that increase the signal-to-noise ratio,” he said.

Aside from their biases, computer models are also problematic because they are too complex for most people to understand, several professors said. Zittrain likened the algorithms used in the criminal justice system to the notoriously opaque US tax code.

“You can read the tax code all night and still have no clue what it’s there to do,” he said. “If you transpose that into the realm of justice or something else, you could see all sorts of problems with those models.”

Margo I. Seltzer, a Harvard computer scientist, said asking people to trust a computer algorithm to make decisions in the criminal justice system is like asking patients to trust a computer model to make medical decisions.

“How many of you would take a drug because you plugged it into a black box about which we knew nothing . . . and then feel comfortable taking that drug?” she asked, to laughter from the audience of Harvard students.

Seltzer said the complexity of computer algorithms is one reason people would rather trust decisions made by other people, who can be questioned and can explain their thinking.

“I like to think of my doctor as a little bit different than a black box,” Seltzer said. “I’m that annoying patient who actually asks my doctor, like, ‘Why do you want me to take this?’ ” She said one of her doctors knows her so well he gives her research papers that explain the clinical results for the drugs he is prescribing her.

“This is a doctor I can get behind,” Seltzer said. “And this is exactly what I want to see in the algorithms that are deciding people’s fates.”

The event, called “Programming the Future of AI: Ethics, Governance, and Justice,” was presented as part of HUBweek, an innovation-themed festival sponsored by Harvard, MIT, Massachusetts General Hospital, and The Boston Globe.

Michael Levenson can be reached at michael.levenson@globe.com