Can Artificial Intelligence Prevent Innate Racial Bias?

San Francisco's District Attorney's office is once again embracing technology as a means of criminal justice reform.

Despite having a $12 billion budget and being located adjacent to Silicon Valley, San Francisco doesn’t always take advantage of the ways in which tech can improve civic life or the work of its city employees. (Case in point: the disaster that is the DMV.) But there is one office that is pushing the envelope and collaborating with programmers, nonprofits, and computer scientists with the vital goal of improving its criminal justice practices. Just last month District Attorney George Gascón announced that a partnership with Code for America had enabled his office to clear all old marijuana convictions rendered defunct by the passage of Proposition 64. And on Wednesday, he shared the news that a new collaboration with Stanford was in the works, to employ artificial intelligence as a means of mitigating implicit racial bias among his staff. 

If the words “artificial intelligence” combined with “criminal justice system” give you goosebumps, you’re not alone. The system is already so flawed; is technology going to help or hurt us? Gascón is convinced it’s the former. 

“Lady Justice is depicted wearing a blindfold to signify impartiality of the law, but it is blindingly clear that the criminal justice system remains biased when it comes to race,” he says. “This technology will reduce the threat that implicit bias poses to the purity of decisions which have serious ramifications for the accused, and that will help make our system of justice more fair and just.”

The concept is fairly simple. When a district attorney receives a case and has to decide whether to move forward with charges, they get all sorts of information that could subconsciously trigger racial bias. Often someone’s race is listed in police incident reports, but even if it isn’t, it can easily be deduced or assumed from the hair and eye color listed, the neighborhood where the alleged crime took place, or even the name of the officer who made the arrest — it’s a small city, and DAs often know which districts individual officers patrol.

Under this new AI system, all of that information would be redacted. An attorney would look at the basic evidence, decide whether or not to charge the case, and then move forward to an unredacted copy. If their decision then changes based on the disclosed information, they’ll have to justify it. 

It’s one small step toward mitigating our nation’s incredibly racist criminal justice system, and while it doesn’t prevent cops from arresting people of color, it could — in theory — prevent district attorneys from prosecuting them based on racial bias. 

The Stanford Computational Policy Lab is leading the AI efforts, which, although they sound simple, aren’t a walk in the park to design. The lab’s deputy director, Alex Chohlas-Wood, says they’re in the final stages of developing the platform. The algorithms have to find the aforementioned descriptors — such as race or neighborhood — and redact them, while still maintaining a natural narrative so that the incident reports remain readable. Color coding — for example, blocking out a suspect’s name in red every time it is redacted — may even make skimming the reports faster and more efficient for attorneys. 
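To make the idea concrete, here is a minimal, purely illustrative sketch of labeled redaction — not the lab’s actual system, which would rely on trained language models rather than the fixed word lists assumed here. The categories, terms, and the `redact` function are all hypothetical, chosen only to show how sensitive terms can be swapped for labeled placeholders that keep the narrative readable:

```python
import re

# Hypothetical category -> terms to mask. A production system would use
# NLP models to detect names, races, and locations, not hard-coded lists.
REDACTION_RULES = {
    "NAME": ["John Doe"],
    "RACE": ["white", "black", "hispanic", "asian"],
    "NEIGHBORHOOD": ["Tenderloin", "Bayview", "Mission"],
}

def redact(report: str) -> str:
    """Replace each sensitive term with a labeled placeholder, e.g.
    '[NEIGHBORHOOD]', so the report still reads as a sentence."""
    for label, terms in REDACTION_RULES.items():
        for term in terms:
            report = re.sub(re.escape(term), f"[{label}]",
                            report, flags=re.IGNORECASE)
    return report

report = "Officers stopped John Doe, a Black male, near the Mission."
print(redact(report))
# → Officers stopped [NAME], a [RACE] male, near the [NEIGHBORHOOD].
```

The labels in the placeholders are what would make the color coding described above possible: a reviewing interface could render every `[NAME]` block in red, every `[NEIGHBORHOOD]` in another color, and so on.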

“Technology is not the answer to everything we do, and there is danger in using technology,” Gascón says. “But it can enhance the way that we do work. There is no way we can do this redaction in a reasonable amount of time if we had to do it by hand, we just don’t have the resources. I think there are many areas in the criminal justice system and in government in general where you can take technology and use it to increase the efficiency, the economies of scale, and the dignity in which you do your work.”

San Francisco will be the first city to employ this system in its district attorney’s office, but once finalized, the project will be made open source. Gascón hopes that other cities pick it up. 

“It is our hope that not only are we going to influence the work here in San Francisco, but that this is going to create a sea change of practices around the country,” he says. “We believe that the ability to do this now will separate those that can’t redact from those that won’t.”
