Microsoft is developing a tool to help engineers catch bias in algorithms

Image Credit: Khari Johnson / VentureBeat

Microsoft is developing a tool that can detect bias in artificial intelligence algorithms with the goal of helping businesses use AI without running the risk of discriminating against certain people.

Rich Caruana, a senior researcher on the bias-detection tool at Microsoft, described it as a “dashboard” that engineers can apply to trained AI models. “Things like transparency, intelligibility, and explanation are new enough to the field that few of us have sufficient experience to know everything we should look for and all the ways that bias might lurk in our models,” he told MIT Technology Review.

Bias in algorithms is an issue increasingly coming to the fore. At the Re-Work Deep Learning Summit in Boston this week, Gabriele Fariello, a Harvard instructor in machine learning and chief information officer at the University of Rhode Island, said that there are “significant … problems” in the AI field’s treatment of ethics and bias today. “There are real decisions being made in health care, in the judicial system, and elsewhere that affect your life directly,” he said.

The list of examples of algorithmic bias run amok seems to grow by the year. Northpointe’s Compas software, which uses machine learning to predict whether a defendant will commit future crimes, was found to judge black defendants more harshly than white defendants. Research from Boston University and Microsoft shows that the data sets used to teach AI programs contain sexist semantic associations, for example placing the word “programmer” closer to the word “man” than to “woman.” And a study conducted by MIT’s Media Lab shows that facial recognition algorithms are up to 12 percent more likely to misidentify dark-skinned males than light-skinned males.
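The word-embedding finding above is typically measured with cosine similarity between word vectors. The sketch below uses hand-picked toy vectors purely to illustrate the measurement; the actual studies used pretrained embeddings such as word2vec trained on large news corpora.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional embeddings, chosen by hand to illustrate the effect.
vectors = {
    "man":        np.array([0.9, 0.1, 0.0]),
    "woman":      np.array([0.1, 0.9, 0.0]),
    "programmer": np.array([0.8, 0.2, 0.1]),
}

sim_man = cosine(vectors["programmer"], vectors["man"])
sim_woman = cosine(vectors["programmer"], vectors["woman"])

# In a biased embedding, "programmer" sits measurably closer to "man".
print(f"programmer~man:   {sim_man:.2f}")
print(f"programmer~woman: {sim_woman:.2f}")
```

Because the embeddings are learned from human-written text, any skew in how the training corpus uses these words ends up encoded in the geometry of the vectors.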

“The most important thing companies can do right now is educate their workforce so that they’re aware of the myriad ways in which bias can arise and manifest itself and create tools to make models easier to understand and bias easier to detect,” Caruana said.

Microsoft isn’t the only one attempting to tamp down algorithmic bias. In May, Facebook announced Fairness Flow, which automatically warns if an algorithm is making an unfair judgment about a person based on his or her race, gender, or age. Recent studies from IBM’s Watson and Cloud Platforms group have also focused on mitigating bias in AI models, specifically as it relates to facial detection.
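Neither Microsoft nor Facebook has published the internals of these tools, but a common starting point for this kind of automated warning is a demographic-parity check: compare the rate of favorable outcomes across groups and flag the model if the gap exceeds a threshold. The sketch below is an assumption about the general technique, not Fairness Flow’s actual implementation; the group labels and threshold are hypothetical.

```python
from collections import defaultdict

def positive_rates(decisions):
    """Rate of favorable outcomes per group.
    `decisions` is a list of (group, outcome) pairs, outcome in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in favorable-outcome rates between any two groups."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Hypothetical model decisions for two demographic groups.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = positive_rates(decisions)
gap = parity_gap(rates)
if gap > 0.2:  # threshold chosen for illustration only
    print(f"warning: parity gap of {gap:.2f} across groups {rates}")
```

Demographic parity is only one of several competing fairness criteria (others include equalized odds and calibration), and the right choice depends on the application, which is part of why tools like these are presented as dashboards for engineers rather than automatic fixes.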
