Building AI systems that make fair decisions

A growing body of research has demonstrated that algorithms and other types of software can be discriminatory, yet the opaque nature of these tools makes it difficult to implement specific regulations. Determining the legal, ethical, and philosophical implications of these powerful decision-making aids, while still obtaining answers and information, is a complex challenge.

Harini Suresh, a PhD student at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), is investigating this multilayered puzzle: how to create fair and accurate machine learning algorithms that let users obtain the data they need. Suresh studies the societal implications of automated systems in MIT Professor John Guttag’s Data-Driven Inference Group, which uses machine learning and computer vision to improve outcomes in medicine, finance, and sports. Here, she discusses her research motivations, how a food allergy led her to MIT, and teaching students about deep learning.

Q: What led you to MIT?

A: When I