Maintaining the Humanity of Our Models


Artificial intelligence (AI) and machine learning (ML) have been major research interests in computer science for the better part of the last few decades. Recently, however, both AI and ML have become the subject of intense media attention, pressuring companies and researchers to claim they use these technologies. As ML continues to percolate into the layperson's life, we, as computer scientists and machine learning researchers, are responsible for ensuring that we clearly convey the extent of our work and the humanity of our models. In the present discussion, we limit ourselves to three important aspects needed to regularize ML for mass adoption: a standard for model interpretability, a consideration of human bias in data, and an understanding of a model's societal effects.

In AAAI 2018 Spring Symposium on AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents