Abstract
Advances in Artificial Intelligence are creating new opportunities to improve people’s lives around the world, from business to
healthcare and from lifestyle to education. For example, some systems
profile users based on their demographic and behavioral characteristics to make domain-specific predictions. Often, such
predictions impact users’ lives directly or indirectly (e.g., loan
disbursement, insurance coverage decisions, or shortlisting of applications). As a result, concerns over such AI-enabled systems
are also increasing. To address these concerns, such systems are
mandated to be responsible: transparent, fair, and explainable to
developers and end-users. In this paper, we present ComplAI, a
unique framework to enable, observe, analyze, and quantify explainability, robustness, performance, fairness, and model behavior under
drift, and to provide a single Trust Factor that evaluates
different supervised Machine Learning models not just by their
ability to make correct predictions but from an overall responsibility perspective. The framework helps users to (a) connect their
models and enable explanations, (b) assess and visualize different
aspects of the model such as robustness, drift susceptibility, and
fairness, and (c) compare different models (from different model
families or obtained through different hyperparameter settings)
from an overall perspective, thereby facilitating actionable recourse
for model improvement. ComplAI is model-agnostic and
works with different supervised machine learning scenarios (i.e., Binary Classification, Multi-class Classification, and Regression) and
frameworks (e.g., scikit-learn and TensorFlow). It can be seamlessly
integrated with any ML life-cycle framework. Thus, this already-deployed framework aims to unify critical aspects of Responsible
AI systems and to regulate the development process of such real-world
systems. A poster paper on ComplAI is also available.