
AI Fairness 360


AI Fairness 360 is an open-source toolkit used during AI development to detect and mitigate bias in machine learning models. It can be applied at three points in the data science lifecycle: to analyze and mitigate bias in the training data (pre-processing), in the algorithms that train the model (in-processing), and in the predictions the model makes at deployment time (post-processing).
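
As a minimal sketch of the first of these phases, the Python snippet below measures a simple fairness metric on a training dataset and then applies AIF360's Reweighing pre-processing algorithm. The choice of the German credit dataset, the 'age' protected attribute, and the 25-year privilege threshold follow the toolkit's introductory tutorial and are assumptions here, not requirements; any BinaryLabelDataset and protected attribute could be substituted.

# Sketch of the pre-processing phase: measure bias in the training data,
# then mitigate it by reweighing examples. The dataset and protected
# attribute below are illustrative choices, not toolkit requirements.
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Load the German credit dataset with 'age' as the protected attribute;
# applicants aged 25 or older are treated as the privileged group.
dataset = GermanDataset(
    protected_attribute_names=['age'],
    privileged_classes=[lambda x: x >= 25],
    features_to_drop=['personal_status', 'sex'])

privileged = [{'age': 1}]
unprivileged = [{'age': 0}]

# Analyze: difference in favorable-outcome rates between the groups.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Mean difference before mitigation:", metric.mean_difference())

# Mitigate: reweigh examples so favorable outcomes are balanced across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Mean difference after mitigation:", metric_transf.mean_difference())

Analogous classes cover the other two phases, for example AdversarialDebiasing in aif360.algorithms.inprocessing and EqOddsPostprocessing in aif360.algorithms.postprocessing, with metrics such as ClassificationMetric for evaluating a model's predictions.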
