Towards a Standard for Identifying and Managing Bias in AI with Reva Schwartz, NIST

#QuantUniversity Guest Lecture Series:
As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital footprint. Concepts and behaviors that are inherently ambiguous are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people’s lives. While many organizations seek to use this information responsibly, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI).

While there are many approaches for ensuring the technology we use every day is safe and secure, there are factors specific to AI that require new perspectives. AI systems are often placed in contexts where they can have the most impact. Whether that impact is helpful or harmful is a fundamental question in the area of Trustworthy and Responsible AI. Harmful impacts stemming from AI are felt not just at the individual or enterprise level; they can ripple into broader society. The scale of damage, and the speed at which it can be perpetrated by AI applications or through the extension of large machine learning models across domains and industries, require a concerted effort. Current attempts to address the harmful effects of AI bias remain focused on computational factors such as the representativeness of datasets and the fairness of machine learning algorithms. These remedies are vital for mitigating bias, and more work remains. Yet human and systemic institutional and societal factors are significant sources of AI bias as well, and they are currently overlooked. Successfully meeting this challenge will require taking all forms of bias into account.
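As a rough illustration of the computational fairness checks mentioned above, the minimal sketch below computes two common group-fairness metrics (demographic parity difference and the disparate impact ratio) over a model's binary predictions. The data, groups, and thresholds here are made up for illustration and are not drawn from the NIST publication or the lecture.

```python
import numpy as np

# Hypothetical binary predictions and a binary protected attribute;
# all values below are illustrative only.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # two demographic groups

def selection_rate(y_pred, group, g):
    """Fraction of positive predictions within group g."""
    mask = group == g
    return y_pred[mask].mean()

rate_0 = selection_rate(y_pred, group, 0)
rate_1 = selection_rate(y_pred, group, 1)

# Demographic parity difference: 0 means both groups are selected equally often.
dp_diff = rate_0 - rate_1

# Disparate impact ratio: values below ~0.8 are often flagged
# (the informal "four-fifths rule" used in some audits).
di_ratio = min(rate_0, rate_1) / max(rate_0, rate_1)

print(f"selection rates: group 0 = {rate_0:.2f}, group 1 = {rate_1:.2f}")
print(f"demographic parity difference: {dp_diff:.2f}")
print(f"disparate impact ratio: {di_ratio:.2f}")
```

Metrics like these capture only the computational dimension of bias; the human, institutional, and societal factors discussed above are not measurable this way, which is why the publication argues they must be addressed alongside dataset and algorithmic remedies.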
Reference:
Schwartz, R., et al. (2022). Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. NIST Special Publication 1270. https://www.nist.gov/publications/towards-standard-identifying-and-managing-bias-artificial-intelligence
