Jessica Lee, Njeri Mutura, Safiya Noble
Companies are increasingly turning to deep learning and other forms of artificial intelligence to analyze data, identify patterns, and make decisions based on the large volumes of information they have access to. From healthcare to adtech, AI is being harnessed to make critical decisions that can have a significant impact on consumers. The risk of bias in these decisions has been well documented: from initial data collection to model design, bias can creep into every stage of the decision-making process. At the same time, the GDPR, the CPRA, and other privacy laws are placing limits on automated decision-making and setting standards for transparency and explainability. In this panel we will look at how and when bias can creep into AI, the potential harms, and how privacy law may serve as an avenue for interrupting that bias.
Jessica Lee, Partner, Loeb & Loeb
Njeri Mutura, Sr. Corporate Counsel, Legal & Compliance Lead, Microsoft
Safiya Noble, Assoc. Professor & Co-Director, UCLA Center for Critical Internet Inquiry
Readings:
Aiming for Truth, Fairness, and Equity in Your Company's Use of AI (Federal Trade Commission)
Commission White Paper on Artificial Intelligence (European Commission, Feb. 2020)
21st Century Integrated Digital Experience Act
Artificial Intelligence: A Roadmap for California (Little Hoover Commission)
Digital Accountability and Transparency Act of 2014
EU Commission's Proposal for a Regulation on Artificial Intelligence