Some of the biggest questions in the rise of artificial intelligence and machine learning concern their potential to exacerbate bias. As the adoption of AI-based decision tools accelerates, we must consider how systemic bias can live on in machine intelligence.
How do we design and train models to ensure fairness and transparency? How do we prevent non-representative or biased training data, or poor institutional understanding, from encoding human bias into algorithms? What are the best ways to help governments evaluate the possibility of unexpected bias before a new model is deployed?
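The pre-deployment checks these questions point to can start out simple. As a minimal illustrative sketch (not RegLab's methodology; the function, data, and group labels below are hypothetical), here is one common disparity check in Python: comparing a model's positive-decision rates across demographic groups, a gap sometimes called the demographic parity difference.

```python
# Illustrative sketch only: one simple pre-deployment disparity check.
# All names and data below are hypothetical.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return the gap in positive-decision rates across groups,
    along with the per-group rates.

    predictions: iterable of 0/1 model decisions
    groups: iterable of group labels, aligned with predictions
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit: decisions from a benefits-eligibility model.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_difference(preds, groups)
print(f"Positive rates by group: {rates}")    # {'A': 0.8, 'B': 0.4}
print(f"Demographic parity gap:  {gap:.2f}")  # 0.40
```

A single metric like this is no substitute for a full audit, but disparities of this kind are often the first signal that a model warrants deeper scrutiny before deployment.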
At RegLab, we believe that academic institutions have an important role to play in helping the public sector independently assess the validity and reliability of new technologies and uncover hidden inequities in their deployment.