Numerous proposals, prototypes, and models have emerged for using artificial intelligence (AI) and machine learning to predict individual risk related to covid-19. In the United States, for instance, the Department of Veterans Affairs uses individualised risk scores to allocate medical resources to people with covid-19, and prisons have sought to detect symptoms by processing inmates’ phone calls. Further tools, such as individual vulnerability predictions and voice-based detection of infection, are on the horizon. But the use of AI for such purposes has given rise to questions about its legality.
When a state or federal government seeks to use AI models to predict an individual’s risk of covid-19, the key legal questions will ultimately turn on how effective the models are and how heavily they burden legal interests. We focus on two of the most salient legal concerns under US law: privacy and discrimination.