AI is a powerful technology that carries both benefits and risks. We wish to promote innovation so that its potential benefits are responsibly realized and widely shared, while ensuring that current and potential societal risks are mitigated. To address the growing societal impact of AI, many jurisdictions are pursuing policymaking. However, the AI research and policy community lacks consensus on the evidence base relevant for effective policymaking, as seen in the debates over California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (California’s SB-1047). Points of contention include which risks should be prioritized, whether or when they will materialize, and who should be responsible for addressing them.
In light of this, we firmly believe AI policy should be informed by scientific understanding of AI risks and of how to successfully mitigate them. If policymakers pursue highly committal policy, the evidence of the associated AI risks should meet a correspondingly high standard. Advancing significant legislation without a clear understanding of the risks it intends to address may lead to more negative consequences than positive outcomes.
We support evidence-based policy while recognizing that current scientific understanding of AI risks is quite limited. We therefore recommend the following priorities to advance scientific understanding and science- and evidence-based AI policy:
- We need to better understand AI risks.
- We need to increase transparency on AI design and development.
- We need to develop techniques and tools to actively monitor post-deployment AI harms and risks.
- We need to develop mitigation and defense mechanisms for identified AI risks.
- We need to build trust and reduce fragmentation in the AI community.
We describe each of these priorities in more detail below. We believe that by following these steps, we can pave a more productive path toward robust and responsible AI, anchored in best scientific practice and evidence-based AI policymaking.