Safety

Stanford Report features RegLab’s work developing the Statutory Research Assistant (STARA) and an AI tool to identify racially restrictive covenants.
A policy brief that assesses the benefits of adverse event reporting systems for AI and offers policy recommendations for implementing them.
Dan Ho and other members of the RegLab team contributed to the report produced by the Joint California Policy Working Group on AI Frontier Models, which offers a policy framework for responsible, ethical, and safe use of AI.
A benchmark featuring 25 systems with complex, real-world codebases and 40 bug bounties covering 9 of the OWASP Top 10 risks.
Congratulations to the Cybench team, who won first prize in the Center for AI Safety’s SafeBench competition!
Dan Ho offers behind-the-scenes insight into the AI bootcamp for California officials that RegLab co-hosted.
Dan Ho co-authored a new article published in Science that analyzes the impact of different policy proposals related to regulating open foundation models.