

Key Takeaways
Dr. McGregor will discuss how recognizing that all AI systems fail is essential to success, drawing on three critically important cultures: data-centric culture, benchmarks, and third-party audit. Join us to hear from someone in an integral role at the AI Incident Database, which is dedicated to indexing the collective history of harms, and near harms, caused in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
More About Dr. Sean McGregor
Dr. Sean McGregor is a machine learning safety researcher and executive director of the AI Incident Database. He previously launched the Digital Safety Research Institute at UL Research Institutes, trained edge neural network models at Syntiant, and co-founded a stealth AI safety nonprofit. His open-source work has been featured in major media outlets, and his research spans machine learning, human-computer interaction, and AI ethics. He also leads the MLCommons Agentic Workstream, which focuses on AI risks and how to understand and insure against them.


