Catastrophic Risk
The potential for AI systems to cause large-scale, irreversible harm to society. This includes risks from misuse (e.g., engineered bioweapons), accidents (e.g., failures in autonomous systems), and structural disruption (e.g., mass unemployment).
Why It Matters
Catastrophic risk assessment is increasingly expected of frontier AI developers, and in some jurisdictions is becoming a legal requirement. Governments worldwide are developing frameworks to evaluate and mitigate these risks.
Example
An autonomous weapons system making targeting decisions without human oversight, or an AI-powered biological research tool being misused to engineer dangerous pathogens.
Think of it like...
Like the safety analysis for a nuclear power plant — the technology provides enormous benefits, but the worst-case scenarios require extraordinary precautions.
Related Terms
Existential Risk
The risk that advanced AI systems could threaten the long-term survival or flourishing of humanity. This is among the most serious concerns in the AI safety research community.
AI Safety
The research field focused on ensuring AI systems operate reliably, predictably, and without causing unintended harm. Its scope runs from technical robustness to long-term existential risk.
AI Governance
The frameworks, policies, processes, and organizational structures that guide the responsible development, deployment, and monitoring of AI systems within organizations and across society.
Dual Use
Technology or research that can be applied for both beneficial and harmful purposes. Most AI capabilities are inherently dual-use, creating governance challenges.
Risk Assessment
The systematic process of identifying, analyzing, and evaluating potential risks associated with an AI system. Risk assessment considers both the likelihood and impact of potential harms.
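To make the likelihood-and-impact idea concrete, here is a minimal sketch of a classic risk-matrix score in Python. The ordinal scales, the example risk names, and the triage threshold are all illustrative assumptions, not taken from any regulatory framework or standard.

```python
# Minimal sketch of likelihood-x-impact risk scoring.
# The scales, example risks, and threshold below are illustrative
# assumptions, not part of any standard framework.

from dataclasses import dataclass

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}   # ordinal scale (assumed)
IMPACT = {"minor": 1, "major": 2, "catastrophic": 3}   # ordinal scale (assumed)


@dataclass
class Risk:
    name: str
    likelihood: str
    impact: str

    def score(self) -> int:
        # Classic risk-matrix score: likelihood x impact.
        return LIKELIHOOD[self.likelihood] * IMPACT[self.impact]


def triage(risks: list[Risk], threshold: int = 4) -> list[Risk]:
    """Return risks at or above the threshold, highest score first."""
    flagged = [r for r in risks if r.score() >= threshold]
    return sorted(flagged, key=lambda r: r.score(), reverse=True)


if __name__ == "__main__":
    # Hypothetical risk register for illustration only.
    register = [
        Risk("model weight exfiltration", "possible", "major"),
        Risk("bio-misuse uplift", "rare", "catastrophic"),
        Risk("benchmark contamination", "likely", "minor"),
    ]
    for r in triage(register):
        print(f"{r.name}: score {r.score()}")
```

Note that real catastrophic-risk frameworks go well beyond a single multiplicative score; the sketch only illustrates the "likelihood times impact" intuition in the definition above.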