Dual Use
Technology or research that can be applied for both beneficial and harmful purposes. Most AI capabilities are inherently dual-use, creating governance challenges.
Why It Matters
The dual-use nature of AI means governance cannot simply ban capabilities: the same technology that enables medical breakthroughs could be misused to cause harm.
Example
Protein-folding AI such as AlphaFold advances drug discovery but could, in theory, also be used to design harmful biological agents.
Think of it like...
Like a kitchen knife: essential for cooking but potentially dangerous. The tool itself is neutral; the intent and context of use determine the outcome.
Related Terms
AI Safety
The research field focused on ensuring AI systems operate reliably, predictably, and without causing unintended harm. It spans from technical robustness to long-term existential risk concerns.
AI Governance
The frameworks, policies, processes, and organizational structures that guide the responsible development, deployment, and monitoring of AI systems within organizations and across society.
Responsible AI
An approach to developing and deploying AI that prioritizes ethical considerations, fairness, transparency, accountability, and societal benefit throughout the entire AI lifecycle.
Catastrophic Risk
The potential for AI systems to cause large-scale, irreversible harm to society. This includes risks from misuse (bioweapons), accidents (autonomous systems), and structural disruption (mass unemployment).