Black Box
A model or system whose internal workings are not visible or understandable to the user — you can see the inputs and outputs but not the reasoning in between. Most deep learning models are considered black boxes.
Why It Matters
The black box nature of AI creates trust, regulatory, and debugging challenges. Regulated industries like healthcare and finance, where decisions often must be justified to patients, customers, or auditors, are pushing for more transparent alternatives.
Example
A deep neural network with millions of parameters that accurately predicts cancer risk but cannot explain which specific factors drove a particular patient's risk score.
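The opacity described above can be sketched in a few lines. This is a hypothetical toy network (the sizes, inputs, and weights are illustrative, not from any real cancer model): you can call it and get a score, but the weights that produced the score are not human-readable.

```python
import random

random.seed(0)

# Hypothetical toy network: two inputs (say, a normalized age and a
# biomarker level) feeding a small hidden layer, then one risk score.
INPUTS, HIDDEN = 2, 4
w1 = [[random.uniform(-1, 1) for _ in range(INPUTS)] for _ in range(HIDDEN)]
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]

def predict(x):
    # Each hidden activation mixes *all* inputs through learned weights,
    # so no single number maps cleanly to "age contributed this much".
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sum(w * hi for w, hi in zip(w2, h))

score = predict([0.7, 0.3])
print(score)  # a risk score comes out...
print(w1)     # ...but the parameters behind it are opaque to a human reader
```

A real model has millions of such parameters rather than a dozen, which is what makes post-hoc explanation an open research problem rather than a matter of reading the weights.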
Think of it like...
Like a vending machine — you put in money and a selection, something happens inside you cannot see, and a product comes out. You know what it does but not how.
Related Terms
Explainability
The ability to understand and articulate how an AI model reaches its decisions or predictions. Explainable AI (XAI) makes the decision-making process transparent and comprehensible to humans.
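One common model-agnostic XAI idea is perturbation-based attribution: score each input feature by how much the output moves when that feature is removed. A minimal sketch, assuming only that the black box can be called on modified inputs (the model here is an invented stand-in):

```python
# Hypothetical opaque model: we assume nothing about its insides,
# only that we can feed it inputs and read its output.
def black_box(x):
    return 0.8 * x[0] + 0.1 * x[1] * x[1] + 0.05

def attribute(model, x):
    """Score feature i by how much the output drops when feature i is zeroed."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = 0.0            # occlude feature i
        scores.append(base - model(perturbed))
    return scores

print(attribute(black_box, [1.0, 2.0]))  # feature 0 moves the output most
```

Production XAI tools (e.g. SHAP, LIME) refine this basic perturb-and-compare idea with principled weighting, but the core move, probing the black box from outside, is the same.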
Interpretability
The degree to which a human can understand the internal mechanisms and reasoning process of a machine learning model. More interpretable models allow deeper inspection of how they work.
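For contrast, here is a sketch of a highly interpretable model: a linear scorer whose coefficients can be read directly. The feature names and weights are illustrative, not drawn from any real risk model.

```python
# Illustrative coefficients: each weight states exactly how a feature
# moves the score, so the model's reasoning is fully inspectable.
coefficients = {"age": 0.4, "smoker": 1.2, "exercise": -0.6}

def linear_risk(features):
    # Each feature's contribution is simply weight * value.
    return sum(coefficients[name] * value for name, value in features.items())

patient = {"age": 0.5, "smoker": 1.0, "exercise": 0.2}
contributions = {n: coefficients[n] * v for n, v in patient.items()}
print(contributions)          # every term in the prediction is visible
print(linear_risk(patient))   # the score is just their sum
```

This transparency is the trade-off interpretable models offer: every term is visible, at the cost of being unable to capture the complex nonlinear patterns a deep network can.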
Transparency
The principle that AI systems should operate in a way that allows stakeholders to understand how they work, what data they use, and how decisions are made.
Neural Network
A computing system inspired by the biological neural networks in the human brain. It consists of interconnected nodes (neurons) organized in layers that process information and learn to recognize patterns.
Deep Learning
A specialized subset of machine learning that uses artificial neural networks with multiple layers (hence 'deep') to learn complex patterns in data. Deep learning excels at tasks like image recognition, speech processing, and natural language understanding.