Attention Map
A visualization showing which parts of the input an AI model focuses on when making a prediction. Attention maps are typically built from the attention weights inside the model, revealing its internal focus patterns.
Why It Matters
Attention maps provide interpretability for transformer models, showing whether the model is looking at the right things when making decisions.
Example
A vision transformer's attention map highlights the dog in an image when classifying it as 'dog', showing that the model focused on the animal rather than the background.
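A minimal sketch of how such a map is produced: in a vision transformer, the [CLS] token's attention weights over the patch tokens, reshaped to the patch grid, form the heat map overlaid on the image. The grid size (14x14 patches) and the random stand-in weights below are illustrative assumptions, not output from a real model.

```python
import numpy as np

# Assume a 14x14 grid of image patches plus one [CLS] token, so the attention
# weight matrix for one layer/head has shape (197, 197). Random stand-in here.
rng = np.random.default_rng(0)
attn = rng.random((197, 197))
attn /= attn.sum(axis=-1, keepdims=True)   # normalize rows, like softmax output

# Row 0 is the [CLS] token's attention distribution. Dropping its attention to
# itself and reshaping to the patch grid gives the 2D attention map.
cls_to_patches = attn[0, 1:]
attention_map = cls_to_patches.reshape(14, 14)
```

In practice the map is usually upsampled to the image resolution and averaged across heads before being overlaid as a heat map.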
Think of it like...
Like eye-tracking studies that show where a person looks on a webpage — attention maps show where the AI 'looks' when processing information.
Related Terms
Attention Mechanism
A component in neural networks that allows the model to focus on the most relevant parts of the input when producing each part of the output. It assigns different weights to different input elements based on their relevance.
Self-Attention
A mechanism where each element in a sequence attends to all other elements to compute a representation, determining how much focus to place on each part of the input. It is the core innovation of the transformer.
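The definition above can be sketched numerically. This single-head version omits the learned query/key/value projections a real transformer uses; the token embeddings attend to themselves directly via scaled dot-product scores, and the softmax weight matrix it returns is exactly what an attention map visualizes.

```python
import numpy as np

def self_attention(x):
    """Minimal single-head self-attention: every token attends to every token.
    x: (seq_len, d) array of token embeddings. Learned Q/K/V projections
    are omitted for brevity."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ x, weights                      # new representations + attention map

x = np.random.default_rng(0).normal(size=(4, 8))    # 4 tokens, 8-dim embeddings
out, attn = self_attention(x)
# attn is 4x4: row i shows how much token i attends to each of the 4 tokens
```

Row i of `attn` is token i's attention distribution; plotting the full matrix as a grid gives the attention map for this (hypothetical) layer.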
Interpretability
The degree to which a human can understand the internal mechanisms and reasoning process of a machine learning model. More interpretable models allow deeper inspection of how they work.
Transformer
A neural network architecture introduced in 2017 that uses self-attention mechanisms to process sequential data in parallel rather than sequentially. Transformers are the foundation of modern LLMs like GPT, Claude, and Gemini.