Latent Space
A compressed, lower-dimensional representation of data learned by a model. Points in latent space capture the essential features of the data, and nearby points represent similar data items.
Why It Matters
Latent spaces are how generative models create new content — they learn a meaningful map of possibilities and can sample new points to generate novel outputs.
Example
In a face-generating model, one direction in latent space might control age and another hair color; moving along these axes produces predictable variations in the generated faces.
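The "directions as features" idea above is just vector arithmetic. A minimal numpy sketch, using a hypothetical 8-dimensional latent space and a made-up "age" axis (a real model would learn these axes and feed the vector to a trained decoder network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8-dimensional latent space; in a real generative model
# this vector would be the input to a trained decoder network.
z = rng.normal(size=8)                     # a sampled latent point

age_axis = np.zeros(8)
age_axis[0] = 1.0                          # pretend coordinate 0 encodes age

# "Moving along the age axis" is simple vector addition in latent space.
older = z + 2.0 * age_axis
younger = z - 2.0 * age_axis

# Only the age coordinate changes; every other feature is preserved,
# which is why the decoded outputs vary predictably along one attribute.
print(older - z)
```

Decoding `older` and `younger` in an actual face model would yield the same face at different apparent ages, because the edit touches a single latent coordinate.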
Think of it like...
Like a mixing board in a recording studio where each slider controls a different aspect of sound — the combination of slider positions defines a unique output.
Related Terms
Variational Autoencoder
A generative model that learns a compressed, lower-dimensional representation (latent space) of input data and can generate new data by sampling from this learned space.
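The sampling step a VAE uses can be sketched in a few lines. This is a toy illustration with hand-picked numbers standing in for a trained encoder's output, not a working VAE:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "encoder" output for one input: a mean and log-variance per latent
# dimension (in a real VAE these come from a neural network).
mu = np.array([0.5, -1.0])
log_var = np.array([0.0, -2.0])

# Reparameterization trick: z = mu + sigma * eps keeps the sampling step
# differentiable with respect to mu and log_var during training.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Generation after training: sample directly from the prior N(0, I)
# and pass the sample through the decoder.
z_new = rng.normal(size=mu.shape)
```

Training pulls the encoder's distributions toward that standard normal prior, which is what makes sampling `z_new` from it produce coherent outputs.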
Embedding
A numerical representation of data (text, images, etc.) as a vector of numbers in a high-dimensional space. Similar items are placed closer together in this space, enabling machines to understand semantic relationships.
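"Similar items are placed closer together" is usually measured with cosine similarity. A small sketch with made-up 4-dimensional vectors (real text embeddings typically have hundreds or thousands of dimensions):

```python
import numpy as np

# Toy embeddings; the values are invented for illustration.
emb = {
    "cat": np.array([0.9, 0.8, 0.1, 0.0]),
    "dog": np.array([0.8, 0.9, 0.2, 0.1]),
    "car": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related items end up closer in the embedding space.
print(cosine(emb["cat"], emb["dog"]))   # high: related concepts
print(cosine(emb["cat"], emb["car"]))   # low: unrelated concepts
```

This nearness is what powers semantic search and recommendation: you embed a query, then return the items whose vectors have the highest cosine similarity to it.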
Representation Learning
The process of automatically discovering useful features or representations from raw data, rather than manually engineering them. Deep learning excels at learning hierarchical representations.
Generative AI
AI systems that can create new content — text, images, music, code, video — rather than just analyzing or classifying existing data. These models learn patterns from training data and generate novel outputs that resemble the original data.
Dimensionality Reduction
Techniques that reduce the number of features (dimensions) in a dataset while preserving the most important information. This makes data easier to visualize, speeds up training, and can improve model performance.
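One classic such technique is principal component analysis (PCA). A minimal numpy sketch on synthetic data whose variance is concentrated along one direction, so two components suffice:

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 samples in 5 dimensions, but nearly all variance lies along
# a single direction plus a little noise.
t = rng.normal(size=(200, 1))
X = t @ rng.normal(size=(1, 5)) + 0.05 * rng.normal(size=(200, 5))

# PCA via SVD: center the data, then project onto the top-k components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
X_reduced = Xc @ Vt[:k].T           # 5 dims -> 2 dims

# Fraction of total variance retained by the k leading components.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(X_reduced.shape, round(explained, 3))
```

Because the synthetic data is nearly one-dimensional, the two retained components capture almost all of the variance, illustrating why the reduced data remains useful for visualization and training.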