AI Glossary

The definitive dictionary for AI, Machine Learning, and Governance terminology. From Flash Attention to RAG — look up any term.

L

Labeling Platform

Software tools that manage the process of data annotation at scale, including task distribution, quality control, annotator management, and labeling interfaces.

Data Science

LangChain

A popular open-source framework for building applications powered by language models. It provides tools for prompt management, chains, agents, memory, and integration with external tools and data sources.

Artificial Intelligence

Large Language Model

A type of AI model trained on massive amounts of text data that can understand and generate human-like text. LLMs use transformer architecture and typically have billions of parameters, enabling them to perform a wide range of language tasks.

Artificial Intelligence

Latency

The time delay between sending a request to an AI model and receiving the response. In ML systems, latency includes data preprocessing, model inference, and network transmission time.

Artificial Intelligence
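A minimal sketch of measuring per-request latency in Python (the `measure_latency` helper and the stand-in workload are illustrative, not a real model call):

```python
import time

def measure_latency(fn, *args, warmup=2, runs=10):
    """Average wall-clock latency of a call, after warm-up runs."""
    for _ in range(warmup):          # warm-up excludes one-time setup costs
        fn(*args)
    start = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    return (time.perf_counter() - start) / runs  # seconds per call

# Stand-in "model": a pure-Python computation over a list
avg = measure_latency(lambda xs: sum(x * x for x in xs), list(range(10_000)))
print(f"avg latency: {avg * 1e3:.3f} ms")
```

In a real system the timed call would wrap preprocessing, inference, and network transfer together, since users experience their sum.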

Latent Space

A compressed, lower-dimensional representation of data learned by a model. Points in latent space capture the essential features of the data, and nearby points represent similar data items.

Machine Learning
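A toy NumPy illustration of the idea, using PCA as a simple stand-in for a learned latent space (autoencoders learn nonlinear versions of the same compression; all data here is synthetic):

```python
import numpy as np

# Toy data: 100 points in 5-D that actually lie near a 2-D plane
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 5))              # hidden 2-D structure
codes = rng.normal(size=(100, 2))            # "true" latent coordinates
X = codes @ basis + 0.01 * rng.normal(size=(100, 5))

# PCA via SVD: project onto the top-2 principal directions
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
latent = Xc @ Vt[:2].T                       # 2-D latent representation

# Almost all variance survives the 5-D -> 2-D compression
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(f"variance captured by 2-D latent space: {explained:.3f}")
```

Nearby rows of `latent` correspond to similar original points, which is the defining property of a useful latent space.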

Layer Normalization

A normalization technique that normalizes the inputs across the features for each individual example (rather than across the batch). It stabilizes training in transformers and RNNs.

Machine Learning
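A minimal NumPy sketch of the computation (toy inputs; `gamma` and `beta` are the learnable scale and shift):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize across the feature dimension of each example (last axis)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)   # zero mean, unit variance per example
    return gamma * x_hat + beta               # learnable scale and shift

x = np.array([[1.0, 2.0, 3.0, 4.0],
              [10.0, 20.0, 30.0, 40.0]])
out = layer_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=-1))  # ≈ 0 for each row, regardless of the row's scale
```

Note the contrast with batch normalization: statistics are computed per example, so the result does not depend on what else is in the batch.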

Leaderboard

A ranking of AI models by performance on specific benchmarks. Leaderboards drive competition and provide quick comparisons but can encourage gaming and narrow optimization.

Artificial Intelligence

Learning Rate

A hyperparameter that controls how much the model's weights are adjusted in response to errors during each training step. It determines the size of the steps taken during gradient descent optimization.

Machine Learning
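A one-parameter sketch of how the learning rate scales each gradient-descent step (the function being minimized is illustrative):

```python
# Minimize f(w) = (w - 3)^2 with gradient descent; the gradient is 2*(w - 3)
def descend(lr, steps=50, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)
        w -= lr * grad        # the learning rate scales each update
    return w

print(descend(lr=0.1))   # converges close to the minimum at w = 3
print(descend(lr=1.1))   # too large: the steps overshoot and diverge
```

This is the classic trade-off: too small a rate makes training slow, too large makes it unstable.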

LightGBM

Light Gradient Boosting Machine — Microsoft's gradient boosting framework optimized for speed and efficiency. LightGBM uses histogram-based splitting and leaf-wise growth for faster training.

Machine Learning

LIME

Local Interpretable Model-agnostic Explanations — a technique that explains individual predictions by approximating the complex model locally with a simple, interpretable model.

Machine Learning
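A toy NumPy sketch of LIME's core idea, not the `lime` library itself: perturb the instance, weight samples by proximity, and fit a weighted linear surrogate (the black-box function and all parameters are illustrative):

```python
import numpy as np

# "Black-box" model we want to explain locally
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_explain(x0, n_samples=500, scale=0.1, seed=0):
    """Fit a proximity-weighted linear surrogate around instance x0."""
    rng = np.random.default_rng(seed)
    Z = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))  # local samples
    y = black_box(Z)
    w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * scale ** 2))  # proximity
    A = np.hstack([Z - x0, np.ones((n_samples, 1))])  # linear model + intercept
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # local feature importances (gradient-like)

x0 = np.array([0.0, 2.0])
print(lime_explain(x0))  # ≈ [1, 4], the true local gradient at x0
```

The surrogate's coefficients explain this one prediction; a different instance gets its own local explanation.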

Linear Regression

The simplest regression algorithm that models the relationship between input features and a continuous output as a straight line (or hyperplane in multiple dimensions). It minimizes the sum of squared errors.

Machine Learning
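A minimal NumPy sketch: fit a line by least squares on synthetic data (the true slope and intercept here are made up for illustration):

```python
import numpy as np

# Fit y = w*x + b by minimizing the sum of squared errors (closed form)
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=50)   # true line plus noise

A = np.column_stack([x, np.ones_like(x)])            # design matrix [x, 1]
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)       # least-squares solution
print(f"slope ≈ {w:.2f}, intercept ≈ {b:.2f}")       # close to 2.5 and 1.0
```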

Llama

A family of open-weight large language models released by Meta. Llama models are available for download and customization, making them among the most widely adopted open-weight LLM families.

Artificial Intelligence

LLM-as-Judge

Using a large language model to evaluate the quality of another model's outputs, replacing or supplementing human evaluators. The judge LLM scores responses on various quality dimensions.

Artificial Intelligence

Logistic Regression

A classification algorithm that uses the sigmoid function to predict the probability of a binary outcome. Despite its name containing 'regression,' it is used for classification tasks.

Machine Learning
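A minimal from-scratch sketch on a toy 1-D dataset (in practice one would use a library such as scikit-learn; the data and hyperparameters are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny separable dataset: class 1 tends to have larger x
X = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):                       # gradient descent on the log loss
    p = sigmoid(w * X + b)                 # predicted P(y = 1)
    w -= lr * np.mean((p - y) * X)         # gradient of cross-entropy w.r.t. w
    b -= lr * np.mean(p - y)

preds = (sigmoid(w * X + b) >= 0.5).astype(int)
print(preds)  # separable data is classified correctly
```

The sigmoid squashes the linear score `w*x + b` into a probability, which is what makes this a classifier rather than a regressor.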

Long Context

The ability of AI models to process very large amounts of input text — typically 100K tokens or more — enabling analysis of entire books, codebases, or document collections.

Artificial Intelligence

Long Short-Term Memory

A type of recurrent neural network designed to learn long-term dependencies through special gating mechanisms that control information flow. LSTMs address the vanishing gradient problem of standard RNNs.

Machine Learning
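A single LSTM cell step sketched in NumPy, showing the gating mechanisms the definition refers to (weights are random toy values, not trained):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates control what is forgotten, written, and output."""
    z = W @ x + U @ h + b                           # all four gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)    # input, forget, output gates
    g = np.tanh(g)                                  # candidate cell update
    c_new = f * c + i * g                           # gated cell state (long-term memory)
    h_new = o * np.tanh(c_new)                      # gated hidden state (output)
    return h_new, c_new

d_in, d_h = 3, 4
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * d_h, d_in))
U = rng.normal(scale=0.1, size=(4 * d_h, d_h))
b = np.zeros(4 * d_h)

h = c = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):     # run the cell over a short sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)
```

The additive update `f * c + i * g` is what lets gradients flow through many time steps, easing the vanishing-gradient problem.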

LoRA

Low-Rank Adaptation — a parameter-efficient fine-tuning technique that freezes the original model weights and adds small trainable matrices to each layer. It dramatically reduces the compute and memory needed for fine-tuning.

Machine Learning
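A NumPy sketch of the LoRA parameterization (dimensions, rank, and `alpha` are illustrative; real implementations apply this inside attention layers):

```python
import numpy as np

# Frozen pretrained weight matrix (d_out x d_in)
d_out, d_in, r = 64, 64, 4                 # r is the low-rank bottleneck
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))         # frozen: never updated

# Trainable LoRA factors; B starts at zero so training begins at W exactly
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))
alpha = 8.0                                # scaling hyperparameter

def forward(x):
    return W @ x + (alpha / r) * (B @ (A @ x))   # base path + low-rank update

x = rng.normal(size=d_in)
print(np.allclose(forward(x), W @ x))      # True before any training
full, lora = d_out * d_in, r * (d_in + d_out)
print(f"trainable params: {lora} vs full fine-tune: {full}")
```

Only `A` and `B` receive gradients, which is where the compute and memory savings come from.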

Loss Function

A mathematical function that measures how far a model's predictions are from the actual correct values. The goal of training is to minimize this loss function, making predictions as accurate as possible.

Machine Learning
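A minimal example using mean squared error, one common loss function for regression (the predictions are made-up toy values):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average squared gap between prediction and truth."""
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([3.0, -0.5, 2.0, 7.0])
good = np.array([2.9, -0.4, 2.1, 6.8])
bad = np.array([5.0, 1.0, 0.0, 4.0])

print(mse(y_true, good))  # small loss: predictions are close
print(mse(y_true, bad))   # large loss: predictions are far off
```

Training adjusts the model's weights in whichever direction reduces this number.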

Low-Code AI

AI development platforms that require minimal coding, typically providing visual interfaces with optional code customization for more advanced users.

General