MLOps
Machine Learning Operations — the set of practices that combine ML, DevOps, and data engineering to deploy and maintain ML models in production reliably and efficiently.
Why It Matters
MLOps is what separates ML experiments from ML products. Without it, models degrade silently, drift goes undetected, and teams cannot iterate quickly.
Example
A CI/CD pipeline that automatically retrains a model weekly, runs evaluation tests, and deploys the new version if it outperforms the current one — all without manual intervention.
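The "deploy only if it outperforms" gate above can be sketched in a few lines. This is a hypothetical skeleton, not a real pipeline: train(), evaluate(), and the scores are placeholders for steps a real orchestrator (e.g. a CI job or scheduler) would run.

```python
# Hypothetical sketch of a retrain-evaluate-deploy gate. The stub train()
# and evaluate() stand in for real pipeline steps; scores are illustrative.

def train():
    # Placeholder: retrain on fresh data and return the candidate model.
    return {"name": "candidate"}

def evaluate(model):
    # Placeholder: run the held-out evaluation suite, return a metric.
    return 0.91 if model["name"] == "candidate" else 0.88

def retrain_and_maybe_deploy(current_score):
    candidate = train()
    candidate_score = evaluate(candidate)
    if candidate_score > current_score:
        # Candidate wins: this is where a real pipeline would promote it.
        return candidate, candidate_score
    # Candidate loses: keep the current model, no manual intervention needed.
    return None, current_score
```

The key design point is that the comparison is automated: a human never has to eyeball metrics before each release.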
Think of it like...
DevOps, but for ML: the infrastructure and practices that keep your AI systems running smoothly in production, not just in notebooks.
Related Terms
Data Pipeline
An automated workflow that extracts data from sources, transforms it through processing steps, and loads it into a destination for use. In ML, data pipelines ensure consistent data flow from raw sources to model training.
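The extract-transform-load flow described here can be illustrated with a toy pipeline. The data and steps are invented for illustration; real pipelines pull from databases or event streams and write to a warehouse.

```python
# Minimal ETL sketch with hypothetical data. Real sources/destinations
# would be databases, object stores, or streams.

def extract():
    # Raw records as they arrive from a source system (strings, untyped).
    return [{"age": "34", "clicked": "1"}, {"age": "51", "clicked": "0"}]

def transform(rows):
    # Cast raw strings into the types model training expects.
    return [{"age": int(r["age"]), "clicked": int(r["clicked"])} for r in rows]

def load(rows, destination):
    # Append the cleaned rows to the training destination.
    destination.extend(rows)

training_table = []
load(transform(extract()), training_table)
```

Because the same transform runs on every batch, the model always sees consistently typed data.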
Model Serving
The infrastructure and process of deploying trained ML models to production where they can receive requests and return predictions in real time. It includes scaling, load balancing, and version management.
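The version-management part of serving can be sketched as a registry that routes requests to the active model version. This is a toy in-process stand-in; real servers load artifacts and sit behind load balancers, and the lambda "models" here are placeholders.

```python
# Toy model server showing version management: register several versions,
# route predictions to whichever is active. Models are plain callables
# standing in for real loaded artifacts.

class ModelServer:
    def __init__(self):
        self.versions = {}
        self.active = None

    def register(self, tag, model, activate=False):
        self.versions[tag] = model
        if activate or self.active is None:
            self.active = tag  # first registered version becomes active

    def predict(self, features):
        # All traffic goes to the active version.
        return self.versions[self.active](features)

server = ModelServer()
server.register("v1", lambda x: sum(x) > 1.0)                  # old threshold
server.register("v2", lambda x: sum(x) > 0.5, activate=True)   # rollout
```

Keeping old versions registered makes rollback a one-line change of the active tag.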
Model Monitoring
The practice of continuously tracking an ML model's performance, predictions, and input data in production to detect degradation, drift, or anomalies after deployment.
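A minimal drift check makes the idea concrete: compare a production window of a feature against its training baseline and flag large shifts. This is a toy z-score check with illustrative numbers; production monitors typically use statistical tests such as PSI or the KS test.

```python
# Toy drift detector: flag when the mean of a production feature window
# moves more than z_threshold standard deviations from the training
# baseline. Threshold and data are illustrative.
from statistics import mean, stdev

def drifted(baseline, window, z_threshold=3.0):
    mu, sigma = mean(baseline), stdev(baseline)
    shift = abs(mean(window) - mu) / (sigma or 1.0)
    return shift > z_threshold

training_values = [1.0, 1.2, 0.9, 1.1, 1.0]   # feature as seen in training
```

A check like this runs continuously so degradation is caught before users notice it.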
Feature Store
A centralized repository for storing, managing, and serving machine learning features. It ensures consistent feature computation between training and serving, and enables feature reuse across teams.
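The training/serving consistency point can be shown with a toy store: each feature is registered once as a named function, so both training batches and online requests compute it through the same code path. Names and shapes here are invented for illustration.

```python
# Toy feature store: features are registered once, then every consumer
# (training or serving) computes them identically. Feature names and the
# user record are hypothetical.

class FeatureStore:
    def __init__(self):
        self.features = {}

    def register(self, name, fn):
        self.features[name] = fn

    def get_vector(self, entity, names):
        # Same code path whether called from a training job or a server.
        return [self.features[n](entity) for n in names]

store = FeatureStore()
store.register("age_years", lambda user: user["age"])
store.register("is_weekend_signup",
               lambda user: user["signup_day"] in ("Sat", "Sun"))
```

Registering features centrally is also what enables reuse: another team asks for the vector by name instead of reimplementing the computation.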
Deployment
The process of making a trained ML model available for use in production applications. Deployment involves packaging the model, setting up serving infrastructure, and establishing monitoring.
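The three steps named here (package, set up serving, establish monitoring) can be sketched as one deploy function. Everything below is a hypothetical stand-in: pickle substitutes for a real packaging format (ONNX, a container image), and the endpoint scheme and registry are invented.

```python
# Hypothetical deployment step: package the artifact, record its serving
# endpoint, and mark monitoring as attached. pickle and the URL scheme
# are illustrative stand-ins for real packaging and routing.
import io
import pickle

def package(model):
    buf = io.BytesIO()
    pickle.dump(model, buf)
    return buf.getvalue()

def deploy(model, registry, name):
    registry[name] = {
        "artifact": package(model),                 # packaged model
        "endpoint": f"/models/{name}/predict",      # serving route (invented)
        "monitoring": "enabled",                    # monitoring hook attached
    }

registry = {}
deploy({"weights": [0.1, 0.2]}, registry, "churn-v3")
```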