AI Supply Chain
The end-to-end ecosystem of components needed to build and deploy AI, from chip manufacturing and cloud infrastructure through data, models, tools, and applications.
Why It Matters
Understanding the AI supply chain reveals dependencies and bottlenecks — from TSMC chip fabrication to NVIDIA GPUs to cloud providers to model developers.
Example
Chips (TSMC) → GPUs (NVIDIA) → Cloud (AWS/Azure/GCP) → Training frameworks (PyTorch) → Base models (OpenAI/Anthropic) → Tools (LangChain) → Applications (enterprise products).
Think of it like...
Like the automotive supply chain — raw materials flow through component manufacturing to assembly to dealerships — a disruption at any point affects the whole chain.
Related Terms
GPU
Graphics Processing Unit — originally designed for rendering graphics, GPUs excel at the parallel mathematical operations needed for training and running AI models. They are the primary hardware for modern AI.
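The "parallel mathematical operations" here are mostly large matrix multiplies. A minimal sketch of that workload, using NumPy (which runs on the CPU; GPU libraries expose the same operation, which is what makes GPUs so effective for it):

```python
import numpy as np

# A single matrix multiply bundles millions of independent multiply-adds,
# which is exactly the data-parallel workload GPUs are built for.
# Sizes below are illustrative, not from any particular model.
batch, d_in, d_out = 32, 512, 256
x = np.random.rand(batch, d_in)   # a batch of input activations
w = np.random.rand(d_in, d_out)   # one layer's weight matrix
y = x @ w                         # batch * d_in * d_out multiply-adds, all independent
print(y.shape)                    # (32, 256)
```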
Compute
The computational resources (processing power, memory, time) required to train or run AI models. Compute is measured in FLOPs (floating-point operations) and is a primary constraint and cost in AI development.
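A common back-of-envelope estimate for training compute is FLOPs ≈ 6 × parameters × training tokens. A minimal sketch, with illustrative (hypothetical) model figures:

```python
def training_flops(num_params: float, num_tokens: float) -> float:
    """Approximate total floating-point operations to train a model,
    using the common heuristic FLOPs ~= 6 * parameters * tokens."""
    return 6 * num_params * num_tokens

# Hypothetical example: a 7-billion-parameter model trained on 1 trillion tokens.
flops = training_flops(7e9, 1e12)
print(f"{flops:.2e} FLOPs")  # 4.20e+22 FLOPs
```

Estimates like this are how compute is budgeted (and priced) long before any GPU is rented.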
Cloud Computing
On-demand access to computing resources (servers, storage, databases, AI services) over the internet. Cloud providers like AWS, Azure, and GCP offer scalable infrastructure without owning physical hardware.
Deployment
The process of making a trained ML model available for use in production applications. Deployment involves packaging the model, setting up serving infrastructure, and establishing monitoring.
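The three deployment steps named above — packaging, serving, monitoring — can be sketched with a toy wrapper class. This is a minimal illustration, not a real serving stack; the stand-in "model" is just a callable:

```python
import time

class ModelServer:
    """Toy sketch of a deployed model: a packaged artifact behind a
    predict endpoint, with basic monitoring counters."""

    def __init__(self, model):
        self.model = model        # packaging: the trained artifact to serve
        self.request_count = 0    # monitoring: how many requests served
        self.latencies = []       # monitoring: per-request latency samples

    def predict(self, payload: dict) -> dict:
        start = time.perf_counter()
        result = self.model(payload["input"])            # serving
        self.latencies.append(time.perf_counter() - start)
        self.request_count += 1
        return {"prediction": result}

# Usage with a trivial stand-in model:
server = ModelServer(lambda x: x * 2)
print(server.predict({"input": 21}))  # {'prediction': 42}
```

Real deployments replace the callable with a serialized model and put the predict path behind an HTTP or gRPC endpoint, but the shape of the responsibilities is the same.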