Inferencing differs from training in that models are often quantized to reduced precision to save on the memory and compute required to process requests.
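As a minimal sketch (my own illustration, not from any source cited here), reduced-precision loading with Hugging Face transformers looks like this; the model name and dtype are arbitrary choices:

```python
# Hedged sketch: load a model at reduced precision for inference.
# fp16 halves weight memory vs. fp32; the model name is just an example.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "gpt2",                      # assumption: any causal LM works the same way
    torch_dtype=torch.float16,   # store weights in 16-bit instead of 32-bit
)
print(next(model.parameters()).dtype)  # torch.float16
```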
From the DeepSpeed-FastGen paper, inferencing has two phases (a minimal code sketch follows the list):
- Prefill (prompt processing):
  - input is the user-provided text (the prompt)
  - output is a key-value (KV) cache for attention
  - compute-bound; cost scales with the input length
- Decode (token generation):
  - appends the latest token's keys and values to the KV cache, then generates the next token
  - memory bandwidth-bound; shows approximately O(1) compute per generated token
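Here's a minimal sketch of the two phases using Hugging Face transformers. The model, prompt, and greedy decoding are arbitrary choices for illustration; real serving engines batch and schedule these steps very differently.

```python
# Hedged sketch: prefill once over the whole prompt, then decode one token at a time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM behaves the same way here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "Inference has two phases:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # Prefill: process the whole prompt at once; cost grows with prompt length.
    # The returned past_key_values is the KV cache for every prompt token.
    out = model(input_ids, use_cache=True)
    kv_cache = out.past_key_values
    next_token = out.logits[:, -1].argmax(dim=-1, keepdim=True)

    # Decode: one token per step; each step reuses and extends the KV cache,
    # so per-token compute stays roughly constant (memory-bandwidth bound).
    generated = [next_token]
    for _ in range(16):
        out = model(next_token, past_key_values=kv_cache, use_cache=True)
        kv_cache = out.past_key_values
        next_token = out.logits[:, -1].argmax(dim=-1, keepdim=True)
        generated.append(next_token)

print(tokenizer.decode(torch.cat(generated, dim=1)[0]))
```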
Until I have time to summarize it, I recommend reading Efficient Memory Management for Large Language Model Serving with PagedAttention by Kwon et al. to understand how GPU memory is consumed during inferencing. The paper explains the role of the key-value (KV) cache, which stores the attention keys and values of previously processed tokens.
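As a rough back-of-the-envelope illustration of why the KV cache dominates GPU memory, here's the standard per-token size calculation, using Llama-2-7B-like shapes as an assumed example (32 layers, 32 KV heads, head dimension 128, fp16 values):

```python
# Hedged sketch: KV cache size per token = 2 (keys + values) x layers x KV heads
# x head_dim x bytes per element. The model shapes below are assumptions.
num_layers = 32
num_kv_heads = 32
head_dim = 128
bytes_per_value = 2  # fp16

bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_value
print(f"{bytes_per_token / 1024:.0f} KiB per token")                           # ~512 KiB
print(f"{bytes_per_token * 4096 / 2**30:.1f} GiB for a 4096-token sequence")   # ~2.0 GiB
```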
Implementation
Open-source
vLLM is an open-source inference engine for serving LLMs; NVIDIA NIM is a set of prebuilt inference microservices (not open-source) that package engines such as vLLM behind a standard API. So they don't do exactly the same thing; they operate at different layers. A minimal vLLM usage sketch follows the list.
- llm-d is a “distributed inference serving stack” which incorporates vLLM.
- vLLM Production Stack is a distributed “inference stack on top of vLLM.”
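For reference, a minimal vLLM offline-inference sketch; the model name and sampling settings are arbitrary examples, not anything the stacks above prescribe:

```python
# Hedged sketch: minimal offline inference with vLLM's Python API.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # assumption: any supported HF model id
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["What is PagedAttention?"], params)
for out in outputs:
    print(out.outputs[0].text)
```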
Azure
Azure AI Foundry has an inferencing-as-a-service feature. I'm not sure how it works as of 2025; a hedged client-side sketch follows.
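Purely as an assumption on my part, a client-side call might look something like the azure-ai-inference SDK below; the endpoint URL and key are placeholders, and whether this SDK is the right entry point for AI Foundry's feature is itself a guess:

```python
# Hedged sketch: calling a hosted chat-completions endpoint with azure-ai-inference.
# Endpoint and key are placeholders; adjust to whatever the deployment provides.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-endpoint>.inference.ai.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                # placeholder
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Explain the prefill phase of LLM inference."),
    ],
)
print(response.choices[0].message.content)
```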
ChatGPT
ChatGPT stores conversations, prompts, and metadata in Azure Cosmos DB.[^1]
ChatGPT is built on Azure Kubernetes Service.[^1]
Footnotes
[^1]: Scott Guthrie's keynote at Microsoft Build 2025 - Unpacking the tech