The Link That Never Was: Intel Optane PMem + LLM KV Cache Offload

Intro The world of LLMs is dominated by one expensive bottleneck: GPU memory. This directly limits how many models can fit and how fast they can generate text, especially in multi-turn conversations or when processing long contexts. The solution is KV cache offloading (e.g., with LMCache). One technology was perfectly suited to supercharge it: Intel …
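
As a rough back-of-the-envelope illustration (my own numbers, not from the post): the sketch below assumes a Llama-2-7B-like configuration purely to show how quickly the KV cache alone eats GPU memory, which is exactly the pressure that offloading to cheaper tiers is meant to relieve.

```python
# Rough KV cache size: 2 (K and V) * layers * kv_heads * head_dim
# * seq_len * batch * bytes per element.
# The defaults below assume a Llama-2-7B-like model (32 layers, 32 KV heads,
# head_dim 128, fp16) and are illustrative only.
def kv_cache_bytes(layers=32, kv_heads=32, head_dim=128,
                   seq_len=4096, batch=8, bytes_per_elem=2):
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

gib = kv_cache_bytes() / 2**30
print(f"KV cache for 8 concurrent 4k-token requests: ~{gib:.0f} GiB")
# ~16 GiB of KV cache alone, before model weights -- hence the appeal of
# spilling colder cache entries to cheaper memory tiers instead of the GPU.
```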

vLLM Production Stack on Azure AKS with Terraform🧑🏼‍🚀

Intro The vLLM Production Stack is designed to work across any cloud provider with Kubernetes. After covering AWS EKS, today we’re deploying the vLLM Production Stack on Azure AKS with the same Terraform approach. This guide shows you how to deploy the same production-ready LLM serving environment on Azure, with Azure-specific optimizations. We’ll cover network architecture, certificate …

Meet Nebius: The Cloud Built for the AI Era

Intro Every once in a while, a new cloud platform shows up that doesn’t just offer “more compute.” It rethinks what the cloud should look like in an AI-first world. That’s what caught my attention with Nebius, a European-born cloud designed from the ground up for high-performance, AI-centric workloads. One that just closed a …

vLLM Production Stack on Amazon EKS with Terraform🧑🏼‍🚀

Intro Deploying vLLM manually is fine for a lab, but running it in production means dealing with Kubernetes, autoscaling, GPU orchestration, and observability. That’s where the vLLM Production Stack comes in – a Terraform-based blueprint that delivers production-ready LLM serving with enterprise-grade foundations. In this post, we’ll deploy it on Amazon EKS, covering everything from …

LLM Embeddings Explained Like I’m 5

Intro We often hear about RAG (Retrieval-Augmented Generation) and vector databases that store embeddings, but we rarely stop to recall what exactly embeddings are used for and how they work. In this post, we’ll break down how embeddings work – in the simplest way possible (yes, like you’re 5 🧠📎). I. What is an Embedding? Embeddings …
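
As a teaser, here is a minimal sketch of the core idea, assuming the sentence-transformers library and the all-MiniLM-L6-v2 model (my choices for illustration, not necessarily what the post uses): turn sentences into vectors, then compare them with cosine similarity.

```python
# Minimal embedding sketch: text -> vector -> similarity.
# Assumes `pip install sentence-transformers`; the model choice is arbitrary.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # maps text to 384-dim vectors
sentences = [
    "A cat sits on the mat.",
    "A kitten rests on a rug.",
    "Stock prices fell today.",
]
vecs = model.encode(sentences)  # shape: (3, 384)

def cosine(a, b):
    # 1.0 = pointing the same way (similar meaning), ~0 = unrelated
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vecs[0], vecs[1]))  # high: both describe a cat resting
print(cosine(vecs[0], vecs[2]))  # low: unrelated topics
```

This nearest-in-meaning property is what RAG and vector databases build on: store the vectors, then retrieve whichever ones sit closest to the query's vector.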