Meet Nebius: The Cloud Built for the AI Era

Intro Every once in a while, a new cloud platform shows up that doesn’t just offer “more compute”. It rethinks what the cloud should look like in an AI-first world. That’s what caught my attention with Nebius, a European-born cloud designed from the ground up for high-performance, AI-centric workloads. One that just closed a …

vLLM Production Stack on Amazon EKS with Terraform 🧑🏼‍🚀

Intro Deploying vLLM manually is fine for a lab, but running it in production means dealing with Kubernetes, autoscaling, GPU orchestration, and observability. That’s where the vLLM Production Stack comes in – a Terraform-based blueprint that delivers production-ready LLM serving with enterprise-grade foundations. In this post, we’ll deploy it on Amazon EKS, covering everything from …

LLM Embeddings Explained Like I’m 5

Intro We often hear about RAG (Retrieval-Augmented Generation) and vector databases that store embeddings, but we often forget what exactly embeddings are used for and how they work. In this post, we’ll break down how embeddings work – in the simplest way possible (yes, like you’re 5 🧠📎). I. What is an Embedding? Embeddings …
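The core idea from that post can be sketched in a few lines: an embedding is just a list of numbers, and "similar meaning" becomes "vectors pointing in similar directions," usually measured with cosine similarity. The tiny hand-made vectors below are purely illustrative, not output from a real embedding model.

```python
import math

# Toy illustration (NOT a real model): each word is a small hand-made
# vector. Real embedding models produce vectors with hundreds or
# thousands of dimensions, but the comparison logic is the same.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.95],
}

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of lengths.
    # 1.0 means "same direction" (similar meaning), near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "cat" and "dog" point in a similar direction; "cat" and "car" do not.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # close to 1
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # much lower
```

This is exactly what a vector database does at scale: store the vectors, then rank stored items by similarity to a query vector.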

vLLM production-stack: LLM inference for Enterprises (Part 1)

Intro If you’ve played with vLLM locally, you already know how fast it can crank out tokens. But the minute you try to serve real traffic – multiple models, thousands of chats – you hit the same pain points the community kept reporting: ⚠️ Pain point What you really want High GPU bill Smarter routing + …
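To make the "smarter routing" pain point concrete, here is a minimal sketch of one common strategy: least-loaded routing across serving replicas, where each request goes to the replica with the fewest in-flight requests. The `Replica` class and `pick_replica` function are hypothetical illustrations, not the actual vLLM production-stack router, which is considerably more sophisticated.

```python
from dataclasses import dataclass

# Hypothetical illustration of least-loaded routing; the real
# vLLM production-stack router also considers things like
# prefix-cache locality, not just in-flight request counts.
@dataclass
class Replica:
    url: str
    active_requests: int = 0

def pick_replica(replicas):
    # Route the next request to the replica with the fewest
    # in-flight requests, spreading load instead of overloading one GPU.
    return min(replicas, key=lambda r: r.active_requests)

# Example: replica 0 is busier, so the router picks replica 1.
replicas = [
    Replica("http://vllm-0:8000", active_requests=3),
    Replica("http://vllm-1:8000", active_requests=1),
]
chosen = pick_replica(replicas)
print(chosen.url)
```

Even this naive policy beats blind round-robin when request lengths vary wildly, which is exactly the situation with LLM chat traffic.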

vLLM for beginners: Deployment Options (Part III)

Intro In Part 2 of our vLLM for beginners series, we explored performance features like PagedAttention, attention backends, and prefill/decode optimization. In this final part, we’ll shift from theory to practice, covering how to deploy vLLM across different environments, from source builds to Docker containers (K8s deployment will be covered separately). 💡In this series, we aim to provide …