vLLM Production Stack on Amazon EKS with Terraform 🧑🏼‍🚀

Intro Deploying vLLM manually is fine for a lab, but running it in production means dealing with Kubernetes, autoscaling, GPU orchestration, and observability. That's where the vLLM Production Stack comes in – a Terraform-based blueprint that delivers production-ready LLM serving with enterprise-grade foundations. In this post, we'll deploy it on Amazon EKS, covering everything from …

LLM Embeddings Explained Like I'm 5

Intro We often hear about RAG (Retrieval-Augmented Generation) and vector databases that store embeddings, but we rarely stop to ask what exactly embeddings are used for and how they work. In this post, we'll break down how embeddings work – in the simplest way possible (yes, like you're 5 🧠📎). I. What is an Embedding? Embeddings …
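The core intuition behind that post can be sketched in a few lines of Python. The vectors below are made-up toy values, not real model outputs: an embedding is just a list of numbers, and similar things get vectors that point in similar directions, which we can measure with cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real models use hundreds of dimensions)
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.2]
car = [0.1, 0.1, 0.9]

print(cosine_similarity(cat, kitten))  # close to 1.0: related concepts
print(cosine_similarity(cat, car))     # much lower: unrelated concepts
```

This is exactly the comparison a vector database performs at scale when a RAG pipeline retrieves the chunks most similar to a query.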

vLLM production-stack: LLM Inference for Enterprises (Part 1)

Intro If you've played with vLLM locally, you already know how fast it can crank out tokens. But the minute you try to serve real traffic with multiple models and thousands of chats, you hit the same pain points the community kept reporting: ⚠️ Pain point What you really want High GPU bill Smarter routing + …

Safety Detectives Interview with CloudThrill CEO Kosseila Hd

Our Founder & CEO, Kosseila Hd, recently sat down with SafetyDetectives to share CloudThrill's vision on why the real AI revolution lies in infrastructure rather than just apps, and what that means for organizations building and owning their private, scalable AI.

Recursive `.gitignore`: When Ignoring Goes Too Far

Intro You might have heard of the “recursive .gitignore symptom”, or maybe you haven't – but if you work with Git long enough, there's a good chance you'll run into it. It's one of those sneaky issues that can cause unexpected behavior in your repositories, making files disappear from Git tracking when you least expect it. Imagine …
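The heart of the symptom fits in a hypothetical `.gitignore` fragment: a pattern with no leading slash matches at every directory depth, so one careless line can ignore files far deeper in the tree than intended.

```
# Hypothetical .gitignore
build     # no leading slash: ignores build, src/build, docs/build, ...
/build    # leading slash anchors it: only the top-level build is ignored
```

`git check-ignore -v <path>` is the quickest way to find out which pattern, in which `.gitignore` file, is swallowing a path you expected to be tracked.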