vLLM for beginners: The Fundamentals

Intro Last year, I dove deep into Ollama inference, which led to building and speaking about Ollama Kubernetes deployments, along with rich documentation in my ollama_lab repo and a quantization article. This year's CloudThrill focus is vLLM inference, a next-level beast from a model-serving standpoint. Exploring multiple inference options is time-intensive …

Ollama Deployment on a Civo K8s Cluster with Terraform

Intro Tired of sharing your IP & sensitive data with OpenAI? What if you could run your own private AI chatbot powered by local inference & LLMs, with 100% data privacy, all inside a Kubernetes cluster? Today we'll show you how to deploy an end-to-end LLM inference setup on a Civo Cloud Talos K8s cluster with …

kv_cache Explained: How It Enhances vLLM Inference

Intro Too often, machine learning concepts are explained like a mathematician talking to other mathematicians, leaving the rest of us scratching our heads. One of those is kv_cache, a key technique that makes large language models run faster and more efficiently. This blog is my attempt to break it down simply, without drowning in dark math :). …
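To make the idea concrete before diving into the full article, here is a minimal sketch (toy code, not vLLM's actual implementation) of what a KV cache does during autoregressive decoding: each step computes key/value vectors only for the newest token and appends them to the cache, instead of recomputing them for the whole sequence.

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for a single query vector.
    scores = q @ K.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

d = 8                                  # head dimension (toy size)
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

K_cache = np.empty((0, d))             # keys computed so far
V_cache = np.empty((0, d))             # values computed so far

for step in range(5):                  # autoregressive decode loop
    x = rng.standard_normal(d)         # stand-in for the new token's hidden state
    # Only the NEW token's key/value are computed; earlier ones are reused.
    K_cache = np.vstack([K_cache, x @ Wk])
    V_cache = np.vstack([V_cache, x @ Wv])
    out = attention(x @ Wq, K_cache, V_cache)
    print(f"step {step}: cache holds {K_cache.shape[0]} key/value pairs")
```

Without the cache, every decode step would recompute keys and values for all previous tokens, which is exactly the quadratic waste the article unpacks.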

CloudThrill Joins NVIDIA Inception

Intro CloudThrill has joined NVIDIA Inception, a program that nurtures startups revolutionizing industries with technological advancements. What we do: We are focused on helping organizations deploy privacy-first, cost-efficient AI infrastructure with open-source LLMs and container-native technologies. Our services blend deep expertise in cloud-native architecture, MLOps, and scalable inference to empower businesses to innovate securely and …


How to Quantize AI Models with Ollama CLI

Intro You’ve probably fired up ollama run some-cool-model tons of times, effortlessly pulling models from Ollama’s repo or even directly from Hugging Face. But have you ever wondered how those CPU-friendly GGUF quantized models actually land on places like Hugging Face in the first place? What if I told you you could contribute back with tools you might already be …
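As a taste of what the article walks through: recent Ollama releases let ollama create quantize a full-precision GGUF on the fly via a --quantize flag (my recollection of the flag; verify with ollama create --help). A minimal sketch wrapping the CLI from Python, with hypothetical model and file names:

```python
import subprocess

# Hypothetical names for illustration; point these at your own files.
MODELFILE = "Modelfile"        # e.g. a one-liner: FROM ./my-model-f16.gguf
TARGET = "my-model:q4_K_M"     # tag for the quantized variant

# Ask Ollama to quantize the full-precision base model to q4_K_M while
# creating the new model tag (assumes the --quantize flag in recent releases).
subprocess.run(
    ["ollama", "create", "--quantize", "q4_K_M", TARGET, "-f", MODELFILE],
    check=True,
)
```

The resulting quantized model can then be pushed to a registry, which is exactly the contribute-back path the article describes.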