Hey AI heads! Join us for the very first Tech Beats Live, hosted by Kosseila aka @CloudDude, from @CloudThrill.
This chill & laid-back livestream will unpack LLM quantization:
✅ WHY it matters
✅ HOW it works
✅ Enterprise (vLLM) vs Consumer (@Ollama) tradeoffs
✅ and WHERE it's going next.
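For anyone new to the topic, quantization just means storing model weights in fewer bits. A minimal, illustrative sketch (not from the stream, and far simpler than what vLLM or llama.cpp actually do) of symmetric per-tensor int8 quantization with NumPy:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = np.abs(weights).max() / 127.0  # one fp32 scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate fp32 weights: q * scale."""
    return q.astype(np.float32) * scale

w = np.array([0.02, -1.5, 0.7, 3.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# int8 storage is 4x smaller than fp32; per-weight rounding error
# stays within about half a quantization step (scale / 2)
```

Real quant schemes (GGUF K-quants, GPTQ, AWQ, FP8) refine this same idea with per-group scales, calibration data, and smarter rounding.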
We'll be joined by two incredible guest stars to talk about Enterprise vs Consumer quants:
🔹 Eldar Kurtić, bringing the enterprise perspective with vLLM.
🔹 Colin Kealty, aka Bartowski, behind the most-downloaded GGUF quants on Hugging Face. Come learn, and have some fun!
Chapters:
(00:00) Host Introduction
(04:07) Eldar Intro
(07:33) Bartowski Intro
(13:04) What's Quantization?
(16:19) Why LLM Quantization Matters
(20:39) Training vs Inference: "The New Deal"
(27:46) Biggest misconception about quantization
(33:22) Enterprise Quantization in production (vLLM)
(48:48) Consumer LLMs and quantization (Ollama, llama.cpp, GGUF) “LLMs for the people”
(01:06:45) BitNet 1-Bit Quantization from Microsoft
(01:28:14) How long it takes to quantize a model (Llama 3 70B): GGUF or llm-compressor
(01:34:23) What is the importance matrix (imatrix), and why do people confuse it with IQ quantization?
(01:39:36) What's LoRA and QLoRA?
(01:42:36) What is Sparsity?
(01:47:42) What is Distillation?
(01:52:34) Extreme quantization (Unsloth) of big models (DeepSeek) at 2 bits with a 70% size cut
(01:57:27) Will future models (Llama 5) be trained on FP4 tensor cores? If so, why quantize?
(02:02:15) The future of LLMs on edge Devices (Google AI edge)
(02:08:00) How to evaluate the quality of a quantized model?
(02:26:09) Hugging Face's role in the world of LLMs and quantization
(02:36:41) LocalLLaMA subreddit down (moderator goes bananas)
(02:40:11) Guests' hopes for the future of LLMs and AI in general

Check out the quantization blog: https://cloudthrill.ca/llm-quantizati…
#AI #LLM #Quantization #TechBeatsLive #LocalLLaMA #vLLM #Ollama