Lecture 10: Inference & Deployment
After training an LLM, running inference and serving the model efficiently is a critical next step. Lecture 10 covers techniques such as model compression, quantization, and distillation, which improve inference efficiency and reduce resource consumption. It also discusses common deployment options and their challenges, including cloud deployment, on-premise/local deployment, and edge computing. Together, these techniques make it practical to apply a trained model in real-world settings.
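To make the quantization idea concrete, here is a minimal sketch of symmetric per-tensor int8 post-training quantization: each weight is mapped to an 8-bit integer via a single scale factor, so that w ≈ scale × q. The function names and the toy weight values are illustrative, not from the lecture; real systems typically use per-channel or per-group scales and calibration data.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q, q in [-127, 127]."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [qi * scale for qi in q]

# Toy example: quantize then dequantize a small weight vector.
weights = [0.5, -1.27, 0.031, 0.0]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
```

The round trip stores each weight in 1 byte instead of 4 (for fp32), at the cost of a small reconstruction error bounded by scale / 2 per weight; this is the basic storage/accuracy trade-off that more sophisticated schemes (per-channel scales, GPTQ, AWQ) refine.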