07: Emerging Properties in Self-Supervised Vision Transformers (DINO)
On this page

1. Preliminary
2. DINO
3. Summary
4. Key Concepts
5. Q & A
6. Related Resources & Further Reading
Self-Supervised Learning
Representation Learning

A label-free self-supervised learning method that trains a Vision Transformer through teacher–student self-distillation, from which semantically consistent global representations and clean attention-based segmentation emerge on their own.
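The teacher–student self-distillation described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's actual implementation: the temperature and momentum values are assumptions loosely following the DINO paper, and the real method operates on multi-crop ViT projection-head outputs with stop-gradient on the teacher.

```python
import numpy as np

def softmax(x, temp):
    """Temperature-scaled softmax; lower temp sharpens the distribution."""
    z = x / temp
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dino_loss(student_out, teacher_out, center, t_s=0.1, t_t=0.04):
    """Cross-entropy between the teacher's centered, sharpened distribution
    and the student's distribution. In the real method the teacher branch
    is under stop-gradient; here we only compute the scalar loss."""
    p_t = softmax(teacher_out - center, t_t)      # centering avoids collapse
    log_p_s = np.log(softmax(student_out, t_s))   # student log-probs
    return float(-(p_t * log_p_s).sum(axis=-1).mean())

def ema_update(teacher_w, student_w, m=0.996):
    """Teacher parameters track the student via an exponential moving average
    rather than by gradient descent."""
    return m * teacher_w + (1.0 - m) * student_w

def update_center(center, teacher_out, m=0.9):
    """The center itself is an EMA of teacher outputs over the batch."""
    return m * center + (1.0 - m) * teacher_out.mean(axis=0)
```

The two collapse-avoidance mechanisms are visible directly in the code: centering (subtracting a running mean from the teacher logits) prevents one dimension from dominating, while the low teacher temperature sharpens the targets so the output does not degenerate to uniform.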