<aside> <img src="/icons/table_gray.svg" alt="/icons/table_gray.svg" width="40px" />

Table of Contents

</aside>

Lecture 1: Handling Big Models

Quantization shrinks a model to a smaller size so that anyone can run it on their own computer, ideally with minimal performance degradation.

Model Compression Techniques (Not covered in the course):

1. Pruning:

Remove connections, nodes, or weights that contribute little to the model's output.

2. Knowledge Distillation

Train a smaller model (the student) to mimic the original model (the teacher).

Quantization

Idea: Store the parameters of the model in lower precision (for example, FP32 → INT8).
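As a rough illustration of this idea, here is a minimal sketch of symmetric per-tensor quantization from FP32 to INT8 using NumPy. The helper names (`quantize_to_int8`, `dequantize`) are hypothetical, chosen just for this example; real libraries implement more refined schemes.

```python
import numpy as np

def quantize_to_int8(weights: np.ndarray):
    # Hypothetical helper: symmetric per-tensor quantization.
    # Map the largest absolute value onto the INT8 range [-127, 127].
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an FP32 approximation of the original weights.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 3.4, -3.4], dtype=np.float32)
q, s = quantize_to_int8(w)
w_hat = dequantize(q, s)  # close to w, but each entry stored in 1 byte
```

Each INT8 value takes 1 byte instead of the 4 bytes of an FP32 value, so the stored tensor is roughly 4× smaller at the cost of a small rounding error (at most about half the scale per entry).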

Lecture 2: Data Types and Sizes

Integer Data Type:

Unsigned Integer (always positive): an n-bit unsigned integer stores values in the range [0, 2ⁿ − 1] (e.g. uint8: 0 to 255).

Signed Integer: in two's complement, an n-bit signed integer stores values in the range [−2ⁿ⁻¹, 2ⁿ⁻¹ − 1] (e.g. int8: −128 to 127).
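The ranges above can be checked with a short computation. This is a small sketch with illustrative helper names, not part of the course material:

```python
def unsigned_range(bits: int):
    # n-bit unsigned integer: all bits encode magnitude.
    return 0, 2**bits - 1

def signed_range(bits: int):
    # n-bit signed integer in two's complement:
    # one bit pattern block is given to negative values.
    return -(2**(bits - 1)), 2**(bits - 1) - 1

print(unsigned_range(8))  # uint8: (0, 255)
print(signed_range(8))    # int8: (-128, 127)
```

Note the asymmetry of the signed range: there is one more negative value than positive because zero takes a slot on the non-negative side.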