🐶

Pet Breeds

Anki

Aim: create a model that can classify an image of a pet by its breed

Misc

FastAI: what is notable about lists
Uses a special L class with additional features

Setup

FastAI: what is a DataBlock?
A container for building Datasets and DataLoaders
FastAI: what class enables easy building of Datasets and DataLoaders
DataBlock
FastAI: what arguments must be passed to DataBlock
DataBlock(blocks, get_items, splitter)
FastAI: what additional arguments may be useful to pass to DataBlock?
DataBlock(..., get_x, get_y, item_tfms, batch_tfms)
FastAI: what is the key difference between a DataBlock and a Datasets/DataLoaders?
The DataBlock doesn't specify the location of the data, only what to do with it
FastAI: resizing image transform
Resize(side_len) (square) or Resize((height, width))
FastAI: multi-transform utility function for images
aug_transforms
FastAI: create a Datasets/DataLoaders from a DataBlock
data_block.datasets(path) or data_block.dataloaders(path)
FastAI: debug DataBlock
data_block.summary(path)
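
Putting the cards above together, a minimal DataBlock pipeline sketch (the pet dataset, regex labeller, and transform sizes follow the fastai book's example and are illustrative, not the only option):

```python
from fastai.vision.all import *

# Declarative pipeline: says what to do with the data, not where it lives
pets = DataBlock(
    blocks=(ImageBlock, CategoryBlock),                 # input type, target type
    get_items=get_image_files,                          # how to collect the items
    splitter=RandomSplitter(valid_pct=0.2, seed=42),    # train/validation split
    get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),  # label from filename
    item_tfms=Resize(460),                              # per-item transform (CPU)
    batch_tfms=aug_transforms(size=224, min_scale=0.75) # per-batch transforms (GPU)
)

# The data source is only supplied at this point
path = untar_data(URLs.PETS) / 'images'
dls = pets.dataloaders(path)

# pets.summary(path) walks the whole pipeline and reports where it breaks
```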

Training

FastAI: base helper class for analysing model predictions
Interpretation (e.g. ClassificationInterpretation)
FastAI: how do you use the learning rate finder?
learner.lr_find()
FastAI: what does the learning rate finder do
Starts from a very small lr and increases it after each mini-batch until the loss begins to rise, then plots loss against learning rate
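
The mechanism can be sketched without fastai: grow the learning rate geometrically each step and stop once the loss clearly exceeds the best seen so far (a toy simulation; the growth factor and the divergence threshold are assumptions, not fastai's exact values):

```python
import math

def lr_find_sketch(loss_at, lr_start=1e-7, lr_mult=1.3, max_steps=100):
    """Increase lr geometrically; stop when the loss diverges from its best."""
    lrs, losses = [], []
    lr, best = lr_start, float('inf')
    for _ in range(max_steps):
        loss = loss_at(lr)
        lrs.append(lr)
        losses.append(loss)
        best = min(best, loss)
        if loss > 4 * best:   # divergence criterion (assumed factor of 4)
            break
        lr *= lr_mult
    return lrs, losses

# Toy loss curve: improves as lr grows towards 0.1, then blows up past it
toy = lambda lr: (math.log10(lr) + 1) ** 2 + (lr / 0.1) ** 4
lrs, losses = lr_find_sketch(toy)
```

In real use you would then read a learning rate off the plotted curve, per the next card.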
FastAI: given the learning rate finder graph, how should one pick a learning rate?
Either:
  1. One order of magnitude (base 10) less than where the minimum loss was achieved
  2. The last point where the loss was clearly decreasing
FastAI: discriminative learning rates
learn.fit_one_cycle(..., lr_max=slice(low, high))
Each layer group gets a different learning rate, spread evenly (on a log scale) across the given range
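
That spacing can be sketched in plain Python (mirroring fastai's `even_mults` helper, which I believe spaces values geometrically between the ends of the slice):

```python
def even_mults_sketch(start, stop, n):
    """n values spread geometrically (evenly on a log scale) from start to stop."""
    if n == 1:
        return [stop]
    mult = (stop / start) ** (1 / (n - 1))
    return [start * mult ** i for i in range(n)]

# e.g. slice(1e-6, 1e-4) over 3 parameter groups:
lrs = even_mults_sketch(1e-6, 1e-4, 3)
# earliest layers get the smallest lr, the head gets the largest
```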
At what point should you stop training
When your validation metrics are getting worse, not the loss
What should you do if validation loss starts getting worse
Check validation metrics ➡️ only if they start getting worse should you stop training
What should you do if validation metrics start getting worse
Stop training
Why might validation loss begin to get worse, but validation metrics continue to improve?
The model is becoming overconfident in its predictions (loss ⬆️)
(even when the actual accuracy, as measured by the validation metrics, is still improving)
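
A tiny numeric example of this, assuming cross-entropy as the loss: as predictions become more extreme, a correct prediction gains little, but a wrong one is punished heavily, so the loss can rise while accuracy stays the same:

```python
import math

def nll(p_true):
    """Cross-entropy for one example, where p_true is the probability
    the model assigned to the true class."""
    return -math.log(p_true)

# Two examples, one classified correctly and one wrongly (accuracy = 50% either way)
modest  = nll(0.7)  + nll(0.4)   # mild confidence
extreme = nll(0.99) + nll(0.01)  # overconfident: same accuracy, much higher loss
```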
If overfitting is detected, what should we do instead of early stopping (to get best accuracy)
Retrain from scratch, setting the number of epochs to the point where the previous best results were found
At what point should you train a model during the development process
As soon as possible!