Speaker: Riyadh Baghdadi
High-level aim: automatic code optimisation - a hard problem!
Predicting the performance impact of optimisations is hard, and optimisations interact in complex ways.
Optimisation Selection
E.g. given code + a list of candidate loop optimisations → select which optimisations to apply
This becomes a search problem with respect to a cost model
2 problems then:
- Space exploration
- Candidate evaluation
  - Cost estimation (how much speedup does a candidate give?)
  - Correctness check (is the transformed code still valid?)
The aim here is the cost-estimation part of candidate evaluation; for space exploration, MCTS or beam search is used.
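A minimal sketch of how a learned cost model might plug into a beam search over optimisation sequences. All helper names here (candidate_transformations, is_legal, predict_speedup) are hypothetical placeholders, not the actual API from the talk:

```python
# Beam-search sketch: explore sequences of loop transformations,
# ranking candidates with a learned cost model.
# candidate_transformations, is_legal and predict_speedup are
# hypothetical helpers, assumed to exist for illustration.

def beam_search(program, depth, beam_width):
    # Each beam entry is (sequence of transformations, predicted speedup).
    beam = [([], 1.0)]
    for _ in range(depth):
        expanded = []
        for schedule, _ in beam:
            for t in candidate_transformations(program, schedule):
                new_schedule = schedule + [t]
                if not is_legal(program, new_schedule):      # correctness check
                    continue
                score = predict_speedup(program, new_schedule)  # cost model
                expanded.append((new_schedule, score))
        if not expanded:
            break
        # Keep only the most promising candidates for the next level.
        beam = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam_width]
    return max(beam, key=lambda x: x[1])
```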
Goal
Build a deep learning model that predicts the speedup obtained from a given selection of optimisations.
Representations
Example features for a computation:
- Loop i
- Transformations applied on loop i
- Loop j
- ...
- Mem access 1
- ...
- Operations count
These features are used to create embeddings (see the sketch below).
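For illustration, the kind of flat per-computation feature vector this might correspond to. The field names and encoding here are made up for the example; the actual feature set in the talk is richer:

```python
import numpy as np

# Illustrative only: encode one computation as a feature vector built from
# its loops, the transformations applied to them, its memory accesses,
# and its operation count. Field names are hypothetical.
def computation_features(comp):
    feats = []
    for loop in comp["loops"]:                  # loop i, loop j, ...
        feats.append(loop["extent"])
        feats += loop["transformation_tags"]    # e.g. one-hot: tiled, unrolled, ...
    for acc in comp["memory_accesses"]:         # mem access 1, 2, ...
        feats += acc["access_matrix_flat"]
    feats.append(comp["op_count"])              # operations count
    # In practice such vectors would be padded to a fixed size before embedding.
    return np.array(feats, dtype=np.float32)
```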
These embeddings are combined hierarchically using LSTMs to produce a final program embedding.
A feedforward NN is applied at the end to predict a single speedup value (see the model sketch below).
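A rough PyTorch sketch of this kind of architecture. The two-level computation/loop hierarchy and all layer sizes are assumptions for illustration, not the exact model from the talk:

```python
import torch
import torch.nn as nn

class SpeedupPredictor(nn.Module):
    """Hierarchical sketch: embed each computation's feature vector,
    summarise computations with one LSTM, combine the resulting
    loop-level embeddings with another LSTM, then regress a single
    speedup value with a feedforward head. Sizes are illustrative."""

    def __init__(self, feat_dim, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU())
        self.comp_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.loop_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.regress = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, program):
        # program: list of loops, each a tensor of shape
        # (num_computations, feat_dim) with per-computation features.
        loop_embs = []
        for comp_feats in program:
            x = self.embed(comp_feats).unsqueeze(0)     # (1, n_comp, emb)
            _, (h, _) = self.comp_lstm(x)               # summarise one loop
            loop_embs.append(h[-1])                     # (1, hidden)
        loops = torch.stack(loop_embs, dim=1)           # (1, n_loops, hidden)
        _, (h, _) = self.loop_lstm(loops)               # program embedding
        return self.regress(h[-1]).squeeze(-1)          # predicted speedup
```

Training such a model against measured speedups reduces to a standard regression setup (e.g. MSE or MAPE-style loss).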
Training Dataset
Problem: not enough data for training
Solution: automatically generate random programs and random optimisation sequences, run them, and measure the actual speedups (see the sketch below).
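A sketch of what that data-generation loop might look like. The generator and compiler calls are placeholders; the actual tooling from the talk generates and benchmarks real programs:

```python
# Hypothetical data-generation loop: create random programs, apply random
# legal optimisation sequences, measure the true speedup against the
# unoptimised baseline, and store (features, speedup) pairs for training.
# generate_random_program, sample_random_schedule, compile_and_run and
# extract_features are placeholders, assumed to exist.

def build_dataset(num_programs, schedules_per_program):
    dataset = []
    for _ in range(num_programs):
        prog = generate_random_program()
        base_time = compile_and_run(prog, schedule=None)     # baseline timing
        for _ in range(schedules_per_program):
            sched = sample_random_schedule(prog)             # random legal optimisations
            t = compile_and_run(prog, schedule=sched)
            speedup = base_time / t
            dataset.append((extract_features(prog, sched), speedup))
    return dataset
```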
Results
Prediction is quite accurate: roughly 16% mean absolute error on predicted speedups.
How does it compare to the optimal combination (found via search)? Very close on most tasks, and better than the Halide auto-scheduler.