Graph Representation Learning for Chip Design

Speaker: Azalia Mirhoseini (Google)

In the past decade systems and hardware have transformed ML. Now it's time for ML to transform hardware!
The computational demands of AI are running far ahead of chip capabilities: since 2012, the amount of compute used in SOTA models has doubled every 3.4 months.

Combinatorial Optimisation on Graph Data

Many problems in systems and chips are combinatorial optimisation problems over graphs.

Advantages of learning-based approaches

ML models can learn the underlying relationship between context and target and leverage it to explore optimisation trade-offs.
"Gain experience" as they solve more instances over time (unlike heuristics).
Can scale on distributed platforms and train billions of parameters.

ML for Chip Design Goals

Reduce the design cycle from 1.5-2 years to weeks. Today Google designs chips for the NN architectures that will come about 2-5 years from now.

Chip Placement Problem

Place the chip components to minimise the latency of computation, power consumption, chip area and cost, while adhering to certain constraints.
Problem: huge state space (far larger than e.g. Go's).
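To get a feel for the scale: the number of ways to place n distinct components onto m grid cells (no cell reused) is m!/(m−n)!. A quick sketch with illustrative, assumed sizes (the specific component and grid counts below are not from the talk):

```python
from math import perm, log10

# Number of distinct placements of n distinct components onto m grid cells,
# with no two components sharing a cell: m! / (m - n)!
def placement_count(n_components: int, n_cells: int) -> int:
    return perm(n_cells, n_components)

# Illustrative (assumed) sizes: a few hundred components on a 1,000-cell grid
# already vastly exceed the ~10^360 often quoted for Go's game tree.
count = placement_count(300, 1000)
print(f"~10^{int(log10(count))} possible placements")
```

The count grows factorially in both the number of components and the grid resolution, which is why exhaustive search is hopeless and heuristics or learned policies are needed.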
Prior approaches:
  • Partitioning (e.g. Min-Cut)
  • Hill-climbing
  • Analytic Solvers (e.g. RePlAce)
What are the rewards?
Runs in hours, rather than the weeks alternatives take, and gives significant improvements.
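The reward itself is not spelled out above, but a common proxy objective in placement work combines half-perimeter wirelength (HPWL) with a congestion penalty. A minimal sketch; `congestion_weight` and the fixed congestion estimate are illustrative assumptions, not the talk's actual formulation:

```python
# Hedged sketch of a placement reward: the negative weighted sum of total
# half-perimeter wirelength (HPWL) and a congestion estimate.

def hpwl(net: list[tuple[float, float]]) -> float:
    """HPWL of one net: width + height of the bounding box of its pins."""
    xs = [x for x, _ in net]
    ys = [y for _, y in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def proxy_reward(nets, congestion_estimate: float,
                 congestion_weight: float = 0.01) -> float:
    # congestion_weight is an assumed hyperparameter trading off the two terms
    total_wl = sum(hpwl(net) for net in nets)
    return -(total_wl + congestion_weight * congestion_estimate)

# Example: two nets, each a list of placed pin (x, y) coordinates
nets = [[(0, 0), (3, 4)], [(1, 1), (2, 1), (1, 3)]]
print(proxy_reward(nets, congestion_estimate=5.0))
```

HPWL is cheap to evaluate, which matters when the reward must be computed for every candidate placement during search or RL training.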

Knowledge Transfer Across Chips

Can train on one chip and fine-tune on others.
Problem: the value network is poor at predicting on the transfer task.

Dataset

Generated a dataset of 10k chip placements, labeled with their wirelength and congestion.
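A supervised reward model can be fit directly on such a dataset. As a stand-in for the real architecture, here is a minimal least-squares sketch mapping per-placement feature vectors to (wirelength, congestion) labels; the feature representation, dataset, and sizes are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the labeled dataset: each placement summarised by a feature
# vector, labeled with (wirelength, congestion). Sizes are illustrative.
N, D = 10_000, 32
X = rng.normal(size=(N, D))
true_W = rng.normal(size=(D, 2))
labels = X @ true_W + 0.01 * rng.normal(size=(N, 2))  # wirelength, congestion

# Least-squares fit of a linear reward model (a toy stand-in for the GNN).
W_hat, *_ = np.linalg.lstsq(X, labels, rcond=None)

pred = X @ W_hat
mse = float(((pred - labels) ** 2).mean())
print(f"training MSE: {mse:.5f}")
```

The point of the sketch is the setup, not the model: placements in, (wirelength, congestion) labels out, trained by regression before any policy learning happens.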

Reward Model Architecture and Features

Aim is just to predict value (initially).
Conventional GNN methods did not work well. New approach:
I.e. concatenate the node features with the edge weight, apply a fully connected layer, apply another, and take the mean over a node's edges to get its node embedding.
A reduce-mean over node embeddings is used to generate the graph embedding.
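That aggregation can be sketched in a few lines of NumPy; the layer sizes, random weights, and toy graph below are assumptions, not the talk's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: node features, edges (i, j), and one scalar weight per edge
node_feats = rng.normal(size=(4, 8))        # 4 nodes, 8 features each
edges = np.array([[0, 1], [1, 2], [2, 3]])  # edge endpoints
edge_w = rng.normal(size=(3, 1))            # one weight per edge

# Two fully connected layers (randomly initialised for the sketch)
W1 = rng.normal(size=(8 + 8 + 1, 16))
W2 = rng.normal(size=(16, 16))

def relu(x):
    return np.maximum(x, 0)

# 1) Edge embedding: concat endpoint node features with the edge weight,
#    then pass through the two fc layers.
concat = np.concatenate(
    [node_feats[edges[:, 0]], node_feats[edges[:, 1]], edge_w], axis=1)
edge_emb = relu(relu(concat @ W1) @ W2)     # (num_edges, 16)

# 2) Node embedding: mean over the node's incident edges.
node_emb = np.zeros((len(node_feats), 16))
for n in range(len(node_feats)):
    incident = (edges == n).any(axis=1)
    node_emb[n] = edge_emb[incident].mean(axis=0)

# 3) Graph embedding: reduce-mean over node embeddings.
graph_emb = node_emb.mean(axis=0)
print(graph_emb.shape)
```

Because edge weights enter the aggregation directly, the model can weigh heavily connected components differently, which plain feature-averaging GNNs cannot.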

Policy/Value Model Architecture

Pre-training was also shown to be very effective in improving training efficiency.
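A minimal sketch of a policy/value head pair on top of the graph state embedding: the policy is a masked softmax over grid cells (where to place the next component) and the value is a scalar estimate of placement quality. The grid size, masking, and random weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 32 * 32  # assumed placement grid of 32x32 cells
EMB = 16        # assumed state-embedding size

graph_emb = rng.normal(size=(EMB,))  # stand-in for the GNN state embedding

# The shared embedding feeds two heads: a policy over grid cells, and a value.
W_policy = rng.normal(size=(EMB, GRID))
W_value = rng.normal(size=(EMB, 1))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Mask cells already occupied (illustrative) before normalising, so the
# policy only assigns probability to legal placements.
logits = graph_emb @ W_policy
occupied = np.zeros(GRID, dtype=bool)
occupied[:10] = True
logits[occupied] = -np.inf

policy = softmax(logits)            # distribution over legal grid cells
value = float(graph_emb @ W_value)  # predicted placement quality

print(policy.sum(), policy[occupied].sum())
```

Masking via `-inf` logits keeps the two heads cheap while guaranteeing the policy never proposes an occupied cell.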