PyTorch

Speaker: Zachary DeVito

How Usability Improves Performance in PyTorch

Why was PyTorch successful?

Performance? No - it was initially 20% slower than alternative approaches.
Innovative new algorithm? No - it used an autograd approach developed elsewhere.
Answer: "laser focus" on usability for developers
  1. Eager mode by default (see the sketch at the end of this section)
  2. Bindings for SOTA algorithms: cuDNN, BLAS, Intel MKL
  3. <24-hour response times on GitHub issues
  4. Compromises made to improve usability didn't significantly harm performance
Exponential growth in efficiency of algorithms (faster than Moore's law) means productivity is more important than performance now.
Don't compromise usability for potential performance gains.
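A minimal sketch of what eager (define-by-run) mode means in practice: operations execute immediately and autograd records them as they run, with no separate graph-compilation step. The toy tensor and computation here are illustrative only.

    import torch

    # Operations run immediately; autograd records them on a tape as they execute.
    x = torch.randn(3, requires_grad=True)
    y = (x * 2).sum()   # runs right away, no graph-compile step
    y.backward()        # gradients computed from the recorded operations
    print(x.grad)       # tensor([2., 2., 2.])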

Case Study: Fixed Sizes and Usability

Real networks do not always have fixed sizes ... but many libraries do!
E.g. images are not all the same size, but batches must be rectangular tensors. The same applies in NLP, where sentences vary in length.
Often use padding or scaling, but this is not hardware-efficient.
A surprising amount of dynamic behaviour and dynamic sizes occurs in real-world models.
So when is it ok to restrict this dynamic behaviour?
Add restrictions when there are already-realised performance gains, but be much more sceptical when the gains are only theoretical.
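As a concrete illustration of the padding mentioned above, here is a small sketch using torch.nn.utils.rnn.pad_sequence to turn variable-length sequences into a rectangular batch; the token ids are made up for the example.

    import torch
    from torch.nn.utils.rnn import pad_sequence

    # Hypothetical variable-length sequences (e.g. tokenised sentences).
    seqs = [torch.tensor([1, 2, 3]), torch.tensor([4, 5]), torch.tensor([6])]

    # Pad to a rectangular (batch, max_len) tensor so fixed-size kernels can be used;
    # the padded positions are wasted computation, which is the inefficiency noted above.
    batch = pad_sequence(seqs, batch_first=True, padding_value=0)
    print(batch.shape)  # torch.Size([3, 3])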

Usability in PyTorch for Production

A study of TorchScript users found they want to:

  • Capture the structure of PyTorch programs to do custom transforms
  • Create self-contained archives of trained PyTorch programs for transfer learning or deployment
  • Serve models as part of a service
  • Improve performance
To address these:
  • Introduced torch.fx to help with custom transforms (see the sketch below)
  • Introduced torch.package, which provides self-contained eager-mode models without the harsher restrictions of TorchScript
  • Introduced torch::deploy, a native library for running packaged models
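A minimal sketch of using torch.fx to capture a model's structure for custom transforms; the toy module is hypothetical, but symbolic_trace is the public entry point.

    import torch
    import torch.fx

    class ToyModel(torch.nn.Module):  # hypothetical module for illustration
        def __init__(self):
            super().__init__()
            self.lin = torch.nn.Linear(4, 4)

        def forward(self, x):
            return torch.relu(self.lin(x))

    # Symbolically trace the module into a GraphModule whose IR can be inspected
    # and rewritten (e.g. for fusion or quantisation passes).
    gm = torch.fx.symbolic_trace(ToyModel())
    print(gm.graph)  # placeholder -> call_module(lin) -> call_function(relu) -> output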