How does torch.compile work?
PyTorch is a popular open-source tensor library for machine learning (ML) and scientific computing in Python.
It’s especially popular among researchers because of its active open-source community and the flexibility it offers for experimenting with new ML architectures.
For all of its benefits, it has one clear drawback compared to other ML frameworks like TensorFlow.
It’s slow!
Recent work from the PyTorch team at Meta attempts to bridge this flexibility-performance gap with torch.compile, a feature that speeds up PyTorch code through compilation.
In this blog post, I’ll discuss the motivation for torch.compile
and its implementation as a Python-level just-in-time (JIT) compiler called TorchDynamo.
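Before digging into the internals, here is a minimal sketch of how torch.compile is invoked; the toy function and tensor sizes are just placeholders for illustration:

```python
import torch

def fn(x, y):
    # Plain eager-mode PyTorch: each operation dispatches separately.
    return torch.sin(x) + torch.cos(y)

# Wrap the function; compilation happens lazily on the first call,
# and later calls with compatible inputs reuse the compiled code.
compiled_fn = torch.compile(fn)

x, y = torch.randn(10_000), torch.randn(10_000)
out = compiled_fn(x, y)  # first call traces and compiles, then runs
```

The rest of this post looks at what happens under the hood when that first call is made.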