Frameworks and libraries

Deep learning systems are viable at all because they find and exploit highly compositional solutions to certain problems, and that compositional structure maps well onto contemporary high-performance computing.

Stated more correctly, many of the problems we wish to tackle admit solutions which can be realized as compositions of many basic functions, and this feature (along with the corresponding multiplicativity of the Chain Rule) leads to computationally feasible algorithms for approximating the component functions.
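To make the multiplicativity of the Chain Rule concrete, here is a minimal sketch (functions chosen purely for illustration): the derivative of a composition is a product of local derivatives, which is exactly what backpropagation computes layer by layer.

```python
import math

# A composition F(x) = f(g(h(x))) of three basic functions.
def h(x): return x * x          # h(x)  = x^2,    h'(x) = 2x
def g(u): return math.sin(u)    # g(u)  = sin(u), g'(u) = cos(u)
def f(v): return math.exp(v)    # f(v)  = e^v,    f'(v) = e^v

def F(x): return f(g(h(x)))

# Chain Rule: F'(x) = f'(g(h(x))) * g'(h(x)) * h'(x),
# a *product* of the component functions' local derivatives.
def F_prime(x):
    u = h(x)
    v = g(u)
    return math.exp(v) * math.cos(u) * (2 * x)

# Sanity-check against a central finite difference.
x, eps = 0.7, 1e-6
numeric = (F(x + eps) - F(x - eps)) / (2 * eps)
assert abs(F_prime(x) - numeric) < 1e-5
```

The same product structure scales to arbitrarily deep compositions, which is why gradients of very deep networks remain computationally cheap to evaluate.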

In light of this high compositionality, one might expect functional programming to dominate production-level Machine and Deep Learning frameworks; and that's true, to a degree, on the level of tensor operations, especially on GPUs.

At the higher level of abstraction at which we read in data and instantiate, tweak, train, and ultimately deploy various flavors of neural networks, Python has emerged as the go-to tooling language. For example, two of the top open-source frameworks for ML/DL, PyTorch and TensorFlow, are essentially Python front ends: their Python bindings dispatch to compiled C++ (and CUDA) backends.

In 2016, when the DL@DU research group was first formed, PyTorch didn't yet exist and TensorFlow had only recently been open-sourced. Back then, we wrote in Torch, which is no longer under active development; PyTorch is its de facto successor. Torch used Lua as its scripting language, while PyTorch uses Python; PyTorch is essentially Torch wrapped in Python.

PyTorch is still lightning fast and easy to turn loose on GPUs. It supports automatic differentiation and dynamic computational graphs. Generally, TensorFlow is considered the go-to enterprise solution, while PyTorch, which is somewhat easier to look under the hood of, is the preferred framework in research settings.
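The two features just mentioned go together: because the computational graph is built dynamically, as operations execute, gradients can be obtained by walking that graph backward. Here is a toy scalar sketch of the idea behind reverse-mode automatic differentiation; the class and method names are illustrative only, not PyTorch's actual API.

```python
# Toy reverse-mode autodiff on scalars. Each operation records its
# inputs and local derivatives, so the graph exists only after the
# forward pass has run -- i.e., it is dynamic.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents  # pairs (parent, local derivative)

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=((self, other.value), (other, self.value)))

    def backward(self, upstream=1.0):
        # Accumulate, then propagate upstream * local along each edge.
        self.grad += upstream
        for parent, local in self._parents:
            parent.backward(upstream * local)

x = Var(3.0)
y = x * x + x        # graph recorded on the fly during this line
y.backward()
assert abs(x.grad - 7.0) < 1e-12   # d/dx (x^2 + x) = 2x + 1 = 7 at x = 3
```

PyTorch's autograd does the same bookkeeping over tensors, with a far more efficient graph traversal.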

By 2018, most of the Torch code in the DL@DU projects had been ported to PyTorch. It would be great if those of you who wish to gain experience with TensorFlow would port some of the PyTorch code to TensorFlow. Also, Simmons is interested in implementing deep learning tools in functional programming languages such as Haskell and OCaml, and maybe Scala.

Lastly, we have our own library, DUlib, which wraps PyTorch code. DUlib eliminates lots of boilerplate and generally makes life easier. At first, though, it is best to write a little boilerplate yourself and, later, around project 3 or so, start incorporating DUlib tools.
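For reference, the "boilerplate" in question is the standard PyTorch training loop. Here is a minimal sketch on a synthetic regression problem (the data and hyperparameters are made up for illustration; nothing here assumes DUlib's API).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: y = 2x + 1 plus a little noise.
xs = torch.linspace(-1, 1, 64).unsqueeze(1)
ys = 2 * xs + 1 + 0.05 * torch.randn_like(xs)

model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# The boilerplate: zero grads, forward pass, loss, backward pass, step.
for epoch in range(200):
    optimizer.zero_grad()
    loss = criterion(model(xs), ys)
    loss.backward()
    optimizer.step()
```

Every PyTorch project repeats some variant of this five-step loop; libraries like DUlib exist to factor it out.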

TL;DR:

  • We mainly write in PyTorch, sometimes porting to TensorFlow, but feel free to write in any language/framework.
  • Solution code and code otherwise provided at The DL@DU Project often uses our library DUlib.