There is a multitude of hardware considerations that affect how a deep learning model will perform. In this workshop you will learn about the intersection between software and hardware with respect to deep learning. Learning the ins and outs of hardware will allow you to conceptualize, design, and implement better software that relies on deep learning models.
In the lecture part you can expect to learn about the difference between cloud and edge computing, what CPUs, GPUs, and TPUs are, and how they differ in their utility for machine learning. Additionally, you will gain an overview of the different IDEs and software used for deep learning. Lastly, you will learn about using embedded devices and the practical benefits and implications of moving to edge devices.
In the practical part you will get to work with an embedded device, where you will train, evaluate, quantize, and benchmark machine learning models. Additionally, you will get to optimize your model's runtime performance by applying various conversions and utilizing different runtimes such as ONNX Runtime and TensorRT.
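To give a taste of the kind of steps covered in the practical part, here is a minimal sketch of exporting a trained PyTorch model to ONNX and timing it with ONNX Runtime. The model, file name, and input shape are placeholders chosen for illustration; the workshop's actual models, hardware, and benchmarking setup may differ.

```python
# Sketch: PyTorch -> ONNX export plus a simple ONNX Runtime latency benchmark.
# The pretrained ResNet-18 stands in for whatever model you train in the workshop.
import time

import numpy as np
import torch
import torchvision
import onnxruntime as ort

# Placeholder model and dummy input (batch of one 224x224 RGB image).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Export to the ONNX format so the model can be run by ONNX Runtime
# (or further converted, e.g. to a TensorRT engine).
torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["input"], output_names=["output"],
)

# Load the exported model and measure average inference latency on CPU.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x = np.random.randn(1, 3, 224, 224).astype(np.float32)

start = time.perf_counter()
for _ in range(100):
    session.run(None, {"input": x})
print(f"Average latency: {(time.perf_counter() - start) / 100 * 1000:.2f} ms")
```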
TL;DR: the workshop will give you the information necessary to go from training a machine learning model to deploying it on an embedded edge device.
-
Flux 0.150
Monday 10-10-2022
13:30 - 16:00