Getting Started
The essentials for getting started with Tensil
What if you could just run this to get a custom ML accelerator specialized to your needs?
$ tensil rtl --arch <my_architecture>
What if compiling your ML model for that accelerator target was as easy as running this?
$ tensil compile --arch <my_architecture> --model <my_model>
Wonder no more: with Tensil you can!
Tensil is a set of tools for running machine learning models on custom accelerator architectures. It includes an RTL generator, a model compiler, and a set of drivers. It enables you to create a custom accelerator, compile an ML model targeted at it, and then deploy and run that compiled model.
The primary goal of Tensil is to allow anyone to accelerate their ML workloads. Currently, we are focused on supporting convolutional neural network inference on edge FPGA (field programmable gate array) platforms, but we aim to support all model architectures on a wide variety of fabrics for both training and inference.
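The `--arch` option in the commands above points at an architecture description file. As a rough illustration only (the field names and values below are assumptions modeled on the sample architectures in Tensil's tutorials, not a definitive spec), such a file might look like:

```json
{
  "data_type": "FP16BP8",
  "array_size": 8,
  "dram0_depth": 1048576,
  "dram1_depth": 1048576,
  "local_depth": 8192,
  "accumulator_depth": 2048,
  "simd_registers_depth": 1,
  "stride0_depth": 8,
  "stride1_depth": 8
}
```

In this sketch, `data_type` would select the numeric format (here, 16-bit fixed point with an 8-bit binary point), `array_size` the dimensions of the systolic array, and the `*_depth` fields the sizes of the various on-chip and off-chip memories. Consult the generated architecture files in the Tensil examples for the authoritative field set.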
You should use Tensil if:
With Tensil you can:
At present, these are Tensil’s limitations:
Join us on Discord or on GitHub to help us plan our roadmap!
Select a section below to dive in. We recommend beginning at Getting Started.
- Getting Started: the essentials for getting started with Tensil
- Recipes for common tasks
- Complete worked examples to help you learn about Tensil
- Handy reference material
- Key concepts that will help you understand what Tensil does
- Our plans for continuing development
- Automatically generated Scaladoc reference materials