From 9237f4e777571874d85c7731e9df321caa6d4d29 Mon Sep 17 00:00:00 2001 From: glopesdev Date: Tue, 11 Jun 2024 22:45:56 +0100 Subject: [PATCH] Merge intro with getting started guide --- docs/articles/intro.md | 15 ----------- docs/articles/manual.md | 15 +++-------- docs/articles/sleap-intro.md | 13 ++++++++++ docs/articles/sleap-predictcentroids.md | 2 +- docs/articles/sleap-predictposeidentities.md | 5 ++-- docs/articles/sleap-predictsinglepose.md | 2 +- docs/articles/toc.yml | 8 ++---- docs/index.md | 26 +++++++++++++++++++- 8 files changed, 48 insertions(+), 38 deletions(-) delete mode 100644 docs/articles/intro.md create mode 100644 docs/articles/sleap-intro.md diff --git a/docs/articles/intro.md b/docs/articles/intro.md deleted file mode 100644 index a80552f..0000000 --- a/docs/articles/intro.md +++ /dev/null @@ -1,15 +0,0 @@ -## Bonsai - SLEAP installation - -Bonsai.SLEAP can be downloaded through the Bonsai package manager. In order to get visualizer support, you should download both the `Bonsai.SLEAP` and `Bonsai.SLEAP.Design` packages. However, in order to use it for either CPU or GPU inference, you need to pair it with a compiled native TensorFlow binary. You can find precompiled binaries for Windows 64-bit at https://www.tensorflow.org/install/lang_c. - -To use GPU TensorFlow (highly recommended for live inference), you also need to install the `CUDA Toolkit` and the `cuDNN libraries`. The current SLEAP package was developed and tested with [CUDA v11.3](https://developer.nvidia.com/cuda-11.3.0-download-archive) and [cuDNN 8.2](https://developer.nvidia.com/cudnn). Additionally, make sure you have a CUDA [compatible GPU](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#support-hardware) with the latest NVIDIA drivers. - -After downloading the native TensorFlow binary and cuDNN, you can follow these steps to get the required native files into the `Extensions` folder of your local Bonsai install: - -1. 
The easiest way to find your Bonsai install folder is to right-click on the Bonsai shortcut > Properties. The path to the folder will be shown in the "Start in" textbox; -2. Copy `tensorflow.dll` file from either the CPU or GPU [tensorflow release](https://www.tensorflow.org/install/lang_c#download_and_extract) to the `Extensions` folder; -3. If you are using TensorFlow GPU, make sure to add the `cuda/bin` folder of your cuDNN download to the `PATH` environment variable, or copy all DLL files to the `Extensions` folder. - -## SLEAP installation - -For all questions regarding installation of SLEAP, please check the official [docs](https://sleap.ai/). diff --git a/docs/articles/manual.md b/docs/articles/manual.md index 773db4e..4f1afab 100644 --- a/docs/articles/manual.md +++ b/docs/articles/manual.md @@ -1,7 +1,7 @@ How to use ========== -`Bonsai.SLEAP` currently implements real-time inference on four distinct SLEAP networks through their corresponding Bonsai `Predict` operators. +`Bonsai.Sleap` currently implements real-time inference on four distinct SLEAP networks through their corresponding Bonsai `Predict` operators. ```mermaid flowchart TD @@ -32,20 +32,11 @@ Returns single: *Pose*`") ``` - In order to use the `Predict` operators, you will need to provide the `ModelFileName` to the exported .pb file folder containing your pre-trained SLEAP model, along with the corresponding `PoseConfigFileName` to the `training_config.json` file. -The simplest Bonsai workflow for running the complete model will thus be: - -:::workflow -![PredictPoseIdentities](~/workflows/PredictPoseIdentities.bonsai) -::: - -If everything works out, you should see some indication in the Bonsai command line window that the GPU was successfully detected and enabled. The first frame will cold start the inference graph and this may take a bit of time, but after that, your poses should start streaming through! 
- -![Bonsai_Pipeline_expanded](~/images/demo.gif) +[!include[Introduction](~/articles/sleap-intro.md)] -Working examples for each of these operators can be found in the extended description for each operator, which we cover below. +Working examples for each of these operators can be found in the extended descriptions, which we cover below. ## PredictCentroids [!include[PredictCentroids](~/articles/sleap-predictcentroids.md)] diff --git a/docs/articles/sleap-intro.md b/docs/articles/sleap-intro.md new file mode 100644 index 0000000..fd396eb --- /dev/null +++ b/docs/articles/sleap-intro.md @@ -0,0 +1,13 @@ +--- +uid: sleap-intro +--- + +The simplest Bonsai workflow for running the complete SLEAP `top-down-id-model` is: + +:::workflow +![PredictPoseIdentities](~/workflows/PredictPoseIdentities.bonsai) +::: + +If everything works out, you should see some indication in the Bonsai command line window that the GPU was successfully detected and enabled. The first frame will cold start the inference graph and this may take a bit of time, but after that, your poses should start streaming through! + +![Bonsai_Pipeline_expanded](~/images/demo.gif) diff --git a/docs/articles/sleap-predictcentroids.md b/docs/articles/sleap-predictcentroids.md index 42a4a1a..d16df1c 100644 --- a/docs/articles/sleap-predictcentroids.md +++ b/docs/articles/sleap-predictcentroids.md @@ -3,7 +3,7 @@ uid: sleap-predictcentroids title: PredictCentroids --- -[`PredictCentroids`](xref:Bonsai.Sleap.PredictCentroids) implements the [*centroid* network](https://sleap.ai/develop/api/sleap.nn.config.model.html?highlight=centroid#sleap.nn.config.model.CentroidsHeadConfig). This operator is most commonly used to find a set of candidate centroids from a full-resolution image. For each frame, it will return a [`CentroidCollection`](xref:Bonsai.Sleap.CentroidCollection) that can be further indexed to access the individual instances. 
+[`PredictCentroids`](xref:Bonsai.Sleap.PredictCentroids) implements the [*centroid* network](https://sleap.ai/develop/api/sleap.nn.config.model.html?highlight=centroid#sleap.nn.config.model.CentroidsHeadConfig). This model is most commonly used to find a set of candidate centroids from a full-resolution image. For each frame, it will return a [`CentroidCollection`](xref:Bonsai.Sleap.CentroidCollection) which can be further indexed to access the individual instances. As an example application, the output of this operator is also fully compatible with the [`CropCenter`](xref:Bonsai.Vision.CropCenter) transform node, which can be used to easily generate smaller crops centered on the detected centroid instance (i.e. [`Centroid`](xref:Bonsai.Sleap.Centroid)) diff --git a/docs/articles/sleap-predictposeidentities.md b/docs/articles/sleap-predictposeidentities.md index 6b7086c..7cd7b80 100644 --- a/docs/articles/sleap-predictposeidentities.md +++ b/docs/articles/sleap-predictposeidentities.md @@ -3,8 +3,9 @@ uid: sleap-predictposeidentities title: PredictPoseIdentities --- -[`PredictPoseIdentities`](xref:Bonsai.Sleap.PredictPoseIdentities) evaluates the full SLEAP model network. In addition to extracting pose information for each detected instance in the image, it also returns the inferred identity of the object (i.e. it implements a [*top-down-id-model* network](https://sleap.ai/develop/api/sleap.nn.config.model.html#sleap.nn.config.model.MultiClassTopDownConfig). -In addition to the properties of the [`Pose`](xref:Bonsai.Sleap.Pose) object, the extended [`PoseIdentity`](xref:Bonsai.Sleap.PoseIdentity) class adds the [`Identity`](xref:Bonsai.Sleap.PoseIdentity.Identity) property that corresponds to the highest confidence identity. This will match one of the class labels found in `training_config.json`. +[`PredictPoseIdentities`](xref:Bonsai.Sleap.PredictPoseIdentities) evaluates the full SLEAP model network. 
In addition to extracting pose information for each detected instance in the image, it also returns the inferred identity of the object, i.e. it performs inference on the [*top-down-id-model* network](https://sleap.ai/develop/api/sleap.nn.config.model.html#sleap.nn.config.model.MultiClassTopDownConfig). + +In addition to the properties of the [`Pose`](xref:Bonsai.Sleap.Pose) object, the extended [`PoseIdentity`](xref:Bonsai.Sleap.PoseIdentity) class adds the [`Identity`](xref:Bonsai.Sleap.PoseIdentity.Identity) property that indicates the highest confidence identity. This will match one of the class labels found in `training_config.json`. The [`IdentityScores`](xref:Bonsai.Sleap.PoseIdentity.IdentityScores) property indicates the confidence values for all class labels. Since we are very often only interested in the instance with the highest identification confidence, we have added the operator [`GetMaximumConfidencePoseIdentity`](xref:Bonsai.Sleap.GetMaximumConfidencePoseIdentity), which returns the [`PoseIdentity`](xref:Bonsai.Sleap.PoseIdentity) with the highest confidence from the input [`PoseIdentityCollection`](xref:Bonsai.Sleap.PoseIdentityCollection). Moreover, by specifying a value in the optional [`Identity`](xref:Bonsai.Sleap.GetMaximumConfidencePoseIdentity.Identity) property, the operator will return the instance with the highest confidence for that particular class. diff --git a/docs/articles/sleap-predictsinglepose.md b/docs/articles/sleap-predictsinglepose.md index 6d85b76..2df9437 100644 --- a/docs/articles/sleap-predictsinglepose.md +++ b/docs/articles/sleap-predictsinglepose.md @@ -11,4 +11,4 @@ The following example workflow highlights how combining [basic computer-vision a ![SingleInstanceModel](~/workflows/SingleInstanceModel.bonsai) ::: -Finally, it is worth noting that [`PredictSinglePose`](xref:Bonsai.Sleap.PredictSinglePose) affords two input overloads. When receiving a single image it will output a corresponding [`Pose`](xref:Bonsai.Sleap.Pose). 
Since the operator skips the centroid-detection stage, it won't embed a [`Centroid`](xref:Bonsai.Sleap.Centroid) field in[`Pose`](xref:Bonsai.Sleap.Pose). Alternatively, a *batch* mode can be accessed by providing an array of images to the operator, instead returning [`PoseCollection`](xref:Bonsai.Sleap.PoseCollection). This latter overload results in dramatic performance gains relative to single images. +Finally, it is worth noting that [`PredictSinglePose`](xref:Bonsai.Sleap.PredictSinglePose) affords two input overloads. When receiving a single image it will output a corresponding [`Pose`](xref:Bonsai.Sleap.Pose). Since the operator skips the centroid-detection stage, it won't embed a [`Centroid`](xref:Bonsai.Sleap.Centroid) field in [`Pose`](xref:Bonsai.Sleap.Pose). Alternatively, a *batch* mode can be accessed by providing an array of images to the operator, instead returning a [`PoseCollection`](xref:Bonsai.Sleap.PoseCollection). This latter overload results in dramatic performance gains relative to processing single images. diff --git a/docs/articles/toc.yml b/docs/articles/toc.yml index dfced4b..c007a63 100644 --- a/docs/articles/toc.yml +++ b/docs/articles/toc.yml @@ -1,6 +1,2 @@ -- name: Introduction - href: ../index.md -- name: Installation - href: intro.md -- name: How to use - href: manual.md \ No newline at end of file +- href: ../index.md +- href: manual.md \ No newline at end of file diff --git a/docs/index.md b/docs/index.md index 4c9d780..10c7c1d 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,7 +1,31 @@ ![logo](~/images/sleap-Bonsai-icon.svg) +Getting Started +=============== + Bonsai.SLEAP is a [Bonsai](https://bonsai-rx.org/) interface for [SLEAP](https://sleap.ai/) allowing multi-animal, real-time pose and identity estimation using pretrained network models stored in a [Protocol buffer (.pb) format](https://developers.google.com/protocol-buffers/). 
Bonsai.SLEAP loads these .pb files using [TensorFlowSharp](https://github.com/migueldeicaza/TensorFlowSharp), a set of .NET bindings for TensorFlow allowing native inference using either the CPU or GPU. By using the .pb file and the corresponding configuration file (`training_config.json`), the `PredictPoseIdentities` operator from Bonsai.SLEAP will push the live image data through the inference network and output a set of identified poses, from which you can extract an object identity and the positions of specific body parts. Bonsai can then leverage this data to drive online effectors or simply save it to an output file. -The Bonsai.SLEAP package came about following a fruitful discussion with the SLEAP team during the [Quantitative Approaches to Behaviour](http://cajal-training.org/on-site/qab2022). +## How to install + +Bonsai.SLEAP can be downloaded through the Bonsai package manager. In order to get visualizer support, you should download both the `Bonsai.Sleap` and `Bonsai.Sleap.Design` packages. However, in order to use it for either CPU or GPU inference, you need to pair it with a compiled native TensorFlow binary. You can find precompiled binaries for Windows 64-bit at https://www.tensorflow.org/install/lang_c. + +To use GPU TensorFlow (highly recommended for live inference), you also need to install the `CUDA Toolkit` and the `cuDNN libraries`. This package was developed and tested with [CUDA v11.3](https://developer.nvidia.com/cuda-11.3.0-download-archive) and [cuDNN 8.2](https://developer.nvidia.com/cudnn). Additionally, make sure you have a CUDA [compatible GPU](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#support-hardware) with the latest NVIDIA drivers. + +After downloading the native TensorFlow binary and cuDNN, you can follow these steps to get the required native files into the `Extensions` folder of your local Bonsai install: + +1. The easiest way to find your Bonsai install folder is to right-click on the Bonsai shortcut > Properties. 
The path to the folder will be shown in the "Start in" textbox; +2. Copy the `tensorflow.dll` file from either the CPU or GPU [tensorflow release](https://www.tensorflow.org/install/lang_c#download_and_extract) to the `Extensions` folder; +3. If you are using TensorFlow GPU, make sure to add the `cuda/bin` folder of your cuDNN download to the `PATH` environment variable, or copy all DLL files to the `Extensions` folder. + +> [!Tip] +> For all questions regarding installation and use of SLEAP for training models, please check the official [docs](https://sleap.ai/). + +## Simple example + +[!include[Introduction](~/articles/sleap-intro.md)] + +## Acknowledgments + +The Bonsai.SLEAP package came about following a fruitful discussion with the SLEAP team during the [Quantitative Approaches to Behaviour](http://cajal-training.org/on-site/qab2022). \ No newline at end of file
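The installation steps added to `docs/index.md` above can be sketched as a shell session. This is illustrative only: the folder layout is mocked in a temporary directory, and all names (`cudnn64_8.dll`, the download folders) are placeholders for your actual Bonsai "Start in" folder and download locations.

```shell
# Mock layout for illustration; substitute your real install/download paths.
root=$(mktemp -d)
mkdir -p "$root/Bonsai/Extensions" \
         "$root/libtensorflow/lib" \
         "$root/cudnn/cuda/bin"
touch "$root/libtensorflow/lib/tensorflow.dll"  # from the TensorFlow C release
touch "$root/cudnn/cuda/bin/cudnn64_8.dll"      # placeholder cuDNN binary

# Step 2: copy the native TensorFlow binary into the Bonsai Extensions folder.
cp "$root/libtensorflow/lib/tensorflow.dll" "$root/Bonsai/Extensions/"

# Step 3 (GPU only): either put cuda/bin on PATH for the current session...
export PATH="$root/cudnn/cuda/bin:$PATH"
# ...or copy all cuDNN DLLs next to tensorflow.dll.
cp "$root/cudnn/cuda/bin/"*.dll "$root/Bonsai/Extensions/"

ls "$root/Bonsai/Extensions"
```

On a real Windows install the same moves are done with Explorer copy-paste and the system environment-variable dialog; the script just makes the end state explicit: `tensorflow.dll` (and, for GPU, the cuDNN DLLs or a `PATH` entry) visible from the `Extensions` folder.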