Deep Learning Toolkit for LabVIEW

Version: 8.0.1.252
Released: Jun 13, 2024
Publisher: Ngene
License: Ngene Custom
LabVIEW Version: LabVIEW >= 20.0
Operating System: Windows
Project links: Homepage   Documentation   Repository   Discussion

Description

Empowering LabVIEW with Deep Learning
DeepLTK is a Deep Learning Toolkit for LabVIEW that provides a high-level API to build, configure, visualize, train, analyze, and deploy Deep Neural Networks within LabVIEW. The toolkit is developed entirely in LabVIEW and has no external dependencies, which simplifies the installation, development, deployment, and distribution of toolkit-based applications and systems (in particular, they can be easily deployed on NI Real-Time targets).

Main Features
Create, configure, train, and deploy deep neural networks (DNNs) in LabVIEW
Accelerate training and deployment of DNNs on GPUs
Save trained networks and load them for deployment
Visualize network topology and common metrics (memory footprint, computational complexity)
Deploy pre-trained networks on NI's LabVIEW Real-Time target for inference
Speed up pre-trained networks by employing network graph optimization utilities
Analyze and evaluate a network's performance
Start with ready-to-run real-world examples
Accelerate inference on FPGAs (with the help of the DeepLTK FPGA Add-on)

Supported Layers:
Input (1D, 3D)
Augmentations: Noise, Flip (Vertical, Horizontal), Brightness, Contrast, Hue, Saturation, Shear, Scale (Zoom), Blur, Move
Fully Connected - FC
Convolutional - Conv2D
Convolutional 1D - Conv1D
Convolutional Advanced - Conv2D_Adv
Upsampling
ShortCut (Residual)
Concatenation
Batch Normalization
Activation
Pooling (MaxPool, AvgPool, GlobalMax, GlobalAvg)
DropOut (1D, 3D)
SoftMax (1D, 3D)
YOLO_v2 (object detection)
YOLO_v4 (object detection)

Activation types:
Linear
Sigmoid
Hyperbolic Tangent
ReLU
Leaky ReLU
ReLU6
Mish
Swish
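
For reference, these activations follow their standard textbook definitions, sketched below in NumPy. This is a generic illustration, not the toolkit's code; the Leaky ReLU slope of 0.1 is an assumption, as DeepLTK's default may differ.

    import numpy as np

    def linear(x):
        return x

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hyperbolic_tangent(x):
        return np.tanh(x)

    def relu(x):
        return np.maximum(0.0, x)

    def leaky_relu(x, alpha=0.1):  # slope 0.1 is an assumed default
        return np.where(x > 0, x, alpha * x)

    def relu6(x):
        return np.minimum(np.maximum(0.0, x), 6.0)

    def mish(x):
        return x * np.tanh(np.log1p(np.exp(x)))  # x * tanh(softplus(x))

    def swish(x):
        return x * sigmoid(x)  # x * sigmoid(x), also known as SiLU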

Solver (Optimization Algorithm):
Stochastic Gradient Descent (SGD) based backpropagation algorithm with momentum and weight decay
Adam - stochastic gradient descent method based on adaptive estimation of first- and second-order moments.
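
The update rules behind the two solvers can be summarized with a minimal NumPy sketch. This is a generic illustration of SGD with momentum plus weight decay and of Adam, not DeepLTK's internal implementation; the names and default hyperparameters here are assumptions.

    import numpy as np

    def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9, weight_decay=5e-4):
        # SGD with momentum; L2 weight decay pulls the weights toward zero.
        grad = grad + weight_decay * w
        velocity[:] = momentum * velocity - lr * grad
        w += velocity
        return w

    def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        # Adam: adaptive estimates of the first (m) and second (v) gradient moments.
        m[:] = b1 * m + (1 - b1) * grad
        v[:] = b2 * v + (1 - b2) * grad ** 2
        m_hat = m / (1 - b1 ** t)  # bias-corrected first moment (t = step index, from 1)
        v_hat = v / (1 - b2 ** t)  # bias-corrected second moment
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
        return w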

Loss Functions:
MSE - Mean Squared Error
Cross Entropy (LogLoss)
Object Detection (YOLO_v2)
Object Detection (YOLO_v4)
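
The first two losses follow their standard definitions, sketched below in NumPy (a generic illustration, not the toolkit's code). The YOLO losses combine localization, objectness, and classification terms and are not reproduced here.

    import numpy as np

    def mse(pred, target):
        # Mean Squared Error, averaged over all elements.
        return np.mean((pred - target) ** 2)

    def cross_entropy(probs, labels, eps=1e-12):
        # LogLoss for one-hot labels; probs are SoftMax outputs of shape (batch, classes).
        return -np.mean(np.sum(labels * np.log(probs + eps), axis=1))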

Examples:
Examples are available that demonstrate applications of the toolkit:
1. MNIST_Classifier_MLP.vi - trains a deep neural network for image classification on the handwritten digit recognition problem (based on the MNIST database) using a 1-dimensional dataset and an MLP (Multilayer Perceptron) architecture.
2. MNIST_Classifier_CNN(Train).vi - trains a deep neural network for image classification on the handwritten digit recognition problem using a CNN (Convolutional Neural Network) architecture.
3. MNIST_Classifier(Deploy).vi - deploys a pretrained network by automatically loading the network configuration and weights files generated by the examples above.
4. MNIST(RT_Deployment) project - deploys a pretrained model on NI Real-Time targets.
5. YOLO_Object_Detection(Cam).vi - automatically builds and loads a pretrained network for object detection based on the YOLO (You Only Look Once) architecture.
6. Object_Detection project - demonstrates training a neural network for object detection on a simple dataset.

More Examples: https://github.com/ngenehub/deepltk_examples

Release Notes

8.0.1.252 (Jun 13, 2024)

v8.0.1
This is a major update designed to speed up the toolkit's performance in GPU mode by introducing new data types for storing datasets of different numeric types.
This version breaks backward compatibility with previous versions of the toolkit.

Features
1. Added support for new data types (I8, U8, I16, U16 and U32ARGB) for representing datasets. Inputs in dataset clusters are now represented as variants.
2. Added new API (polymorphic "NN_Variant_To_DVR.vi") to convert variant type of data in datasets to supported types of data DVRs.
3. "NN_Layer_Create(Input1D/3D).vi" is modified to allow specifying input data type. Input data type and dimensionality can also be automatically detected by providing dataset to "Dataset" input of the VI. Now data normalization can be calculated within the network by providing corresponding Shift(s) and Scaler(s) values to input layer at layer creation time.
4. "NN_Set_Input.vi" polymorphic VI is enhanced to support new data types. Previous instances have been renamed to conform with polymorphic VI name, which might cause a backward compatibility issue, in which case old VIs should be replaced with "NN_Set_Input.vi" polymorphic VI.
5. Added new API "NN_Get_Layer(byName).vi" to easily retrieve layers from network by their names.
6. Added new utility API "NN_Get_T_dt.vi" for simplified calculation of execution time.
7. Added Quick Drop Shortcut (Ctrl+Space, Ctrl+L) to display cluster element labels.
8. Added new "GPU_Info" tool to LabVIEW/Help/Ngene/DeepLTK menu to display information about available GPU/s and installed drivers. "NN_GPU_Check_Drivers.vi" is renamed to "NN_GPU_Get_Info.vi".
9. Added new API "NN_GPU_Reset.vi" to reset GPU when needed.
10. Added MNIST Dataset to the toolkit installer.
11. Added "NN_Dims(V,H).ctl" to API.

Optimizations
1. Improved inference speed by optimizing CPU to GPU data transfer infrastructure.
2. Modified the Xavier weight initializer so that the "Value" parameter now scales the standard deviation of the distribution (see the sketch after this list).
3. Removed the deprecated "NN_get_Detections(YOLO_v2).vi" and "NN_get_Detections(YOLO_v2)(Batch).vi" from the functions palette. "NN_Get_Detections.vi" and "NN_Get_Detections(Batch).vi" should be used instead.
4. Improved memory consumption when Adam is used as the optimizer.
5. The network workspace size is now unlimited by default.
6. Improved error handling when batched data is provided at the input during inference: the data batch size and the network's batch size are now compared.
7. Updated the toolkit's VI priorities to improve performance.
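
Regarding item 2 above, here is a minimal NumPy sketch of Xavier (Glorot) initialization in which a user-supplied factor scales the standard deviation. The parameter name "value" mirrors the toolkit's "Value" input, but the exact formula DeepLTK uses is an assumption.

    import numpy as np

    def xavier_init(fan_in, fan_out, value=1.0):
        # Xavier/Glorot normal initialization; `value` scales the standard deviation.
        std = value * np.sqrt(2.0 / (fan_in + fan_out))
        return np.random.normal(loc=0.0, scale=std, size=(fan_in, fan_out))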

Bug Fixes
1. Fixed a bug so that the SoftMax layer and the Cross-Entropy loss function can be used independently.
2. Fixed a bug where calling "NN_Destroy.vi" would impact other training processes.
3. Fixed a training instability bug where previous NaN results could affect the next training session.
4. Fixed a bug where the activation type of the Activation layer was not correctly reflected in the cfg file in the 1D case.

Other Updates
1. Updated help file.
2. Changed the connector pane type and pinout for NN_Eval.vi polymorphic instances.

