
Deep Learning by Ngene - Toolkit for LabVIEW

Deep Learning Toolkit for LabVIEW

Released Jul 12, 2022
Publisher Ngene
License Ngene Custom
LabVIEW Version LabVIEW >= 16.0
Operating System Windows
Project links Homepage  


Empowering LabVIEW with Deep Learning
DeepLTK is a Deep Learning Toolkit for LabVIEW providing a high-level API to build, configure, visualize, train, analyze, and deploy Deep Neural Networks within LabVIEW. The toolkit is developed entirely in LabVIEW and has no external dependencies, which simplifies the installation, development, deployment, and distribution of toolkit-based applications and systems (in particular, they can be easily deployed on NI's Real-Time targets).

Main Features
Create, configure, train, and deploy deep neural networks (DNNs) in LabVIEW
Accelerate training and deployment of DNNs on GPUs
Save trained networks and load for deployment
Visualize network topology and common metrics (memory footprint, computational complexity)
Deploy pre-trained networks on NI's LabVIEW Real-Time target for inference
Speed up pre-trained networks by employing network graph optimization utilities
Analyze and evaluate network performance
Start with ready-to-run real-world examples
Accelerate inference on FPGAs (with the help of the DeepLTK FPGA Add-on)

Supported Layers:
Input (1D, 3D)
Augmentations: Noise, Flip(Vertical, Horizontal), Brightness, Contrast, Hue, Saturation, Shear, Scale(Zoom), Blur, Move.
Fully Connected - FC
Convolutional - Conv2D
Convolutional Advanced - Conv2D_Adv
ShortCut (Residual)
Batch Normalization
Activations: Linear(None), Sigmoid, tanh(Hyperbolic Tangent), ReLU(Rectified Linear Unit), LReLU(Leaky ReLU)
Pooling (MaxPool, AvgPool, GlobalMax, GlobalAvg)
DropOut (1D, 3D)
SoftMax (1D, 3D)
YOLO_v2 (object detection)
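The activation functions listed above have simple closed forms. A minimal plain-Python sketch for reference (illustrative only — DeepLTK itself is implemented entirely in LabVIEW, and the `alpha` default below is a common convention, not necessarily the toolkit's):

```python
import math

def linear(x):
    # Linear(None): identity activation
    return x

def sigmoid(x):
    # Squashes input to the (0, 1) range
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Hyperbolic tangent, output in (-1, 1)
    return math.tanh(x)

def relu(x):
    # Rectified Linear Unit: zero for negative inputs
    return max(0.0, x)

def leaky_relu(x, alpha=0.1):
    # LReLU: negative inputs pass through scaled by a small slope
    return x if x > 0 else alpha * x
```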


Solver (Optimization Algorithm):
Stochastic Gradient Descent (SGD)-based backpropagation with momentum and weight decay
Adam - a stochastic gradient descent method based on adaptive estimation of first- and second-order moments
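The two solvers differ only in their weight-update rules. A single-weight sketch of both updates (plain Python, not the toolkit's LabVIEW implementation; `beta1`/`beta2` mirror the first- and second-order moment coefficients Adam uses, and the default values here are common choices, not necessarily DeepLTK's):

```python
import math

def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9, weight_decay=1e-4):
    """One SGD step with momentum and L2 weight decay for a scalar weight."""
    grad = grad + weight_decay * w            # weight decay adds w to the gradient
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: adaptive estimates of first (m) and second (v) moments."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction, t = step count (1-based)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (math.sqrt(v_hat) + eps), m, v
```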

Loss Functions:
MSE - Mean Squared Error
Cross Entropy (LogLoss)
Object Detection (YOLO_v2)
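For reference, the first two losses have standard definitions; a minimal sketch (illustrative Python, not the toolkit's code — the YOLO_v2 loss is a composite detection loss and is omitted here):

```python
import math

def mse(y_true, y_pred):
    # Mean Squared Error over paired targets and predictions
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Categorical cross-entropy (LogLoss) with one-hot targets;
    # eps guards against log(0) for zero-probability predictions
    return -sum(t * math.log(max(p, eps)) for t, p in zip(y_true, y_pred))
```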

Examples are available to demonstrate the applications of the toolkit in:
1. MNIST_Classifier_MLP(Train_1D).vi - training a deep neural network for handwritten digit recognition (image classification on the MNIST database) on a 1-dimensional dataset using an MLP (Multilayer Perceptron) architecture
2. MNIST_Classifier_MLP(Train_3D).vi - training a deep neural network for handwritten digit recognition (MNIST) on a 3-dimensional dataset using an MLP architecture
3. MNIST_Classifier_CNN(Train).vi - training a deep neural network for handwritten digit recognition using a CNN (Convolutional Neural Network) architecture
4. MNIST_Classifier(Deploy).vi - deploying a pretrained network by automatically loading the network configuration and weights files generated by the examples above
5. MNIST(RT_Deployment) project - deploying a pretrained model on NI's Real-Time targets
6. YOLO_Object_Detection(Cam).vi - automatically building and loading a pretrained network for object detection based on the YOLO (You Only Look Once) architecture
7. MNIST_CNN_GPU project - accelerating the MNIST_Classifier_CNN(Train).vi example on GPU
8. YOLO_GPU project - accelerating YOLO object detection on GPU
9. Object_Detection project - training a neural network for object detection on a simple dataset

Release Notes (Jul 12, 2022)

This is a minor update to the toolkit that fixes small bugs.

Bug Fixes
1. Fixed issue related to toolkit's license activation.
2. Added missing descriptions in example VIs.
3. Fixed text format in VIP EULA.
4. Added missing text in help file.

Backward Compatibility
Important: This is a major update of the toolkit, which breaks backward compatibility with previous (pre-v5.x.x) versions.
1. Redesigned the process for specifying and configuring the loss function. Setting and configuring the loss function is now separate from configuring the training process, and a new dedicated API for setting the loss function has been added.
2. Modified "NN_Train_Params.ctl".
a) Loss-function-related parameters have been removed from "NN_Train_Params.ctl".
b) "Momentum" has been replaced with "Beta_1" and "Beta_2" parameters for specifying the first- and second-order momentum coefficients.
c) "Weight_Decay" has been replaced with "Weight_Decay(L1)" and "Weight_Decay(L2)" for specifying L1 and L2 weight regularization.
3. "" is deprecated. Its functionality is now split between" and ""
4. Added support for Adam optimizer.
5. Added support for Swish and Mish activation functions.
6. Con3D layer is now renamed to Conv2D.
7. Added advanced Conv2D layer (Conv2D_Adv), which supports for:
a) dilation
b) grouped convolution
c) non square kernel window dimensions
d) non-square stride sizes
e) different vertical and horizontal padding sizes
8. Modified Upsample layer configuration control (Upsample_cfg.ctl) to separate vertical and horizontal strides.
9. Added a new network performance evaluation API.
10. "Label_Idx" has been removed. Classification predictions can now be converted to categorical/binary labels with a dedicated VI.
11. Added a new API for converting floating-point network predictions to categorical/binary labels.
12. Added a new API for converting categorical/binary labels to one-hot-encoded format.
13. MaxPool and AvgPool layers now support non-square window dimensions.
14. Added new API control for 3D dimension representation (NN_Dims(C,H,W).ctl)
15. Region layer is now renamed to YOLO_v2.
a) Removed loss-related configuration parameters (moved to the configuration control).
b) Anchor dimensions in the YOLO_v2 layer must now be provided relative to the input image dimensions.
c) The YOLO_v2 layer can automatically create the last/preceding Conv2D layer to match the required number of classes and anchors.
16. Added support for a Channel-Wise Cross-Entropy loss function for networks with 3D output and a channel-wise SoftMax output layer.
17. Added "Train?" control to "" to take into account whether the network is in train state or not.
18. "" is converted to polymorphic VI, which instance is chosen based on dataset provided at the input.
19. Optimized "" for speed.
20. Increased Confusion Matrix table display precision from 3 to 4 digits.
21. Updated reference examples to make them compatible with latest changes.
22. DeepLTK now supports CUDA v10.2 and cuDNN v7.5.x.
23. Configuration file format is updated to address feature changes.
24. Help file renamed to "DeepLTK_Help.chm"
25. Help file updated to represent recent changes.
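One behavioral change above worth illustrating is item 15b: YOLO_v2 anchor dimensions are now expressed relative to the input image dimensions rather than in pixels. A hypothetical conversion helper (plain Python; `anchors_to_relative` is not a DeepLTK API, just a sketch of the arithmetic):

```python
def anchors_to_relative(anchors_px, input_w, input_h):
    """Convert (width, height) anchor pairs from pixels to fractions of the input size.

    Illustrative only: DeepLTK's YOLO_v2 layer expects relative anchors,
    but its actual configuration interface is a LabVIEW control, not this function.
    """
    return [(w / input_w, h / input_h) for (w, h) in anchors_px]
```

For example, a 104x208-pixel anchor on a 416x416 input becomes (0.25, 0.5).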

Bug Fixes
1. Fixed a bug where MaxPool and AvgPool layers incorrectly calculated output values at the edges.
2. Fixed a bug related to deployment licensing on RT targets.
3. Fixed a bug where the receptive field calculation algorithm did not take the dilation factor into account.
4. Corrected accuracy metrics calculation in "NN_Eval(In3D_OutBBox).vi".
5. Fixed typos in API VI descriptions and control/indicators.
6. Fixed incorrect receptive field calculation for networks containing upsampling layer(s).
7. Fixed incorrect texts in error messages.
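Bug fixes 3 and 6 both concern receptive-field calculation. The standard recurrence for a convolutional stack, which must use the dilated (effective) kernel size, can be sketched as follows (illustrative Python under common assumptions, not the toolkit's implementation):

```python
def receptive_field(layers):
    """Receptive field of a plain stack of conv/pool layers.

    Each layer is a (kernel, stride, dilation) tuple. The effective kernel
    size is dilation * (kernel - 1) + 1 -- omitting the dilation term is
    exactly the kind of error the fix above addresses.
    """
    rf, jump = 1, 1  # jump = cumulative stride of earlier layers
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1
        rf += (k_eff - 1) * jump
        jump *= s
    return rf
```

For instance, a single 3x3 convolution has a receptive field of 3, but with dilation 2 its effective kernel grows to 5.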


Note: Get VIPM Desktop to install this package directly into LabVIEW.
