Deep Learning Toolkit for LabVIEW

Version: 7.0.1.241
Released: Feb 13, 2024
Publisher: Ngene
License: Ngene Custom
LabVIEW Version: LabVIEW >= 20.0
Operating System: Windows
Project links: Homepage | Documentation | Repository | Discussion

Description

Empowering LabVIEW with Deep Learning
DeepLTK is a Deep Learning Toolkit for LabVIEW that provides a high-level API to build, configure, visualize, train, analyze, and deploy Deep Neural Networks within LabVIEW. The toolkit is developed entirely in LabVIEW and has no external dependencies, which simplifies the installation, development, deployment, and distribution of toolkit-based applications and systems (in particular, it can be easily deployed on NI's Real-Time targets).

Main Features
Create, configure, train, and deploy deep neural networks (DNNs) in LabVIEW
Accelerate training and deployment of DNNs on GPUs
Save trained networks and load them for deployment
Visualize network topology and common metrics (memory footprint, computational complexity)
Deploy pre-trained networks on NI's LabVIEW Real-Time target for inference
Speed up pre-trained networks by employing network graph optimization utilities
Analyze and evaluate network performance
Start with ready-to-run real-world examples
Accelerate inference on FPGAs (with the help of the DeepLTK FPGA Add-on)

Supported Layers:
Input (1D, 3D)
Augmentations: Noise, Flip (Vertical, Horizontal), Brightness, Contrast, Hue, Saturation, Shear, Scale (Zoom), Blur, Move
Fully Connected - FC
Convolutional - Conv2D
Convolutional 1D - Conv1D
Convolutional Advanced - Conv2D_Adv
Upsampling
ShortCut (Residual)
Concatenation
Batch Normalization
Activation
Pooling (MaxPool, AvgPool, GlobalMax, GlobalAvg)
DropOut (1D, 3D)
SoftMax (1D, 3D)
YOLO_v2 (object detection)
YOLO_v4 (object detection)

Activation types:
Linear
Sigmoid
Hyperbolic Tangent
ReLU
Leaky ReLU
ReLU6
Mish
Swish
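For reference, the activations listed above have standard closed-form definitions. A minimal NumPy sketch of those definitions (illustrative only — this is not DeepLTK code, and the Leaky ReLU slope is an assumed default, not necessarily the toolkit's):

```python
import numpy as np

def linear(x):
    return x

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.1):   # alpha is an assumed default
    return np.where(x > 0, x, alpha * x)

def relu6(x):                   # ReLU clipped at 6
    return np.clip(x, 0.0, 6.0)

def mish(x):                    # x * tanh(softplus(x))
    return x * np.tanh(np.log1p(np.exp(x)))

def swish(x):                   # x * sigmoid(x)
    return x * sigmoid(x)
```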

Solver (Optimization Algorithm):
Stochastic Gradient Descent (SGD) based backpropagation algorithm with Momentum and Weight Decay
Adam - stochastic gradient descent method based on adaptive estimation of first-order and second-order moments
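In update-rule form, the two solvers behave as in this NumPy sketch (the hyperparameter defaults here are illustrative assumptions, not the toolkit's):

```python
import numpy as np

def sgd_momentum_step(w, grad, v, lr=0.01, momentum=0.9, weight_decay=1e-4):
    """One SGD step with momentum and (L2) weight decay."""
    g = grad + weight_decay * w       # weight decay pulls parameters toward zero
    v = momentum * v - lr * g         # velocity accumulates a decaying gradient history
    return w + v, v

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step (t is the 1-based iteration count)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for zero initialization
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```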

Loss Functions:
MSE - Mean Squared Error
Cross Entropy (LogLoss)
Object Detection (YOLO_v2)
Object Detection (YOLO_v4)
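The first two losses are the standard regression and classification objectives; the YOLO losses are composite detection objectives (localization, objectness, and class terms) and do not reduce to one line. A NumPy sketch of the generic definitions (not the toolkit's VIs):

```python
import numpy as np

def mse(pred, target):
    """Mean Squared Error."""
    return np.mean((pred - target) ** 2)

def cross_entropy(probs, labels, eps=1e-12):
    """LogLoss: labels are one-hot, probs are softmax outputs; eps avoids log(0)."""
    return -np.mean(np.sum(labels * np.log(probs + eps), axis=-1))
```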

Examples:
Examples are available to demonstrate the applications of the toolkit in:
1. MNIST_Classifier_MLP.vi - trains a deep neural network for image classification on the handwritten digit recognition problem (based on the MNIST database) using a 1-dimensional dataset and an MLP (Multilayer Perceptron) architecture
2. MNIST_Classifier_CNN(Train).vi - trains a deep neural network for the same handwritten digit recognition problem using a CNN (Convolutional Neural Network) architecture
3. MNIST_Classifier(Deploy).vi - deploys a pretrained network by automatically loading the network configuration and weights files generated by the examples above
4. MNIST(RT_Deployment) project - deploys a pretrained model on NI's Real-Time targets
5. YOLO_Object_Detection(Cam).vi - automatically builds and loads a pretrained network for object detection based on the YOLO (You Only Look Once) architecture
6. Object_Detection project - demonstrates training a neural network for object detection on a simple dataset

Release Notes

7.0.1.241 (Feb 13, 2024)

v7.0.1
This is a major update which breaks backward compatibility with v6.x.x versions of the toolkit.

Features
1. Added support for the YOLO_v4 layer. The modifications are reflected in the "NN_Layer_Create" and "NN_Set_Loss" API VIs.
2. Added API for calculating confusion matrix for object detection tasks.
3. Added new common API "NN_get_Detections.vi" for getting detections from YOLO_v2/4 layers. "NN_get_Detections(YOLO_v2).vi" will be deprecated.
4. Added support for Conv1D layer.
5. Added support for Activation layer.
6. Added Batch Normalization layer.
7. Added "Has_Bias" parameter in Convolutional and FC layers.
8. Added "epsilon" and "momentum" BN parameters in Convolutional and FC layers.
9. Added support for specifying the input layer when creating layers of the network. This enables complex architectures such as Wider ResNet.
10. Added new API "NN_Display_Confusion_Matrix.vi" for displaying confusion matrix in the table.
11. Added API "NN_Set_Max_GPU_WS_Size.vi" for controlling maximum GPU workspace memory size.
12. NN_Destroy.vi now returns the paths of the generated ".cfg", ".bin", and ".svg" files.
13. Added support for Nvidia RTX 40xx generation of GPUs.
14. Deprecated support for older (Kepler) GPUs.
15. Removed "Out_Idx Ref." elements from NN_Dataset(xxx).ctl types.
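For the classification case, the confusion matrix computed by the new APIs (items 2 and 10) follows the standard definition; a minimal NumPy sketch (object detection additionally requires IoU-based matching of predicted and ground-truth boxes, which is omitted here):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm
```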

Optimizations
1. Optimized GPU memory utilization for intermediate buffers (workspaces).
2. Improved YOLO_v2 performance.
3. Improved minibatch sampling from the dataset when the sampling mode is set to "random"; the distribution is now uniform.
4. Updated NN_Eval to display more results for object detection tasks. It now returns more accurate metrics (e.g. mAP@0.5, mAP@0.75, mAP@0.5-0.95, F1 score, etc.).
5. SVG diagrams of network topologies now provide more information for configuration parameters of Conv1D and Conv2D layers.

Bug Fixes
1. Fixed mismatch of "epsilon" and "momentum" parameters of Batch Normalization function between CPU and GPU modes.
2. Fixed L1 decay/regularization bug in GPU mode.
3. Fixed an error occurring when calling BN Merge during the training with Adam optimizer.
4. Fixed bugs in GPU error handling.
5. Fixed an issue with missing DLLs when creating an installer based on the toolkit.
6. Fixed typos in Help file.

