
Deep Learning Toolkit for LabVIEW

Version: 4.1.1.166
Released: Sep 25, 2020
Publisher: Ngene
License: Ngene Custom
LabVIEW Version: >= 16.0
Operating System: Windows
Project Links: Homepage

Description

Empowering LabVIEW with Deep Learning
DeepLTK is a Deep Learning Toolkit for LabVIEW providing a high-level API to build, configure, visualize, train, analyze, and deploy Deep Neural Networks within LabVIEW. The toolkit is developed entirely in LabVIEW and has no external dependencies, which simplifies the installation, development, deployment, and distribution of toolkit-based applications and systems; in particular, it can be easily deployed on NI Real-Time targets.

Main Features
Create, configure, train, and deploy deep neural networks (DNNs) in LabVIEW
Accelerate training and deployment of DNNs on GPUs
Save trained networks and load for deployment
Visualize network topology and common metrics (memory footprint, computational complexity)
Deploy pre-trained networks on NI's LabVIEW Real-Time target for inference
Speed up pre-trained networks by employing network graph optimization utilities
Analyze and evaluate network performance
Start with ready-to-run real-world examples
Accelerate inference on FPGAs (with the help of the DeepLTK FPGA Add-on)

Supported Layers:
Input (1D, 3D)
Augmentations: Noise, Flip (Vertical, Horizontal), Brightness, Contrast, Hue, Saturation, Shear, Scale (Zoom), Blur, Move
Fully Connected - FC
Convolutional - Conv3D
Upsampling
ShortCut (Residual)
Concatenation
Batch Normalization
Activations: Linear (None), Sigmoid, tanh (Hyperbolic Tangent), ReLU (Rectified Linear Unit), LReLU (Leaky ReLU)
Pooling (MaxPool, AvgPool, GlobalMax, GlobalAvg)
DropOut (1D, 3D)
SoftMax (1D, 3D)
Region (object detection)

Activation types:
Linear
Sigmoid
Hyperbolic Tangent
ReLU
Leaky ReLU
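
For reference, a minimal NumPy sketch of these five activation functions; the Leaky ReLU slope (alpha) is an illustrative assumption, not necessarily DeepLTK's default:

```python
import numpy as np

def linear(x):
    # Identity: passes the pre-activation through unchanged.
    return x

def sigmoid(x):
    # Squashes values into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes values into (-1, 1).
    return np.tanh(x)

def relu(x):
    # Rectified Linear Unit: zeroes out negative values.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.1):
    # Leaky ReLU: small slope for negative inputs; alpha=0.1 is
    # an illustrative choice, not necessarily DeepLTK's default.
    return np.where(x > 0, x, alpha * x)
```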

Solver (Optimization Algorithm):
Stochastic Gradient Descent (SGD) based Backpropagation algorithm with Momentum and Weight Decay
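
A rough Python sketch of one SGD update with momentum and weight decay, as named above; the hyperparameter values are illustrative assumptions, not DeepLTK's defaults:

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity,
                      lr=0.01, momentum=0.9, weight_decay=5e-4):
    # Weight decay adds an L2 penalty term to the gradient.
    grad = grad + weight_decay * w
    # Momentum accumulates an exponentially decaying average of updates.
    velocity = momentum * velocity - lr * grad
    # Apply the accumulated update to the weights.
    w = w + velocity
    return w, velocity
```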

Loss Functions:
MSE - Mean Squared Error
Cross Entropy (LogLoss)
Object Detection (YOLO_v2)
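
A minimal NumPy sketch of the first two loss functions (the YOLO_v2 detection loss is a composite of several terms and is omitted here); batch-major shapes and one-hot label encoding are assumptions:

```python
import numpy as np

def mse_loss(y_pred, y_true):
    # Mean Squared Error, averaged over all elements.
    return np.mean((y_pred - y_true) ** 2)

def cross_entropy_loss(probs, labels, eps=1e-12):
    # LogLoss over class probabilities (e.g. SoftMax outputs) of shape
    # (batch, classes); labels are one-hot, eps guards against log(0).
    return -np.mean(np.sum(labels * np.log(probs + eps), axis=1))
```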

Examples:
Examples are available to demonstrate applications of the toolkit:
1. MNIST_Classifier_MLP(Train_1D).vi - trains a deep neural network for handwritten digit classification (based on the MNIST database) on a 1-dimensional dataset using an MLP (Multilayer Perceptron) architecture.
2. MNIST_Classifier_MLP(Train_3D).vi - trains a deep neural network for handwritten digit classification (based on the MNIST database) on a 3-dimensional dataset using an MLP architecture.
3. MNIST_Classifier_CNN(Train).vi - trains a deep neural network for handwritten digit classification using a CNN (Convolutional Neural Network) architecture.
4. MNIST_Classifier(Deploy).vi - deploys a pretrained network by automatically loading the network configuration and weights files generated by the examples above.
5. MNIST(RT_Deployment) project - deploys a pretrained model on NI Real-Time targets.
6. YOLO_Object_Detection(Cam).vi - automatically builds and loads a pretrained network for object detection based on the YOLO (You Only Look Once) architecture.
7. MNIST_CNN_GPU project - accelerates the MNIST_Classifier_CNN(Train).vi example on a GPU.
8. YOLO_GPU project - accelerates YOLO object detection on a GPU.
9. Object_Detection project - demonstrates training a neural network for object detection on a simple dataset.

Release Notes

4.1.1.166 (Sep 25, 2020)

Features
1. Removed the MASMT and TPLAT toolkit dependencies to simplify the installation process.
2. Added the possibility to use a dataset's input feature maps as ground truths, minimizing memory utilization when training autoencoder architectures.
3. Added a new layer, SoftMax3D, which performs a channel-wise SoftMax operation over 3D feature maps/inputs (see the sketch after this list).
4. Updated the layer creation API to return the layer's reference at creation.
5. Added the possibility to calculate the feature receptive field for each layer in the network.
6. Included the network's output dimensions in NN.ctl.
7. Included Labels as a string array in NN.ctl.
8. Updated the help documentation to reflect the changes.
9. Added the NN_Calc_Confusion_Matrix_Core.vi API for inference.
10. Fixed a bug in the BN Merge functionality.
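
A rough NumPy illustration of the channel-wise SoftMax described in item 3, assuming feature maps laid out as (channels, height, width); DeepLTK's actual memory layout is an assumption here:

```python
import numpy as np

def softmax3d(fmap):
    # Channel-wise SoftMax over a 3D feature map of shape (C, H, W):
    # at every (h, w) position, the C channel values are normalized
    # into a probability distribution.
    m = fmap.max(axis=0, keepdims=True)   # stabilize the exponent
    e = np.exp(fmap - m)
    return e / e.sum(axis=0, keepdims=True)
```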
Bugfixes
1. Updated the Confusion Matrix table representation: Ground Truths - Rows; Predictions - Columns (see the sketch after this list).
2. Greatly sped up augmentation functions: up to 10x for some functions.
3. Fixed improper operation of some augmentation functions.
4. Fixed improper operation of the pooling layer in global mode over non-square input features.
5. Fixed string formatting in SVG network diagrams.
6. Fixed a bug in storing small numbers in the .CFG file.
7. Various other bug fixes.
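
A minimal sketch of the confusion matrix convention from bugfix 1 (ground truths as rows, predictions as columns); a generic illustration, not the NN_Calc_Confusion_Matrix_Core.vi implementation:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # Rows index ground-truth classes, columns index predicted classes,
    # matching the convention stated in bugfix 1.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm
```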
