Deep Learning Toolkit for LabVIEW

Version | 6.1.1.206
Released | Oct 18, 2022
Publisher | Ngene
License | Ngene Custom
LabVIEW Version | LabVIEW >= 20.0
Operating System | Windows
Project links | Homepage
Description
Empowering LabVIEW with Deep Learning
DeepLTK is a Deep Learning Toolkit for LabVIEW that provides a high-level API to build, configure, visualize, train, analyze, and deploy Deep Neural Networks within LabVIEW. The toolkit is developed entirely in LabVIEW and has no external dependencies, which simplifies the installation, development, deployment, and distribution of toolkit-based applications and systems (in particular, they can be easily deployed on NI's Real-Time targets).
Main Features
Create, configure, train, and deploy deep neural networks (DNNs) in LabVIEW
Accelerate training and deployment of DNNs on GPUs
Save trained networks and load them for deployment
Visualize network topology and common metrics (memory footprint, computational complexity)
Deploy pre-trained networks on NI's LabVIEW Real-Time target for inference
Speed up pre-trained networks by employing network graph optimization utilities
Analyze and evaluate a network's performance
Start with ready-to-run real-world examples
Accelerate inference on FPGAs (with the help of the DeepLTK FPGA Add-on)
Supported Layers:
Input (1D, 3D)
Augmentations: Noise, Flip (Vertical, Horizontal), Brightness, Contrast, Hue, Saturation, Shear, Scale (Zoom), Blur, Move
Fully Connected - FC
Convolutional - Conv2D
Convolutional Advanced - Conv2D_Adv
Upsampling
ShortCut (Residual)
Concatenation
Batch Normalization
Activations: Linear (None), Sigmoid, Tanh (Hyperbolic Tangent), ReLU (Rectified Linear Unit), LReLU (Leaky ReLU)
Pooling (MaxPool, AvgPool, GlobalMax, GlobalAvg)
DropOut (1D, 3D)
SoftMax (1D, 3D)
YOLO_v2 (object detection)
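The layers above are standard deep-learning building blocks. As a rough, language-agnostic orientation (DeepLTK networks are built graphically in LabVIEW, so this is not the toolkit's API), the NumPy sketch below shows how a few of the listed layer types — Fully Connected, an activation, DropOut, and SoftMax — compose into a forward pass; all names, shapes, and values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def fully_connected(x, W, b):            # FC layer: y = xW + b
    return x @ W + b

def relu(x):                             # ReLU activation
    return np.maximum(0.0, x)

def dropout(x, rate, training=True):     # DropOut (inverted dropout)
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def softmax(x):                          # SoftMax over the last axis
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative 784 -> 128 -> 10 MLP forward pass (MNIST-sized input, assumed shapes)
x = rng.random((32, 784))                                  # batch of flattened images
W1, b1 = rng.standard_normal((784, 128)) * 0.01, np.zeros(128)
W2, b2 = rng.standard_normal((128, 10)) * 0.01, np.zeros(10)

h = dropout(relu(fully_connected(x, W1, b1)), rate=0.5)
probs = softmax(fully_connected(h, W2, b2))                # class probabilities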
Activation types:
Linear
Sigmoid
Hyperbolic Tangent
ReLU
Leaky ReLU
Mish
Swish
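These activations follow their standard definitions; the NumPy sketch below states the math only and is not DeepLTK code. The negative slope for Leaky ReLU and the beta for Swish are assumed defaults.

import numpy as np

def linear(x):      return x                                 # Linear (identity)
def sigmoid(x):     return 1.0 / (1.0 + np.exp(-x))          # Sigmoid
def tanh(x):        return np.tanh(x)                        # Hyperbolic Tangent
def relu(x):        return np.maximum(0.0, x)                # ReLU
def leaky_relu(x, a=0.1):                                    # Leaky ReLU (slope a assumed)
    return np.where(x > 0, x, a * x)
def softplus(x):    return np.log1p(np.exp(x))               # helper for Mish
def mish(x):        return x * np.tanh(softplus(x))          # Mish
def swish(x, beta=1.0):                                      # Swish (beta = 1 assumed)
    return x * sigmoid(beta * x)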
Solver (Optimization Algorithm):
Stochastic Gradient Descent (SGD) based backpropagation algorithm with Momentum and Weight Decay
Adam - a stochastic gradient descent method based on adaptive estimation of first- and second-order moments.
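For reference, the sketch below gives the textbook update rules behind the two solvers: SGD with momentum and weight decay, and Adam with bias-corrected moment estimates. It illustrates the algorithms, not the toolkit's implementation, and the hyperparameter defaults are assumptions.

import numpy as np

def sgd_momentum_step(w, grad, v, lr=0.01, momentum=0.9, weight_decay=1e-4):
    # Weight decay adds an L2 penalty gradient; momentum accumulates past updates.
    g = grad + weight_decay * w
    v = momentum * v - lr * g
    return w + v, v

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: adaptive estimates of the first (m) and second (v) moments of the gradient.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)          # bias correction, t = step count starting at 1
    v_hat = v / (1 - beta2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v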
Loss Functions:
MSE - Mean Squared Error
Cross Entropy (LogLoss)
Object Detection (YOLO_v2)
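MSE and Cross Entropy have simple closed forms (the YOLO_v2 detection loss is a composite term omitted here); the NumPy sketch below shows them for orientation only, not as DeepLTK code.

import numpy as np

def mse(y_pred, y_true):
    # Mean Squared Error
    return np.mean((y_pred - y_true) ** 2)

def cross_entropy(probs, y_true_onehot, eps=1e-12):
    # Cross Entropy (LogLoss) against one-hot targets; probs are SoftMax outputs.
    return -np.mean(np.sum(y_true_onehot * np.log(probs + eps), axis=-1))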
Examples:
Examples are available that demonstrate applications of the toolkit:
1. MNIST_Classifier_MLP(Train_1D).vi - training a deep neural network for handwritten digit classification (based on the MNIST database) on a 1-dimensional dataset using an MLP (Multilayer Perceptron) architecture
2. MNIST_Classifier_MLP(Train_3D).vi - training a deep neural network for handwritten digit classification (based on the MNIST database) on a 3-dimensional dataset using an MLP (Multilayer Perceptron) architecture
3. MNIST_Classifier_CNN(Train).vi - training a deep neural network for handwritten digit classification using a CNN (Convolutional Neural Network) architecture
4. MNIST_Classifier(Deploy).vi - deploying a pretrained network by automatically loading the network configuration and weights files generated by the examples above
5. MNIST(RT_Deployment) project - deploying a pretrained model on NI's Real-Time targets
6. YOLO_Object_Detection(Cam).vi - automatically building and loading a pretrained network for object detection based on the YOLO (You Only Look Once) architecture
7. MNIST_CNN_GPU project - accelerating the MNIST_Classifier_CNN(Train).vi example on a GPU
8. YOLO_GPU project - accelerating YOLO object detection on a GPU
9. Object_Detection project - demonstrating training of a neural network for object detection on a simple dataset
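To illustrate the train/save/deploy workflow that examples 1 and 4 walk through, the hedged NumPy sketch below trains a tiny softmax classifier on synthetic stand-in data, saves its weights to a file, and reloads them for inference. It mirrors the workflow only; the file name and data are made up, and none of this is the toolkit's VI-based API.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a flattened-image dataset (the real examples use MNIST).
X = rng.random((256, 784))
y = rng.integers(0, 10, size=256)
Y = np.eye(10)[y]                                  # one-hot targets

W, b = np.zeros((784, 10)), np.zeros(10)
lr = 0.1
for epoch in range(20):                            # simple full-batch gradient descent
    logits = X @ W + b
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)      # SoftMax
    grad = (probs - Y) / len(X)                    # gradient of cross-entropy w.r.t. logits
    W -= lr * X.T @ grad
    b -= lr * grad.sum(axis=0)

np.savez("model.npz", W=W, b=b)                    # "save trained network" (illustrative file name)

params = np.load("model.npz")                      # "deploy": reload weights and run inference
pred = np.argmax(X @ params["W"] + params["b"], axis=1)
print("train accuracy:", (pred == y).mean())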
Release Notes
v6.1.1
This is a major update that does not break backward compatibility with v5.x.x versions of the toolkit.
Features
1. Added support for the Nvidia RTX 3xxx series of GPUs by upgrading the CUDA libraries.
2. CUDA libraries are now part of the toolkit installer, which eliminates the need to install them separately.
3. All augmentation operations are now accelerated on the GPU, which greatly speeds up training when augmentations are enabled.
4. Support for older versions of LabVIEW is deprecated. LabVIEW 2020 and newer are supported starting with this release.
5. Improved DeepLTK library loading time in LabVIEW.