MIL X is a machine-vision software development kit (SDK) that delivers an extensive collection of tools for developing vision applications. It is the comprehensive toolkit OEMs and integrators have trusted for more than 25 years, now with intelligent enhancements for cutting-edge application development.
The latest MIL X service pack expands the offerings of this patented SDK with the capabilities described below.
Convolutional neural networks (CNNs) are a class of deep learning neural networks, commonly used to analyze visual imagery. Machine vision software assesses examples of each image class and develops deep learning algorithms that can make inferences about the visual appearance of each class.
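The core operation inside a CNN is the convolution, which slides a small learned kernel across the image to produce a feature map. As a rough conceptual sketch (plain NumPy with a hand-crafted kernel, not MIL X code and not a trained network), a single 2D convolution looks like this:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a kernel over an image and sum the element-wise products
    at each position (valid padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly where intensity changes left-to-right.
image = np.array([[0, 0, 10, 10],
                  [0, 0, 10, 10],
                  [0, 0, 10, 10],
                  [0, 0, 10, 10]], dtype=float)
edge_kernel = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)
response = convolve2d(image, edge_kernel)
```

In a real CNN the kernel weights are learned from the labelled training images rather than designed by hand, and many such layers are stacked.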
MIL X leverages deep learning to perform image classification. The latest addition is a coarse segmentation approach that maps image neighborhoods to categories, identifying and roughly locating specific features or defects. A second, global approach assigns whole images or image regions to pre-established classes. Image-oriented classification is particularly well suited to analyzing images of highly textured, naturally varying, and acceptably deformed goods. Feature-oriented classification uses a tree-ensemble technique to categorize objects of interest based on features extracted from these images.
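The feature-oriented route can be pictured as an ensemble of simple decision trees voting over measurements extracted from each object. The sketch below is pure Python with hypothetical feature names and values (blob area, perimeter, circularity), and uses single-feature decision stumps as a stand-in for MIL X's actual tree-ensemble classifier:

```python
from collections import Counter

# Hypothetical features measured on segmented blobs: (area, perimeter, circularity).
samples = [
    ((120.0, 40.0, 0.94), "good"),
    ((118.0, 41.0, 0.92), "good"),
    ((122.0, 39.0, 0.95), "good"),
    ((310.0, 95.0, 0.43), "defect"),
    ((305.0, 98.0, 0.41), "defect"),
    ((298.0, 92.0, 0.45), "defect"),
]

def train_stump(samples, feature):
    """One-feature decision stump: threshold at the midpoint of the class means."""
    by_label = {}
    for x, label in samples:
        by_label.setdefault(label, []).append(x[feature])
    (la, va), (lb, vb) = by_label.items()
    ma, mb = sum(va) / len(va), sum(vb) / len(vb)
    thresh = (ma + mb) / 2
    below, above = (la, lb) if ma < mb else (lb, la)
    return feature, thresh, below, above

# One stump per feature forms the (toy) ensemble.
stumps = [train_stump(samples, f) for f in range(3)]

def predict(x):
    """Majority vote across the stump ensemble."""
    votes = [below if x[f] < t else above for f, t, below, above in stumps]
    return Counter(votes).most_common(1)[0][0]
```

A production tree ensemble trains many deeper trees on randomized subsets of data and features, but the categorize-by-extracted-features-and-vote structure is the same.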
Classical 2D vision techniques are not always able to perform localization, recognition, inspection, or measurement tasks. That's where 3D vision comes in, delivering the accurate dimensional data needed for more complex imaging tasks.
With an expansive collection of tools for 3D capture, display, processing, and analysis (including metrology and registration), MIL X works with 3D data in the form of point clouds, depth maps, and elementary objects. Ensuring ease of use, all MIL X tools can consume 3D data produced by profile and snapshot sensors, stereo and time-of-flight (ToF) cameras, as well as STL and PLY files.
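To illustrate how two of those 3D representations relate, a depth map can be back-projected into a point cloud using pinhole-camera intrinsics. The following is a generic NumPy sketch with assumed intrinsic parameters (fx, fy, cx, cy), not a MIL X API call:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (Z per pixel) into an Nx3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

depth = np.full((4, 4), 2.0)  # a flat surface 2 m from the camera
cloud = depth_map_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Each valid depth pixel becomes one 3D point, which is why depth maps and point clouds can be used interchangeably as input to many 3D analysis tools.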
Sometimes it is necessary to fuse multiple images into a single image in order to bring out details that cannot all be captured under any one lighting condition.
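One simple way to fuse differently exposed captures is to weight each pixel by how well-exposed it is, so every region of the result is drawn mostly from the image that shows it best. The NumPy sketch below is a simplified exposure-fusion scheme for illustration, not MIL X's fusion implementation:

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Blend differently exposed images (intensities in 0..1), weighting each
    pixel by its closeness to mid-grey (0.5) so well-exposed regions dominate."""
    stack = np.stack([img.astype(float) for img in images])  # shape (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)

# Two toy exposures of the same 2x2 scene: shadows lost in one, recovered in the other.
dark = np.array([[0.05, 0.10], [0.45, 0.05]])
bright = np.array([[0.55, 0.95], [0.90, 0.50]])
fused = fuse_exposures([dark, bright])
```

At the top-left pixel, the well-exposed value from the bright image (0.55) dominates the near-black value from the dark one, which is the intended behavior of the fusion.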
To perform inferencing, CNNs must first be trained. Training involves building a dataset—including labeling images and augmenting the dataset with synthesized images—as well as monitoring and analyzing the training process.
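Augmenting a dataset with synthesized images can be as simple as generating geometric variants of each labelled image. This minimal NumPy sketch is illustrative only, not MIL X's augmentation tooling, which offers far richer transformations:

```python
import numpy as np

def augment(image):
    """Synthesize variants of a labelled image: horizontal and vertical flips
    plus 90-degree rotations, enlarging the dataset without new acquisitions."""
    variants = [image,
                np.fliplr(image),   # mirror left-right
                np.flipud(image)]   # mirror top-bottom
    variants += [np.rot90(image, k) for k in (1, 2, 3)]  # 90/180/270 degrees
    return variants

sample = np.arange(9).reshape(3, 3)
dataset = augment(sample)  # one labelled image becomes six training examples
```

Because each variant shares the original's label, a handful of acquisitions can yield a much larger training set while the monitoring and analysis steps stay unchanged.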
MIL X offers users two distinct options for deep learning training of CNNs: users can take a hands-on approach and train their own CNN, or Matrox Imaging's team of vision experts can perform the training on their behalf. Whichever route is chosen, MIL X provides the necessary infrastructure and interactive environment to build a training dataset and conduct different types of training, including transfer learning and fine-tuning.
Working with a programming library to assess the feasibility and best approach to developing a vision application can be both intimidating and time consuming.
MIL CoPilot, the interactive companion environment for MIL X, lets users experiment, prototype, and generate functional program code. The latest update adds training and inference support for users looking to train their own CNNs for deep learning analysis: users can now label and augment the required datasets, monitor the training process, and view results in clear, concise tables.
Embedded vision integrates an image sensor and a processor chip, providing distinct advantages in system size, cost, and power consumption. The Arm processor architecture is the most widely used platform for embedded vision systems.
MIL X now provides support for a wide selection of processing, analysis, annotation, display, and archiving functionality directly on Arm Cortex®-A family processors. Arm’s Neon™ architecture extension provides the processing and analysis speed users are familiar with from MIL.
Founded in 1976, Matrox® Imaging is an established and trusted supplier to top OEMs and integrators in the machine vision, image analysis, and medical imaging industries. Its systems and components include smart cameras, 3D sensors, vision controllers, frame grabbers, and I/O cards, all designed to provide optimum price-performance within a common software environment.