Today’s need for AI and ML
Artificial intelligence (AI) and machine learning (ML) are used in many applications, across industries as diverse as travel, banking and financial services, manufacturing, food technology, healthcare, logistics, transportation and entertainment.
One well-known application is autonomous driving, where the car uses machine learning to recognize barriers, pedestrians and other vehicles. Other uses include predicting or detecting diseases and inspecting circuit boards.
What Accelerates AI Deployment?
One of the key factors accelerating AI and ML deployment is the growth in computing power, which allows complex mathematical calculations to be executed quickly and easily.
A growing number of algorithms also simplify model creation and make inference on data faster and easier. Governments and companies are investing heavily in this area as well.
AI/ML tools that help non-data scientists understand, create and deploy models are a crucial element, and they are becoming increasingly available and accessible.
Although model building is typically done in the cloud on high-performance machines, we often want to perform inference locally. This has several benefits, including added security, because we are not communicating with the external world. Acting locally also means we are not consuming bandwidth or paying extra to send data to the cloud and get results back.
Latency is another driver for performing inference locally: we are not waiting for the information to be sent and the results to come back. The edge makes this possible by moving machine learning from high-performance machines to high-end microcontrollers and microprocessor units.
What Are Artificial Intelligence and Machine Learning?
Artificial intelligence was established as a field in the 1950s. Essentially, AI replaces the programming procedure by developing algorithms based on the data, rather than the legacy method of writing them manually. Machine learning is a subset of artificial intelligence in which the machine tries to extract knowledge from the data. We provide the machine with prepared data and then ask it to come up with an algorithm that will help predict the results for a fresh set of data.
ML commonly relies on what we call 'supervised learning'. In this technique the data is labelled, and the model is built and its results produced based on that labelling. Other techniques, common in deep learning, use more complex algorithms and work with unlabelled data. In this article we will mainly consider supervised learning for the edge.
The basic element of ML is the neural network, which consists of layers of nodes, each node connected either to the inputs or to the next layer. There are several types of neural networks, and the further we move from machine learning towards deep learning, the more complex the networks become. Deep learning also incorporates feedback mechanisms, whereas simple ML models use a simple forward pass, moving from the data to the output or result.
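As an illustration, here is a minimal sketch of such a layered network using TensorFlow's Keras API. The input width (six features), layer sizes and class count are arbitrary assumptions for the example, not values tied to a specific application.

```python
import tensorflow as tf

# A small feed-forward network: each layer is a set of nodes
# fully connected to the next layer. Sizes are placeholders.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(6,)),                     # e.g. 6 sensor features
    tf.keras.layers.Dense(16, activation="relu"),   # hidden layer of 16 nodes
    tf.keras.layers.Dense(8, activation="relu"),    # second hidden layer
    tf.keras.layers.Dense(3, activation="softmax")  # 3 output classes
])
model.summary()  # prints the layer structure and weight counts
```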
How Do You “Train” a Machine?
The first step is data collection. Since we focus on supervised learning, we collect labelled data so that patterns can be found correctly. The quality of this data determines how accurate the model will be. We also need to aggregate and shuffle it: if the data is too organized, the model will not be built correctly and we can end up with a bad algorithm.
The second step is to clean the data and remove what is unwanted. Any records with missing features should be removed, as should any records that are not needed or whose values are unknown.
The data must then be separated into two parts: one for training and the other for testing.
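A minimal sketch of these preparation steps using pandas and scikit-learn follows. The file name sensor_data.csv, the label column and the assumption of six feature columns (to match the earlier network sketch) are all hypothetical placeholders.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Load labelled data (file name is a hypothetical placeholder)
df = pd.read_csv("sensor_data.csv")

# Clean: drop records with missing features
df = df.dropna()

# Shuffle so the data is not too organized
df = df.sample(frac=1.0, random_state=42).reset_index(drop=True)

# Separate features from labels (a "label" column is assumed)
X = df.drop(columns=["label"])
y = df["label"]

# Split into training and testing sets (80/20 is a common choice)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```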
The third step is training the algorithm, which is itself split into three sub-steps. The first sub-step is to choose the machine learning classification algorithm. Several are available, each suited to different types of data. Typical examples of classification algorithms include logistic regression, decision trees, k-nearest neighbours, support vector machines, naive Bayes and neural networks.
It is important to choose the right model composition, as this determines the output you get after running the ML algorithm on the collected data. This may require some data science skills, but it can also be left to the automatic engines provided by several model creation tools.
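As a rough illustration of comparing candidate algorithms, the sketch below scores two arbitrary classifiers with scikit-learn's cross-validation, reusing the X_train and y_train produced by the split above.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Two arbitrary candidate classifiers for comparison
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
}

# Score each candidate with 5-fold cross-validation on the training set
for name, clf in candidates.items():
    scores = cross_val_score(clf, X_train, y_train, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```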
The second sub-step is the model training operation, which consists of running several iterations to improve the weights of the different layers and the overall accuracy of the model.
We then need to evaluate the model by testing it with the subset of data we set aside earlier for testing and evaluation. This data is unknown to the model, so we can compare the model's output against the known results.
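Continuing the earlier Keras sketch, training and evaluation might look as follows. The optimizer, loss, epoch count and batch size are typical choices assumed for illustration, and the labels are assumed to be integer class indices.

```python
# Compile the model: this optimizer and loss are common defaults
# for integer-labelled classification, chosen here for illustration.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training: iterate over the data, adjusting layer weights each epoch
model.fit(X_train, y_train, epochs=30, batch_size=32)

# Evaluation: test with the held-out data the model has never seen
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Test accuracy: {accuracy:.3f}")
```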
Once these steps are complete, we can use the model and validate the results by performing inference on target devices. The idea is to take the model into the field, provide it with some inputs and check whether the results are correct.
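For an edge target, a trained Keras model is commonly converted to the compact TensorFlow Lite format and sanity-checked on the host before the .tflite file is deployed to the device. The snippet below sketches that flow; the sample input reuses the held-out test data from earlier.

```python
import numpy as np
import tensorflow as tf

# Convert the trained Keras model to the TensorFlow Lite format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Sanity-check the converted model on the host before deployment
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One sample input, shaped to match the model's expected input
sample = np.array(X_test.iloc[:1], dtype=np.float32)
interpreter.set_tensor(inp["index"], sample)
interpreter.invoke()
print("Predicted class:", np.argmax(interpreter.get_tensor(out["index"])))
```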
Microchip Software and Tools
Microchip has partnered with several third-party companies, including Edge Impulse, Motion Gestures and SensiML.
We also support popular frameworks such as TensorFlow Lite for Microcontrollers, which is part of the Microchip Harmony framework. TensorFlow Lite can be used to deploy models across the whole Microchip portfolio, with the exception of 8-bit devices as of today.
Microchip's microcontroller and microprocessor solutions support many applications, such as smart embedded vision. They are also a good fit for predictive maintenance based on vibration, power measurement or sound monitoring. The Microchip portfolio can be used for gesture recognition and, coupled with touch capabilities, makes it easier to control human-machine interfaces.
Microchip provides high-performance PCIe switches that enable the interconnection of GPUs and help with model training.
Data collection can be done using microcontrollers, microprocessor units, FPGAs and sensors, all available in the Microchip portfolio.
Data validation and inference operations can both be done on microcontrollers, microprocessors or FPGAs.
Overall, these solutions make ML easy to implement using the Microchip portfolio.
When it comes to software, the Microchip machine learning centre is a great place to find our latest solutions.
In addition to the popular frameworks supported by the Microchip Harmony framework, machine learning software is provided through several partnerships.
One partnership is with Edge Impulse, which offers a full TinyML pipeline in which we can collect the data, build the model and deploy it. This partner uses TensorFlow Lite for Microcontrollers. One of the biggest advantages here is that Edge Impulse's code is completely open source and royalty-free.
Another partner is Motion Gestures, which specializes in gesture recognition and can be used to build human-machine interfaces. This tool can help create and deploy gestures in minutes, cutting software development time, and it produced excellent results in our tests, with gesture recognition rates approaching 100%.
There are two ways to use this tool: with touch, the classic way, or with motion, using IMU sensors.
Getting Started
Microchip offers several kits to get developers started with AI and ML. On the microcontroller side, there is the SAMD21 Machine Learning Evaluation Kit with a TDK sensor; another variant uses a Bosch IMU.
On the Motion Gestures side, we have a demo with the SAMC21 Xplained Pro plus a QTouch touchpad, one of the tools with which you can start implementing an ML gesture recognition application.
The IGaT is a graphics and touch board that also uses ML, with out-of-the-box firmware that includes the gesture recognition demo in addition to many other demos for automotive, home, entertainment and other applications.
The Adafruit EdgeBadge is another kit that uses TensorFlow Lite for Microcontrollers directly. It has a 2-inch TFT display and can be used by the Arduino community. Several examples are provided, such as the Sine Wave, Gesture and Micro Speech demos.
On the high-end side, the PolarFire Video Kit has a dual camera interface, a MIPI interface and an HDMI interface, and comes with 2GB DDR4 SDRAM, a USB-to-UART interface and 1GB SPI flash.
Out of the box, this kit provides an object detection demo based on an ML model.
For more information: https://www.microchip.com/en-us/education/developer-help/learn-solutions/machine-learning