Overview
Welcome to the hands-on labs section, where you'll explore deploying ML models onto real embedded devices, offering a practical introduction to ML systems. Unlike traditional approaches with large-scale models, these labs focus on interacting directly with both hardware and software. They showcase various sensor modalities across different application use cases, providing valuable insight into the challenges and opportunities of deploying AI on real physical systems.
Learning Objectives
By completing these labs, you will:
- Gain proficiency in setting up and deploying ML models on supported devices, enabling you to tackle real-world ML deployment scenarios with confidence.
- Understand the steps involved in adapting and experimenting with ML models for different applications, allowing you to optimize performance and efficiency.
- Learn troubleshooting techniques specific to embedded ML deployments, equipping you with the skills to overcome common pitfalls and challenges.
- Acquire practical experience in deploying TinyML models on embedded devices, bridging the gap between theory and practice.
- Explore various sensor modalities and their applications, expanding your understanding of how ML can be leveraged in diverse domains.
- Foster an understanding of the real-world implications and challenges associated with ML system deployments, preparing you for future projects.
Target Audience
These labs are designed for:
- Beginners in the field of machine learning who have a keen interest in exploring the intersection of ML and embedded systems.
- Developers and engineers looking to apply ML models to real-world applications using low-power, resource-constrained devices.
- Enthusiasts and researchers who want to gain practical experience in deploying AI on edge devices and understand the unique challenges involved.
Supported Devices
We have included laboratory materials for three key devices that represent different hardware profiles and capabilities.
- Nicla Vision: Optimized for vision-based applications like image classification and object detection, ideal for compact, low-power use cases.
- XIAO ESP32S3: A versatile, compact board suitable for keyword spotting and motion detection tasks.
- Raspberry Pi: A flexible platform for more computationally intensive tasks, including small language models and various classification and detection applications.
Exercise | Nicla Vision | XIAO ESP32S3 | Raspberry Pi
---|---|---|---
Installation & Setup | ✓ | ✓ | ✓
Keyword Spotting (KWS) | ✓ | ✓ |
Image Classification | ✓ | ✓ | ✓
Object Detection | ✓ | ✓ | ✓
Motion Detection | ✓ | ✓ |
Small Language Models (SLM) | | | ✓
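To give a concrete feel for what deployment looks like on these devices, below is a minimal inference sketch for the Raspberry Pi using the `tflite_runtime` package. The file names `model.tflite`, `labels.txt`, and `test.jpg` are placeholders rather than files shipped with the labs; each lab supplies its own models and assets.

```python
# Minimal TFLite image-classification inference sketch (illustrative only).
# Assumes a model "model.tflite", a "labels.txt" with one class name per
# line, and a test image "test.jpg" -- all placeholder file names.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Resize the test image to the model's expected input shape.
_, height, width, _ = input_details["shape"]
image = Image.open("test.jpg").convert("RGB").resize((width, height))
input_data = np.expand_dims(
    np.asarray(image, dtype=input_details["dtype"]), axis=0
)

# Run a single inference and report the top-scoring class.
interpreter.set_tensor(input_details["index"], input_data)
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])[0]

labels = [line.strip() for line in open("labels.txt")]
top = int(np.argmax(scores))
print(f"Predicted: {labels[top]} (score={scores[top]})")
```

The same load-preprocess-invoke pattern recurs across the labs; on the microcontroller boards it is expressed in C++ with TensorFlow Lite Micro instead, but the workflow is conceptually identical.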
Lab Structure
Each lab follows a structured approach:
- Introduction: Explore the application and its significance in real-world scenarios.
- Setup: Step-by-step instructions to configure the hardware and software environment.
- Deployment: Guidance on deploying pre-trained ML models to the supported devices.
- Exercises: Hands-on tasks to modify and experiment with model parameters (see the sketch after this list).
- Discussion: Analysis of results, potential improvements, and practical insights.
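As a taste of the Exercises step, here is a small, self-contained sketch of the kind of parameter experiment the labs encourage: sweeping a confidence threshold over a model's class scores to see how it changes which predictions are accepted. The labels and scores below are made-up stand-ins for real model output.

```python
import numpy as np

# Hypothetical softmax scores for a 3-class model (stand-ins for real output).
labels = ["background", "person", "robot"]
scores = np.array([0.10, 0.62, 0.28])

def accepted_predictions(scores, labels, threshold):
    """Return (label, score) pairs whose score meets the threshold."""
    return [(labels[i], float(s)) for i, s in enumerate(scores) if s >= threshold]

# A stricter threshold trades missed detections for fewer false positives.
for threshold in (0.25, 0.50, 0.75):
    kept = accepted_predictions(scores, labels, threshold)
    print(f"threshold={threshold}: {len(kept)} prediction(s) kept -> {kept}")
```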
Recommended Lab Sequence
If you're new to embedded ML, we suggest starting with setup and keyword spotting before moving on to image classification and object detection. Raspberry Pi users can explore more advanced tasks, like small language models, after familiarizing themselves with the basics.
Troubleshooting and Support
If you encounter any issues during the labs, consult the troubleshooting tips or FAQs within each lab. For further assistance, feel free to reach out to our support team or engage with the community forums.
Credits
Special credit and thanks to Prof. Marcelo Rovai for his valuable contributions to the development and continuous refinement of these labs.