Gibbs, MJ, 2025. Multi-task layer sharing optimisations for low-power tinyML devices. PhD, Nottingham Trent University.
Abstract
Deploying Deep Learning (DL) models on low-power Microcontroller Unit (MCU) devices presents significant challenges due to limited computational resources and storage constraints. This thesis explores novel methods for optimising DL models for edge-based tiny Machine Learning (tinyML) applications, with a focus on Human Activity Recognition (HAR) and stress detection. A multi-model system was implemented on an Arduino Nano 33 BLE Sense, where a HAR model provided contextual information to improve stress detection. The HAR model achieved 98% accuracy in distinguishing between exercise and no exercise, while the stress detection model reached 88% accuracy. Quantisation techniques were employed to reduce memory requirements, allowing both models to fit within 1.3MB of storage.
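As an illustration of the kind of workflow the abstract describes, below is a minimal sketch of post-training full-integer quantisation using the TensorFlow Lite converter, a common route to TFLite Micro targets such as the Arduino Nano 33 BLE Sense. The model architecture, input shape, and calibration data are placeholders for the example, not the thesis's actual networks.

```python
import numpy as np
import tensorflow as tf

# Stand-in model: a small dense classifier in place of the thesis's HAR network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 3)),           # e.g. a window of 3-axis IMU data
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # exercise / no exercise
])

def representative_data_gen():
    # Calibration samples drive the int8 scale/zero-point estimation;
    # random data here stands in for real sensor windows.
    for _ in range(100):
        yield [np.random.rand(1, 128, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("har_model_int8.tflite", "wb") as f:
    f.write(converter.convert())   # int8 weights and activations shrink storage ~4x
```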
To further optimise deployment, this thesis introduces the tiny Inception Module (tIM), a DL layer-sharing approach designed for tinyML systems. The tIM enables pre-existing network layers to be repurposed across models, reducing redundancy while maintaining accuracy. A case study using a tIM demonstrated a 28.3% storage saving compared to the initial multi-model system and a 47.8% reduction compared to non-layer-sharing approaches, at the cost of only a 4% drop in stress-model accuracy.
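The tIM itself is defined in the thesis; as a hedged illustration of the general layer-sharing principle it builds on, the Keras sketch below instantiates one shared backbone whose weights are reused by two task heads, so the shared layers need to be stored only once on the device. All layer sizes and names here are invented for the example.

```python
import tensorflow as tf

# Shared feature extractor instantiated once; both task models reference the
# same layer objects, so the shared weights exist (and are stored) only once.
shared_backbone = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
], name="shared_layers")

inputs = tf.keras.Input(shape=(128, 3))
features = shared_backbone(inputs)

# Task-specific heads: only these small layers are unique to each model.
har_head = tf.keras.layers.Dense(2, activation="softmax", name="har")(features)
stress_head = tf.keras.layers.Dense(2, activation="softmax", name="stress")(features)

har_model = tf.keras.Model(inputs, har_head)
stress_model = tf.keras.Model(inputs, stress_head)
```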
Additionally, Echo State Networks (ESNs) were investigated as an alternative to traditional DL approaches due to their low training cost. However, ESNs have historically been unsuitable for tinyML applications due to their high computational complexity. This thesis presents a novel Multi-Task Learning (MTL)-based ESN approach that shares a single reservoir across multiple datasets, spanning chaotic-system prediction, time-series forecasting, and image classification tasks. The shared ESN reservoir achieved performance competitive with the state-of-the-art literature while significantly reducing Floating Point Operations (FLOPs) and storage requirements. In a system classifying ten different tasks, the MTL-based ESN implementation required fewer FLOPs than a single convolutional layer used for MNIST classification, while achieving superior efficiency compared to MobileNetV2 and MCU-Net.
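A shared-reservoir ESN can be sketched in a few lines of NumPy: the random reservoir is generated once and reused for every task, and only a cheap linear readout is trained per task. This is a generic ESN/MTL illustration under assumed hyperparameters (reservoir size, leak rate, spectral radius, ridge strength), not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 3

# One fixed, untrained reservoir shared by every task (the costly part is reused).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def reservoir_states(inputs, leak=0.3):
    """Run a (T, n_in) time series through the shared leaky reservoir."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)                      # (T, n_res)

def train_readout(states, targets, ridge=1e-6):
    # Only this linear readout is trained, one per task (ridge regression).
    return np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                           states.T @ targets)   # (n_res, n_out)

# e.g. two tasks sharing the same reservoir, each with its own readout:
# W_out_task1 = train_readout(reservoir_states(series1), y1)
# W_out_task2 = train_readout(reservoir_states(series2), y2)
```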
Overall, this PhD research demonstrates that both the tIM and shared ESN reservoirs offer practical solutions for deploying multiple DL tasks on low-power, resource-constrained devices. The proposed methods enable more efficient model deployment, reducing storage and computation costs while maintaining high inference performance, making them promising candidates for future tinyML applications.
| Item Type: | Thesis |
|---|---|
| Creators: | Gibbs, M.J. |
| Date: | March 2025 |
| Rights: | The copyright in this work is held by the author. You may copy up to 5% of this work for private study, or personal, non-commercial research. Any re-use of the information contained within this document should be fully referenced, quoting the author, title, university, degree level and pagination. Queries or requests for any other use, or if a more substantial copy is required, should be directed to the author. |
| Divisions: | Schools > School of Science and Technology |
| Record created by: | Laura Borcherds |
| Date Added: | 18 Dec 2025 17:09 |
| Last Modified: | 18 Dec 2025 17:09 |
| URI: | https://irep.ntu.ac.uk/id/eprint/54889 |