Benchmarking the NXP i.MX8M+ neural processing unit: smart parking case study

Bibliographic Details
Authors: Chaves-González, Edgar; León-Vega, Luis G.
Format: original article
Publication Date: 2022
Description: Nowadays, deep learning has become one of the most popular approaches to computer vision, and it has also reached the Edge. This has led System-on-Chip (SoC) vendors such as NVIDIA, NXP, and Texas Instruments to integrate inference accelerators into their embedded SoCs. This work explores the performance of the NXP i.MX8M Plus Neural Processing Unit (NPU) as one such solution for inference tasks. To measure the performance, we propose an experiment based on a GStreamer pipeline that infers license plates in two stages: license plate detection and character inference. The benchmark samples execution time and CPU usage while running the inference serially and in parallel. The results show that the key benefit of the NPU is freeing the CPU for other tasks: after offloading the license plate detection to the NPU, the overall CPU consumption dropped by 10x. The resulting pipeline achieves an inference rate of 1 Hz, limited by the character inference stage.
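
The article's benchmark runs this two-stage pipeline inside GStreamer; as a rough illustration of the same measurement idea, the sketch below times each stage directly with the TensorFlow Lite Python runtime instead, offloading the detection model to the i.MX8M Plus NPU through NXP's VX external delegate while the character-recognition model stays on the CPU. The model file names, delegate path, and stage-to-model mapping are illustrative assumptions, not details taken from the article.

```python
# Sketch of the two-stage timing idea: detection on the NPU (via TFLite external
# delegate), character recognition on the CPU, sampling per-stage execution time.
# Model file names and the delegate path below are assumptions for illustration.
import time
import numpy as np
import tflite_runtime.interpreter as tflite

# The VX delegate shipped with NXP's i.MX BSPs exposes the NPU to TFLite
# (the library path may differ between BSP releases).
npu_delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")

detector = tflite.Interpreter(
    model_path="tiny_yolo_plate_detect.tflite",    # hypothetical detection model
    experimental_delegates=[npu_delegate],
)
recognizer = tflite.Interpreter(
    model_path="rosetta_char_recognition.tflite",  # hypothetical recognition model
)
detector.allocate_tensors()
recognizer.allocate_tensors()


def timed_invoke(interp, tensor):
    """Run one inference and return its wall-clock time in seconds."""
    inp = interp.get_input_details()[0]
    interp.set_tensor(inp["index"], tensor.astype(inp["dtype"]))
    start = time.perf_counter()
    interp.invoke()
    return time.perf_counter() - start


# Dummy inputs with each model's expected shape, just to drive the timing loop.
frame = np.zeros(detector.get_input_details()[0]["shape"], dtype=np.uint8)
crop = np.zeros(recognizer.get_input_details()[0]["shape"], dtype=np.uint8)

det_times = [timed_invoke(detector, frame) for _ in range(100)]
rec_times = [timed_invoke(recognizer, crop) for _ in range(100)]
print(f"detection  (NPU): {1000 * np.mean(det_times):.1f} ms/frame")
print(f"recognition (CPU): {1000 * np.mean(rec_times):.1f} ms/crop")
```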
Country: Costa Rica
Institution: Instituto Tecnológico de Costa Rica
Repository: RepositorioTEC
Language: English, Spanish
OAI Identifier:oai:repositoriotec.tec.ac.cr:2238/14162
Online Access: https://revistas.tec.ac.cr/index.php/tec_marcha/article/view/6487
https://hdl.handle.net/2238/14162
Keywords: Computer vision
AI accelerator
embedded software
smart cameras
convolutional neural networks
TinyYOLO
Rosetta