Proposal of an open-source accelerators library for inference of transformer networks in edge devices based on Linux
| Authors: | , , , , , |
|---|---|
| Format: | Original article |
| Status: | Published version |
| Publication Date: | 2024 |
| Description: | Transformer networks have been a major milestone in the natural language processing field and have powered technologies like ChatGPT, which are undeniably changing people’s lives. This article discusses the characteristics and computational complexity of Transformer networks, as well as the potential for improving their performance in low-resource environments through the use of hardware accelerators. This research has the potential to significantly improve the performance of Transformers on edge and low-end devices. In addition, Edge Artificial Intelligence, Hardware Acceleration, and Tiny Machine Learning algorithms are explored. The proposed methodology includes a software layer and a hardware layer, with a minimal Linux-based image built on top of a synthesized RTL design. The proposal also includes a library of hardware accelerators that can be customized to select the desired accelerators based on the device’s resources and the operations to be accelerated. (An illustrative sketch of accessing such an accelerator from Linux userspace follows this record.) |
| Country: | Portal de Revistas TEC |
| Institution: | Instituto Tecnológico de Costa Rica |
| Repository: | Portal de Revistas TEC |
| Language: | English |
| OAI Identifier: | oai:ojs.pkp.sfu.ca:article/7225 |
| Online Access: | https://revistas.tec.ac.cr/index.php/tec_marcha/article/view/7225 |
| Keywords: | Artificial intelligence; driver; FPGA; hardware accelerator; Linux; transformers |
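
The record above describes a minimal Linux image running on top of a synthesized RTL design with a customizable library of hardware accelerators. As a rough, hedged illustration of how such a memory-mapped FPGA accelerator is commonly reached from Linux userspace, the C sketch below maps a register block through `/dev/mem`. The base address, register offsets, and register semantics are hypothetical placeholders and are not the API or register map of the library proposed in the article.

```c
/*
 * Illustrative sketch only: generic userspace access to a memory-mapped
 * FPGA accelerator via /dev/mem. All addresses and register meanings are
 * hypothetical assumptions, not the article's accelerator library.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define ACC_BASE_ADDR 0x43C00000u  /* hypothetical bus base address */
#define ACC_MAP_SIZE  0x1000u      /* one 4 KiB register page */
#define REG_CTRL      0x00u        /* hypothetical control register */
#define REG_STATUS    0x04u        /* hypothetical status register */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    /* Map the accelerator's register page into this process. */
    volatile uint32_t *regs = mmap(NULL, ACC_MAP_SIZE,
                                   PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, ACC_BASE_ADDR);
    if (regs == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    regs[REG_CTRL / 4] = 1u;                  /* start the accelerator */
    while ((regs[REG_STATUS / 4] & 1u) == 0)  /* poll a "done" bit */
        ;
    printf("accelerator finished\n");

    munmap((void *)regs, ACC_MAP_SIZE);
    close(fd);
    return 0;
}
```

In practice, a production driver would expose the device through a kernel module or UIO rather than raw `/dev/mem`; the snippet only conveys the general memory-mapped access pattern implied by a Linux-on-RTL setup.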