A study of checkpointing in large scale training of deep neural networks

 

Saved in:
Bibliographic Details
Authors: Rojas, Elvis; Kahira, Albert Njoroge; Meneses, Esteban; Bautista-Gomez, Leonardo; Badia, Rosa M.
Format: preprint article
Publication Date: 2021
Description: Deep learning (DL) applications are increasingly being deployed on HPC systems to leverage the massive parallelism and computing power of those systems. While DL frameworks have put significant effort into facilitating distributed training, fault tolerance has been largely ignored. Checkpoint-restart is a common fault tolerance technique in HPC workloads. In this work, we examine the checkpointing implementation of popular DL platforms. We perform experiments with three state-of-the-art DL frameworks common in HPC (Chainer, PyTorch, and TensorFlow). We evaluate the computational cost of checkpointing, file formats and file sizes, the impact of scale, and deterministic checkpointing. Our evaluation shows some critical differences in checkpoint mechanisms and exposes several bottlenecks in existing checkpointing implementations. We provide discussion points that can aid users in selecting a fault-tolerant framework to use in HPC. We also provide take-away points that framework developers can use to facilitate better checkpointing of DL workloads in HPC.
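For context, the checkpoint-restart pattern the study examines typically looks like the following minimal sketch, shown here in PyTorch (one of the evaluated frameworks); the model, optimizer, and file name are illustrative and not taken from the paper.

import os
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                      # placeholder model, not from the paper
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
ckpt_path = "checkpoint.pt"                     # hypothetical checkpoint file

# Restart: resume from the last checkpoint if one exists.
start_epoch = 0
if os.path.exists(ckpt_path):
    state = torch.load(ckpt_path)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    start_epoch = state["epoch"] + 1

for epoch in range(start_epoch, 10):
    # ... one epoch of training would run here ...
    # Checkpoint: persist model and optimizer state so training can restart after a failure.
    torch.save(
        {"model": model.state_dict(),
         "optimizer": optimizer.state_dict(),
         "epoch": epoch},
        ckpt_path,
    )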
Country: Repositorio UNA
Institution: Universidad Nacional de Costa Rica
Repository: Repositorio UNA
Language: English
OAI Identifier: oai:https://repositorio.una.ac.cr:11056/26772
Online Access: http://hdl.handle.net/11056/26772
https://doi.org/10.48550/arXiv.2012.00825
Access Level: open access
Keywords: DEEP LEARNING
RESILIENCE
NEURAL NETWORKS
HIGH PERFORMANCE COMPUTING