A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems Prudencio, Rafael Figueiredo; Maximo, Marcos R. O. A.; Colombini, Esther Luna In: IEEE Transactions on Neural Networks and Learning Systems, pp. 1-0, 2023.
@article{10078377,
title = {A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems},
author = {Rafael Figueiredo Prudencio and Marcos R. O. A. Maximo and Esther Luna Colombini},
doi = {10.1109/TNNLS.2023.3250269},
year = {2023},
date = {2023-01-01},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
pages = {1-0},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dispositivos, Eu Escolho Vocês: Seleção de Clientes Adaptativa para Comunicação Eficiente em Aprendizado Federado (Best Paper) Souza, Allan; Bittencourt, Luiz; Cerqueira, Eduardo; Loureiro, Antonio; Villas, Leandro In: Anais do XLI Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos, pp. 1–14, SBC, Brasília/DF, 2023, ISSN: 2177-9384.
@inproceedings{sbrc,
title = {Dispositivos, Eu Escolho Vocês: Seleção de Clientes Adaptativa para Comunicação Eficiente em Aprendizado Federado (Best Paper)},
author = {Allan Souza and Luiz Bittencourt and Eduardo Cerqueira and Antonio Loureiro and Leandro Villas},
doi = {10.5753/sbrc.2023.499},
issn = {2177-9384},
year = {2023},
date = {2023-01-01},
booktitle = {Anais do XLI Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos},
pages = {1–14},
publisher = {SBC},
address = {Brasília/DF},
abstract = {Federated Learning (FL) is a distributed approach for the collaborative training of machine learning models. FL requires a high level of communication between devices and a central server, which creates several challenges, including communication bottlenecks and network scalability issues. In this work, we introduce DEEV, a solution to reduce the overall communication and computation costs of training a model in the FL environment. DEEV employs a client-selection strategy that dynamically adapts the number of devices that train the model and the number of rounds required to reach convergence. A use case on a human activity recognition dataset is carried out to evaluate DEEV and compare it with other state-of-the-art approaches. Experimental evaluations show that DEEV efficiently reduces the overall communication and computation overhead of training a model and promotes its convergence. In particular, DEEV reduces communication by up to 60% and computation overhead by up to 90% compared to approaches from the literature, while providing good convergence even in scenarios where data are not independently and identically distributed among client devices.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
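The selection mechanism the abstract describes can be pictured with a short sketch: each round, only clients still performing below the round average are selected, and the selected set shrinks over rounds so communication and computation decay as the model converges. The below-average criterion, the decay rule, and all names below are illustrative assumptions, not the authors' exact algorithm.

    from typing import Dict, List

    def select_clients(accuracies: Dict[str, float], round_num: int,
                       decay: float = 0.01) -> List[str]:
        # Average accuracy across all reporting clients this round.
        avg = sum(accuracies.values()) / len(accuracies)
        # Bias selection toward clients still performing below average.
        lagging = [cid for cid, acc in accuracies.items() if acc < avg]
        # Shrink the participant pool over rounds so communication and
        # computation overhead decay as the global model converges.
        keep = max(1, int(len(lagging) * (1.0 - decay) ** round_num))
        lagging.sort(key=lambda cid: accuracies[cid])  # worst performers first
        return lagging[:keep]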
Compressed Client Selection for Efficient Communication in Federated Learning Mohamed, Aissa Hadj; Assumpção, Nícolas R. G.; Astudillo, Carlos A.; Souza, Allan M.; Bittencourt, Luiz F.; Villas, Leandro A. In: 2023 IEEE 20th Consumer Communications & Networking Conference (CCNC), pp. 508-516, 2023, ISSN: 2331-9860.
@inproceedings{10059659,
title = {Compressed Client Selection for Efficient Communication in Federated Learning},
author = {Aissa Hadj Mohamed and Nícolas R. G. Assumpção and Carlos A. Astudillo and Allan M. Souza and Luiz F. Bittencourt and Leandro A. Villas},
doi = {10.1109/CCNC51644.2023.10059659},
issn = {2331-9860},
year = {2023},
date = {2023-01-01},
booktitle = {2023 IEEE 20th Consumer Communications & Networking Conference (CCNC)},
pages = {508-516},
abstract = {Federated learning (FL) is a distributed approach that enables collaborative training of a shared machine learning (ML) model for a given task. FL requires bandwidth-demanding communication between devices and a central server, which causes many issues such as communication bottlenecks and scaling problems in the network. Therefore, we introduce the CCS (Compressed Client Selection) algorithm, aimed at decreasing the overall communication costs of fitting a model in the FL environment. CCS employs a biased client-selection strategy that reduces the number of devices training the ML model and the number of rounds required to reach convergence. In addition, the Count Sketch compression method is implemented to reduce the overhead in client-to-server communication. A use case on the Human Activity Recognition dataset is performed to evaluate CCS and compare it with other state-of-the-art approaches. Experimental evaluations show that CCS efficiently reduces the overall communication overhead of fitting a model and reaching convergence in an FL environment. In particular, CCS reduces the communication overhead by up to 90% compared to literature approaches while providing good convergence even in scenarios where the data are not independently and identically distributed among client devices.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
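The abstract names Count Sketch as the compression method for client-to-server traffic. The sketch below shows the standard Count Sketch construction applied to a flat parameter update; the sizes, seed handling, and function name are illustrative assumptions, not the paper's implementation. The client then transmits only the small rows-by-width summary, which is where the communication savings come from.

    import numpy as np

    def count_sketch(update: np.ndarray, rows: int = 5, width: int = 256,
                     seed: int = 0) -> np.ndarray:
        # The seed is shared with the server so it can query the same hashes.
        rng = np.random.default_rng(seed)
        n = update.size
        buckets = rng.integers(0, width, size=(rows, n))  # coordinate -> bucket
        signs = rng.choice((-1.0, 1.0), size=(rows, n))   # random sign per coordinate
        sketch = np.zeros((rows, width))
        for r in range(rows):
            # Accumulate each signed coordinate into its hashed bucket.
            np.add.at(sketch[r], buckets[r], signs[r] * update)
        return sketch  # rows x width summary, far smaller than the raw update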
NeuralMatch: Identificando a Similaridade de Clientes baseado em Modelos no Aprendizado Federado Talasso, Gabriel; Souza, Allan; Villas, Leandro In: Anais Estendidos do XLI Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos, pp. 176–183, SBC, Brasília/DF, 2023, ISSN: 2177-9384.
@inproceedings{sbrc_estendido,
title = {NeuralMatch: Identificando a Similaridade de Clientes baseado em Modelos no Aprendizado Federado},
author = {Gabriel Talasso and Allan Souza and Leandro Villas},
doi = {10.5753/sbrc_estendido.2023.808},
issn = {2177-9384},
year = {2023},
date = {2023-01-01},
booktitle = {Anais Estendidos do XLI Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos},
pages = {176–183},
publisher = {SBC},
address = {Brasília/DF},
abstract = {Federated learning is a distributed machine learning technique that allows multiple devices to collaborate in training a common model while preserving the privacy of user data. However, federated learning faces challenges related to non-identically distributed and unbalanced data, which can result in less accurate models. To address this, NeuralMatch was proposed: a framework for identifying model similarity in federated learning, capable of identifying the similarity between clients without any data sharing. The proposed framework can help in developing more efficient federated learning solutions to deal with the problems of non-identically distributed and unbalanced data.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
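The core idea, measuring client similarity from model parameters alone so no raw data is shared, can be sketched as a cosine similarity over flattened weights. The metric and names below are assumptions for illustration; the paper may use a different similarity measure. The resulting pairwise scores could then, for instance, be clustered to group clients with similar data distributions.

    import numpy as np

    def model_similarity(weights_a: list, weights_b: list) -> float:
        # Flatten every layer of each client's model into one vector.
        a = np.concatenate([np.ravel(w) for w in weights_a])
        b = np.concatenate([np.ravel(w) for w in weights_b])
        # Cosine similarity: near 1.0 for clients whose models point the
        # same way, near 0.0 for unrelated ones.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))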
FedPredict: Combining Global and Local Parameters in the Prediction Step of Federated Learning Capanema, Cláudio G. S.; Souza, Allan M.; Silva, Fabrício A.; Villas, Leandro A.; Loureiro, Antonio A. F. In: IEEE 19th International Conference on Distributed Computing in Smart Systems and Internet of Things (DCOSS), IEEE, Pafos/Cyprus, 2023.
@inproceedings{fed_predict,
title = {FedPredict: Combining Global and Local Parameters in the Prediction Step of Federated Learning},
author = {Cláudio G. S. Capanema and Allan M. Souza and Fabrício A. Silva and Leandro A. Villas and Antonio A. F. Loureiro},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
booktitle = {IEEE 19th International Conference on Distributed Computing in Smart Systems and Internet of Things (DCOSS)},
publisher = {IEEE},
address = {Pafos/Cyprus},
abstract = {In traditional Federated Learning (FL), such as FedAvg, the main objective is to compute a generalized model applied to all clients. This approach is not effective in the non-IID scenario, where each client has a specific data distribution. As an alternative, personalized FL has proven to be an important research direction for dealing with clients' particularities. However, part of these solutions must be reexamined when a new client (i.e., one trained only a few times or never trained) is added to the FL process. To address these problems, we propose FedPredict, a simple but effective federated learning approach that combines global and local (i.e., personalized) model parameters of neural networks, considering their evolution and update levels. This combination is essential because our method is a plugin that operates in the prediction/inference step on the FL client side, which means that there is no modification to the learning process, and it can be coupled with other techniques. Compared to state-of-the-art solutions, FedPredict converges faster while achieving greater accuracy in various scenarios, including when new clients are added.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
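Since FedPredict operates only at the client's prediction step, its combination of global and local parameters can be pictured as a blend computed just before inference, with training left untouched. The linear blend and the staleness-based weight below are illustrative assumptions consistent with the abstract, not the paper's exact rule.

    def combine_for_prediction(local_w: list, global_w: list,
                               local_rounds: int, current_round: int) -> list:
        # Staleness in [0, 1]: high when this client has trained in few
        # rounds relative to the global model's age, so the global
        # parameters dominate; low for a well-trained client, preserving
        # its personalization. Works on lists of per-layer numpy arrays
        # or plain floats.
        staleness = 1.0 - local_rounds / max(1, current_round)
        return [(1.0 - staleness) * lw + staleness * gw
                for lw, gw in zip(local_w, global_w)]

Under this rule, a brand-new client (local_rounds = 0) predicts with the global model alone, which matches the new-client scenario the abstract highlights.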
Resource Aware Client Selection for Federated Learning in IoT Scenarios Maciel, Filipe; Souza, Allan M.; Bittencourt, Luiz F.; Villas, Leandro A. In: IEEE 19th International Conference on Distributed Computing in Smart Systems and Internet of Things (DCOSS), IEEE, Pafos/Cyprus, 2023.
@inproceedings{rawcs,
title = {Resource Aware Client Selection for Federated Learning in IoT Scenarios},
author = {Filipe Maciel and Allan M. Souza and Luiz F. Bittencourt and Leandro A. Villas},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
booktitle = {IEEE 19th International Conference on Distributed Computing in Smart Systems and Internet of Things (DCOSS)},
publisher = {IEEE},
address = {Pafos/Cyprus},
abstract = {Machine learning optimizes performance in many embedded applications. A weak point of many learning solutions is the intensive use of data and computational resources required for training the model. By default, client devices send data to a solution developer's server to execute the training process in a more computationally powerful environment. However, this approach can compromise the client's privacy, as data is transmitted to third parties for processing. Federated learning solves this problem by training the model on the client devices, thus without sharing data. The trained models are then aggregated on the server to create a generalized version that can run on every client. The federated learning protocol involves selecting which clients will participate in each training round, with selection criteria focused on maximizing the number of clients per round, controlling fairness, lowering round discards, and managing resources. However, existing selection algorithms neglect the minimization of battery consumption, which is critical in scenarios where clients have limited resources. In this paper, we propose a client-selection mechanism for a federated learning protocol that considers energy, processing capacity, and network quality as the determining criteria for the decision. Compared to a state-of-the-art selection technique, our algorithm saves resources while maintaining the model's accuracy.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
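The decision criteria named in the abstract (energy, processing capacity, and network quality) suggest a filtering step before selection. A minimal sketch follows, assuming hypothetical normalized metrics and thresholds; the fields and cutoffs are illustrative, not the paper's.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class ClientStatus:
        battery: float       # remaining charge, normalized to [0, 1]
        cpu_capacity: float  # available processing capacity, in [0, 1]
        link_quality: float  # network quality estimate, in [0, 1]

    def eligible_clients(clients: Dict[str, ClientStatus],
                         min_battery: float = 0.3, min_cpu: float = 0.2,
                         min_link: float = 0.5) -> List[str]:
        # Keep only devices that can finish a round without draining their
        # battery or stalling the aggregation on a poor link.
        return [cid for cid, s in clients.items()
                if s.battery >= min_battery
                and s.cpu_capacity >= min_cpu
                and s.link_quality >= min_link]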
FLEXE: Investigating Federated Learning in Connected Autonomous Vehicle Simulations Lobato, Wellington; Costa, Joahannes B. D. Da; de Souza, Allan M.; Rosário, Denis; Sommer, Christoph; Villas, Leandro A. In: 2022 IEEE 96th Vehicular Technology Conference (VTC2022-Fall), pp. 1-5, 2022, ISSN: 2577-2465.
@inproceedings{10012905,
title = {FLEXE: Investigating Federated Learning in Connected Autonomous Vehicle Simulations},
author = {Wellington Lobato and Joahannes B. D. Da Costa and Allan M. de Souza and Denis Rosário and Christoph Sommer and Leandro A. Villas},
doi = {10.1109/VTC2022-Fall57202.2022.10012905},
issn = {2577-2465},
year = {2022},
date = {2022-09-01},
booktitle = {2022 IEEE 96th Vehicular Technology Conference (VTC2022-Fall)},
pages = {1-5},
abstract = {Due to the increased computational capacity of Connected and Autonomous Vehicles (CAVs) and worries about transferring private information, it is becoming more and more appealing to store data locally and move network computing to the edge. This trend also extends to Machine Learning (ML), where Federated Learning (FL) has emerged as an attractive solution for preserving privacy. Today, to evaluate the implemented vehicular FL mechanisms for ML training, researchers often disregard the impact of CAV mobility, network topology dynamics, or communication patterns, all of which have a large impact on the final system performance. To address this, this work presents FLEXE, an open-source extension to Veins that offers researchers a simulation environment to run FL experiments in realistic scenarios. FLEXE combines the popular Veins framework with the OpenCV library. Using the example of traffic sign recognition, we demonstrate how FLEXE can support investigations of FL techniques in a vehicular environment.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
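FLEXE itself is an extension to Veins (C++/OMNeT++), so no Python code from the paper is implied here. Purely as a language-neutral illustration, the aggregation step each simulated FL round performs is typically standard FedAvg, sketched below under that assumption.

    def fedavg(client_weights: list, client_sizes: list) -> list:
        # client_weights: one model per vehicle, each a list of layer arrays;
        # client_sizes: local training-sample counts, used as weights.
        total = float(sum(client_sizes))
        num_layers = len(client_weights[0])
        # Weighted average of each layer across all participating vehicles.
        return [sum(n / total * cw[layer]
                    for cw, n in zip(client_weights, client_sizes))
                for layer in range(num_layers)]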