A novel aggregation method to promote safety security for poisoning attacks in Federated Learning

Author(s): Barros, Pedro H.; Ramos, Heitor S.; IEEE
Total number of authors: 3
Document type: Scientific article
Source: 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022); 6 pp., 2022-01-01.
Abstract

Federated learning enables devices to collaboratively learn a shared prediction model while keeping all training data on the local device, thereby preserving clients' privacy. Vanilla federated learning models are susceptible to model poisoning attacks, in which malicious nodes inject fake model weights to steer the global model toward their own objective. Through this attack, malicious nodes aim to produce inaccurate global models, or even global models that make wrong inferences. This work proposes a new similarity function for federated learning applications to tackle the model poisoning vulnerability. Our method uses a new secure aggregation scheme for local models based on quantifying the heterogeneity of the data. In addition, this quantifier benefits from theoretical results established for models that use the auxiliary space proposed in our approach. To assess the general behavior of our method, we evaluate our proposal in a federated scenario under non-IID data in which all local models are honest, where it outperforms the vanilla model by 52.84% and 58.88% on two real-world datasets, respectively. Moreover, our proposal reaches F1-scores of 81.79% and 73.92% in a model poisoning experiment, outperforming the other state-of-the-art methods by 8.60% and 5.54%, respectively.
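The paper itself does not include code, but the abstract's core idea, scoring each client's update with a similarity function and aggregating only trustworthy updates, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the authors' actual similarity function is based on quantifying data heterogeneity in an auxiliary space, whereas this stand-in uses cosine similarity to the coordinate-wise median update, and the function name `robust_aggregate` and its threshold parameter are hypothetical.

```python
import numpy as np

def robust_aggregate(client_updates, sim_threshold=0.0):
    """Aggregate client model updates, filtering suspected poisoners.

    Illustrative sketch only: cosine similarity to the coordinate-wise
    median update stands in for the paper's heterogeneity-based
    similarity function, which is not reproduced here.
    """
    updates = np.stack(client_updates)       # shape: (n_clients, n_params)
    reference = np.median(updates, axis=0)   # robust reference update

    # Cosine similarity of each client update to the reference.
    norms = np.linalg.norm(updates, axis=1) * np.linalg.norm(reference) + 1e-12
    sims = updates @ reference / norms

    # Keep only updates whose similarity exceeds the threshold, then
    # average the survivors (plain FedAvg over the accepted clients).
    mask = sims > sim_threshold
    if not mask.any():                        # fall back to the median
        return reference
    return updates[mask].mean(axis=0)

# Toy usage: four honest clients and one poisoned client pushing the
# model in the opposite direction.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=10) for _ in range(4)]
poisoned = [-5.0 * np.ones(10)]
agg = robust_aggregate(honest + poisoned)
print(np.round(agg, 2))  # close to the honest mean; the attacker is filtered out
```

In this toy run the poisoned update has negative cosine similarity to the median reference, so it is excluded and the aggregate stays near the honest clients' mean; a vanilla average over all five updates would be dragged toward the attacker's weights.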

FAPESP Process: 20/05121-4 - Analysis of heterogeneous data in urban computing
Grantee: Heitor Soares Ramos Filho
Support type: Regular Research Grant