
Isaac Marroquí Penalva

RepuNet: A Reputation System for Mitigating Malicious Clients in DFL

Federated Learning (FL) has become a key approach for training machine learning models without sharing raw data. However, when moving toward decentralized federated learning (DFL)—where there is no central server—new security challenges emerge. In such systems, participants (clients) directly exchange… Read More »RepuNet: A Reputation System for Mitigating Malicious Clients in DFL

DEMO: NEBULA – Decentralized Federated Learning for Heterogeneous Networks

Federated learning (FL) has emerged as a key approach for training machine learning models without sharing raw data, making it highly relevant for privacy-sensitive applications. However, many existing FL frameworks rely on a central coordinator, which can introduce bottlenecks and… Read More »DEMO: NEBULA – Decentralized Federated Learning for Heterogeneous Networks

S-VOTE: Similarity-based Voting for Client Selection in Decentralized Federated Learning

This paper introduces S-VOTE, a similarity-based voting mechanism designed to improve both efficiency and model performance in Decentralized Federated Learning (DFL). Unlike traditional federated learning, DFL operates without a central server, relying on peer-to-peer communication. While this avoids bottlenecks and… Read More »S-VOTE: Similarity-based Voting for Client Selection in Decentralized Federated Learning

HyperDtct: Hypervisor-Based Ransomware Detection using System Calls

This paper presents HyperDtct, a hypervisor-based framework for detecting ransomware by monitoring system call behavior from outside the guest operating system. Rather than relying on in-guest agents or signature-based methods, both of which can be evaded by modern ransomware, HyperDtct… Read More »HyperDtct: Hypervisor-Based Ransomware Detection using System Calls

ProFe: Communication-Efficient Decentralized Federated Learning via Distillation and Prototypes

This paper introduces ProFe, a new algorithm designed to make Decentralized Federated Learning (DFL) more communication-efficient without compromising model performance. In DFL, clients collaborate without a central server, which avoids single points of failure but creates significant communication overhead—especially when nodes have… Read More »ProFe: Communication-Efficient Decentralized Federated Learning via Distillation and Prototypes