Is Decentralized Learning Truly the Guardian of Our Privacy?
Decentralized learning, a paradigm that promises collaborative model-building without pooling everyone’s data in one vulnerable location, has often been hailed as a fortress of confidentiality. Instead of sending raw data to a single central server, participants in these systems share only updates to a common model with a select group of neighbors. At first glance, this appears far more private than a traditional centralized setup. But could our trust in “neighbor-only” communication be overly optimistic? A remarkable study titled “Privacy Attacks in Decentralized Learning,” conducted by a team of researchers from EPFL, Université de Lille, Inria, Centrale Lille, CNRS, and Inria–Univ Montpellier, reveals that even in a fully decentralized system, malicious participants can reconstruct sensitive information about your data, even if they sit many hops away from you on the network. The investigation throws a bright spotlight on the vulnerabilities of gossip-based protocols and suggests that we should not assume our data is safe merely because we share updates within a small circle of neighbors.
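To make the setup concrete, here is a minimal sketch of one gossip-based training loop, assuming a ring topology, a linear model, and plain neighborhood averaging. These are illustrative choices on my part, not the exact protocol analyzed in the paper; the point to notice is that the model vectors exchanged in the averaging step are precisely what an attacker gets to observe.

```python
import numpy as np

# Minimal sketch of gossip-based decentralized learning.
# Topology, model, and update rule are illustrative assumptions,
# not the specific protocol studied in the paper.

rng = np.random.default_rng(0)
n_users, dim = 5, 3

# Ring topology: each user communicates only with its two neighbors.
neighbors = {i: [(i - 1) % n_users, (i + 1) % n_users] for i in range(n_users)}

# Each user holds private data and a local copy of the model.
data = [rng.normal(size=(10, dim)) for _ in range(n_users)]
targets = [x @ np.ones(dim) + rng.normal(size=10) for x in data]
models = [rng.normal(size=dim) for _ in range(n_users)]

def local_gradient(w, x, y):
    """Gradient of mean squared error for a linear model."""
    return 2 * x.T @ (x @ w - y) / len(y)

lr = 0.1
for step in range(100):
    # 1) Local step on private data: raw data never leaves the user.
    updated = [w - lr * local_gradient(w, x, y)
               for w, x, y in zip(models, data, targets)]
    # 2) Gossip averaging: each user mixes its model with its neighbors'.
    #    These exchanged vectors are the attack surface the paper studies.
    models = [np.mean([updated[i]] + [updated[j] for j in neighbors[i]], axis=0)
              for i in range(n_users)]

print("user 0's model after training:", np.round(models[0], 3))
```

Even though no raw data crosses the network in this loop, every averaging round leaks a function of each user’s private gradients, and those leaked quantities propagate beyond immediate neighbors as the rounds accumulate.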
Is There a Problem in Decentralized Learning?
Over the past few years, institutions ranging from hospital networks to social media platforms have scrambled to protect user privacy while still enabling collective machine learning. Indeed…