Artificial intelligence (AI) is reshaping the world, from diagnosing disease in hospitals to detecting fraud in financial systems. Yet it also raises pressing questions.
As G7 leaders prepare to convene in Alberta, a critical concern surfaces: how can we develop robust AI systems while safeguarding individual privacy?
The G7 summit presents an opportunity to outline how democratic nations will approach the management of emerging technologies. Although regulations are progressing, they require reliable technical solutions to be effective.
We believe that federated learning (FL) is a highly promising yet often overlooked tool that should be central to these discussions.
As experts in AI, cybersecurity, and public health, we know the data dilemma firsthand. AI relies heavily on information, much of which is sensitive, such as medical histories and financial records. The more data is centralized, the higher the risk of leaks, misuse, or cyberattacks. In the UK, the National Health Service halted a promising AI initiative over concerns about how data was being managed. Canada has raised alarms about storing sensitive personal information, such as health and immigration records, in foreign cloud services. Trust in AI systems is tenuous; once it is lost, innovation stalls.
Why Centralized AI Is Risky
The prevailing method for training AI is to pool all the data in one central location. That may seem efficient in theory, but in practice it creates serious security problems: centralized systems are prime targets for hackers, difficult to regulate, and they concentrate excessive power in the hands of a few large data holders and tech companies.
In contrast, FL brings the algorithm to the data. Each local entity, whether a hospital or a government agency, trains an AI model on its own data, and only the resulting model updates, never the raw records, are sent to a central server for aggregation. This approach minimizes the risk of data breaches while still allowing insights to be drawn from large-scale trends.
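For readers who want the mechanics, here is a minimal sketch of this idea, often called federated averaging, in Python. Everything in it is hypothetical and for illustration only: each simulated client fits a small linear model on its own private data, and only the learned weights travel to the coordinating server, which averages them.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground truth, used only to simulate data

def make_local_data(n):
    """Simulate one client's private dataset (e.g., one hospital's records)."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(X, y, w, lr=0.1, epochs=20):
    """Train on local data only; return updated weights, never the data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

clients = [make_local_data(100) for _ in range(5)]
global_w = np.zeros(2)

for _ in range(10):
    # Each client trains locally; only the weights leave the premises.
    local_weights = [local_update(X, y, global_w.copy()) for X, y in clients]
    # The server averages the updates (weighted equally here for simplicity).
    global_w = np.mean(local_weights, axis=0)

print("Aggregated model weights:", global_w)  # converges toward [2.0, -1.0]
```

Real systems add safeguards this toy version omits, such as secure aggregation and weighting clients by dataset size, but the core property is the same: the raw records never leave their home institution.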
Real-World Applications of Federated Learning
FL could revolutionize how we use AI. Combined with methods such as differential privacy and secure multiparty computation, it significantly reduces the risk of data leaks. In Canada, FL has enabled cancer-detection models to be trained across provinces without transferring sensitive health records. Projects like the Canadian Primary Care Sentinel Surveillance Network show how FL can predict chronic disease while keeping patient data securely within provincial borders. Financial institutions are using it to detect fraud without revealing customer identities, and cybersecurity teams are exploring collaborative defences that do not expose critical logs.
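To make the privacy pairing concrete, here is an equally hypothetical sketch of the differential-privacy step: before a client's update leaves its premises, the update's norm is clipped and calibrated Gaussian noise is added. The clip_norm and noise_scale values below are placeholders; a real deployment would calibrate the noise to a formal privacy budget.

```python
import numpy as np

rng = np.random.default_rng(1)

def privatize_update(update, clip_norm=1.0, noise_scale=0.5):
    """Clip an update's norm and add Gaussian noise before transmission,
    the core mechanism of differentially private federated learning.
    In practice, noise_scale is calibrated to a target privacy budget."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(scale=noise_scale * clip_norm,
                                size=update.shape)

# In the earlier training loop, each client would send
# privatize_update(local_w) instead of its raw weights;
# the server's averaging step is unchanged.
```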
The Urgency for G7 Action
Globally, governments are racing to regulate AI. Initiatives such as Canada's proposed Artificial Intelligence and Data Act and the EU's AI Act are promising steps. Yet without secure ways to collaborate on data-intensive problems, pandemics and cybersecurity among them, these efforts could falter. FL lets jurisdictions work together on shared challenges without sacrificing local control or sovereignty. It turns policy into practice by enabling technical collaboration that respects legal and privacy frameworks.
Moreover, adopting FL sends a vital message: democracies can lead in technological innovation as well as in ethical governance. The G7 summit's location in Alberta is fitting, as the province hosts a vibrant AI ecosystem generating valuable data across multiple industries.
A Foundation for Trust in AI
The trustworthiness of AI depends on the systems beneath it. Many of today's systems rest on outdated notions of centralization and control. Federated learning lays a new groundwork, one that aligns privacy, transparency, and innovation. We do not have to wait for a crisis to act. The necessary tools already exist; what is missing is the political resolve to move them from prototypes to standard practice. If the G7 genuinely aims to foster a safer and more equitable AI landscape, FL should be central to its agenda, not a footnote.