Keynote · 35 min
The Promise of Trustworthy AI
The session challenges the trade‑off between data privacy and AI performance, presenting a system that unites Federated Learning and Fully Homomorphic Encryption. It enables collaborative model training without sharing raw data, treating privacy as a built‑in property while addressing centralized training’s security, compliance, and data‑gravity limitations.
César Soto Valero · SEB Group
When and Where
Tuesday, March 24, 09:15-09:50
Room 4
Teams building with AI are often presented with a false choice: share data to get frontier models, or protect privacy and accept weaker results. It is a convenient story, especially for anyone who benefits from accessing your data, but it is not the full story.
This session introduces a counterintuitive paradigm in which AI models can improve without ever collecting your raw data, and organizations can collaborate without giving up control. Combining Federated Learning (training locally while learning globally) with Fully Homomorphic Encryption (cryptographic computation on encrypted model updates) yields a system that treats privacy not as a policy or a feature, but as a structural property by design.
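To make "train locally, learn globally" concrete ahead of the session, the sketch below shows federated averaging (FedAvg) rounds for a toy linear model. It is not from the talk's materials: the model, the data, and names like local_update and fed_avg are purely illustrative.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        # One client's training pass: plain gradient descent on private data.
        # The raw (X, y) never leaves this function; only the weight delta does.
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
            w -= lr * grad
        return w - weights

    def fed_avg(weights, deltas, sizes):
        # Server step: combine client deltas, weighted by local dataset size.
        total = sum(sizes)
        return weights + sum(d * (s / total) for d, s in zip(deltas, sizes))

    # Three clients, each holding data the others (and the server) never see.
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
    w = np.zeros(3)
    for _ in range(10):  # ten federated rounds
        deltas = [local_update(w, X, y) for X, y in clients]
        w = fed_avg(w, deltas, [len(y) for _, y in clients])

The key property: the server's fed_avg step sees only weight deltas and dataset sizes, never the clients' raw (X, y) pairs.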
Through practical examples, this session explores why centralized training creates hidden constraints (security exposure, compliance friction, and data gravity) that limit real-world adoption. You will learn how Federated Learning flips the classic “bring data to the code” approach, and how Fully Homomorphic Encryption closes the final, subtle leak: what model updates can reveal, even when your valuable data never leaves your yard.
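Those plaintext deltas are exactly the leak the encryption layer targets. As a toy illustration of the idea, not the talk's actual stack, the sketch below sums updates under the Paillier cryptosystem. Paillier is only additively homomorphic rather than fully homomorphic, but addition is all the averaging step needs; the tiny hardcoded primes and the fixed-point SCALE are chosen for readability, not security.

    import math
    import random

    # Toy Paillier setup: additively homomorphic encryption over Z_n.
    # Hardcoded tiny primes -- far too small for real security.
    p, q = 293, 433
    n = p * q
    n2 = n * n
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # since g = n + 1, L(g^lam mod n^2) = lam mod n

    def encrypt(m):
        # c = g^m * r^n mod n^2 for a random r
        r = random.randrange(1, n)
        return pow(g, m, n2) * pow(r, n, n2) % n2

    def decrypt(c):
        # m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n
        return (pow(c, lam, n2) - 1) // n * mu % n

    def he_add(c1, c2):
        # Multiplying ciphertexts adds the underlying plaintexts.
        return c1 * c2 % n2

    SCALE = 100  # fixed-point scale so fractional updates fit integer plaintexts
    updates = [0.12, -0.05, 0.30]  # one weight's update from each of three clients
    cts = [encrypt(round(u * SCALE) % n) for u in updates]

    agg = cts[0]
    for c in cts[1:]:
        agg = he_add(agg, c)  # the aggregator only ever touches ciphertexts

    total = decrypt(agg)
    if total > n // 2:  # map from Z_n back to signed integers
        total -= n
    print(total / SCALE)  # 0.37: the summed update, visible only after decryption

With this shape, the aggregator multiplies ciphertexts to sum updates it cannot read; only the holder of the decryption key ever sees the combined result.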
César Soto Valero
César Soto Valero is a Data Scientist at SEB Group, with experience building and maintaining AI/ML systems that serve millions of customers in real time. He earned a PhD in Computer Science from KTH Royal Institute of Technology, where his research focused on novel low-level program analysis techniques for mitigating software bloat and improving the efficiency, security, and maintainability of large codebases. With a background spanning academic research and industrial engineering, César bridges the gap between theory and practice by translating research breakthroughs into robust, production-grade software systems. He is passionate about engineering, AI, and the practical challenges of deploying scalable, reliable software that serves real user needs.