Security Conference · 45 min
Who to blame? The AI, The Programmer, or The Prompt?
This interactive session examines real-world AI security vulnerabilities through a courtroom-style debate, inviting the audience to assign responsibility among users, developers, and providers. By analyzing case studies and their technical root causes, it highlights architectural flaws in AI systems and concludes with actionable shared-responsibility best practices for securing AI tools.
Makan Sepehrifar, Code Nomads
When & Where
Tuesday, March 24, 13:00-13:45
Room 3
This interactive session transforms AI security education into an engaging courtroom-style debate. We'll present five critical vulnerabilities affecting modern AI development tools, and after each case study, the audience becomes the jury—voting on who's responsible: the careless user, the negligent developer, or the inadequate service provider.
We'll dissect real CVEs including GitHub Copilot's wormable RCE (CVE-2025-53773), MCP server command injections (CVE-2025-53107, CVE-2025-5277), GitHub's private repository leak via prompt injection, and Microsoft Copilot's zero-click data exfiltration (CVE-2025-32711). Each case reveals technical root causes through collaborative analysis.
The conclusion challenges the "blame game" itself: these vulnerabilities expose fundamental architectural weaknesses in agentic AI systems where traditional security models fail. We'll establish that securing AI tools demands a shared responsibility framework—developers must code defensively, providers must architect securely, and users must understand AI-specific risks. The session culminates with actionable best practices for each stakeholder.
Makan Sepehrifar
Makan Sepehrifar is a software architect with over 20 years of experience across multiple domains. His fascination with organizational behavior and cognitive psychology has led him to redefine how technology interfaces with human understanding. Through his public speaking on developer experience and the role of generative AI in shaping its future, he advocates a fundamental truth: every major technological shift demands an equally major shift in mindset. His work explores how social and technical transformations happen together, in the belief that this interplay of mindset, organizational, and technological change is the true meaning of agile.