Data & AI · Conference · 40 min
How to build your own fun and absurd pair programmer
This session explores building a playful, sarcastic AI assistant using LLMs, Spring Boot, and vector databases. Attendees will learn about Retrieval-Augmented Generation, fast context retrieval, and file system integration, gaining practical insights into developing agentic LLM workflows that go beyond typical, soulless assistants—combining humor, interactivity, and advanced technology.
Alexander Chatzizacharias, JDriven
When and where
Friday, April 24, 16:25-17:05
Skalkotas
Tired of AI assistants that are always so boring and soulless? Alexander was. So, he decided to build his own. An AI assistant with personality, flair, and a healthy dose of sarcasm. Imagine a pair programmer that offers sarcastic feedback, makes absurd suggestions, and threatens to blow up your code when it disagrees with your changes. And when things get too quiet, it might even challenge you to a game of tic-tac-toe.
This session is for anyone who believes the best way to learn new technologies is by playfully breaking them. If you’re curious about LLMs and agents beyond the typical use cases, this talk is for you. You’ll leave with practical insights into building your own agentic LLM workflows using Spring Boot, vector databases, and locally running models. Alexander will talk about Retrieval-Augmented Generation (RAG) flows that feed LLMs the right context, multi-vector search for fast context retrieval, and Model Context Protocol (MCP) integrations that let the assistant directly meddle with your file system.
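To give a flavour of the kind of RAG flow the talk covers, here is a minimal, self-contained Java sketch of the retrieve-augment-generate loop. Everything in it is an illustrative stand-in: the character-frequency "embedding", the in-memory store, and the sample snippets are not the speaker's implementation. In the stack described above, these pieces would be an embedding model, a vector database, and a locally running LLM wired together through Spring Boot.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of a RAG flow: "embed" documents, retrieve the closest
// ones for a question, and prepend them as context before the LLM call.
// The embedding and generation steps are toy stand-ins, not real model calls.
public class RagSketch {

    // Toy "embedding": a character-frequency vector. A real setup would call
    // an embedding model and persist the vectors in a vector database.
    static double[] embed(String text) {
        double[] v = new double[26];
        for (char c : text.toLowerCase().toCharArray()) {
            if (c >= 'a' && c <= 'z') v[c - 'a']++;
        }
        return v;
    }

    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-9);
    }

    record Doc(String text, double[] vector) {}

    public static void main(String[] args) {
        // In-memory "vector store" holding a few made-up project snippets.
        List<Doc> store = new ArrayList<>();
        for (String snippet : List.of(
                "The build uses Maven and targets Java 21.",
                "Unit tests live under src/test/java and run with JUnit 5.",
                "The assistant's sarcasm level is configured in application.yml.")) {
            store.add(new Doc(snippet, embed(snippet)));
        }

        // Retrieval step: rank stored documents by similarity to the question.
        String question = "Where do the unit tests live?";
        double[] q = embed(question);
        List<Doc> topDocs = store.stream()
                .sorted(Comparator.comparingDouble((Doc d) -> cosine(q, d.vector())).reversed())
                .limit(2)
                .toList();

        // Augmentation step: stuff the retrieved context into the prompt.
        StringBuilder prompt = new StringBuilder("Answer using this context:\n");
        topDocs.forEach(d -> prompt.append("- ").append(d.text()).append('\n'));
        prompt.append("Question: ").append(question);

        // Generation step (stubbed): this is where the LLM would be invoked.
        System.out.println(prompt);
    }
}
```

The same three-step shape (retrieve, augment, generate) underlies the Spring Boot and vector database flows the session demonstrates, just with real embeddings, multi-vector search, and a local model instead of these placeholders.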
Come to learn, chuckle, and get inspired to create your own dysfunctional digital sidekick.
Alexander Chatzizacharias
Alexander, a 35-year-old software engineer at JDriven, holds dual Dutch and Greek nationality. He earned his master’s degree in Game Studies from the University of Amsterdam, where he discovered his passion for gamification and software engineering. Alexander aims to bridge the gap between game development and software engineering, believing that both industries have much to learn from each other, and he is dedicated to integrating technologies and methodologies from both fields. He also enjoys experimenting with new technologies and cutting-edge SDKs.