Byte Size · 15 min
One AI Chat to Rule Them All: Connecting Enterprise Tools with MCP
This talk demonstrates integrating enterprise tools like Jira, GitLab, and Slack with an on-premise AI assistant using the Model Context Protocol (MCP). It covers MCP basics, a real-world CERN implementation with open-source models, a live demo, key takeaways, and practical insights for engineers interested in local AI-powered workflow integration.
Karthik SayapparajuCERN
Tuesday, February 10, 12:20-12:35
Room C
Tired of juggling Jira, Confluence, GitLab, and Slack/Mattermost? What if one AI could search tickets, find docs, check PRs, and summarize discussions - while keeping data on-premise and avoiding subscription costs?
I'll show the audience how I have been testing this at CERN using the Model Context Protocol (MCP) - an emerging open standard for AI-tool integration.
What the audience will see:
MCP Introduction (3 min): What MCP is, how recently it emerged, who's driving it forward (Anthropic and a growing community), and why it matters - the "USB-C for AI tools."
Architecture Basics (2 min): How MCP clients, servers, and the protocol work together.
Our On-Premise Implementation (3 min): The stack I have built: Ollama + Docker serving open-source models (Llama, DeepSeek, Mistral) on CERN GPU servers, Open WebUI as the interface, and the mcpo proxy as the MCP client, connecting to MCP servers for Atlassian, GitLab, Mattermost, Figma, and Obsidian.
Live Demo (5 min): Real queries across systems: "Find my open tickets about EDH, check our docs, and summarize team discussions."
Key Takeaways (2 min):
- Basic understanding of MCP protocol
- See a real POC connecting to actual enterprise tools
- Running open-source LLMs locally using Ollama
- Honest assessment: what worked, what didn't
- Blueprint of the architecture to experiment with
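The proxy layer in that stack is typically driven by a Claude-style `mcpServers` config file, which mcpo can load. A sketch of what such a file might look like - the server packages, names, and URL below are illustrative assumptions, not the actual CERN configuration:

```json
{
  "mcpServers": {
    "atlassian": {
      "command": "uvx",
      "args": ["mcp-atlassian"],
      "env": { "JIRA_URL": "https://jira.example.org" }
    },
    "gitlab": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-gitlab"]
    }
  }
}
```

Each entry launches one MCP server as a subprocess; the proxy then exposes their tools over HTTP so a chat frontend like Open WebUI can call them.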
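The client-server split described above can be made concrete with MCP's wire format: the protocol is JSON-RPC 2.0, and a tool invocation travels as a "tools/call" request. Here is a minimal stdlib-only sketch of that message shape; the `jira_search` tool name and its arguments are my own illustration (mirroring the demo query), not part of the spec:

```python
import json

# MCP messages are JSON-RPC 2.0. A client asking a server to invoke a
# tool sends a "tools/call" request; the tool name and arguments here
# are hypothetical, chosen to mirror the live-demo query.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "jira_search",
        "arguments": {"query": "open tickets about EDH"},
    },
}

# A conforming server answers the same id with a result carrying
# content blocks (text, images, etc.):
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "3 open tickets found"}],
    },
}

# The transport (stdio locally, or HTTP) just moves these JSON
# payloads between client and server.
wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/call"
```

This is why the same assistant can talk to Jira, GitLab, and Mattermost alike: every backend exposes the same request/response shape, and only the tool catalog differs.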
Karthik Sayapparaju
Software Engineer with nearly 4 years of experience in data management, backend services, web development, and performance analysis across domains such as scientific research, retail, and AI. My core expertise is in Java and Spring Boot, with a newfound love for AI-integrated systems.
Currently a Full-Stack Developer in the Finance and Administrative sector, focused on building and maintaining in-house systems and third-party integrations within the domain of Human Resources.