
Conference, 50 min
The LLM Smörgåsbord
This talk provides an up-to-date overview of the vast landscape of large language models (LLMs), their multimodal capabilities, practical uses, and deployment options. Attendees will learn how to choose, run, and get the most out of LLMs both locally and in the cloud, with code examples and guidance for users and developers.

John Davies, Incept5
When and where: Schedule TBD
Room 7
Believe it or not, there are already nearly two million LLMs available on Hugging Face, and that figure doesn't include the public-facing services such as ChatGPT, Claude, Gemini, Grok, Mistral, Qwen, DeepSeek, and many more.
LLMs are not just text chatbots or code generators; an increasing number can process speech, images and video, often several of these at once. Some fit on a mobile phone, some require a car-sized investment in hardware, but most will run comfortably on a modern laptop.
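As a back-of-the-envelope illustration of why hardware needs vary so much (a sketch for this listing, not material from the talk itself): a model's weight footprint is roughly its parameter count times the bytes per weight, which is why quantization is what lets large models fit on laptops and phones.

```python
# Rough rule of thumb (illustrative assumption, not from the talk):
# weight memory ~= parameter count x bits per weight / 8.
# Activations and KV cache add more on top, so treat this as a floor.

def weight_footprint_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate memory for the weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bits_per_weight / 8 / 1e9

# A 7B-parameter model at 16-bit precision vs. 4-bit quantization:
fp16 = weight_footprint_gb(7e9, 16)  # ~14 GB: needs a well-equipped GPU
q4 = weight_footprint_gb(7e9, 4)     # ~3.5 GB: fine on a modern laptop

print(f"7B model: {fp16:.1f} GB at fp16, {q4:.1f} GB at 4-bit")
```

The same arithmetic explains the extremes in the abstract: a sub-1B model quantized to 4 bits fits in phone memory, while a frontier-scale model at full precision needs server-class hardware.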
The talk will bring you up to date with the latest models (literally up to date): what they're good at, what people are using them for, and what's not so good; what's worth trying out, a quick "how to get started", and where the future might be going. We will cover how to run LLMs locally and in the cloud, the advantages of local models, and how to get the best out of them. Plenty of code, plenty of examples, but a LOT more LLMs.
This talk is for everyone, especially those programming with or using LLMs; you should walk away with a clear knowledge of the choices available and the LLMs that might get you to the next level.

John Davies
After a degree in Astrophysics at UCL, John started in hardware, then assembler, C, C++ and later Java. Working almost exclusively in finance, he ran FX at Paribas and was a global chief architect at JP Morgan, BNP Paribas and VISA. John has co-founded four successful startups since 2000, selling one of them twice to Nasdaq- and LSE-listed companies. After co-founding Velo Payments with the former president of VISA, John has spun off the AI company Incept5. John has co-authored several Java books and is a frequent speaker at technical and banking conferences around the world. He is married to a French wife and has three boys in their 20s.