Data & (Gen)AI
Conference, 50 min
INTERMEDIATE

Avoid common LLM pitfalls

This talk will give an overview of the latest advancements in, and limitations of, multi-modal Large Language Models (LLMs). It will also discuss techniques to overcome common LLM issues, including response schemas, Retrieval-Augmented Generation (RAG), Function Calling, and Grounding. The focus will be on enhancing LLM outputs and linking them to verifiable sources.

Mete Atamel, Google

When and where

Thursday, October 10, 17:40-18:30
Room 9
Description
It’s easy to generate content with a Large Language Model (LLM), but the output is often badly formatted and suffers from hallucinations (fabricated content), outdated information (not based on the latest data), reliance on public data only (no private data), and a lack of citations back to original sources. Not ideal for real-world applications. In this talk, we’ll provide a quick overview of the latest advancements in multi-modal LLMs, highlighting their capabilities and limitations. We’ll then explore various techniques to overcome common LLM pitfalls, including response schemas to tame LLM outputs, Retrieval-Augmented Generation (RAG) to enhance prompts with relevant data, Function Calling to connect LLMs to external APIs, Grounding to link LLM outputs to verifiable information sources, and more.
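
To make the RAG idea concrete, here is a minimal, framework-agnostic sketch of the pattern the abstract describes: retrieve the documents most relevant to a question, then inject them into the prompt so the model answers from that context and can cite sources. The toy corpus, the naive string-similarity retriever, and the `call_llm` placeholder are illustrative assumptions only; a real system would use embeddings, a vector store, and an actual LLM client.

```python
# Minimal RAG sketch: retrieve relevant context, augment the prompt, call the model.
from difflib import SequenceMatcher

# Toy "private" corpus standing in for your own data (illustrative only).
DOCUMENTS = [
    "Invoices are processed within 5 business days of submission.",
    "Expense reports above 500 GBP require manager approval.",
    "The office is closed on UK bank holidays.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive string similarity; real systems use embeddings."""
    scored = sorted(
        docs,
        key=lambda d: SequenceMatcher(None, question.lower(), d.lower()).ratio(),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Augment the prompt with retrieved sources and ask for numbered citations."""
    sources = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(context))
    return (
        "Answer using only the sources below and cite them by number.\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\n"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder; swap in the LLM client of your choice."""
    return f"(model response for a prompt of {len(prompt)} characters)"

if __name__ == "__main__":
    question = "How long does invoice processing take?"
    context = retrieve(question, DOCUMENTS)
    print(call_llm(build_prompt(question, context)))
```

The same augmented-prompt structure is what response schemas and grounding build on: the schema constrains the shape of the answer, and grounding ties each claim back to one of the retrieved sources.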
Multi-modal LLMs
Function Calling
Retrieval-Augmented Generation
Large Language Model
Speakers
Mete Atamel

Google

United Kingdom

I’m a Software Engineer and Developer Advocate at Google in London. I build tools, demos, and tutorials, and give talks to educate developers and help them succeed on Google Cloud.
