Observing AI Applications with OpenLit
Observability is the ability to measure the current state of a system. The rapid emergence of LLMs and GenAI applications in production means we need tools that capture not only application logs, but also traces and metrics, helping us understand the usage of, and errors returned by, LLMs within our application ecosystem.
Join me as I dive into best practices for observing production applications that use LLMs. I'll walk through an example of instrumenting an AI agent application written in TypeScript using OpenLit to generate OpenTelemetry signals, and show the data we can capture to help identify and remediate common issues in production GenAI applications.
Carly Richmond
She enjoys cooking, photography, drinking tea, and chasing after her young son in her spare time.