Big Data, Machine Learning, AI & Analytics | Conference | 50 min
Deploying a self-hosted Large Language Model (LLM)
This session explores deploying self-hosted Large Language Models (LLMs) with DevOps principles to enhance AI capabilities securely. It covers automating the LLM lifecycle, including infrastructure provisioning and CI/CD, building reproducible environments, automating data pipelines, and using containerization and orchestration for scalable, secure, and governed AI integration.
Sergio Rua, Digitalis.io
When and where
Friday, June 20, 13:45-14:35
NT
In an era where artificial intelligence is reshaping our digital landscape, we find ourselves at a critical juncture. While AI offers unprecedented convenience and efficiency, it also presents new challenges to data privacy and security.
One viable approach is to deploy a self-hosted Large Language Model (LLM) within your organization, trained on your own organizational data, from PDFs to logs.
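To make the data side concrete: the abstract mentions training on organizational data, and a common lightweight variant of that idea is retrieval-augmented generation (RAG), where documents are chunked, embedded, and looked up at query time rather than baked into the model weights. The sketch below is illustrative only, assuming a self-hosted Ollama server on its default port (localhost:11434) serving the nomic-embed-text embedding model; the file name, model, and chunk size are placeholders, not details from the talk.

    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434"   # assumed local Ollama default
    EMBED_MODEL = "nomic-embed-text"        # illustrative embedding model

    def embed(text: str) -> list[float]:
        """Request an embedding vector from the self-hosted server."""
        payload = json.dumps({"model": EMBED_MODEL, "prompt": text}).encode()
        req = urllib.request.Request(
            f"{OLLAMA_URL}/api/embeddings",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["embedding"]

    def chunk(text: str, size: int = 500) -> list[str]:
        """Naive fixed-size chunking; real pipelines split on structure."""
        return [text[i:i + size] for i in range(0, len(text), size)]

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0

    # Index extracted document text (PDF-to-text conversion happens upstream).
    with open("extracted_docs.txt", encoding="utf-8") as f:
        document_text = f.read()
    index = [(c, embed(c)) for c in chunk(document_text)]

    # Retrieve the chunk most relevant to a query -- the "R" in RAG.
    query_vec = embed("What is our incident escalation policy?")
    best = max(index, key=lambda item: cosine(query_vec, item[1]))
    print(best[0])

Because both the model and the document index stay inside your own infrastructure, no organizational data ever leaves your network, which is the privacy argument the session builds on.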
This session explores the practical application of DevOps principles to the deployment and management of self-hosted Large Language Models (LLMs). We delve into how automating the LLM lifecycle—from infrastructure provisioning to continuous integration/continuous deployment (CI/CD) of model updates—enables organizations to securely and scalably integrate AI capabilities. Learn how to build reproducible environments, automate data pipelines, and leverage containerization and orchestration technologies to maximize the value of LLMs while maintaining security, control, and governance.
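On the CI/CD point, one practical pattern is to gate each model rollout behind an automated smoke test against the serving endpoint, so a broken model update never reaches users. A minimal sketch follows, again assuming an Ollama-compatible /api/generate endpoint; the model tag and prompt are placeholders, not from the talk.

    import json
    import sys
    import urllib.request

    OLLAMA_URL = "http://localhost:11434"  # assumed self-hosted endpoint
    MODEL = "llama3"                       # placeholder model tag

    def generate(prompt: str, timeout: float = 60.0) -> str:
        """One-shot, non-streaming completion from the self-hosted model."""
        payload = json.dumps(
            {"model": MODEL, "prompt": prompt, "stream": False}
        ).encode()
        req = urllib.request.Request(
            f"{OLLAMA_URL}/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp)["response"]

    def main() -> int:
        # A trivial behavioural check: the rollout is rejected (non-zero
        # exit) if the freshly deployed model cannot answer a known prompt.
        answer = generate("Reply with exactly one word: pong")
        if "pong" not in answer.lower():
            print(f"smoke test failed, got: {answer!r}")
            return 1
        print("smoke test passed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())  # wire this into the CI job that promotes the model

Run as the final step of the deployment pipeline, a non-zero exit code blocks promotion of the new model version, mirroring how application releases are gated in a conventional DevOps workflow.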
Sergio Rua
Sergio's career as a DevOps Engineer is built on a strong foundation of consulting experience, where he's consistently delivered solutions for complex, large-scale environments. This exposure has fostered a versatile skill set spanning programming, networking, and beyond. However, it's in the realm of DevOps that Sergio truly shines. He's passionate about automation, and his expertise in Kubernetes makes him a valuable asset in any modern cloud-native infrastructure.