
Optimize Amazon SageMaker deployment strategies

April 19, 2022
High-performance, cost-effective model deployment strategies help maximize your organization's ML investments. Learn about deployment options and strategies using Amazon SageMaker, including optimized infrastructure choices; real-time, asynchronous, and batch inference; multi-container endpoints; multi-model endpoints; auto scaling; model monitoring; and CI/CD integration for your ML workloads. Discover how to choose the right inference option for your ML use case, and hear from Docebo about how they leverage Amazon SageMaker and AWS AI services for fast, low-latency, and scalable ML deployment.
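To make the real-time versus asynchronous distinction concrete, the sketch below shows the shape of the request bodies you would pass to the SageMaker `create_endpoint_config` API via boto3. This is a minimal illustration, not the session's own code: the model name, endpoint-config names, instance type, and S3 bucket are all placeholders.

```python
# Sketch of SageMaker endpoint-configuration request bodies for two of the
# inference options discussed: real-time and asynchronous. All resource
# names below (my-model, my-bucket, etc.) are hypothetical placeholders.

# Real-time inference: a persistent, low-latency HTTPS endpoint.
realtime_config = {
    "EndpointConfigName": "my-model-realtime",
    "ProductionVariants": [{
        "VariantName": "AllTraffic",
        "ModelName": "my-model",
        "InstanceType": "ml.c5.xlarge",
        "InitialInstanceCount": 1,
    }],
}

# Asynchronous inference: requests are queued and results are written to S3,
# which suits large payloads or long-running model invocations.
async_config = {
    "EndpointConfigName": "my-model-async",
    "ProductionVariants": [{
        "VariantName": "AllTraffic",
        "ModelName": "my-model",
        "InstanceType": "ml.c5.xlarge",
        "InitialInstanceCount": 1,
    }],
    # The presence of AsyncInferenceConfig is what makes the endpoint async.
    "AsyncInferenceConfig": {
        "OutputConfig": {"S3OutputPath": "s3://my-bucket/async-results/"},
    },
}

# Either dict would then be passed to the SageMaker control plane, e.g.:
#   boto3.client("sagemaker").create_endpoint_config(**realtime_config)
```

Batch transform, by contrast, runs as a standalone job (`create_transform_job`) against a dataset in S3 and needs no long-lived endpoint at all, which is why it is often the cheapest option for offline scoring.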