Minimizing the Production Impact of ML Model Updates with Shadow Testing (AIM343)

Title

AWS re:Invent 2022 - Minimizing the production impact of ML model updates w/shadow testing (AIM343)

Summary

  • Talun Sairam, a senior product manager on the SageMaker inference team, and Ching-Wei Lee, also from the SageMaker team, presented on shadow testing for ML model updates.
  • The session covered the ML lifecycle, focusing on model validation and deployment, comparing existing options, and introducing shadow testing.
  • Shadow testing involves deploying a shadow model alongside a production model to mirror traffic and compare performance without impacting customers.
  • Amazon SageMaker now supports shadow testing natively, simplifying implementation and offering cost control.
  • Ching-Wei Lee demonstrated setting up a shadow test in SageMaker, including creating shadow variants, configuring traffic sampling, and monitoring through a live dashboard.
  • Vivek Kumar from HERE Technologies shared their use of shadow testing, emphasizing its importance for critical services like emergency response where model accuracy is paramount.
  • HERE Technologies integrates SageMaker into their HERE workspace, allowing users to leverage location data and ML capabilities, including shadow testing.
  • The session concluded with an open Q&A.
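The demo described above — creating a shadow variant and configuring traffic sampling — can be sketched with the SageMaker `create_endpoint_config` API, which accepts a `ShadowProductionVariants` list alongside the usual `ProductionVariants`. The function, model names, and instance type below are illustrative assumptions, not the exact values from the session; the helper only builds the request payload, which you would pass to `boto3.client("sagemaker").create_endpoint_config(**cfg)`:

```python
def build_shadow_endpoint_config(config_name, prod_model, shadow_model,
                                 sampling_percentage=20):
    """Build kwargs for SageMaker's create_endpoint_config with a shadow variant.

    Assumption: with identically sized fleets, the fraction of production
    traffic mirrored to the shadow variant is governed by the ratio of the
    shadow variant's weight to the production variant's weight
    (here sampling_percentage / 100 against a production weight of 1.0).
    """
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "production",
            "ModelName": prod_model,
            "InstanceType": "ml.m5.xlarge",  # assumption: size to your model
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 1.0,
        }],
        # Shadow variants receive mirrored copies of sampled requests; their
        # responses are logged for comparison but never returned to callers.
        "ShadowProductionVariants": [{
            "VariantName": "shadow",
            "ModelName": shadow_model,
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": sampling_percentage / 100.0,
        }],
    }
```

Keeping the builder separate from the `boto3` call makes the sampling math easy to unit-test before anything is deployed.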

Insights

  • Shadow testing is a critical practice for safely deploying ML models by mirroring production traffic to a shadow model and analyzing its performance.
  • Amazon SageMaker's new feature simplifies the shadow testing process, making it accessible to more users and reducing the complexity of implementation.
  • The ability to control the percentage of mirrored traffic and the duration of shadow tests helps manage costs and risks associated with deploying new models.
  • HERE Technologies' adoption of SageMaker and shadow testing highlights the industry's need for reliable ML model deployment strategies, especially in scenarios where accurate predictions are crucial for safety and efficiency.
  • The integration of shadow testing into existing ML workflows can accelerate model deployment from weeks to days, enabling businesses to innovate more rapidly while maintaining high standards of reliability and performance.
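The core mechanism behind these insights — serving only the production model's answer while mirroring a sampled fraction of requests to the shadow model for later comparison — can be sketched in a few lines. This is a generic illustration of the pattern, not SageMaker's implementation; the function names and the injectable `rng` parameter (used here to make sampling deterministic for testing) are assumptions:

```python
import random

def handle_request(request, prod_model, shadow_model, shadow_log,
                   sampling_pct=20, rng=random.random):
    """Return the production prediction; mirror a sampled fraction of
    requests to the shadow model and record both outputs for analysis."""
    prod_out = prod_model(request)           # the caller always gets this
    if rng() * 100 < sampling_pct:           # sample the mirrored traffic
        shadow_out = shadow_model(request)   # in production, invoke async
        shadow_log.append({"request": request,
                           "production": prod_out,
                           "shadow": shadow_out})
    return prod_out                          # shadow output never leaks out
```

Because the shadow invocation sits off the response path, a slow or faulty candidate model degrades only the logged comparison data, never the customer-facing latency or result — which is the safety property the session emphasized.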