Title: AWS re:Inforce 2024 - Shielding innovation: Safeguarding cloud and AI development (SEC222-S)
Insights:
- Importance of Early Security: Securing cloud environments starting at the development and software-pipeline stage is crucial because an increasing number of attack vectors originate in the code base and the pipeline that delivers it.
- Code to Cloud Correlation: Understanding the relationship between code and cloud environments is essential for detecting risks that traditional tools might miss.
- Case Study - Microsoft AI: An example highlighted 38 terabytes of sensitive data exposed by the Microsoft AI research team due to a misconfigured SAS (shared access signature) token in a GitHub repository.
- Limitations of Traditional Tools: Traditional AppSec solutions and CSPM tools lack the necessary context to fully understand and mitigate risks that span from code to cloud environments.
- DevSecOps Approach: A new DevSecOps approach built for the cloud is needed, focused on understanding code and pipeline security posture, identifying risks from code to cloud, and maintaining that posture over time.
- Risk Detection and Remediation: Detecting risks involves understanding the code to cloud correlation, pipeline security, and code vulnerabilities. Remediation requires prioritizing critical risks with context and empowering developers with the necessary information (a code-side detection sketch follows this list).
- Maintaining Security Posture: Long-term security involves shifting left by integrating guardrails into the development lifecycle so that risky configurations are caught before they are deployed (see the pipeline guardrail sketch after this list).
- Key Capabilities for Solutions: Solutions should offer cloud to code visibility, a unified policy engine, context-driven remediation, the ability to shift left, and detection of pipeline misconfigurations and AI pipeline risks.
- Securing AI Development: AI introduces new risks such as model poisoning, data leakage, and model vulnerabilities. Security teams need visibility into AI services and technologies, and a risk-based approach to prioritize and mitigate these risks.
- AI Security Posture Management (AISPM): AISPM involves agentless scanning for AI services, built-in configuration rules for those services, and extending deep analysis to AI pipelines to detect and prioritize risks (see the configuration-check sketch after this list).
- Empowering AI Engineers: Centralized AI security dashboards can help AI engineers and data scientists manage and mitigate risks without needing to become security experts.
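
To make the "risks in code" idea concrete, here is a minimal, hypothetical sketch of a code-side detection step: it scans a repository for strings matching the well-known AWS access key ID pattern (AKIA followed by 16 uppercase alphanumeric characters). The file-walking logic and report format are illustrative assumptions, not the speakers' product; a real code-to-cloud tool would go on to correlate any hit with the IAM principal and the cloud data it can reach.

```python
"""Sketch of a code-side risk scan: flag strings that look like AWS access key IDs."""
import pathlib
import re
import sys

# Long-term AWS access key IDs are 20 characters starting with "AKIA".
AKIA_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line number, matched key) for every suspected key in the tree."""
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for match in AKIA_PATTERN.finditer(line):
                hits.append((str(path), lineno, match.group()))
    return hits

if __name__ == "__main__":
    findings = scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")
    for path, lineno, key in findings:
        print(f"{path}:{lineno}: possible AWS access key {key[:8]}...")
    sys.exit(1 if findings else 0)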
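
For the shift-left guardrail idea, the sketch below assumes a CI job that exports a Terraform plan with `terraform show -json plan.out > plan.json` and fails the build if the plan creates S3 buckets without a matching server-side encryption configuration (echoing the unencrypted-bucket example in the quotes). The name-based matching is a deliberate simplification for illustration and is not the talk's actual policy engine.

```python
"""Sketch of a shift-left guardrail: block a deploy if a Terraform plan creates
S3 buckets with no accompanying server-side encryption configuration."""
import json
import sys

def unencrypted_buckets(plan_path: str) -> list[str]:
    with open(plan_path) as f:
        plan = json.load(f)
    changes = plan.get("resource_changes", [])

    # S3 buckets this plan will create, keyed by bucket name when it is known.
    buckets = {}
    for rc in changes:
        if rc.get("type") == "aws_s3_bucket" and "create" in rc["change"]["actions"]:
            after = rc["change"].get("after") or {}
            buckets[after.get("bucket")] = rc["address"]

    # Bucket names referenced by an encryption configuration resource
    # (the AWS provider v4+ models encryption as this separate resource type).
    encrypted = set()
    for rc in changes:
        if rc.get("type") == "aws_s3_bucket_server_side_encryption_configuration":
            after = rc["change"].get("after") or {}
            encrypted.add(after.get("bucket"))

    # Naive name-based matching: names unknown until apply show up as None and get flagged.
    return sorted(addr for name, addr in buckets.items() if name not in encrypted)

if __name__ == "__main__":
    offenders = unencrypted_buckets(sys.argv[1] if len(sys.argv) > 1 else "plan.json")
    if offenders:
        print("Guardrail failed; buckets without encryption config:", ", ".join(offenders))
        sys.exit(1)
    print("Guardrail passed: every planned S3 bucket has an encryption configuration.")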
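
As one way to picture a "built-in configuration rule for AI services", the sketch below uses boto3 to flag Amazon SageMaker notebook instances that allow direct internet access or root access, or that lack a customer-managed KMS key. The three rules and the default region are illustrative assumptions, not the AISPM product's rule set.

```python
"""Sketch of an AISPM-style configuration check for SageMaker notebook instances.
Assumes boto3 credentials with read-only SageMaker access."""
import boto3

def audit_sagemaker_notebooks(region: str = "us-east-1") -> list[dict]:
    sm = boto3.client("sagemaker", region_name=region)
    findings = []
    paginator = sm.get_paginator("list_notebook_instances")
    for page in paginator.paginate():
        for nb in page["NotebookInstances"]:
            detail = sm.describe_notebook_instance(
                NotebookInstanceName=nb["NotebookInstanceName"]
            )
            issues = []
            if detail.get("DirectInternetAccess") == "Enabled":
                issues.append("direct internet access enabled")
            if detail.get("RootAccess") == "Enabled":
                issues.append("root access enabled")
            if not detail.get("KmsKeyId"):
                issues.append("no customer-managed KMS key on the volume")
            if issues:
                findings.append({"notebook": nb["NotebookInstanceName"], "issues": issues})
    return findings

if __name__ == "__main__":
    for finding in audit_sagemaker_notebooks():
        print(finding["notebook"], "->", "; ".join(finding["issues"]))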
Quotes:
- "It's very important to secure a cloud environment, but why is it so important to start securing it from the development and the software pipeline?"
- "We realize that a lot of legacy approaches fall short when it comes to detecting these types of risks."
- "Traditional tools like CSPM that just look at your cloud environment can tell you, for example, hey you have an unencrypted bucket here, but they have no additional context as to are there any risks that exist in the code that could lead someone to that data."
- "Organizations need to adopt a new DevSecOps approach that is really built for the cloud."
- "We want to understand our code and pipeline security posture."
- "We need to understand the code to cloud and cloud to code correlation."
- "AI today is similar to where the cloud was five to ten years ago. Everyone wants to innovate and use it but not a lot of people know how to secure it."
- "Visibility is the foundation. So as a security team, I need to be able to tell, hey these are all the AI services and technologies in my environment."
- "We want to empower AI engineers, data scientists to own the security of the resources that they run."
- "Do I know what AI services and technologies are in my environment? Do I have full visibility?"