Title: AWS re:Inforce 2024 - Security controls for generative AI use cases (GAI221)
Insights:
- The session focuses on security strategies for AI-specific applications, emphasizing vulnerability and threat modeling.
- Adam Shostack's four threat-modeling questions, used to build the vulnerability matrix, are highlighted: "What are we working on?", "What can go wrong?", "What are we going to do about it?", and "Did we do a good job?"
- The Generative AI Security Scoping Matrix is introduced to provide a common language for discussing AI security, ranging from consumer applications (Scope 1) to fully custom-built models (Scope 5).
- Scope 1 and Scope 2 involve minimal changes to existing security practices, relying on standard controls such as role-based access control (RBAC) and cloud access security brokers (CASBs).
- Scope 3 involves using pre-trained models and emphasizes the importance of retrieval augmented generation (RAG) for providing relevant information securely.
- Scope 4 includes fine-tuning pre-trained models with domain-specific data, necessitating additional security measures to protect proprietary data and model artifacts.
- Scope 5 involves creating custom models from scratch, requiring rigorous control over training data to avoid biases and ensure responsible AI practices.
- Technical controls such as IAM, KMS, and Bedrock guardrails are essential for securing AI applications across different scopes.
- Non-technical controls, including governance, processes, and training, are equally important for comprehensive AI security, as emphasized by the NIST Cybersecurity Framework (CSF) version 2.0.
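The Scope 3 point about RAG, and the later quote "use your application to enforce those boundaries using the retrieval stage," can be sketched in miniature. The schema and scoring below are hypothetical (the session names no code); the relevance ranking is a naive keyword overlap standing in for vector similarity. What matters is that the entitlement check happens at retrieval time, so unauthorized text never enters the prompt.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    # Access-control metadata attached at ingestion time (hypothetical schema).
    allowed_groups: set = field(default_factory=set)

def retrieve(query: str, corpus: list, user_groups: set, k: int = 3) -> list:
    """Return at most k documents the caller is entitled to see.

    The entitlement filter runs before relevance scoring, so documents
    outside the caller's groups can never reach the model prompt.
    """
    visible = [d for d in corpus if d.allowed_groups & user_groups]
    terms = set(query.lower().split())
    return sorted(
        visible,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )[:k]

corpus = [
    Document("Q3 revenue forecast for the sales team", {"sales"}),
    Document("Employee salary bands", {"hr"}),
    Document("Public product FAQ", {"sales", "hr", "everyone"}),
]

# A sales user never sees HR-only documents, regardless of the query.
hits = retrieve("salary revenue", corpus, user_groups={"sales"})
```

Filtering before ranking (rather than redacting afterward) keeps the security boundary in the application, independent of anything the model might do with the text it receives.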
Quotes:
- "What we're talking about in terms of first principles today is vulnerability or threat modeling."
- "What the matrix does is it helps you zero in on what type of application are we talking about specifically."
- "In the wild, as it were, we have not seen really a host of new vulnerabilities."
- "If something has made its way into the model weights, it can and will come out."
- "Use your application to enforce those boundaries using the retrieval stage."
- "You need to make sure that you protect those fine-tuned artifacts and the model inference endpoint."
- "Not all controls are going to be technical."
- "NIST CSF actually introduced an entirely new function called govern, which is all about processes, people, training, basically preparing your organization to adopt technologies in a safe manner."
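The technical controls mentioned for the inference path (e.g. Bedrock guardrails) boil down to policy checks applied to prompts before they reach a model endpoint. The sketch below is a pure-Python stand-in for that idea; the rule names and patterns are invented for illustration and are not the Bedrock guardrails API.

```python
import re

# Hypothetical policy: denied topics and PII-looking patterns to mask.
DENIED_TOPIC_TERMS = {"exploit", "malware"}
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US-SSN-shaped number
    re.compile(r"\b\d{16}\b"),             # bare 16-digit card-like number
]

def apply_guardrail(prompt: str) -> tuple[bool, str]:
    """Return (allowed, text) for a prompt headed to a model endpoint.

    Denied topics block the request outright; PII-looking substrings
    are masked so they never appear in prompts, logs, or completions.
    """
    lowered = prompt.lower()
    if any(term in lowered for term in DENIED_TOPIC_TERMS):
        return False, "Request blocked by content policy."
    sanitized = prompt
    for pattern in PII_PATTERNS:
        sanitized = pattern.sub("[REDACTED]", sanitized)
    return True, sanitized
```

Running the filter on the application side of the endpoint complements, rather than replaces, managed guardrails and IAM/KMS controls on the endpoint itself.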