Title: AWS re:Inforce 2024 - Mitigate OWASP Top 10 for LLM risks with a Zero Trust approach (GAI323)
Insights:
- Overview of LLM-based Application Architecture: The presentation begins with a notional diagram of a generic LLM-based application, highlighting key components such as application services, training data, model weights, and downstream services.
- OWASP Top 10 for LLM: The OWASP Top 10 for LLM applications includes issues like prompt injection, downstream application vulnerabilities, poisoned training data, denial of service, supply chain vulnerabilities, sensitive information disclosure, inadequate access controls, excessive agency, over-reliance on LLM outputs, and model theft.
- Scenario Introduction: The presenters introduce a patient information assistance agent with two personas (receptionist and doctor) to demonstrate different mitigation strategies.
- Role-Playing Mitigation Strategies: Two approaches are discussed:
- Generative AI-based Guardrails: Using prompt engineering to define user roles and access permissions within the LLM. This approach was found to be vulnerable to prompt injection attacks.
- Zero Trust Approach: Implementing traditional security measures such as private link, security groups, least privilege policies, resource-based policies, VPC Lattice, API Gateway, verified permissions, and multi-factor authentication. This approach treats the LLM as an untrusted entity and ensures rigorous authorization checks.
- Layered Security: Emphasis on the importance of layered security, combining both AI-specific and traditional security controls to mitigate vulnerabilities effectively.
- Key Takeaways:
- Avoid providing sensitive data as input to LLMs.
- Perform rigorous authorization for data accessed by LLMs.
- Integrate traditional security models into LLM applications.
- Use tools like Amazon Bedrock Guardrails and prompt engineering in conjunction with traditional security measures.
Quotes:
- "We're going to do a little bit of role-playing up here. So we're going to get started since we only have 20 minutes to talk about mitigating OWASP Top 10 for LLM with a zero trust approach."
- "If you Google for OWASP top ten for LLM, you'll be able to pick that up."
- "Generative AI-based guardrails, right? So what am I going to do when I first build out the system as a proof of concept and start, you know, kind of testing it out and so forth? I'm going to take a prompt engineering approach."
- "We applied zero-trust principles to the LLM applications. And we did it from the point that in addition to the end user that we always consider it untrusted, right? We always authenticate the user. We always authorize the API request from these users in checking end-user device security posture. Now we have another untrusted entity in the center of our application. It's LLM."
- "Do not provide sensitive data as input to LLM that user not supposed to be seeing. Always perform rigorous authorization for the data that is coming to the LLM."
- "Traditional security models have a great place, playing a great role in the building of LLM applications. Use the tools that we know, use the tools that we are aware, and implement them in your applications."
- "While there are AI-specific vulnerabilities, some of them can actually be mitigated better with traditional security controls."