Title: AWS re:Inforce 2024 - Outpacing threats w/ CrowdStrike, Anthropic Claude & Amazon Bedrock (TDR202-S)
Insights:
- Generative AI in Security: Generative AI extends human capabilities and can strengthen security by enhancing detection and response. It also introduces new challenges, such as securing AI systems themselves and mitigating AI-specific risks.
- Modern Threat Landscape: Adversarial threats are becoming more sophisticated and faster, with a significant rise in the use of compromised credentials and in insider threats. CrowdStrike identified 34 new adversaries in 2023, bringing the total it tracks to 232.
- Credential Compromise: A notable trend is adversaries logging in with compromised credentials rather than breaking in, then using native tools to evade detection and blend in with normal operations.
- Insider Threats: Insider threats, including insiders acting on behalf of nation-states, pose a significant challenge. Monitoring identity and access patterns is crucial for detecting and responding to these threats (see the first sketch after this list).
- Speed of Attacks: The average breakout time for adversaries is 62 minutes, with the fastest observed at two minutes and seven seconds, highlighting the need for rapid detection and response.
- Cloud Security: Attackers are becoming increasingly cloud-savvy, exploiting cloud-specific resources and accelerating attacks in cloud environments.
- Generative AI Risks: Generative AI democratizes sophisticated attacks, enabling adversaries to leverage AI for faster and more innovative attack methods.
- Data Handling and Privacy: Ensuring proper data handling and privacy is critical, especially with generative AI. Anthropic emphasizes not using customer data to train its models and maintaining strict data retention policies (see the second sketch after this list).
- AI in Security Operations: AI and machine learning enhance security operations by improving detection fidelity, providing richer threat context, and accelerating investigations and responses.
- Charlotte AI: CrowdStrike's conversational AI assistant, Charlotte AI, helps analysts by automating workflows, providing context, and enabling faster decision-making, which is crucial given the skills shortage in cybersecurity.
- Responsible AI Use: Anthropic employs constitutional AI to align models with human values and make them more resistant to jailbreak attempts. They also focus on reducing hallucinations and ensuring the safe deployment of AI models.
- Regulatory Mismatch: The rapid innovation in generative AI outpaces regulatory efforts. Companies like Anthropic are proactively implementing safety levels and responsible scaling policies to manage AI risks effectively.
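
The following is a minimal sketch of the "monitor identity and access patterns" idea: it pulls recent console-login events from AWS CloudTrail and flags logins from unexpected source IPs, which can surface credential misuse ("logging in, not breaking in"). This is illustrative only and not CrowdStrike's or the speakers' tooling; the allow-list of IPs and the 24-hour window are assumptions.

```python
# Sketch: flag ConsoleLogin events from IPs outside a known allow-list,
# using CloudTrail's lookup_events API via boto3.
import json
from datetime import datetime, timedelta, timezone

import boto3

KNOWN_IPS = {"203.0.113.10", "203.0.113.11"}  # assumption: your expected egress IPs

cloudtrail = boto3.client("cloudtrail")

resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
    EndTime=datetime.now(timezone.utc),
    MaxResults=50,
)

for event in resp["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    source_ip = detail.get("sourceIPAddress", "unknown")
    outcome = (detail.get("responseElements") or {}).get("ConsoleLogin", "unknown")
    if source_ip not in KNOWN_IPS:
        # A successful login from an unexpected IP may indicate compromised credentials.
        print(f"review: {event.get('Username', 'unknown')} from {source_ip} -> {outcome}")
```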
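
The second sketch illustrates the data-handling point: invoking Anthropic Claude through the Amazon Bedrock runtime keeps prompts and completions within your AWS account, and per the session, customer data is not used to train the models. The region and model ID below are assumptions; substitute the ones enabled in your account.

```python
# Sketch: calling Claude on Amazon Bedrock with boto3's bedrock-runtime client.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            "content": "Summarize the risk of an IAM user logging in from an unfamiliar IP.",
        }
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    body=json.dumps(body),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```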
Quotes:
- "Adversarial threats are becoming more sophisticated. They're moving more and more quickly."
- "They're not breaking in; they're actually logging in."
- "Insider threats are the most devastatingly difficult thing to contend with."
- "The average breakout time is 62 minutes. The fastest we've observed is two minutes and seven seconds."
- "Generative AI is presenting new security challenges where adversaries are also using it to democratize sophisticated attacks."
- "We have to worry about threat actors not only coming after our product but also abusing our product."
- "Generative AI is not occurring in a vacuum. It's actually part of a broader application."
- "We cannot reach into your VPC and interact with your data in that environment."
- "The biggest misconception is around the data handling."
- "We are safeguarding not only the model in our care but also the experience that users have with the model."
- "Anthropic was founded on the idea that this level of research is necessary, and we needed to build and fund a gen AI company in order to get ahead of that risk."