Loading…
Attending this event?
THE MUST ATTEND EVENT FOR CYBERSECURITY PROFESSIONALS
Room: Seacliff AB clear filter
arrow_back View All Dates
Thursday, September 26
 

10:30am PDT

Striding Your Way to LINDDUN: Threat Modeling for Privacy
Thursday September 26, 2024 10:30am - 11:15am PDT
The safeguarding of personal data in modern digital systems can no longer be an afterthought. It must be a consideration from the beginning. It is imperative that the preservation of privacy be a principal objective, and privacy safeguards must be by design.


LINDDUN, an acronym for Linking, Identifying, Non-repudiation, Detecting, Data Disclosure, Unawareness, and Non-compliance, encapsulates the core privacy threats that are prevalent in modern software systems. The LINDDUN privacy threat modeling framework supports privacy engineering by providing a structured approach to identifying, analyzing and mitigating threats to privacy in software systems, enabling the inclusion of privacy safeguards as an inherent part of software design and architecture.


In this presentation we will illustrate how adopting LINDDUN can uncover privacy risks and enable privacy by design. We will navigate through the threat modeling process, applying the LINDDUN framework to a fictional application to demonstrate how LINDDUN serves as a critical tool in identifying and analyzing privacy risks. Whether you’re a seasoned professional or new to the field, this presentation will equip you with the foundational knowledge to effectively implement privacy threat modeling with LINDDUN and elevate your privacy engineering efforts to new heights.

Speakers
avatar for Shanni Prutchi

Shanni Prutchi

Professional Advisory Services Consultant, CrowdStrike
Shanni Prutchi is an information security consultant specializing in incident response preparedness and application security. She currently delivers incident response tabletop exercises and cybersecurity maturity assessment at CrowdStrike, and previously focused on threat modeling... Read More →
avatar for Chris Bush

Chris Bush

Application Security Architect, TEKsystems
Chris has extensive experience in IT and information security consulting and solutions delivery, with expertise in application security, including performing secure code review, web and mobile application penetration testing, architecture reviews and threat modeling.He has been a... Read More →
Thursday September 26, 2024 10:30am - 11:15am PDT
Room: Seacliff AB

11:30am PDT

Under the Radar: How we found 0-days in the Build Pipeline of OSS Packages
Thursday September 26, 2024 11:30am - 12:15pm PDT
Beyond the buzzword of 'supply chain security,' lies a critical, frequently ignored area: the Build Pipelines of Open Source packages. In this talk, we discuss how we’ve developed a large scale data analysis infrastructure that targets these overlooked vulnerabilities in Open Source projects. Our efforts have led to the discovery of countless 0-days in critical OSS projects, such as AWS-managed Kubernetes Operators, Google OSS Fuzz, RedHat OS Build, hundreds of popular Terraform providers and modules and popular GitHub Actions. We will present a detailed attack tree for GitHub Actions pipelines, offering a much deeper analysis than the prior art, and outlining attacks and mitigations. In addition, we will present three Open Source projects that complement our research and provide actionable insights to Builders and Defenders: the 'Living Off the Pipeline' (LOTP) project, the 'poutine' build pipeline scanner and the 'messypoutine' CTF-style training.
Speakers
avatar for François Proulx

François Proulx

Senior Product Security Engineer, BoostSecurity
François is a Senior Product Security Engineer for BoostSecurity, where he leads the Supply Chain research team. With over 10 years of experience in building AppSec programs for large corporations (such as Intel) and small startups he has been in the heat of the action as the DevSecOps... Read More →
Thursday September 26, 2024 11:30am - 12:15pm PDT
Room: Seacliff AB

1:15pm PDT

Don’t Make This Mistake: Painful Learnings of Applying AI in Security
Thursday September 26, 2024 1:15pm - 2:00pm PDT
Leveraging AI for AppSec presents promise and danger, as let’s face it, you cannot do everything with AI, especially when it comes to security. At our session, we’ll delve into the complexities of AI in the context of auto remediation. We’ll begin by examining our research, in which we used OpenAI to address code vulnerabilities. Despite ambitious goals, the results were underwhelming and revealed the risk of trusting AI with complex tasks. 


Our session features real-world examples and a live demo that exposes GenAI’s limitations in tackling code vulnerabilities. Our talk serves as a cautionary lesson against falling into the trap of using AI as a stand-alone solution to everything. We’ll explore the broader implications, communicating the risks of blind trust in AI without a nuanced understanding of its strengths and weaknesses.


In the second part of our session, we’ll explore a more reliable approach to leveraging GenAI for security relying on the RAG Framework. RAG stands for Retrieval-Augmented Generation. It's a methodology that enhances the capabilities of generative models by combining them with a retrieval component. This approach allows the model to dynamically fetch and utilize external knowledge or data during the generation process.

Attendees will leave with a clear understanding of how to responsibly and effectively deploy AI in their programs — and how to properly vet AI tools.

Speakers
avatar for Eitan Worcel

Eitan Worcel

CEO & Co Founder, Mobb
Eitan Worcel is the co-founder and CEO of Mobb, the recent Black Hat StartUp Spotlight winner. He has over 15 years of experience in the application security field as a developer, product management leader, and now business leader. Throughout his career, Eitan has worked with numerous... Read More →
Thursday September 26, 2024 1:15pm - 2:00pm PDT
Room: Seacliff AB

2:15pm PDT

Threat Modeling in the Age of AI
Thursday September 26, 2024 2:15pm - 3:00pm PDT
This session equips participants with the methodology and knowledge to proactively manage risks and improve the security posture of their AI systems. Threat modeling is a systematic approach to identifying potential threats and vulnerabilities in a system. This session will delve into threat modeling for AI systems, and how it differs from traditional applications. Participants will learn what threat modeling is & isn’t, including an overview of terms & methodologies, and then dive into how threat modeling for AI actually works. The presenter is part of the OWASP AI Exchange team of experts who developed the OWASP AI Exchange threat framework, and has extensive experience with threat modeling of mission-critical AI. With that knowledge and experience participants will be guided in applying the threat framework to various types of AI architectures, to cover AI attacks such as data poisoning and indirect prompt injection. 
Speakers
avatar for Susanna Cox

Susanna Cox

Aerospace & Safety Critical AI Systems Engineer, ARCS Aviation
Susanna Cox has spent her career on the cutting edge of AI security, applying her passions for cybersecurity & aviation to engineering mission-critical AI for aerospace and defense. With patents pending in AI security, Susanna’s primary focus is on research & development of safety-critical... Read More →
Thursday September 26, 2024 2:15pm - 3:00pm PDT
Room: Seacliff AB

3:30pm PDT

OWASP Top 10 for Large Language Models: Project Update
Thursday September 26, 2024 3:30pm - 4:15pm PDT
Since its launch in May 2023, the OWASP Top 10 for Large Language Models (LLMs) project has gained remarkable traction across various sectors, including mainstream commercial entities, government agencies, and media outlets. This project addresses the rapidly growing field of LLM applications, emphasizing the critical importance of security in AI development. Our work has resonated deeply within the community, leading to widespread adoption and integration of the Top 10 list into diverse AI frameworks and guidelines.


As we advance into the development of version 2 (v2) of the OWASP Top 10 for LLMs, this session will provide a comprehensive update on the progress made so far. Attendees will gain insights into how version 1 (v1) has been embraced by the wider community, including practical applications, case studies, and testimonials from key stakeholders who have successfully implemented the guidelines.


The session will dive into several key areas:

Adoption and Impact of v1: 

  • Overview of how v1 has been utilized in various sectors.
  • Case studies showcasing the integration of the Top 10 list into commercial, governmental, and academic projects.
  • Feedback from users and organizations on the effectiveness and relevance of the list.



Progress on v2 Development: 

  • An in-depth look at the ongoing development process for v2.
  • Key changes and updates from v1 to v2, reflecting the evolving landscape of LLM security challenges.
  • Methodologies and criteria used to refine and expand the list.



Community Involvement and Contributions: 

  • Ways in which the community can get involved in the project.
  • Opportunities for contributing to the development of v2, including participation in working groups, submitting case studies, and providing feedback.
  • Upcoming events, webinars, and collaboration opportunities for those interested in shaping the future of LLM security.



Future Directions and Goals: 

  • Long-term vision for the OWASP Top 10 for LLMs project.
  • Strategic goals for enhancing the list’s impact and reach.
  • Exploration of potential new areas of focus, such as emerging threats and mitigation strategies.



Attendees will leave this session with a clear understanding of the significant strides made since the project’s inception and the vital role it plays in ensuring secure AI application development. Additionally, they will be equipped with the knowledge and resources to actively participate in and contribute to the ongoing evolution of the OWASP Top 10 for LLMs.

This session is ideal for developers, security professionals, AI researchers, and anyone interested in the intersection of AI and cybersecurity. Join us to learn more about this critical initiative and discover how you can play a part in advancing the security of large language models.


By attending this session, participants will gain actionable insights and practical guidance on integrating the OWASP Top 10 for LLMs into their projects, ensuring robust security measures are in place to address the unique challenges posed by AI technologies.

Speakers
avatar for Steve Wilson

Steve Wilson

Chief Product Officer, Exabeam
Steve is the founder and project leader at the Open Web Application Security Project (OWASP) Foundation, where he has assembled a team of more than 1,000 experts to create the leading comprehensive reference for Generative AI security called the “Top 10 List for Large Language ... Read More →
Thursday September 26, 2024 3:30pm - 4:15pm PDT
Room: Seacliff AB
 
Share Modal

Share this link via

Or copy link

Filter sessions
Apply filters to sessions.
Filtered by Date -