Room: Seacliff AB
Thursday, September 26
 

10:30am PDT

Striding Your Way to LINDDUN: Threat Modeling for Privacy
Thursday September 26, 2024 10:30am - 11:15am PDT
Safeguarding personal data in modern digital systems can no longer be an afterthought; it must be considered from the beginning. Preserving privacy must be a principal objective, and privacy safeguards must be built in by design.


LINDDUN, an acronym for Linking, Identifying, Non-repudiation, Detecting, Data Disclosure, Unawareness, and Non-compliance, encapsulates the core privacy threats that are prevalent in modern software systems. The LINDDUN privacy threat modeling framework supports privacy engineering by providing a structured approach to identifying, analyzing and mitigating threats to privacy in software systems, enabling the inclusion of privacy safeguards as an inherent part of software design and architecture.
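As a rough illustration of that structured approach (the data-flow-diagram elements and elicitation questions below are hypothetical examples, not an official LINDDUN catalog), pairing each element of a data flow diagram with the seven threat categories might look like this:

    # Minimal illustration of LINDDUN-style threat elicitation.
    # The DFD elements and questions are hypothetical, not official LINDDUN material.

    LINDDUN_CATEGORIES = {
        "Linking": "Can data or actions be linked to the same person across contexts?",
        "Identifying": "Can a data subject be identified from the data handled here?",
        "Non-repudiation": "Is the user unable to plausibly deny an action?",
        "Detecting": "Can an observer infer involvement just by detecting activity?",
        "Data Disclosure": "Is more personal data exposed than strictly needed?",
        "Unawareness": "Is the data subject unaware of the collection or processing?",
        "Non-compliance": "Does the processing violate policy or regulation (e.g. GDPR)?",
    }

    # Hypothetical data-flow-diagram elements for a fictional application.
    dfd_elements = ["user -> web app (login form)", "web app -> analytics service"]

    def elicit_threats(elements):
        """Pair every DFD element with every LINDDUN category for review."""
        for element in elements:
            for category, question in LINDDUN_CATEGORIES.items():
                yield {"element": element, "category": category, "question": question}

    for item in elicit_threats(dfd_elements):
        print(f"[{item['category']}] {item['element']}: {item['question']}")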


In this presentation we will illustrate how adopting LINDDUN can uncover privacy risks and enable privacy by design. We will navigate through the threat modeling process, applying the LINDDUN framework to a fictional application to demonstrate how LINDDUN serves as a critical tool in identifying and analyzing privacy risks. Whether you’re a seasoned professional or new to the field, this presentation will equip you with the foundational knowledge to effectively implement privacy threat modeling with LINDDUN and elevate your privacy engineering efforts to new heights.

Speakers
Shanni Prutchi

Professional Advisory Services Consultant, CrowdStrike
Shanni Prutchi is an information security consultant specializing in incident response preparedness and application security. She currently delivers incident response tabletop exercises and cybersecurity maturity assessments at CrowdStrike, and previously focused on threat modeling...
Chris Bush

Application Security Architect, TEKsystems
Chris has extensive experience in IT and information security consulting and solutions delivery, with expertise in application security, including performing secure code review, web and mobile application penetration testing, architecture reviews, and threat modeling. He has been a...
Thursday September 26, 2024 10:30am - 11:15am PDT
Room: Seacliff AB

11:30am PDT

Under the Radar: How we found 0-days in the Build Pipeline of OSS Packages
Thursday September 26, 2024 11:30am - 12:15pm PDT
Beyond the buzzword of 'supply chain security' lies a critical, frequently ignored area: the build pipelines of Open Source packages. In this talk, we discuss how we've developed a large-scale data analysis infrastructure that targets these overlooked vulnerabilities in Open Source projects. Our efforts have led to the discovery of countless 0-days in critical OSS projects, such as AWS-managed Kubernetes Operators, Google OSS-Fuzz, RedHat OS Build, hundreds of popular Terraform providers and modules, and popular GitHub Actions. We will present a detailed attack tree for GitHub Actions pipelines, offering a much deeper analysis than the prior art and outlining attacks and mitigations. In addition, we will present three Open Source projects that complement our research and provide actionable insights to builders and defenders: the 'Living Off the Pipeline' (LOTP) project, the 'poutine' build pipeline scanner, and the 'messypoutine' CTF-style training.
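As one hedged illustration of the kind of pipeline weakness such analysis targets (this sketch is not the 'poutine' scanner; it only flags the well-known combination of a pull_request_target trigger with a checkout of the untrusted pull request head, which can run attacker-controlled code with access to secrets), a minimal workflow check could look like this, assuming PyYAML is available:

    # Illustrative check for one risky GitHub Actions pattern ("pwn request").
    # Not the 'poutine' scanner; requires PyYAML.
    import glob
    import yaml

    def has_pwn_request_pattern(path):
        with open(path) as f:
            workflow = yaml.safe_load(f)
        # PyYAML parses the bare key 'on' as the boolean True, so look under both keys.
        triggers = workflow.get("on", workflow.get(True, {}))
        if isinstance(triggers, str):
            triggers = [triggers]
        if "pull_request_target" not in triggers:
            return False
        for job in (workflow.get("jobs") or {}).values():
            for step in job.get("steps", []):
                uses = step.get("uses", "")
                ref = str((step.get("with") or {}).get("ref", ""))
                # Checking out the PR head inside a pull_request_target job is the red flag.
                if uses.startswith("actions/checkout") and "pull_request.head" in ref:
                    return True
        return False

    for path in glob.glob(".github/workflows/*.y*ml"):
        if has_pwn_request_pattern(path):
            print(f"potential pwn-request pattern: {path}")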
Speakers
François Proulx

Senior Product Security Engineer, BoostSecurity
François is a Senior Product Security Engineer for BoostSecurity, where he leads the Supply Chain research team. With over 10 years of experience in building AppSec programs for large corporations (such as Intel) and small startups, he has been in the heat of the action as the DevSecOps...
Thursday September 26, 2024 11:30am - 12:15pm PDT
Room: Seacliff AB

1:15pm PDT

Don’t Make This Mistake: Painful Learnings of Applying AI in Security
Thursday September 26, 2024 1:15pm - 2:00pm PDT
Leveraging AI for AppSec presents both promise and danger; let's face it, you cannot do everything with AI, especially when it comes to security. In our session, we'll delve into the complexities of AI in the context of auto-remediation. We'll begin by examining our research, in which we used OpenAI to address code vulnerabilities. Despite ambitious goals, the results were underwhelming and revealed the risk of trusting AI with complex tasks.


Our session features real-world examples and a live demo that exposes GenAI’s limitations in tackling code vulnerabilities. Our talk serves as a cautionary lesson against falling into the trap of using AI as a stand-alone solution to everything. We’ll explore the broader implications, communicating the risks of blind trust in AI without a nuanced understanding of its strengths and weaknesses.


In the second part of our session, we'll explore a more reliable approach to leveraging GenAI for security, one built on the RAG framework. RAG stands for Retrieval-Augmented Generation: a methodology that enhances the capabilities of generative models by combining them with a retrieval component, allowing the model to dynamically fetch and use external knowledge or data during the generation process.
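A compact sketch of that retrieval-augmented pattern follows (the knowledge base, the keyword scoring, and the generate() stub are illustrative stand-ins; a production system would use vector search and a real LLM API):

    # Minimal RAG sketch: retrieve relevant context, then ground the generation on it.
    KNOWLEDGE_BASE = [
        "Parameterize SQL queries to prevent injection.",
        "Rotate credentials after any suspected compromise.",
        "Validate and encode all user-supplied output to prevent XSS.",
    ]

    def retrieve(query, documents, k=2):
        """Rank documents by naive keyword overlap (stand-in for vector search)."""
        terms = set(query.lower().split())
        ranked = sorted(documents, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
        return ranked[:k]

    def generate(prompt):
        """Stub for an LLM call; a real implementation would send the prompt to a model."""
        return f"[model response grounded in]:\n{prompt}"

    def rag_answer(question):
        context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return generate(prompt)

    print(rag_answer("How do I fix a SQL injection vulnerability?"))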

Attendees will leave with a clear understanding of how to responsibly and effectively deploy AI in their programs — and how to properly vet AI tools.

Speakers
Eitan Worcel

CEO & Co Founder, Mobb
Eitan Worcel is the co-founder and CEO of Mobb, the recent Black Hat StartUp Spotlight winner. He has over 15 years of experience in the application security field as a developer, product management leader, and now business leader. Throughout his career, Eitan has worked with numerous...
Thursday September 26, 2024 1:15pm - 2:00pm PDT
Room: Seacliff AB

2:15pm PDT

Threat Modeling in the Age of AI
Thursday September 26, 2024 2:15pm - 3:00pm PDT
This session equips participants with the methodology and knowledge to proactively manage risks and improve the security posture of their AI systems. Threat modeling is a systematic approach to identifying potential threats and vulnerabilities in a system; this session will delve into threat modeling for AI systems and how it differs from threat modeling for traditional applications. Participants will learn what threat modeling is and isn't, including an overview of terms and methodologies, and then dive into how threat modeling for AI actually works. The presenter is part of the team of experts who developed the OWASP AI Exchange threat framework and has extensive experience with threat modeling of mission-critical AI. With that knowledge and experience, participants will be guided in applying the threat framework to various types of AI architectures, covering AI attacks such as data poisoning and indirect prompt injection.
Speakers
Susanna Cox

Aerospace & Safety Critical AI Systems Engineer, ARCS Aviation
Susanna Cox has spent her career on the cutting edge of AI security, applying her passions for cybersecurity & aviation to engineering mission-critical AI for aerospace and defense. With patents pending in AI security, Susanna's primary focus is on research & development of safety-critical...
Thursday September 26, 2024 2:15pm - 3:00pm PDT
Room: Seacliff AB

3:30pm PDT

OWASP Top 10 for Large Language Models: Project Update
Thursday September 26, 2024 3:30pm - 4:15pm PDT
Since its launch in May 2023, the OWASP Top 10 for Large Language Models (LLMs) project has gained remarkable traction across various sectors, including mainstream commercial entities, government agencies, and media outlets. This project addresses the rapidly growing field of LLM applications, emphasizing the critical importance of security in AI development. Our work has resonated deeply within the community, leading to widespread adoption and integration of the Top 10 list into diverse AI frameworks and guidelines.


As we advance into the development of version 2 (v2) of the OWASP Top 10 for LLMs, this session will provide a comprehensive update on the progress made so far. Attendees will gain insights into how version 1 (v1) has been embraced by the wider community, including practical applications, case studies, and testimonials from key stakeholders who have successfully implemented the guidelines.


The session will dive into several key areas:

Adoption and Impact of v1: 

  • Overview of how v1 has been utilized in various sectors.
  • Case studies showcasing the integration of the Top 10 list into commercial, governmental, and academic projects.
  • Feedback from users and organizations on the effectiveness and relevance of the list.



Progress on v2 Development: 

  • An in-depth look at the ongoing development process for v2.
  • Key changes and updates from v1 to v2, reflecting the evolving landscape of LLM security challenges.
  • Methodologies and criteria used to refine and expand the list.



Community Involvement and Contributions: 

  • Ways in which the community can get involved in the project.
  • Opportunities for contributing to the development of v2, including participation in working groups, submitting case studies, and providing feedback.
  • Upcoming events, webinars, and collaboration opportunities for those interested in shaping the future of LLM security.



Future Directions and Goals: 

  • Long-term vision for the OWASP Top 10 for LLMs project.
  • Strategic goals for enhancing the list’s impact and reach.
  • Exploration of potential new areas of focus, such as emerging threats and mitigation strategies.



Attendees will leave this session with a clear understanding of the significant strides made since the project’s inception and the vital role it plays in ensuring secure AI application development. Additionally, they will be equipped with the knowledge and resources to actively participate in and contribute to the ongoing evolution of the OWASP Top 10 for LLMs.

This session is ideal for developers, security professionals, AI researchers, and anyone interested in the intersection of AI and cybersecurity. Join us to learn more about this critical initiative and discover how you can play a part in advancing the security of large language models.


By attending this session, participants will gain actionable insights and practical guidance on integrating the OWASP Top 10 for LLMs into their projects, ensuring robust security measures are in place to address the unique challenges posed by AI technologies.

Speakers
Steve Wilson

Chief Product Officer, Exabeam
Steve is the founder and project leader of the OWASP Top 10 for Large Language Models project, where he has assembled a team of more than 1,000 experts to create the leading comprehensive reference for Generative AI security...
Thursday September 26, 2024 3:30pm - 4:15pm PDT
Room: Seacliff AB
 
Friday, September 27
 

10:30am PDT

OWASP AI Exchange experts on the future of security for AI
Friday September 27, 2024 10:30am - 11:15am PDT
By participating in this panel, attendees will gain an understanding of the crucial role of OWASP AI Exchange in securing AI technologies and how they can contribute to and benefit from this vital initiative. 
Speakers
Dan Sorenson

Dan Sorensen is a seasoned cybersecurity leader with over 22 years of experience as a CISO and cybersecurity engineer in aerospace. A U.S. Air Force and Air National Guard veteran, he specializes in risk management, AI-driven defense strategies, and ethical AI integration. Dan has...
Chloé Messdaghi

CEO & Founder, SustainCyber
Chloé Messdaghi is a cybersecurity leader dedicated to building strong relationships that drive the development of security standards and policies. She spearheads initiatives to strengthen AI security measures and fosters collaborative efforts to enhance industry-wide practices...
Susanna Cox

Aerospace & Safety Critical AI Systems Engineer, ARCS Aviation
Susanna Cox has spent her career on the cutting edge of AI security, applying her passions for cybersecurity & aviation to engineering mission-critical AI for aerospace and defense. With patents pending in AI security, Susanna's primary focus is on research & development of safety-critical...
Aruneesh Salhotra

Aruneesh Salhotra is a seasoned technologist and servant leader, renowned for his extensive expertise across cybersecurity, DevSecOps, AI, business continuity, audit, and sales. His impactful presence as an industry thought leader is underscored by his contributions as a speaker and panelist...
Friday September 27, 2024 10:30am - 11:15am PDT
Room: Seacliff AB

11:30am PDT

Millions Of Public Certificates Are Reusing Old Private Keys
Friday September 27, 2024 11:30am - 12:15pm PDT
TLS certificates are re-using private keys by the millions. We'll demonstrate that key re-use in TLS certificates is systemic and undermines one of the foundational protections offered in modern web security.


We looked at 7 billion certs logged in Certificate Transparency and found millions of certs re-using private keys. We identified orgs like Verizon that re-used the same key for 10 years, despite revoking it in the first year! We found cases of organizations continuing to re-use the same private key to issue new certs, despite having had that key compromised. Picture a short-lived cert that only lasts 90 days, but the same key is re-used on all future certs for a decade.


We also analyzed SSH key re-use for authentication to GitHub. We looked at 58 million GitHub users' keys and found more than 100,000 SSH keys re-used between multiple GitHub accounts.


We'll show the extent of private key re-use, show re-use of keys from revoked certificates, and open-source a tool to identify certs that re-use private keys. We'll provide examples of common cert generation frameworks that repeatedly use the same key, despite the security risks.


Keys are even sometimes used for TLS certs and then repurposed as SSH keys on GitHub.

This talk dives deep into a world of systemic private key re-use, its dangers, and the threats it currently poses.
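To make the underlying check concrete (a minimal sketch, not the tool being open-sourced in this talk): two certificates that embed the same Subject Public Key Info were issued for the same private key, so comparing SPKI fingerprints reveals re-use. Assuming Python's 'cryptography' package:

    # Illustrative key re-use check: equal SPKI fingerprints imply the same private key.
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def spki_fingerprint(pem_bytes):
        """SHA-256 over the DER-encoded Subject Public Key Info of a PEM certificate."""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(spki).hexdigest()

    def reuses_key(cert_pem_a, cert_pem_b):
        return spki_fingerprint(cert_pem_a) == spki_fingerprint(cert_pem_b)

    # Usage (hypothetical file names):
    # with open("old.pem", "rb") as a, open("new.pem", "rb") as b:
    #     print(reuses_key(a.read(), b.read()))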

Speakers
Dylan Ayrey

CEO, TruffleHog
Dylan is the original author of the open source version of TruffleHog, which he built after recognizing just how commonly credentials and other secrets were exposed in Git. Coming most recently from the Netflix security team, Dylan has spoken at a number of popular information security...
Joseph Leon

Security Researcher, Truffle Security
Joe Leon is a security researcher at Truffle Security where he works to identify new sources of leaked secrets and contributes to the open-source security community. Previously, Joe led application security assessments for an offensive security consulting firm. Joe has taught technical...
Friday September 27, 2024 11:30am - 12:15pm PDT
Room: Seacliff AB

1:15pm PDT

Slack’s Vulnerability Aggregator: How we built a platform to manage vulnerabilities at scale
Friday September 27, 2024 1:15pm - 2:00pm PDT
Managing vulnerabilities effectively in a diverse tooling environment posed significant challenges for Slack's Security team. Historically, disparate tools generated varied scan results, severity assessments, and reporting formats, complicating triage and remediation processes. This fragmented approach led to inefficiencies, coverage gaps, and increased workload for security engineers and developers.




To address these challenges, we developed a comprehensive vulnerability aggregation platform. This platform centralizes all tooling findings, standardizes processing methodologies, and streamlines reporting across Slack's ecosystem. We hope you can apply the insights from our presentation to simplify vulnerability management tasks within your own organization.
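As a hypothetical sketch of the kind of normalization such a platform performs (the field names and severity mapping below are assumptions, not Slack's actual schema), findings from different scanners can be mapped onto one shared record so triage and reporting stay uniform:

    # Hypothetical normalization layer for a vulnerability aggregation platform.
    from dataclasses import dataclass

    SEVERITY_MAP = {"CRITICAL": 4, "HIGH": 3, "MODERATE": 2, "MEDIUM": 2, "LOW": 1}

    @dataclass
    class Finding:
        source_tool: str
        title: str
        asset: str
        severity: int     # normalized 1 (low) .. 4 (critical)
        dedup_key: str    # stable key so the same issue from two tools collapses into one

    def normalize_dependabot(alert: dict) -> Finding:
        """Map one (hypothetical) Dependabot-style alert onto the shared schema."""
        return Finding(
            source_tool="dependabot",
            title=alert["advisory"],
            asset=alert["repository"],
            severity=SEVERITY_MAP.get(alert["severity"].upper(), 1),
            dedup_key=f'{alert["repository"]}:{alert["advisory"]}',
        )

    print(normalize_dependabot({"advisory": "CVE-2024-0001", "repository": "acme/api", "severity": "high"}))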

Speakers
Atul Gaikwad

Staff Security Engineer, Salesforce
Atulkumar Gaikwad has 15+ years of experience in application/cloud security, DevSecOps, third-party risk management, and consulting. He currently works as a Staff Product Security Engineer at Salesforce, helping to make developers' lives easier with security automation. He loves to break things...
Nicholas Lin

Software Security Engineer, Salesforce
After graduating from the University of Virginia, Nicholas began his career as a Software Engineer on the Product Security team at Slack. Over the past two years, he has developed systems that empower risk owners to remediate security risks at scale. Nicholas is dedicated to building...
Friday September 27, 2024 1:15pm - 2:00pm PDT
Room: Seacliff AB

2:15pm PDT

Escaping Vulnerability Hell: Bridging the Gap Between Developers and Security Teams
Friday September 27, 2024 2:15pm - 3:00pm PDT
Fixing web application security vulnerabilities is critical but often frustrating, leading to what we call "Vulnerability Hell." This talk covers the main challenges of false positives and difficult fixes, their impact on developers and security teams, and practical solutions involving AI, penetration testing, and application-level attacks. Discover how better tools, automated suggestions, integrated workflows, and improved collaboration can help.
Speakers
Ahmad Sadeddin

CEO, Corgea
Ahmad is a 3x founder (1x exit) and is currently the CEO at Corgea. He led various products at Coupa for over 6 years after they acquired his last startup (Riskopy). Corgea was born out of his frustration with the manual and inefficient security processes at many companies.
Friday September 27, 2024 2:15pm - 3:00pm PDT
Room: Seacliff AB

3:30pm PDT

Maturing Your Application Security Program with ASVS-Driven Development
Friday September 27, 2024 3:30pm - 4:15pm PDT
Application security requires a systematic and holistic approach. However, organizations typically struggle to create an effective application security (AppSec) program and often end up in the rabbit hole of fixing security tool-generated vulnerabilities. We believe that leveraging ASVS both as a security requirements framework and as a guide for unit and integration testing is among the highest-value security practices. By turning security requirements into "just requirements," organizations can enable a common language shared by all stakeholders involved in the SDLC.

In this talk, we present the case for ASVS-driven development. First, we analyzed the complete ASVS to determine how much of it could be transformed into security test cases. Our analysis indicates that 162 ASVS requirements (58%) can be automatically verified using unit, integration, and acceptance tests. Second, we designed an empirical study in which we added 98 ASVS requirements to the sprint planning of a relatively large web application. We implemented unit and integration tests for 90 ASVS requirements in 10 man-days; these tests are now part of the security regression test suite.
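As a hedged illustration of what such a test can look like (the toy Flask login route below stands in for the application under test and is not part of the study described above), one ASVS-style session-management requirement, that session cookies carry the Secure and HttpOnly attributes, can be expressed as an automated regression test:

    # Illustrative ASVS-driven test: verify session cookie attributes with pytest.
    import pytest
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login", methods=["POST"])
    def login():
        resp = make_response("ok")
        # Stand-in session cookie for the application under test.
        resp.set_cookie("session", "demo-token", secure=True, httponly=True, samesite="Strict")
        return resp

    @pytest.mark.parametrize("flag", ["Secure", "HttpOnly"])
    def test_session_cookie_flags(flag):
        """Regression test derived from an ASVS session-management requirement."""
        set_cookie = app.test_client().post("/login").headers.get("Set-Cookie", "")
        assert flag.lower() in set_cookie.lower(), f"session cookie missing {flag}"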

Our study demonstrates that leveraging ASVS to derive security test cases can create a common theme across all stages of the software development lifecycle, making security everyone's responsibility.

Speakers
Aram Hovsepyan

Founder and CEO, Codific
Aram is the founder and CEO of Codific, a Flemish cybersecurity product firm. With over 15 years of experience, he has a proven track record in building complex software systems by explicitly focusing on software security. Codific's flagship product, Videolab, is a secure multimedia...
Friday September 27, 2024 3:30pm - 4:15pm PDT
Room: Seacliff AB