Conference Program

All times listed in the GTD 2024 conference program are given in Central European Time (CET).

Need a better overview on site?
» Program overview as PDF (Wednesday)
» Room and expo plan

Conference Program 2024

Unit Tests on Steroids: Leveraging Fuzz Testing and Generative AI for a Scalable Testing Strategy

Building secure and reliable software is an essential yet challenging endeavor that requires extensive testing. Due to development teams' time and resource constraints, testing often falls short, and necessary tests are sometimes skipped altogether. Feedback-based fuzzing is the most practical dynamic testing method for finding bugs and security vulnerabilities in software. In this talk, I'll provide an overview of fuzzing and show how we can leverage large language models to automatically generate the test harnesses needed for fuzzing. This enables an automated and scalable testing strategy for modern software.
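For illustration, here is a minimal sketch of what such a fuzz test harness can look like on the JVM. It assumes Jazzer's JUnit integration (the abstract itself does not name a tool), and messageType is a hypothetical function under test with a planted bug:

import com.code_intelligence.jazzer.api.FuzzedDataProvider;
import com.code_intelligence.jazzer.junit.FuzzTest;

class MessageParserFuzzTest {

    // Hypothetical code under test; stands in for any real parsing logic.
    // Planted bug: "MSG" or "MSGx" (fewer than five characters) triggers a
    // StringIndexOutOfBoundsException in charAt(4).
    static char messageType(String msg) {
        if (!msg.startsWith("MSG")) {
            throw new IllegalArgumentException("not a message");
        }
        return msg.charAt(4);
    }

    // A harness of the kind the talk proposes to generate automatically:
    // it feeds fuzzer-controlled data into the target and lets unexpected
    // exceptions surface as findings.
    @FuzzTest
    void parseArbitraryInput(FuzzedDataProvider data) {
        String input = data.consumeRemainingAsString();
        try {
            messageType(input);
        } catch (IllegalArgumentException expected) {
            // Documented rejection of malformed input is fine; any other
            // exception is reported by the fuzzer as a bug.
        }
    }
}

Coverage feedback lets the fuzzer learn the "MSG" prefix on its own and then reach the out-of-bounds access without any hand-written example inputs.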

Target Audience: Developers, testers, engineering managers, CTOs
Prerequisites: Basic knowledge of Java
Level: Basic

Extended Abstract:
Dynamic testing methods, including feedback-based fuzzing, are the most effective approach for finding software bugs and security vulnerabilities. Fuzzing has uncovered thousands of bugs and vulnerabilities in both open-source and enterprise software. The self-learning aspect of feedback-based fuzzing makes it well suited for integration into the development process, where it provides quick feedback to developers and enables them to fix issues quickly.

Despite this track record, one barrier still hinders the broad adoption of dynamic white-box testing: the manual engineering effort required to identify relevant interfaces and develop the corresponding test harnesses.

In this talk, I'll provide an overview of feedback-based fuzzing and show how it can automatically uncover functional and security issues. I will also discuss the self-learning aspect of automatically generating test cases that explore the program and maximize code coverage (a simplified sketch of this feedback loop follows below). Next, I'll address the overhead of writing test harnesses for fuzzing and show how we can leverage the code generation capabilities of large language models to automate this step. This opens the door to an automated and scalable testing strategy for our software. I'll demonstrate the discussed approach in a live demo.
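To make the self-learning feedback loop concrete, here is a deliberately simplified, self-contained sketch of coverage-guided fuzzing. Real fuzzers collect coverage via compiler or bytecode instrumentation; in this toy version the target reports its own branch IDs, and all names (ToyFuzzer, mutate, instrumentedTarget) are illustrative, not part of any real tool:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

class ToyFuzzer {

    public static void main(String[] args) {
        Random rng = new Random(42);
        List<byte[]> corpus = new ArrayList<>();
        corpus.add(new byte[] {0});
        Set<Integer> globalCoverage = new HashSet<>();

        for (int i = 0; i < 100_000; i++) {
            // Pick a known-interesting input from the corpus and mutate it.
            byte[] input = mutate(corpus.get(rng.nextInt(corpus.size())), rng);
            Set<Integer> covered = instrumentedTarget(input);
            // Feedback step: keep only mutants that reach unseen branches.
            if (globalCoverage.addAll(covered)) {
                corpus.add(input);
            }
        }
        System.out.println("corpus: " + corpus.size()
                + " inputs, branches hit: " + globalCoverage.size());
    }

    // Random byte-level mutation: grow/shrink by one byte, then flip one bit.
    static byte[] mutate(byte[] input, Random rng) {
        byte[] out = Arrays.copyOf(input, Math.max(1, input.length + rng.nextInt(3) - 1));
        out[rng.nextInt(out.length)] ^= (byte) (1 << rng.nextInt(8));
        return out;
    }

    // Toy target that reports which branches an input covered. Real fuzzers
    // collect this via instrumentation instead of asking the target.
    static Set<Integer> instrumentedTarget(byte[] input) {
        Set<Integer> branches = new HashSet<>();
        branches.add(0);
        if (input.length > 2) {
            branches.add(1);
            if (input[0] == 'F') {
                branches.add(2);
                if (input[1] == 'U') {
                    branches.add(3); // deepest branch, reachable only via feedback
                }
            }
        }
        return branches;
    }
}

Because only mutants that reach previously unseen branches enter the corpus, the fuzzer incrementally learns the expected input structure (here, the "FU" prefix) instead of having to guess it blindly.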

Khaled Yakdan is the Chief Scientist and Co-Founder of Code Intelligence. Holding a Ph.D. in Computer Science and having spent over nine years in academia, Khaled now oversees the implementation of research outcomes in AI, usable security, and vulnerability detection in Code Intelligence’s products. He has contributed to research in reverse engineering, vulnerability discovery, and concolic execution. His papers have been published at top-tier international security conferences.

Khaled Yakdan
10:35 - 11:10
Talk: Mi2.1
