Secure Your Code with AI and NLP
Jun 4, 2019 · Webcast
As every software engineer knows, writing secure software is incredibly difficult. Many techniques exist to help developers find the bugs hiding in their code, but none are perfect, and an adversary needs to find only one bug to cause problems. In this talk, we’ll discuss how a branch of artificial intelligence called Natural Language Processing, or NLP, is being applied to computer code. Using NLP, we can find bugs that are invisible to existing techniques, and we can begin to better understand what our computers are creating. While this field is still young, advances are coming rapidly, and we’ll discuss the current state of the art and what we expect to see in the near future.
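To make the idea concrete, here is a minimal sketch of one classic NLP-on-code technique: treating source code as a token sequence and flagging statistically "unnatural" code with a simple bigram language model. This is an illustration only, not the speakers' tooling; all function names are hypothetical, and a real system would use far richer models.

```python
# Hedged sketch: score code snippets by how "surprising" their token
# sequences are relative to a training corpus of ordinary code.
# Unusually high surprisal can be a signal worth a closer look.
import io
import math
import tokenize
from collections import Counter

def tokens_of(source: str) -> list:
    """Lex Python source into a flat list of token strings."""
    return [tok.string
            for tok in tokenize.generate_tokens(io.StringIO(source).readline)
            if tok.string.strip()]

def bigram_model(corpus: list):
    """Count unigrams and bigrams over a corpus of code snippets."""
    uni, bi = Counter(), Counter()
    for src in corpus:
        toks = tokens_of(src)
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def surprisal(source: str, uni: Counter, bi: Counter) -> float:
    """Average negative log-probability per bigram (add-one smoothing).
    Higher values mean the code looks less like the training corpus."""
    toks = tokens_of(source)
    vocab = len(uni) + 1
    costs = [-math.log((bi[(a, b)] + 1) / (uni[a] + vocab))
             for a, b in zip(toks, toks[1:])]
    return sum(costs) / max(len(costs), 1)
```

In this toy setting, a snippet that reuses familiar token patterns scores lower (less surprising) than one with token sequences the model has never seen, which is the core intuition behind using language models of code to direct reviewer attention.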
Attendees will learn:
- The basics of Natural Language Processing
- Techniques for finding bugs in software code using NLP-derived tools
- Other interesting applications of NLP to code, including automated code generation and automated documentation generation
- Areas of active research in this field
Who should attend?
- Software engineers, team leads, and managers
- Quality assurance engineers, team leads, and managers
- Acquisition consultants
- Technical leadership
About the Speakers
Eliezer Kanal is a technical manager at the Software Engineering Institute’s CERT Division, focused on applying machine learning techniques to the cybersecurity domain. His team has contributed to a wide variety of projects, including statistical visualization tools to assist with malware reverse engineering, metrics for the efficacy of cyber attack forecasting techniques, automatic identification of true-positive/false-positive labels for static code analysis vulnerabilities, and automated classification of netflow traffic types, among others.
Dr. Nathan VanHoudnos (van-HOD-ness) is a senior machine learning research scientist at the CERT Division of the Software Engineering Institute at Carnegie Mellon University. He is primarily interested in helping to develop the field of AI engineering from its current ad hoc Wild West state into a field with defined and repeatable processes that can be optimized. His current research focuses on training neural networks to be robust to post-training evasion attacks and, more generally, on verifying that a given neural network conforms to a defined set of security properties.
Benjamin Cohen is a machine learning research scientist with a background in computational and theoretical neuroscience. After receiving his bachelor's degree, he worked at the National Institutes of Health, where he developed a spiking neural network model of the cortex that simultaneously explains noise found at multiple scales in the brain as well as the perception of illusory stimuli. He is interested in techniques and ideas at the intersection of neuroscience and machine learning, with applications in cybersecurity. His main project involves applying machine learning techniques to source code to identify vulnerabilities.