Maria Christakis


Talk: Building Program Analyzers for Developers


Date and Time
August 08, 2016 at 3:30pm

Location
CSE 305

Abstract

Large software companies have recently started building program analysis ecosystems, such as Google’s Tricorder or Microsoft’s CloudBuild. These ecosystems run several program analyzers in the cloud, each with its own attributes, for instance, the speed of the analysis, the types of code issues it detects, and its rate of true and false positives.

Designers of such ecosystems need to decide which analyzers should run and when, for example in the editor, as part of the build, or during code review. But how should these decisions be made? Which kinds of program analyzers are valuable to software engineers, rather than a waste of their time? How do they fit into the development process? How should their results be reported?

Users of these ecosystems need to decide which program analyzers to trust, as each analyzer is oblivious to the verification results of the others. If analyzer X reports a warning for a particular piece of code and analyzer Y doesn’t, which one should be taken seriously?

In this talk, I will attempt to answer these questions both through a multi-method empirical investigation at Microsoft and through my experience as a program analysis designer.

Bio

I am a post-doctoral researcher in the Tools for Software Engineers (TSE) and Research in Software Engineering (RiSE) groups at Microsoft Research Redmond. I am interested in software reliability through program analysis, more specifically through automatic test case generation, static analysis and verification, and software security. At Microsoft, I mainly work on CloudBuild, a cloud-based build and test system used by thousands of developers in product groups across the company.

I received my Ph.D. from the Chair of Programming Methodology in the Department of Computer Science at ETH Zurich, Switzerland, where I was very fortunate to be advised by Peter Müller. My thesis, Narrowing the Gap between Verification and Systematic Testing, explores two directions.

In the first direction, I explore how to effectively combine unsound static analysis with systematic testing, so as to guide dynamic test generation toward properties that a static analyzer has not (soundly) checked. This combination significantly reduces the testing effort while checking more unverified properties.

In the second direction, I push systematic testing toward verification. In particular, I attempt to prove memory safety of the ANI Windows image parser using only dynamic test generation, first manually and then automatically. This work is a very gratifying collaboration with Patrice Godefroid.
