Explaining Program Analysis Output to Developers, and Beyond: A Human-centered Approach

Within integrated development environments (IDEs), such as Visual Studio and Eclipse, program analysis tools perform sophisticated analyses to identify defects in source code. These defects are then reported to developers through a variety of textual and visual mechanisms, such as the familiar wavy red underline. Unfortunately, the error messages and explanations these tools produce remain difficult for developers to comprehend and resolve. In contrast to traditional programming language research, which typically focuses on properties such as correctness, soundness, and completeness, in this talk I present a human-centered perspective on designing human-friendly program analysis output.

By framing program analysis messages through theories of explanation, I present my research on how developers understand and resolve error messages within their IDE. First, I present an interview study conducted at Google on the challenges data scientists face when applying a declarative language to the domain of malware analysis. Second, using eye-tracking as a lens into developer understanding, I present research on how developers read and attend to compiler error messages in their IDE. Third, I present the results of a company-wide case study at Microsoft, in which I investigate the difficulties of working with and interpreting the results of log and telemetry analysis across different roles as the company continues to embrace a data-driven culture. Drawing on these studies, I conclude with implications for tool designers on presenting error messages to developers and identify opportunities for incorporating explanatory techniques into tools beyond traditional software development environments.

See more on this video at https://www.microsoft.com/en-us/research/video/explaining-program-analysis-output-developers-beyond-human-centered-approach-design-program-analysis-error-reporting/