Is your software on dope?: Formal analysis of surreptitiously “enhanced” programs
Yes, I basically gave the same talk in three (EDIT: five!) different places. Luckily the audiences were pairwise disjoint 😊! I am really glad I was invited to give these talks. They have been very good opportunities to disseminate this great new work. Moreover, I enjoyed all three (five!) presentations and received great feedback. These are the places:
- [21-Feb-2017] UNLP, at the Computer Science Department invited by LIFIA.
- [1-Feb-2017] RWTH Aachen, at the Computer Science Department invited by the Software Modeling and Verification Group (MOVES).
- [17-Jan-2017] University of Twente while visiting the FMT group. I actually gave a talk for a larger audience, including undergrad students, since it was organized by Interactief and included a lunch (Dutch style!). I got a nice present from them (thanks, guys!). You can see the announcement of the talk here or here.
- [15-Nov-2016] Universidad Nacional de Río Cuarto for the Computer Science Department.
- [11-Nov-2016] Universidad de Buenos Aires for the LaFHIS group and other members of the Computer Science Department.
The talk presents work that has recently been accepted at ESOP 2017 and that I have jointly co-authored with Gilles Barthe, Sebastian Biewer, Bernd Finkbeiner, and Holger Hermanns. Here is the information of the talk:
Title: Is your software on dope?: Formal analysis of surreptitiously “enhanced” programs.
Abstract: Usually, it is the software manufacturer who employs verification or testing to ensure that the software embedded in a device meets its main objectives. However, these days we are confronted with situations in which economic or technological reasons may make a manufacturer interested in the software slightly deviating from its main objective for dubious reasons. Examples include lock-in strategies and the NOx emission scandals in the automotive industry. This phenomenon is what we call software doping. It is becoming more widespread as software is embedded in ever more devices of daily use.
The primary contribution of the presentation is a hierarchy of simple but solid formal definitions that make it possible to distinguish whether a program is clean or doped. Moreover, we show that these characterisations provide an immediate framework for analysis using existing verification techniques. We exemplify this by applying self-composition to sequential programs and model checking of HyperLTL formulas on reactive models.
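To give a flavour of the self-composition idea, here is a minimal toy sketch (my own illustration, not the paper's formalism or notation): a "doped" emission controller that only activates full exhaust cleaning under test-like conditions, and a brute-force self-composition check that runs two copies of the program on pairs of close inputs and looks for suspiciously divergent outputs. All names and thresholds are invented for the example.

```python
# Toy illustration of self-composition (not the ESOP 2017 formalism):
# a doped controller treats near-identical inputs very differently.

def controller(throttle):
    """Hypothetical NOx cleaning level for a given throttle position."""
    if 20 <= throttle <= 25:   # looks like a standard test cycle
        return 1.0             # full exhaust cleaning
    return 0.3                 # reduced cleaning on the road

def self_composition_check(program, inputs, close, similar_output):
    """Run two copies of the program side by side: for every pair of
    'close' inputs, the outputs must also be similar. A failing pair
    is a witness that comparable situations are treated differently."""
    for a in inputs:
        for b in inputs:
            if close(a, b) and not similar_output(program(a), program(b)):
                return (a, b)  # witness of suspicious divergence
    return None

witness = self_composition_check(
    controller,
    inputs=range(0, 40),
    close=lambda a, b: abs(a - b) <= 2,           # inputs within tolerance
    similar_output=lambda x, y: abs(x - y) <= 0.1,
)
# witness is a pair like (18, 20): throttle values within tolerance of
# each other, yet with very different cleaning levels.
```

A clean controller would make `self_composition_check` return `None`; in the actual paper this kind of two-copy reasoning is discharged by program verification rather than exhaustive enumeration.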