Workshop Theme

Software development is undergoing a paradigm shift: decision making is increasingly moving from hand-coded program logic to Deep Learning (DL). Popular applications in speech processing, image recognition, robotics, game playing (e.g., Go), and beyond now use deep learning as a core component. This wide adoption of DL techniques raises concerns about the reliability of such systems, especially when they are deployed in safety-critical settings such as autonomous cars, medical diagnosis, and aircraft collision avoidance. It has therefore become crucial to rigorously test DL applications with realistic corner cases to ensure high reliability. Conversely, DL is also being applied within diverse program analysis and software testing techniques, including malware detection, fuzz testing, bug finding, and type checking, where it can improve existing testing strategies by learning from past experience. Systematically testing deep learning applications is thus an emerging and important software engineering problem, especially given their increasing deployment in safety-critical systems.

DeepTest is a workshop targeting research at the intersection of testing and deep learning. This interdisciplinary workshop will explore issues related to:

  1. Deep learning applied to software testing.

  2. Testing applied to Deep Learning applications. 

The workshop will consist of invited talks, presentations based on abstract submissions, and a panel discussion to which all participants are invited to contribute. Although the main focus is on deep learning, we also encourage submissions related more broadly to machine learning, testing, and the relationship between the two.



List of Accepted Papers

  • Applying Multimorphic Testing to Deep Learning Systems by Paul Temple, Hugo Martin, Mathieu Acher and Jean-Marc Jézéquel.
  • Robustness Testing of Deep Neural Networks by Yuchi Tian, Ripon Saha, Baishakhi Ray and Mukul Prasad.
  • Testing of Deep Learning based classifiers: The Need, Challenges & Current Directions by Anurag Dwarakanath and Sanjay Podder.
  • Customizing Adversarial Machine Learning to test Deep Learning techniques by Paul Temple, Gilles Perrouin, Benoit Frénay and Pierre-Yves Schobbens.
  • Checking Probabilistic Properties of Neural Networks via Symbolic Methods and Sampling by Ravi Mangal, Aditya Nori and Alessandro Orso.
  • Deep Sequence Learning for Software Testing by Hadi Hemmati, Soheila Zangeneh and Maryam Vahdat Pour.
  • Large-Scale Exhaustive Testing of Visual Invariances by Kexin Pei, Linjie Zhu, Yinzhi Cao, Junfeng Yang, Carl Vondrick and Suman Jana.
  • Perspectives in Testing Deep RL by Nicolás Cardozo, Ivana Dusparic and Mario Linares-Vásquez.
  • Towards Continuous Evaluation for Deep Learning by Vijay Walunj, Gharib Gharibi, Priyanka Gaikwad, Sirisha Rella and Yugyung Lee.
  • Efficient Verifiably Robust Training of Neural Networks by Shiqi Wang, Yizheng Chen, Ahmed Abdou and Suman Jana.
  • A Framework for Online Testing of Deep Neural Networks using Bayesian Statistics and Active Learning by Yuning He and Johann Schumann.
  • Property Inference for Neural Networks by Divya Gopinath, Ankur Taly, Hayes Converse and Corina Pasareanu.