
FoMLAS 2019
Jul 14, 9:00 AM

2nd Workshop on Formal Methods for ML-Enabled Autonomous Systems

Affiliated with CAV 2019

Important Dates

  • Abstracts (optional): April 22

  • Full papers: May 4 (extended)

  • Author notification: June 10

  • Workshop: July 14


The workshop's proceedings are available here

Scope and Topics of Interest

In recent years, machine learning has emerged as a highly effective way of creating real-world software, and it is revolutionizing the way complex systems are being designed all across the board. In particular, this new approach is being applied to autonomous systems (e.g., autonomous cars, aircraft), achieving exciting results that are beyond the reach of manually created software. However, these significant changes have created new challenges when it comes to explainability, predictability and correctness: Can I explain why my drone turned right at that angle? Can I predict what it will do next? Can I know for sure that my autonomous car will never accelerate towards a pedestrian? These are questions with far-reaching consequences for the safety, accountability and public adoption of ML-enabled autonomous systems. One promising avenue for tackling these difficulties is developing formal methods capable of analyzing and verifying these new kinds of systems.

The goal of this workshop is to facilitate discussion regarding how formal methods can be used to increase predictability, explainability, and accountability of ML-enabled autonomous systems. The workshop welcomes results ranging from concept formulation (by connecting these concepts with existing research topics in verification, logic and game theory), through algorithms, methods and tools for analyzing ML-enabled systems, to concrete case studies and examples.

The topics covered by the workshop include, but are not limited to, the following:

  • Formal specifications for systems with ML components

  • SAT-based and SMT-based methods for analyzing systems with ML components

  • Mixed-integer Linear Programming and optimization-based methods for the verification of systems with ML components

  • Testing approaches to ML components

  • Statistical approaches to the verification of systems with ML components

  • Approaches for enhancing the explainability of ML-based systems

  • Techniques for analyzing hybrid systems with ML components

Paper Submission and Proceedings

Three categories of submissions are invited:

  • Original papers: contain original research and sufficient detail to assess the merits and relevance of the submission. For papers reporting experimental results, authors are strongly encouraged to make their data available.

  • Presentation-only papers: describe work recently published or submitted. We see this as a way to provide additional access to important developments that the workshop attendees may be unaware of.

  • Extended abstracts: given the informal style of the workshop, we strongly encourage the submission of preliminary reports of work in progress. They may range in length from very short (a couple of pages) to the full 10 pages, and they will be judged based on the expected level of interest to the community.

All accepted papers will be posted online as part of informal proceedings on the day of the conference. 

Papers in all categories will be peer-reviewed. Papers should not exceed 10 pages and should be in standard-conforming PDF. Technical details may be included in an appendix to be read at the reviewers' discretion. Final versions should be prepared in LaTeX using the LNCS style. (The 10-page limit does not include references.)

To submit a paper, use EasyChair.

Invited Talk

The workshop will feature an invited talk by Swarat Chaudhuri of Rice University.

Title: Policy Synthesis for Uncertain Autonomous Robots

Abstract: The automatic discovery of policies — functions that direct the behavior of an agent exploring an environment — is a central challenge in autonomous systems. Traditionally, policy synthesis was framed as an optimization problem: "Find a policy that maximizes the agent's quantitative rewards." However, an emerging body of work has begun to study policy synthesis modulo formal requirements such as "The agent never crashes into a barrier" or "The agent reaches a specific goal configuration with a probability above a threshold". In this talk, I will discuss some of our recent work on this topic, in the setting of robots that execute in uncertain environments.


In the problem formulation that I will present, interactions between a robot and its environment are modeled as a Partially Observable Markov Decision Process (POMDP). The robot is required to satisfy a probabilistic safe-reachability constraint, and in certain versions, also maximize a quantitative reward objective. We will present a series of algorithms, combining symbolic and statistical verification techniques and POMDP-solving methods such as policy iteration, for synthesizing policies. These algorithms cover contexts in which the environment is known a priori, as well as ones in which synthesis happens online, as the robot explores the environment. I will conclude with a discussion of how this work complements other ongoing efforts on verifying learning-enabled autonomous systems.
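The safe-reachability constraint described above can be illustrated on a much simpler model. The sketch below is a hypothetical toy example (not code from the talk, and not the talk's actual algorithms): it runs value iteration on a tiny fully observable MDP to compute the maximum probability of reaching a goal state while never entering an unsafe "crash" state. The POMDP setting of the talk adds partial observability on top of this; all state names and probabilities here are invented for illustration.

```python
# Toy MDP for a probabilistic safe-reachability check (hypothetical example).
# States: 0 = start, 1 = detour waypoint, GOAL = 2, CRASH = 3 (unsafe, absorbing).
GOAL, CRASH = 2, 3

# Per-state actions, each mapping to a list of (next_state, probability) pairs.
TRANSITIONS = {
    0: {
        "detour":   [(1, 1.0)],            # slow but always safe
        "shortcut": [(GOAL, 0.7), (CRASH, 0.3)],  # fast but may crash
    },
    1: {
        "go": [(GOAL, 0.9), (1, 0.1)],     # keeps trying until it succeeds
    },
}

def max_reach_prob(n_iters=100):
    """Value iteration for the maximum probability of reaching GOAL
    without entering CRASH (CRASH has value 0 and absorbs all mass)."""
    v = {0: 0.0, 1: 0.0, GOAL: 1.0, CRASH: 0.0}
    for _ in range(n_iters):
        for s, actions in TRANSITIONS.items():
            v[s] = max(
                sum(p * v[t] for t, p in successors)
                for successors in actions.values()
            )
    return v

v = max_reach_prob()
print(v[0])  # close to 1.0: the detour satisfies a threshold the shortcut (0.7) misses
```

A policy-synthesis question like "reach the goal with probability above 0.95" then reduces to checking whether the computed value at the start state clears the threshold, and reading off which action attains the maximum; POMDP algorithms must do this over belief states rather than concrete states.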
