
Accepted Papers

  1. Reasoning about Convolutional Neural Networks via Transfer Learning and Convex Programming

    Authors: Dario Guidotti, Francesco Leofante, Luca Pulina and Armando Tacchella

    Abstract: Recent public calls for the development of explainable and verifiable AI have led to a growing interest in the formal verification and repair of machine-learned models. Despite the impressive progress the learning community has made, models such as deep neural networks remain vulnerable to adversarial attacks, and their sheer size still represents a major obstacle to formal analysis. In this paper, we present our current efforts to tackle the repair of deep convolutional neural networks using ideas borrowed from Transfer Learning. Using preliminary results obtained on toy architectures, we show how models of deep convolutional neural networks can be transformed into simpler ones while preserving their accuracy, and we discuss how formal repair could benefit from this process.
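
    A minimal sketch of how such a simplification could look in practice, assuming a distillation-style setup in PyTorch: a smaller student network is fit to the outputs of a trained teacher CNN, and the simplified model is what would then be handed to a verifier or repair procedure. The architectures, data, and loss below are placeholders, not the transformation used in the paper.

        # Illustrative sketch (not the authors' code): compressing a trained CNN into a
        # smaller "student" network by fitting the student to the teacher's outputs.
        import torch
        import torch.nn as nn

        teacher = nn.Sequential(                     # stand-in for the trained deep CNN
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 28 * 28, 10),
        )
        student = nn.Sequential(                     # simpler model intended for formal analysis
            nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(4 * 28 * 28, 10),
        )

        opt = torch.optim.Adam(student.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        for step in range(100):                      # unlabeled inputs suffice: the teacher provides targets
            x = torch.randn(32, 1, 28, 28)           # replace with real training images
            with torch.no_grad():
                target = teacher(x)
            loss = loss_fn(student(x), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # After fitting, the student's accuracy is compared against the teacher's on
        # held-out data, and the smaller student is the object passed to verification/repair.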

     

  2. Conformal Predictions for Hybrid System State Classification

    Authors: Luca Bortolussi, Francesca Cairoli, Nicola Paoletti, Scott Smolka and Scott Stoller

    Abstract: Neural State Classification (NSC) [19] is a scalable method for the analysis of hybrid systems, which consists of learning a neural-network-based classifier that detects whether or not an unsafe state can be reached from a given configuration of a hybrid system. NSC has very high accuracy, yet it is prone to prediction errors that can affect system safety. To overcome this limitation, we present a method, based on the theory of conformal prediction, that complements NSC predictions with statistically sound estimates of prediction uncertainty. This results in a principled criterion for rejecting potentially erroneous predictions a priori, i.e., without knowing the true reachability values. Our approach is highly efficient (with runtimes on the order of milliseconds) and effective: in our experiments it successfully rejects almost all erroneous NSC predictions.
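
    A minimal split-conformal sketch of such a rejection criterion, assuming a generic probabilistic classifier in place of the NSC network and synthetic data; LogisticRegression and the 1 - p nonconformity score are placeholder choices, not the paper's setup.

        # Minimal split-conformal sketch (an illustration, not the authors' implementation).
        # `clf` stands in for the neural state classifier; data here is synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 4))                      # stand-in for sampled system states
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # stand-in for reachability labels
        X_train, y_train = X[:1000], y[:1000]
        X_cal,   y_cal   = X[1000:1500], y[1000:1500]
        X_test           = X[1500:]

        clf = LogisticRegression().fit(X_train, y_train)

        # Nonconformity score: 1 - probability assigned to the true label.
        cal_scores = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

        def prediction_set(x, eps=0.05):
            """Labels whose conformal p-value exceeds eps; |set| != 1 means 'reject'."""
            probs = clf.predict_proba(x.reshape(1, -1))[0]
            labels = []
            for label, p in enumerate(probs):
                score = 1.0 - p
                p_value = (np.sum(cal_scores >= score) + 1) / (len(cal_scores) + 1)
                if p_value > eps:
                    labels.append(label)
            return labels

        rejected = sum(len(prediction_set(x)) != 1 for x in X_test)
        print(f"rejected {rejected} of {len(X_test)} predictions as uncertain")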

     

  3. Formal verification of neural network controlled autonomous systems

    Authors: Xiaowu Sun, Haitham Khedr and Yasser Shoukry

    Abstract: In this paper, we consider the problem of formally verifying the safety of an autonomous robot equipped with a Neural Network (NN) controller that processes LiDAR images to produce control actions. Given a workspace that is characterized by a set of polytopic obstacles, our objective is to compute the set of safe initial states such that a robot trajectory starting from these initial states is guaranteed to avoid the obstacles. Our approach is to construct a finite-state abstraction of the system and use standard reachability analysis over this abstraction to compute the set of safe initial states. To mathematically model the imaging function that maps the robot position to the LiDAR image, we introduce the notion of imaging-adapted partitions of the workspace, in which the imaging function is guaranteed to be affine. Given this workspace partitioning, discrete-time linear dynamics of the robot, and a pre-trained NN controller with Rectified Linear Unit (ReLU) nonlinearity, we utilize a Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible assignments of the different ReLUs. To accelerate this process, we develop a pre-processing algorithm that rapidly prunes the space of feasible ReLU assignments. Finally, we demonstrate the efficiency of the proposed algorithms using numerical simulations with increasing complexity of the neural network controller.
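
    A much-simplified sketch of the pruning idea, assuming a toy one-layer ReLU controller and a single box-shaped partition element: within a region where the measurement map is affine, each candidate ReLU on/off pattern defines linear constraints on the robot position, so its feasibility can be checked with a small LP (here via scipy.optimize.linprog) and infeasible patterns can be discarded before enumeration. The weights and the region below are invented for illustration, not the paper's SMC encoding.

        # Simplified sketch of pruning ReLU activation patterns over a workspace region.
        import itertools
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        W, b = rng.normal(size=(4, 2)), rng.normal(size=4)   # first-layer weights of a toy ReLU controller
        region = [(-1.0, 1.0), (-1.0, 1.0)]                  # box of robot positions (one partition element)

        def pattern_feasible(pattern):
            """LP feasibility of {x in region : sign(W x + b) matches the given pattern}."""
            A_ub, b_ub = [], []
            for w_i, b_i, active in zip(W, b, pattern):
                if active:                      # w_i . x + b_i >= 0
                    A_ub.append(-w_i); b_ub.append(b_i)
                else:                           # w_i . x + b_i <= 0
                    A_ub.append(w_i);  b_ub.append(-b_i)
            res = linprog(c=np.zeros(2), A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=region)
            return res.status == 0

        feasible = [p for p in itertools.product([0, 1], repeat=len(b)) if pattern_feasible(p)]
        print(f"{len(feasible)} of {2 ** len(b)} ReLU patterns survive pruning")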

     

  4. Verification of RNN-Based Neural Agent-Environment Systems

    Authors: Michael Akintunde, Andreea Kovorchian, Alessio Lomuscio and Edoardo Pirovano

    Abstract: We introduce agent-environment systems where the agent is stateful and executing a ReLU recurrent neural network. We define and study their verification problem by providing equivalences of recurrent and feed-forward neural networks on bounded execution traces. We give a sound and complete procedure for their verification against properties specified in a simplified version of LTL on bounded executions. We present an implementation and discuss the experimental results obtained.
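
    A toy numpy sketch of the bounded-trace equivalence the verification relies on: a single-cell ReLU RNN run step by step agrees with a k-layer feed-forward unrolling that reads the concatenation of the k inputs, with layer t consuming slice t. The sizes and weights are arbitrary assumptions; the paper's agent-environment setting and property language are not modelled here.

        # Toy sketch of unrolling a ReLU recurrent cell into a feed-forward network
        # on traces of bounded length k (numpy only; shapes are illustrative).
        import numpy as np

        rng = np.random.default_rng(2)
        n_in, n_hid, k = 3, 5, 4                      # input size, state size, trace bound
        W_in  = rng.normal(size=(n_hid, n_in))
        W_rec = rng.normal(size=(n_hid, n_hid))
        b     = rng.normal(size=n_hid)
        relu  = lambda z: np.maximum(z, 0.0)

        def rnn(inputs):
            """Stateful ReLU recurrent cell run step by step."""
            h = np.zeros(n_hid)
            for x in inputs:
                h = relu(W_in @ x + W_rec @ h + b)
            return h

        def unrolled_ffnn(flat_inputs):
            """Equivalent feed-forward network over the concatenation of k inputs:
            layer t only reads slice t of the input, mirroring one recurrent step."""
            h = np.zeros(n_hid)
            for t in range(k):
                x_t = flat_inputs[t * n_in:(t + 1) * n_in]
                h = relu(W_in @ x_t + W_rec @ h + b)
            return h

        trace = rng.normal(size=(k, n_in))
        assert np.allclose(rnn(trace), unrolled_ffnn(trace.reshape(-1)))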

     

  5. Better AI through Logical Scaffolding

    Authors: Nikos Arechiga, Jonathan DeCastro, Soonho Kong and Karen Leung

    Abstract: We describe the concept of logical scaffolds, which can be used to improve the quality of software that relies on AI components. We explain how some of the existing ideas on runtime monitors for perception systems can be seen as a specific instance of logical scaffolds. Furthermore, we describe how logical scaffolds may be useful for improving AI programs beyond perception systems, including general prediction systems and agent behavior models.
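
    As a toy illustration of the runtime-monitor instance mentioned above (the property, the detector interface, and all names are assumptions, not taken from the paper), a scaffold could encode a temporal-consistency requirement on a perception stream:

        # Illustrative only: a tiny runtime monitor of the kind the abstract cites as a
        # special case of a logical scaffold.
        def consistency_monitor(detections_t, detections_t1, min_conf=0.8):
            """Flag tracked objects that were detected confidently at time t
            but are missing at time t+1 (a simple temporal-consistency property)."""
            confident = {obj_id for obj_id, conf in detections_t.items() if conf >= min_conf}
            return sorted(confident - set(detections_t1))

        # Example: object 7 disappears between consecutive frames -> the monitor raises it.
        violations = consistency_monitor({3: 0.95, 7: 0.90}, {3: 0.97})
        print("monitor violations:", violations)   # [7]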

     

  6. Improving Deep Neural Network Verification Using Specification-Guided Search

    Authors: Joshua Smith, Xiaowei Huang, Viswanathan Swaminathan and Zhen Zhang

    Abstract: Providing safety guarantees is extremely important for neural networks used in safety-critical applications. Due to their high dimensionality, deep neural networks pose a difficult challenge to modern formal verification techniques. Adversarial robustness, a safety property describing the robustness of a network to adversarial examples, can be formally defined as the non-existence of adversarial examples within a region around a tested point in the input space. Even though the region being verified is limited to a relatively small radius, the continuous nature of weights and biases in neural networks makes exhaustive search for adversarial examples infeasible. On the other hand, discretization of the region can significantly speed up the search. Adding a feature-dimension heuristic to the search has successfully improved the performance and feasibility of adversarial robustness verification. This paper describes a specification-guided dimension selection method that utilizes the training data of the network to guide a search of the robustness region. Comparison with a state-of-the-art verification tool demonstrates improvements in both efficiency and confidence in adversarial example detection.
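
    A simplified sketch of dimension-guided search over a discretized robustness region, assuming a placeholder linear classifier and training-data variance as a stand-in ranking heuristic; the paper's specification-guided selection and the comparison tool are not reproduced here.

        # Simplified sketch of dimension-guided search in a discretized L-inf ball.
        import itertools
        import numpy as np

        rng = np.random.default_rng(3)
        X_train = rng.normal(size=(500, 10))                       # stand-in training data
        w_true = rng.normal(size=10)
        classify = lambda x: int(x @ w_true > 0)                   # stand-in trained classifier

        def guided_search(x0, radius=0.3, steps=5, top_k=3):
            """Search a discretized L-inf ball around x0, perturbing only the top_k
            dimensions ranked by training-data variance (the guidance heuristic here)."""
            ranked = np.argsort(-X_train.var(axis=0))[:top_k]      # highest-ranked dimensions first
            grid = np.linspace(-radius, radius, steps)
            label0 = classify(x0)
            for deltas in itertools.product(grid, repeat=top_k):
                x = x0.copy()
                x[ranked] += deltas
                if classify(x) != label0:
                    return x                                       # adversarial example found
            return None                                            # none found in the discretized region

        x0 = rng.normal(size=10)
        adv = guided_search(x0)
        print("adversarial example found" if adv is not None else "region appears robust (discretized)")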

     

  7. Safety Verification in Reinforcement Learning Control

    Authors: Hoang Dung Tran, Feiyang Cai, Diego Manzanas Lopez, Patrick Musau, Taylor T Johnson and Xenofon Koutsoukos

    Abstract: This paper proposes a new forward reachability analysis approach to verify the safety of cyber-physical systems (CPS) with reinforcement learning controllers. The foundation of our approach lies in two efficient, exact and over-approximate reachability algorithms for neural network control systems based on star sets, an efficient representation of polyhedra. Using these algorithms, we determine the initial conditions for which a safety-critical system with a neural network controller is safe by incrementally searching for a critical initial condition from which the safety of the system cannot be proved. Our approach produces tight over-approximation errors and is computationally efficient, which allows its application to practical CPS with learning-enabled components (LECs). We implement our approach in NNV [18], a recent verification tool for neural network control systems, and evaluate its advantages and applicability by verifying the safety of a practical Advanced Emergency Braking System (AEBS) with a reinforcement learning (RL) controller trained using the deep deterministic policy gradient (DDPG) method. The experimental results show that our new reachability algorithms are much less conservative than the polyhedron-based approach [18, 21, 22]. We successfully determine the entire region of initial conditions of the AEBS with the RL controller for which the safety of the system is guaranteed, while the polyhedron-based approach cannot prove the safety properties of the system.
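
    A deliberately coarse sketch of the closed-loop over-approximate reachability idea, assuming a toy linear plant and a two-layer ReLU controller; interval boxes are used below purely for illustration, whereas the paper's star-set representation is exact for the affine steps and far tighter overall.

        # Coarse sketch of over-approximate forward reachability for a plant in closed
        # loop with a ReLU network controller (interval boxes, not the paper's star sets).
        import numpy as np

        rng = np.random.default_rng(4)
        A = np.array([[1.0, 0.1], [0.0, 1.0]])                  # toy linear plant x' = A x + B u
        B = np.array([[0.005], [0.1]])
        W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)    # stand-in controller weights
        W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

        def interval_affine(lo, hi, W, b):
            """Exact interval image of the affine map W x + b over the box [lo, hi]."""
            center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
            c = W @ center + b
            r = np.abs(W) @ radius
            return c - r, c + r

        def controller_bounds(lo, hi):
            """Interval bounds on the ReLU controller's output over a box of states."""
            l1, u1 = interval_affine(lo, hi, W1, b1)
            l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)   # interval ReLU
            return interval_affine(l1, u1, W2, b2)

        lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])    # box of initial states
        for step in range(20):
            ulo, uhi = controller_bounds(lo, hi)
            # propagate the joint box [x; u] through x' = [A  B] [x; u]
            lo, hi = interval_affine(np.concatenate([lo, ulo]), np.concatenate([hi, uhi]),
                                     np.hstack([A, B]), np.zeros(2))
            if hi[0] >= 5.0:                                    # toy unsafe region: position >= 5
                print(f"over-approximation reaches the unsafe set at step {step}")
                break
        else:
            print("safe for 20 steps under this coarse over-approximation")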

     

  8. Logic Extraction for Explainable AI

    Authors: Susmit Jha

    Abstract: In this paper, we investigate logic extraction as a means for building explainable AI. A black-box AI system is used as an oracle that labels inputs as positive or negative depending on its decision. We formulate the generation of explanations as the problem of learning Boolean formulae from examples obtained by actively querying such an oracle. This problem has exponential worst-case complexity in the general case as well as under many restrictions. In this paper, we focus on learning sparse Boolean formulae, which depend on only a small (but unknown) subset of the overall vocabulary of atomic propositions. We propose two algorithms to learn these sparse Boolean formulae with a given confidence: the first based on binary search in the Hamming space, and the second based on a random walk on the Boolean hypercube. This assumption of sparsity is motivated by the problem of mining explanations for decisions made by artificially intelligent (AI) algorithms, where the explanation of an individual decision may depend on a small but unknown subset of all the inputs to the algorithm. We demonstrate the use of these algorithms in automatically generating explanations of such decisions. We show that the number of examples needed for both proposed algorithms grows only logarithmically with the size of the vocabulary of atomic propositions. We illustrate the practical effectiveness of our approach on a diverse set of case studies. This paper summarizes the results presented in our recent work [20, 19].
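
    A small sketch of the binary-search-in-Hamming-space idea for the sparse setting, assuming a toy oracle (a formula over 64 variables that depends on only three of them): starting from a positive and a negative assignment, halving the set of differing bits while keeping the oracle answers on the two endpoints different isolates one variable the formula genuinely depends on, using a number of queries logarithmic in the vocabulary size.

        # Sketch of locating a relevant variable of a sparse Boolean formula via
        # binary search in Hamming space; the toy oracle below is an assumption.
        import numpy as np

        n = 64
        oracle = lambda x: bool((x[3] and x[17]) or x[42])   # sparse formula over 64 variables

        def find_relevant_variable(x_pos, x_neg):
            """Given oracle(x_pos) != oracle(x_neg), repeatedly move one endpoint halfway
            toward the other while keeping the two oracle answers different; the last
            differing bit is a variable the formula depends on."""
            lo, hi = x_neg.copy(), x_pos.copy()
            while True:
                diff = np.flatnonzero(lo != hi)
                if len(diff) == 1:
                    return int(diff[0])
                mid = lo.copy()
                half = diff[: len(diff) // 2]
                mid[half] = hi[half]                 # flip half of the differing bits
                if oracle(mid) != oracle(lo):
                    hi = mid                         # the answers differ across the first half
                else:
                    lo = mid                         # otherwise they differ across the second half

        x_pos = np.ones(n, dtype=bool)               # oracle -> True
        x_neg = np.zeros(n, dtype=bool)              # oracle -> False
        print("relevant variable:", find_relevant_variable(x_pos, x_neg))   # one of 3, 17, 42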
