A tool to analyze the robustness and safety of neural networks.

  • Fig. 01 – Airborne Collision Avoidance System

    01
    Situation
    Making the right decision in complex situations

    How to act in a given situation? To answer this question, the use of neural networks (NNs) to analyse different parameters (inputs) and infer recommendations (outputs) is becoming increasingly frequent.

    As a result, they are increasingly being used in many fields, such as healthcare, automotive, or the monitoring of physical systems. For this collision avoidance problem with unmanned vehicles (ACAS Xu), neural networks are a lightweight solution that can be embedded to give real-time headings and prevent collisions.

  • Fig. 02 – Representation of the layers of a neural network

    02
    Neural Network
    An intelligent system but difficult to understand

    By construction, a neural network is a sequence of layers, sometimes non-linear, made of neurons that use matrix operations to calculate an output from the provided inputs.
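
As a minimal sketch, such a network is just a chain of matrix products with non-linear activations in between (the weights below are made up for illustration, not taken from a real model):

```python
import numpy as np

# Hypothetical weights for a tiny network: 2 inputs -> 3 hidden neurons -> 2 outputs.
W1 = np.array([[1.0, -1.0], [0.5, 2.0], [-1.5, 0.3]])
b1 = np.array([0.1, -0.2, 0.0])
W2 = np.array([[1.0, 0.0, -1.0], [0.2, 1.0, 0.5]])
b2 = np.array([0.0, 0.1])

def relu(x):
    return np.maximum(x, 0.0)      # the non-linear activation between layers

def forward(x):
    h = relu(W1 @ x + b1)          # layer 1: matrix product + bias, then ReLU
    return W2 @ h + b2             # output layer: purely affine

print(forward(np.array([0.5, -0.5])))   # two output scores
```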

    These networks are then trained using a training algorithm, which allows them to "learn" from known examples given as input and then generalise to new unknown inputs.

    Nevertheless, their complexity makes them difficult to understand for users, who may see them as "black boxes": a system that can be used but whose inner workings are not understood.

  • Fig. 03 – A safety property on ACAS Xu (from G. Katz et al. (2017), Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks)

    03
    Decision
    How can we be sure that the results are reliable?

    The accuracy and results of NNs are getting closer to achieving acceptable performance in an industrial environment. However, as many academic studies have shown, NNs are still very sensitive to disturbances on their inputs. These can be unintentional (image compression, inaccuracy of source sensors) or malicious (adversarial attack).
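
This sensitivity can be seen even on a toy, hypothetical linear classifier: a perturbation of only 0.02 per input is enough to flip its decision.

```python
import numpy as np

# Hypothetical linear classifier: sign(w . x) decides the class.
w = np.array([1.0, -1.0])
x = np.array([0.51, 0.50])
print(np.sign(w @ x))                   # classified positive

# A tiny, targeted perturbation (an "adversarial" nudge) flips the decision.
x_adv = x + np.array([-0.02, 0.02])
print(np.sign(w @ x_adv))               # now classified negative
```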

    To guard against these disturbances, tools are needed to check a network's behaviour in the face of such potential variations. This is the purpose of PyRAT: to guarantee the safety of the network by formally proving properties on its outputs even with potential variations on its inputs.

  • Fig. 04 – Abstract interpretation over the layers of the network

    04
    Angle
    Abstract interpretation

    PyRAT is based on abstract interpretation techniques. It over-approximates the behaviour of a network (and its non-linear functions) in order to obtain the reachable set of its outputs.

    The orange beam in front of the plane represents all the directions in which the plane can fly w.r.t. an intruder, and for which we want to prove that the NN always behaves in the desired way. The width of the orange beam together with the distance to the intruder can be represented in a 2D space as an orange rectangle. If several input parameters vary, the possible input space for the plane will be represented by a hyper-rectangle.

    From this initial input space, PyRAT will test a safety property on the NN, i.e. check whether the transformation of the initial space by the model fits into the expected output space (in purple).
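
A minimal interval-arithmetic sketch of this check (not PyRAT's actual API; the two-layer network and the property below are made up): propagate the input hyper-rectangle through each layer, then test whether the resulting output interval lies inside the expected set.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_bounds(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-layer network (a real ACAS Xu network is far larger).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -0.5])
W2, b2 = np.array([[1.0, -1.0]]), np.array([0.2])

# Input hyper-rectangle, e.g. an angle range and a distance range.
lo, hi = np.array([0.0, 0.0]), np.array([0.2, 0.1])

lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W2, b2)

# Safety property: the output must stay positive over the whole input box.
print("verified" if lo[0] > 0 else "inconclusive")
```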

  • Fig. 05 – Subdivision tree of the inputs of the network using abstract interpretation

    05
    Precision
    Analysis accuracy

    To increase the accuracy of the analysis, PyRAT recursively subdivides the input space into smaller subspaces on which the analysis is more accurate. The precision here corresponds to the smallest size allowed for a subdivision. Choosing a higher precision or different ways of subdividing the space (e.g. along width or height) allows a more precise analysis. Nevertheless, the overall analysis time grows linearly with the number of subdivisions made.
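
The subdivision loop can be sketched as a small branch-and-bound, here on a hand-made one-dimensional property rather than a real network, with min_width playing the role of the precision setting:

```python
# Toy property: f(x) = x*x - x + 0.3 > 0 for every x in [0, 1] (true: min is 0.05).

def interval_f(a, b):
    """Interval over-approximation of f on [a, b], assuming 0 <= a <= b."""
    return a * a - b + 0.3, b * b - a + 0.3

def verify(a, b, min_width=0.01):
    lo, hi = interval_f(a, b)
    if lo > 0:
        return "Robust"       # the whole sub-box provably satisfies the property
    if hi <= 0:
        return "Unsafe"       # the whole sub-box provably violates it
    if b - a < min_width:
        return "Unknown"      # precision limit reached: give up on this box
    mid = (a + b) / 2         # otherwise subdivide and recurse on both halves
    left, right = verify(a, mid, min_width), verify(mid, b, min_width)
    if left == right == "Robust":
        return "Robust"
    if "Unsafe" in (left, right):
        return "Unsafe"
    return "Unknown"

print(verify(0.0, 1.0))            # fine precision: property proven
print(verify(0.0, 1.0, 0.5))       # coarse precision: analysis stays inconclusive
```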

  • Fig. 06 – 2D view of the results of PyRAT on a problem

    06
    Analysis
    Reliability assessment

    When verifying a safety property with PyRAT on a model, three outputs are possible. If PyRAT has shown the existence of a counterexample to the property, then the property is false. If PyRAT manages to show that all the spaces reached by the subdivisions respect the property, then the property is globally verified. Otherwise, the result is unknown, i.e. we do not know whether the property is true or false and more precision is required.

    This analysis of a NN with PyRAT can be done on an ad hoc basis, for example to certify the robustness of a system to a regulatory body. It can also be integrated directly into the training process of the NN, to improve the AI until a satisfactory level of robustness is achieved, making it tolerant to input variations.

  • Fig. 01 – Breast Histopathology Dataset

    01
    Situation
    Making the right decision in complex situations

    How to act in a given situation? To answer this question, the use of neural networks (NNs) to analyse different parameters (inputs) and infer recommendations (outputs) is becoming increasingly frequent.

    As a result, they are increasingly being used in many fields, such as healthcare, automotive, or monitoring physical systems. Here, a neural network is used to detect the presence of cancerous cells in images of breast biopsies.

  • Fig. 02 – Representation of the layers of a neural network

    02
    Neural Network
    An intelligent system but difficult to understand

    By construction, an NN is a sequence of layers, sometimes non-linear, made of neurons that use matrix operations to calculate an output from the provided inputs.

    These networks are then trained using a training algorithm, which allows them to "learn" from known examples given as input and then generalise to new unknown inputs.

    Nevertheless, their complexity makes them difficult to understand for users, who may see them as "black boxes": a system that can be used but whose inner workings are not understood.

  • Fig. 03.A – Adversarial attacks on images (from Gong, Yuan & Poellabauer, Christian (2018), An Overview of Vulnerabilities of Voice Controlled Systems)

    Fig. 03.B – Adversarial attacks on voice signals (from Gong, Yuan & Poellabauer, Christian (2018), An Overview of Vulnerabilities of Voice Controlled Systems)

    03
    Decision
    How can we be sure that the results are reliable?

    The accuracy and results of NNs are getting closer to achieving acceptable performance in an industrial environment. However, as many academic studies have shown, NNs are still very sensitive to disturbances on their inputs. These can be unintentional (image compression, inaccuracy of source sensors) or malicious (adversarial attack).

    To guard against these disturbances, tools are needed to check a network's behaviour in the face of such potential variations. This is the purpose of PyRAT: to guarantee the safety of the network by formally proving properties on its outputs even with potential variations on its inputs.

  • Fig. 04 – Abstract interpretation over the layers of the network

    04
    Brightness
    Abstract interpretation

    PyRAT is based on abstract interpretation techniques. It over-approximates the behaviour of a network (and its non-linear functions) in order to obtain the reachable set of its outputs.

    The three cubes R, G, and B represent the possible variations on each pixel of the input image, for example due to a change in brightness. PyRAT then aims to prove that the NN will always give the same decision even with such a change in brightness.

    Therefore, PyRAT will only evaluate the local robustness of the NN to an image, ensuring that the output does not change for a small neighbourhood around the image. More generally, we will study this local robustness around a large number of images representative of the situation (a test set), which will then give us an estimate of the robustness of the model.
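
For a hypothetical linear classifier, this local check can even be done exactly: a uniform brightness shift t applied to every pixel moves the logits by t * W.sum(axis=1), so the worst-case logit margin over all |t| <= eps is easy to bound.

```python
import numpy as np

# Hypothetical linear classifier over a 3-"pixel" image, 2 classes.
W = np.array([[0.8, -0.2, 0.5], [0.1, 0.6, -0.3]])
b = np.array([0.0, 0.1])

def robust_to_brightness(x, eps):
    logits = W @ x + b
    pred = int(np.argmax(logits))
    shift = W.sum(axis=1)          # a brightness change t moves logits by t * shift
    for j in range(len(logits)):
        if j == pred:
            continue
        margin = logits[pred] - logits[j]
        # Worst case of the margin over all |t| <= eps:
        if margin - eps * abs(shift[pred] - shift[j]) <= 0:
            return False           # some brightness change may flip the decision
    return True

x = np.array([0.5, 0.4, 0.6])
print(robust_to_brightness(x, 0.05))   # small brightness margin: provably robust
print(robust_to_brightness(x, 0.6))    # large brightness margin: not provably robust
```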

  • Fig. 05 – 2D Visualisation of an abstract interpretation domain

    05
    Precision
    Analysis accuracy

    Here, due to the high dimensionality of the images, the recursive splitting used in the first scenario is not possible. PyRAT therefore increases the accuracy of the analysis in another way: by making the shape of the domains used in the analysis more complex. In a 2-dimensional input space, the analysis starts with a simple rectangle. Each non-linearity and layer of the network then adds a side to this shape until a final shape is reached for the output.

    The more sides the shape of the subspace has, the more complex it is and thus the more accurate the analysis will be. However, the more sides there are, the longer the analysis will take.
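
One classical way to realise this "more sides" idea is affine forms (zonotopes); the sketch below is an illustration in that spirit, not PyRAT's internals. Each value is a centre plus noise symbols ranging over [-1, 1], and every ReLU that cannot be resolved exactly introduces one fresh symbol:

```python
class Affine:
    """Value c + sum(a_i * eps_i) with each noise symbol eps_i in [-1, 1]."""
    def __init__(self, c, coeffs):
        self.c, self.coeffs = c, dict(coeffs)

    def bounds(self):
        r = sum(abs(a) for a in self.coeffs.values())
        return self.c - r, self.c + r

def relu(x, fresh):
    l, u = x.bounds()
    if l >= 0:
        return x
    if u <= 0:
        return Affine(0.0, {})
    lam = u / (u - l)                    # slope of the sound over-approximation
    mu = -lam * l / 2                    # centre shift = amplitude of the new noise
    coeffs = {k: lam * a for k, a in x.coeffs.items()}
    coeffs[fresh] = mu                   # the extra "side" added by this ReLU
    return Affine(lam * x.c + mu, coeffs)

def sub(p, q):
    keys = set(p.coeffs) | set(q.coeffs)
    return Affine(p.c - q.c,
                  {k: p.coeffs.get(k, 0.0) - q.coeffs.get(k, 0.0) for k in keys})

x = Affine(0.5, {"e1": 1.0})             # x ranges over [-0.5, 1.5]
y = relu(x, fresh="e2")
print(y.bounds())                        # over-approximation of relu(x)
# The shared symbol e1 keeps y correlated with x, so relu(x) - x stays tight,
# whereas plain intervals would give [0, 1.5] - [-0.5, 1.5] = [-1.5, 2.0]:
print(sub(y, x).bounds())
```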

  • Fig. 06 – PyRAT's results over a dataset distribution

    06
    Analysis
    Reliability assessment

    When verifying a safety property with PyRAT on a model, three outputs are possible. If PyRAT has shown the existence of a counterexample to the property, then the image is unsafe. If PyRAT manages to show that the whole space reached is correctly classified, then the image is robust. Otherwise, the result is unknown, i.e. we do not know whether the image is robust or not and more precision is required.

    For robust images, we prove that all possible variations in the analysis margin cannot lead to a different classification. Thus, it is impossible for a sensor error or an adversarial attack in this margin to affect the classification of the image.
