ECOOP 2026
Mon 29 June - Fri 3 July 2026 Brussels, Belgium

This program is tentative and subject to change.

Wed 1 Jul 2026 16:22 - 16:45 at I.203 - AI & Human-in-the-Loop

Verified explanations are a theoretically principled way to explain the decisions taken by neural networks, which are otherwise black-box in nature. However, these techniques face significant scalability challenges, as they require multiple calls to neural network verifiers, each with exponential worst-case complexity. We present FaVeX, a novel algorithm to compute verified explanations. FaVeX accelerates the computation by dynamically combining batch and sequential processing of input features, and by reusing information from previous queries, both when proving invariance with respect to certain input features and when searching for feature assignments that alter the prediction. Furthermore, we present a novel, hierarchical definition of verified explanations, termed verifier-optimal robust explanations, that explicitly factors the incompleteness of network verifiers into the explanation. Our comprehensive experimental evaluation demonstrates the superior scalability of both FaVeX and verifier-optimal robust explanations, which together can produce meaningful formal explanations on networks with hundreds of thousands of non-linear activations.
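To give a sense of why such explanations require many verifier calls, the sketch below shows the classic deletion-based loop for computing a subset-minimal robust explanation: each feature is tentatively freed, and a verifier call decides whether the prediction stays invariant. Everything here is illustrative and hypothetical, not the FaVeX algorithm itself (whose batching and query-reuse strategies are not shown); the brute-force `toy_verifier` stands in for a real neural-network verifier.

```python
from itertools import product

def toy_verifier(fixed, x, predict, domain):
    """Return True iff the prediction is invariant when only the features
    in `fixed` are held at their values from x while all other features
    range over `domain`. Brute-force enumeration stands in for a real
    neural-network verifier (which would have exponential worst case)."""
    base = predict(x)
    free = [i for i in range(len(x)) if i not in fixed]
    for values in product(*(domain[i] for i in free)):
        x2 = list(x)
        for i, v in zip(free, values):
            x2[i] = v
        if predict(tuple(x2)) != base:
            return False  # found a free-feature assignment flipping the prediction
    return True

def robust_explanation(x, predict, domain):
    """Deletion-based computation of a subset-minimal robust explanation:
    try to free each feature in turn; keep it in the explanation only if
    freeing it would allow the prediction to change. Note the one verifier
    call per feature -- the cost that algorithms like FaVeX aim to reduce."""
    fixed = set(range(len(x)))
    for i in range(len(x)):
        if toy_verifier(fixed - {i}, x, predict, domain):
            fixed.discard(i)  # feature i is irrelevant; drop it
    return fixed
```

For example, for a toy classifier `predict = lambda x: x[0] >= 1` over the domain `{0, 1, 2}` per feature, the explanation of the input `(2, 0, 1)` reduces to the single feature `{0}`, since the other two features can never flip the prediction.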

Wed 1 Jul

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

16:00 - 16:45
AI & Human-in-the-Loop (Technical Papers) at I.203
16:00
22m
Talk
Meaningful Human-in-the-Loop Checking of GenAI Synthesis for Restricted Languages
Technical Papers
Siddhartha Prasad (Brown University), Skyler Austen (Brown University), Kathi Fisler (Brown University), Shriram Krishnamurthi (Brown University)
16:22
22m
Talk
Faster Verified Explanations for Neural Networks
Technical Papers
Alessandro De Palma (LSE), Greta Dolcetti (Ca’ Foscari University of Venice), Caterina Urban (Inria - École Normale Supérieure)