Luca Marzari


šŸ“TU Wien

Vienna, Austria 🇦🇹

I am a Postdoctoral Research Associate at TU Wien and an incoming Principal Investigator for the PROVE-IT FWF ESPRIT fellowship project. I’m also part of the TrustCPS group led by Prof. Ezio Bartocci. During my PhD, I was a visiting researcher at the Robotics Institute of Carnegie Mellon University, working in the Intelligent Control Lab of Prof. Changliu Liu, and at Imperial College London, in the Centre for Explainable AI directed by Dr. Francesco Leofante and Prof. Francesca Toni. I received my PhD (cum laude) in Computer Science from the University of Verona, under the supervision of Prof. Alessandro Farinelli and Prof. Ferdinando Cicalese.

My research interests focus on developing efficient and reliable methods for verifying and enhancing the explainability of deep neural networks (DNNs). I have also designed probabilistic verification algorithms with strong theoretical guarantees to bridge the gap between formal verification and safe deep reinforcement learning. The outcomes of my work have led to several publications in top-tier international conferences and journals in Artificial Intelligence and Verification, and to prestigious collaborations with world-leading universities. You can check out the publications page and my CV for more information on my research activity so far.

Outside of work, I am a rock climber 🧗🏻 and I also love hiking ⛰️ and photography 📸.



News 📢

2026  May
  • I’m excited to share that my research proposal PROVE-IT (Probabilistic Verification and Counterfactual Explanations for Iterative Decision-Making Tasks) has been awarded the highly competitive Austrian Science Fund FWF ESPRIT Fellowship, with funding of approximately €350K over the next 3 years 🤩 Stay tuned for exciting research! 🚀
  • Two new papers accepted! 🎉 “Probabilistic Verification of Recurrent Neural Networks for Single and Multi-Agent Reinforcement Learning” and “A Survey on the Verification of Reinforcement Learning Policies” have both been accepted at the 35th International Joint Conference on Artificial Intelligence (IJCAI 2026)! Papers available soon!
 April
  • New journal paper accepted! 🎉 “ε-Retraining Reinforcement Learning Algorithms” has been accepted at the Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS)! Paper available soon, stay tuned! 🚀
  • Excited to share that I’ve officially started a Postdoctoral Researcher position at TU Wien! 🎉
 March
  • Happy to share that I’ve officially graduated with a PhD cum laude! 🎉 You can read more details about my thesis, “Advanced Neural Networks Verification for Safe and Explainable Intelligent Systems,” on this dedicated page.
  • My PhD thesis, “Advanced Neural Networks Verification for Safe and Explainable Intelligent Systems,” is now online! Check it out here 🎉
 February
  • Excited to share that I’m visiting the Centre for Explainable AI at Imperial College London, working with Dr. Francesco Leofante. Stay tuned for exciting research! 🚀
2025  November
  • New journal paper accepted! 🎉 “Probabilistically Robust Counterfactual Explanations under Model Changes” has been accepted at the Artificial Intelligence Journal (AIJ)! You can read it here! 🚀
  • “Probabilistically Tightened Linear Relaxation-based Perturbation Analysis for Neural Network Verification” has been accepted at the Journal of Artificial Intelligence Research (JAIR)! You can find it here! 🚀
  • “On the Probabilistic Learnability of Compact Neural Network Preimage Bounds” has been accepted and selected for an oral presentation at AAAI 2026! 🤩 You can read it here!
 October
  • “Verifying Online Safety Properties for Safe Deep Reinforcement Learning” has been accepted in ACM Transactions on Intelligent Systems and Technology! You can find it here!
 July
  • “Designing Control Barrier Function via Probabilistic Enumeration for Safe Reinforcement Learning Navigation”, available here, has been accepted at IEEE RA-L 2025! 🤖
  • New preprint “Formal Verification of Variational Quantum Circuits” is available here!
  • “Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation”, available here, has been accepted at ECAI 2025! 🚀
  • New preprint “Probabilistically Tightened Linear Relaxation-based Perturbation Analysis for Neural Network Verification” is available here!
 May
  • ModelVerification.jl, the first cutting-edge Julia toolbox offering a suite of state-of-the-art methods for verifying DNNs, developed during my research period abroad at CMU, has been accepted at CAV 2025! 🤩 The toolbox is available here.
  • New preprint “Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation” is available here!
  • RobustX, developed in collaboration with colleagues at Imperial College, has been accepted at IJCAI 2025! 🚀 We propose a novel Python library to generate robust counterfactual explanations! Check out the paper and the code!
  • Our paper “Designing Control Barrier Function via Probabilistic Enumeration for Safe Reinforcement Learning Navigation” is now available as a preprint here! We propose a hierarchical control framework that leverages neural network verification techniques to design control barrier functions (CBFs) and policy correction mechanisms, ensuring safe reinforcement learning navigation policies. 🤖
 February
  • The journal extension of our paper “Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations”, written in collaboration with Francesco Leofante of the Centre for Explainable AI, is out! Read it here 🚀
  • RobustX is out! 🚀 A new Python library to generate robust counterfactual explanations, developed in collaboration with colleagues at Imperial College! Check out the paper and the code!
 January
  • Our proposal on “Tackling Environmental Sustainability Challenges via Reinforcement Learning and Counterfactual Explanations” has been accepted for discussion at the AAAI 2025 Bridge on Explainable AI, Energy, and Critical Infrastructure Systems. Looking forward to engaging discussions on Explainable AI for environmental sustainability!
  • Our paper “Improving Policy Optimization via ε-Retrain”, written during my research period at CMU under the supervision of Prof. Changliu Liu, has been accepted at AAMAS 2025 🤩 Happy to have collaborated with Enrico Marchesini and Prof. Priya Donti of the Laboratory for Information & Decision Systems at MIT on this project!
2024  October
  • ECAI 2024 was a blast! 🚀 I gave an oral presentation on “Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations” and ran an outreach activity with Francesco. We showed how model changes can invalidate counterfactuals and pose challenges when these explanations are used to provide recourse recommendations! In case you missed it, check out my GitHub page; I’ll post the demo used during the outreach soon!
 January
  • Our paper ā€œEnumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guaranteesā€ has been selected for an oral presentation at AAAI 2024 🤩.
2023  December
  • New paper accepted at AAAI 2024: “Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees”.
 November
  • “Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing” has been accepted at the 10th Italian Workshop on Artificial Intelligence and Robotics (AIRO 2023), co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023).
 June
  • Our paper “Formal Verification for Counting Unsafe Inputs in Deep Neural Networks” has been accepted at the 2nd Workshop on Formal Verification of Machine Learning (WFVML 2023) at ICML 2023!
  • Our paper “Constrained Reinforcement Learning and Formal Verification for Safe Colonoscopy Navigation” has been accepted at IROS 2023! 🤖
 May
  • Excited to share that in July I’ll start a research visit at the Intelligent Control Lab, part of the Robotics Institute at Carnegie Mellon University (CMU) 🇺🇸, under the supervision of Prof. Changliu Liu.
 April
  • Our paper “The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks” has been accepted at IJCAI 2023 (15% acceptance rate) 🤩
 January
  • Our paper “Verifying Learning-Based Robotic Navigation Systems”, in collaboration with The Katz Lab, has been accepted at ETAPS TACAS 2023 🚀
  • Our paper “Online Safety Property Collection and Refinement for Safe Deep Reinforcement Learning in Mapless Navigation” has been accepted at ICRA 2023.
  • Our paper “Safe Deep Reinforcement Learning by Verifying Task-Level Properties” has been accepted at AAMAS 2023.
2022  October
  • Excited to start a PhD in Computer Science advised by Prof. Alessandro Farinelli and Prof. Ferdinando Cicalese at the Department of Computer Science, University of Verona.
 April
  • I started a research fellowship under the supervision of Prof. Alessandro Farinelli at the Department of Computer Science, University of Verona.
2021  December
  • One poster paper accepted at ACM SAC IRMAS (< 25% acceptance rate): “Curriculum Learning for Safe Mapless Navigation”.
 September
  • One paper accepted at IEEE ICAR: “Towards Hierarchical Task Decomposition using Deep Reinforcement Learning for Pick and Place Subtasks”.


Selected publications 📚

  1. AAAI
    On the Probabilistic Learnability of Compact Neural Network Preimage Bounds
    Luca Marzari, Manuele Bicego, Ferdinando Cicalese, and Alessandro Farinelli
    Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2026
  2. AIJ
    Probabilistically Robust Counterfactual Explanations under Model Changes
    Luca Marzari, Francesco Leofante, Ferdinando Cicalese, and Alessandro Farinelli
    Artificial Intelligence Journal (AIJ), 2025
  3. JAIR
    Probabilistically Tightened Linear Relaxation-based Perturbation Analysis for Neural Network Verification
    Luca Marzari, Ferdinando Cicalese, and Alessandro Farinelli
    Journal of Artificial Intelligence Research (JAIR), 2025
  4. AAAI
    Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees
    Luca Marzari, Davide Corsi, Enrico Marchesini, Alessandro Farinelli, and Ferdinando Cicalese
    Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2024
  5. IJCAI
    The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks
    Luca Marzari, Davide Corsi, Ferdinando Cicalese, and Alessandro Farinelli
    International Joint Conference on Artificial Intelligence (IJCAI), 2023