News_2025
July
-
“Designing Control Barrier Function via Probabilistic Enumeration for Safe Reinforcement Learning Navigation” has been accepted at IEEE RA-L 2025! Available here! 🤖
-
New preprint “Formal Verification of Variational Quantum Circuits” is available here!
-
“Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation” has been accepted at ECAI 2025! Available here! 🚀
-
New preprint “Probabilistically Tightened Linear Relaxation-based Perturbation Analysis for Neural Network Verification” is available here!
May
-
ModelVerification.jl, the first cutting-edge Julia toolbox containing a suite of state-of-the-art methods for verifying DNNs, developed during my research period abroad at CMU, has been accepted at CAV 2025! 🤩 The toolbox is available here.
-
New preprint “Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation” is available here!
-
RobustX, made in collaboration with colleagues at Imperial College, has been accepted at IJCAI 2025! 🚀 We propose a novel Python library for generating robust counterfactual explanations! Check out the paper and the code!
-
Our paper “Designing Control Barrier Function via Probabilistic Enumeration for Safe Reinforcement Learning Navigation” is now available as a preprint here! We propose a hierarchical control framework leveraging neural network verification techniques to design control barrier functions (CBFs) and policy correction mechanisms that ensure safe reinforcement learning navigation policies. 🤖
February
-
The journal extension of our paper “Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations”, in collaboration with Francesco Leofante of the Centre for Explainable AI, is out! Read it here! 🚀
-
RobustX is out! 🚀 A new Python library for generating robust counterfactual explanations, made in collaboration with colleagues at Imperial College! Check out the paper and the code!
January
-
Our proposal on “Tackling Environmental Sustainability Challenges via Reinforcement Learning and Counterfactual Explanations” has been accepted for discussion at the AAAI 2025 Bridge on Explainable AI, Energy, and Critical Infrastructure Systems. Looking forward to engaging discussions on Explainable AI for environmental sustainability!
-
Our paper “Improving Policy Optimization via ε-Retrain”, made during my research period at CMU under the supervision of Prof. Changliu Liu, has been accepted at AAMAS 2025! 🤩 Happy to have collaborated with Enrico Marchesini and Prof. Priya Donti of the Laboratory for Information & Decision Systems at MIT on this project!