My research focuses on the formal verification of generative systems, specifically the application of software engineering principles to stochastic agents.
The central thesis of this work is that Large Language Models (LLMs) should be treated as components of the environment rather than decision-making agents. This perspective is formalized in the Dual-State Architecture, which separates workflow state (deterministic control) from environment state (stochastic generation).
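As a rough illustration of this separation, the following sketch (all names here are hypothetical, not the architecture's actual API) keeps the control state in a deterministic structure, while the LLM sits behind an environment boundary and can only be sampled, never drive transitions directly:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class WorkflowState:
    # Deterministic control state: only verified transitions may change it.
    step: str = "draft"

@dataclass
class Environment:
    # generate() stands in for an LLM call: stochastic, untrusted output.
    generate: Callable[[str], str]

def transition(state: WorkflowState,
               env: Environment,
               verify: Callable[[str], bool]) -> str:
    """Advance the workflow only when the stochastic output passes a
    deterministic check; otherwise the control state is unchanged."""
    candidate = env.generate(state.step)
    if verify(candidate):
        state.step = "accepted"
    return candidate
```

The key property is that the generative model never mutates workflow state itself; it only proposes outputs that deterministic logic may accept or reject.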
Research Implementation
This theoretical framework is implemented in AtomicGuard, a library that enforces deterministic verification loops on generative models. The work demonstrates that reliability in neuro-symbolic systems can be achieved without sacrificing the creative utility of the underlying models.
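A deterministic verification loop of the kind described above can be sketched as a bounded retry: sample the model, check the output against a deterministic predicate, and either accept or try again within a fixed budget. This is a minimal illustration, not AtomicGuard's actual interface:

```python
from typing import Callable, Optional

def guarded_generate(generate: Callable[[], str],
                     verify: Callable[[str], bool],
                     max_attempts: int = 3) -> Optional[str]:
    """Retry stochastic generation until a deterministic verifier
    accepts the output, or give up after a fixed attempt budget."""
    for _ in range(max_attempts):
        candidate = generate()
        if verify(candidate):
            return candidate
    # Exhausted the budget without a verified output: fail explicitly
    # rather than propagating an unchecked generation downstream.
    return None
```

Because the verifier is deterministic, the loop's acceptance behavior is fully specified even though the generator is not, which is the property that makes such systems amenable to formal analysis.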
Industrial Application
These principles are currently being operationalized at Manta Technologies, a DeepTech venture supported by BPI France and Inria.
As Chief Technology Officer, I architect Federated Learning infrastructures that enable secure model training across distributed environments. This role provides a practical setting for testing formal methods at scale, bridging the gap between theoretical correctness and industrial viability.