Our mission
Empowering discovery. Respecting human judgment.
We research AI solutions that support scientific discovery and human decision-making. We aim to harness the power of AI to uncover meaningful patterns in arbitrary data, yielding novel insights and inspiring testable hypotheses. We believe that AI should help humans discover knowledge, not replace their judgment.
Our research:
- [August 2025 – ] – We continue our research on excitation pullbacks, working to improve the method and enhance network training, with the aim of boosting both generalization and interpretability. Our goal is to apply these techniques in medical diagnostics, particularly in radiology and early cancer detection. We're open to collaboration and support – join us in shaping the future of human-aligned AI!
- [July 2025, paper, code] – We propose a novel way to interpret deep neural networks that generates refined local gradients with remarkable perceptual alignment. Specifically, we show that popular ImageNet-pretrained DNNs store meaningful information in highly activated paths. We hypothesize that these paths stabilize early in training, making the network behave like a kernel machine. In particular, we introduce excitation pullbacks, which faithfully represent the network's decisions via input-space features.
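To give a rough feel for the idea, here is a minimal NumPy sketch of a gradient that is "pulled back" through excited paths. Everything in it is an illustrative assumption rather than the paper's formulation: a tiny two-layer ReLU network with random weights, and a soft gate (a sigmoid of the pre-activation) standing in for the hard ReLU mask in the backward pass, so that highly activated units dominate the resulting input-space feature.

```python
import numpy as np

# Hypothetical two-layer ReLU network: x -> W1 -> ReLU -> W2 -> logits.
# The weights and the soft-gating rule are illustrative assumptions only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # hidden x input
W2 = rng.normal(size=(3, 8))   # classes x hidden

def pullback(x, target, temperature=None):
    """Gradient of the target logit w.r.t. the input x.

    temperature=None -> standard backprop (hard ReLU mask).
    temperature=t    -> soft gate sigmoid(t * pre-activation), which
                        up-weights strongly excited units on the path.
    """
    z = W1 @ x                                   # pre-activations
    if temperature is None:
        gate = (z > 0).astype(float)             # ordinary ReLU derivative
    else:
        # numerically stable sigmoid(t * z) via tanh
        gate = 0.5 * (1.0 + np.tanh(0.5 * temperature * z))
    # chain rule: d logit_target / dx = W1^T (gate * W2[target])
    return W1.T @ (gate * W2[target])

x = rng.normal(size=4)
g_plain = pullback(x, target=0)                   # standard input gradient
g_soft = pullback(x, target=0, temperature=2.0)   # excitation-weighted variant
```

As the temperature grows, the soft gate sharpens back into the hard ReLU mask, so the sketch recovers ordinary backprop in that limit; at moderate temperatures it smoothly redistributes attribution toward the most excited units.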
- [March 2024, paper, code] – We introduce semantic features as a candidate building block for transparent neural networks and build a fully explainable, adversarially robust proof-of-concept (PoC) neural network for MNIST.