🌖 Durinn – AI Security

Durinn builds AI security infrastructure for high-assurance and regulated environments.
Our work focuses on calibration, dataset poisoning detection, and
neuro-symbolic vulnerability analysis for safer, more predictable agents.

We contribute research datasets, calibration tools, and security-focused evaluation
pipelines designed for GxP, healthcare, and enterprise LLM deployments.


🧪 Research Focus

Our work spans:

- Agent safety, guardrails, and classifier calibration
- Dataset poisoning detection and model-integrity defense
- Neuro-symbolic vulnerability detection

Our Hacktoberfest-derived dataset supports real-world model calibration and
has demonstrated meaningful improvements when applied to production-grade prompt-injection (PI) classifiers.
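
As one illustration of how such calibration gains can be quantified, the sketch below computes expected calibration error (ECE) over a classifier's held-out predictions. This is a minimal, generic implementation, not code from Durinn's pipeline, and the input names are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin-weighted gap between predicted confidence and
    observed accuracy, over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        acc = correct[mask].mean()        # observed accuracy in this bin
        conf = confidences[mask].mean()   # mean predicted confidence
        ece += mask.mean() * abs(acc - conf)
    return ece

# e.g. ece = expected_calibration_error(probs.max(axis=1), preds == y_true)
```

A lower ECE after calibration indicates that the classifier's confidence scores track its true error rate more closely.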


🧭 Agent Safety, Guardrails & Calibration

Durinn calibrates state-of-the-art prompt-injection classifiers, including models
widely deployed in production security pipelines.

Calibration improves:

- Agreement between a classifier's confidence scores and its observed accuracy
- Reliability of allow/block thresholds used by guardrails
- Predictability of model behavior in production

These calibrated guardrails can be deployed in:

- GxP and other regulated, high-assurance environments
- Healthcare LLM workflows
- Enterprise production security pipelines
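
A common post-hoc calibration method is temperature scaling: a single scalar T is fit on held-out logits to minimize negative log-likelihood, then applied at inference. The sketch below is a generic illustration under that assumption; the source does not state which method Durinn uses.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(logits, labels):
    """Fit a scalar temperature T > 0 by minimizing NLL on held-out data."""
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels, dtype=int)

    def nll(t):
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)   # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x

# At inference, softmax(logits / T) gives the calibrated probabilities:
# T > 1 softens overconfident scores, T < 1 sharpens underconfident ones.
```

Because only one parameter is fit, temperature scaling cannot change the classifier's decisions at the argmax, only the confidence attached to them.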


🧬 Dataset Poisoning & Model-Integrity Defense

Our work includes:

- Detection of poisoned or tampered training data
- Integrity verification for datasets and model artifacts
- Security-focused evaluation pipelines that measure robustness to poisoning

We emphasize verifiable integrity for teams who cannot rely on opaque model behavior.
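
One concrete way to make integrity verifiable is a content-addressed manifest: every dataset file is hashed at release time, and consumers re-hash before training. The sketch below is a minimal illustration, not Durinn's actual tooling; the paths and function names are assumptions.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(dataset_dir: str) -> dict:
    """Map each file under dataset_dir to its SHA-256 digest."""
    root = Path(dataset_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify_manifest(dataset_dir: str, manifest: dict) -> list[str]:
    """Return files whose current digest differs from the recorded one."""
    current = build_manifest(dataset_dir)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]

# Release: Path("manifest.json").write_text(json.dumps(build_manifest("data"), indent=2))
# Consume: bad = verify_manifest("data", json.loads(Path("manifest.json").read_text()))
```

Any tampered, missing, or silently re-encoded file surfaces as a digest mismatch before it can influence training.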


🔍 Neuro-Symbolic Vulnerability Detection

Durinn develops hybrid detection approaches that combine:

- Learned (neural) detectors that score inputs for attack and vulnerability patterns
- Symbolic rules and checks that encode known attack structures explicitly

This architecture improves reliability without altering underlying model weights.
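
As an illustrative sketch of that hybrid pattern (the rule set, model interface, and threshold below are hypothetical, not Durinn's actual components):

```python
import re
from typing import Callable

# Symbolic layer: auditable rules encoding known injection structures.
RULES = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (the|your) system prompt", re.IGNORECASE),
]

def hybrid_detect(text: str,
                  neural_score: Callable[[str], float],
                  threshold: float = 0.5) -> bool:
    """Flag input if any symbolic rule fires or the learned score is high.

    The rules give deterministic, explainable coverage of known attacks;
    the neural score generalizes to novel phrasings. Both layers sit in
    front of the model, so no model weights are modified.
    """
    if any(rule.search(text) for rule in RULES):
        return True
    return neural_score(text) >= threshold
```

The symbolic layer can be audited line by line, which matters in regulated deployments, while the learned component covers paraphrases the rules miss.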


📚 Key Repositories


Durinn – Secure, calibrated, and trustworthy AI for environments where accuracy and integrity matter.