Durinn builds AI security infrastructure for high-assurance and regulated environments.
Our work focuses on calibration, dataset poisoning detection, and
neuro-symbolic vulnerability analysis for safer, more predictable agents.
We contribute research datasets, calibration tools, and security-focused evaluation
pipelines designed for GxP, healthcare, and enterprise LLM deployments.
Our work spans:
Our Hacktoberfest-derived dataset supports real-world model calibration and
has demonstrated meaningful improvements when applied to production-grade PI classifiers.
Durinn calibrates state-of-the-art prompt-injection classifiers, including models
widely deployed in production security pipelines.
Calibration improves:
These calibrated guardrails can be deployed in:
Our work includes:
We emphasize verifiable integrity for teams who cannot rely on opaque model behavior.
Durinn develops hybrid detection approaches that combine:
This architecture improves reliability without altering underlying model weights.
durinn-calibration โ Tools and experiments for calibrating security-critical classifiers, including prompt-injection detectors and safety-critic models. Contains evaluation scripts, threshold-optimization utilities, and datasets for benchmarking calibrated decisions in regulated AI environments.durinn-sandbox โ A high-assurance execution environment for analyzing model behavior, running controlled adversarial tests, and validating agent outputs. Provides reproducible sandboxes for measuring failure modes, safety drift, and poisoning-related anomalies.durinn-agent-infrastructure โ Shared infrastructure components for constructing and evaluating secure AI agents. Includes model wrappers, risk-scoring pipelines, input-validation hooks, telemetry collection, and integration utilities for enterprise inference stacks.durinn-ai-code-remediation โ Research agent for neuro-symbolic vulnerability detection and compliant secure-rewrite workflows. Designed for GxP and regulated industries requiring traceability, safety justification, and audit-aligned remediation artifacts.Durinn โ Secure, calibrated, and trustworthy AI for environments where accuracy and integrity matter.