Enterprise AI Security Use Cases

Dorcha addresses the critical security challenges that enterprises face when connecting internal systems and staff to AI services. As organizations experiment with Large Language Models (LLMs), two immediate problems emerge: first, it becomes difficult to enforce the same access controls and change discipline they expect for any production dependency; second, they lose reliable visibility into what is being sent to and returned from third‑party AI models.

Dorcha sits in the path between internal callers and external (or on‑prem) model endpoints and solves these near‑term problems with a policy‑enforced gateway, strong request authentication, and comprehensive audit trails. Our platform applies lightweight prompt and secret detection to reduce obvious risks without breaking developer velocity, ensuring your AI initiatives remain secure and compliant.

Controlled access to external models

Many teams begin with direct SDK usage (e.g., OpenAI, Anthropic, or an on‑prem service such as Ollama) from web apps, services, or notebooks. That pattern quickly turns into unmanaged API keys scattered across repos and environments. Dorcha replaces ad‑hoc usage with a single gateway endpoint and a simple, cryptographically authenticated request format. Each internal caller, such as a user‑facing service, batch job, or research notebook, presents HMAC‑based credentials issued by the security team. The gateway evaluates a declarative policy that binds the caller to the agentic services, model families, and directions of traffic it is allowed to use. This gives enterprises a concrete control point: who may talk to which model, for what purpose, and under what constraints.
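
To make the request format concrete, here is a minimal sketch of how an internal caller might sign a call to the gateway, assuming a timestamp‑plus‑body HMAC scheme. The endpoint URL, header names, and signing string are illustrative assumptions, not Dorcha's actual wire format.

```python
# Minimal sketch of a signed request from an internal caller to the gateway.
# The endpoint, header names, and signing scheme are assumptions for
# illustration, not Dorcha's actual wire format.
import hashlib
import hmac
import json
import time

import requests

GATEWAY_URL = "https://dorcha.internal.example.com/v1/chat"  # hypothetical gateway endpoint
KEY_ID = "svc-support-bot"            # caller identity issued by the security team
SECRET = b"provisioned-out-of-band"   # HMAC secret paired with the key ID


def signed_headers(body: bytes) -> dict:
    """Sign a timestamp plus the request body so the gateway can verify the caller."""
    timestamp = str(int(time.time()))
    message = timestamp.encode() + b"." + body
    signature = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return {
        "X-Dorcha-Key-Id": KEY_ID,
        "X-Dorcha-Timestamp": timestamp,
        "X-Dorcha-Signature": signature,
        "Content-Type": "application/json",
    }


body = json.dumps({
    "model": "gpt-4o",  # must fall within the model family the policy allows for this caller
    "messages": [{"role": "user", "content": "Summarize today's open tickets."}],
}).encode()

response = requests.post(GATEWAY_URL, data=body, headers=signed_headers(body))
response.raise_for_status()
```

In a setup like this, vendor API keys live only behind the gateway; internal callers hold gateway‑issued credentials that the security team can rotate or revoke centrally.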

Auditability from day one

Security and compliance stakeholders usually ask for a record of prompts, responses, and the operational context in which they were sent. Dorcha provides durable, chronologically chained audit logs at the gateway. Each entry captures the normalized request metadata, policy decision, target service, timing information, and a redacted view of content where required by policy. Because logging occurs in the path, teams do not need to retrofit every application to emit its own audit trail. This baseline visibility is often the fastest way to move an AI pilot from a local experiment into a controlled internal rollout.
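
As a rough sketch of what chronological chaining provides, the example below links each entry to its predecessor by hash, so altering or dropping an earlier record invalidates everything after it. The field names and genesis value are illustrative, not Dorcha's actual log schema.

```python
# Minimal sketch of a hash-chained audit log. Field names are illustrative
# and do not reflect Dorcha's actual log schema.
import hashlib
import json
import time


def append_entry(log: list[dict], caller: str, target: str, decision: str,
                 redacted_prompt: str) -> dict:
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "timestamp": time.time(),
        "caller": caller,            # authenticated caller identity
        "target": target,            # backend model endpoint
        "decision": decision,        # allow / block / redact
        "prompt": redacted_prompt,   # content view after policy-driven redaction
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry


def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash and check each link; any edit breaks verification."""
    prev_hash = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True
```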

Guardrails that reduce obvious risk without blocking work

Early AI integrations fail not only from malicious input, but also from accidental disclosure of credentials and sensitive strings that ride along with prompts. Dorcha’s request normalization pipeline can apply prompt‑injection heuristics and secret scanning to inbound traffic before it reaches a model endpoint. When a match is detected, the gateway can block the request outright or redact the offending token sequence according to policy. These checks are deliberately lightweight: they are intended to catch the most common and costly mistakes (for example, an API key committed into a prompt template, or internal URLs pasted into a chat) while preserving low latency for interactive use.
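
For a sense of what such a lightweight check looks like, the sketch below runs a few regex‑based secret detectors over an inbound prompt and either blocks it or substitutes a redaction marker. The patterns and the block/redact switch are illustrative assumptions, not Dorcha's actual rule set.

```python
# Illustrative sketch of lightweight secret scanning on an inbound prompt.
# The patterns and policy actions are examples, not Dorcha's actual rules.
import re

# A few common credential shapes; a real deployment would tune and extend these.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                 # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
]


def check_prompt(prompt: str, action: str = "redact") -> tuple[str, bool]:
    """Return (possibly redacted prompt, allowed?) under a block-or-redact policy."""
    matched = any(p.search(prompt) for p in SECRET_PATTERNS)
    if not matched:
        return prompt, True
    if action == "block":
        return prompt, False
    redacted = prompt
    for pattern in SECRET_PATTERNS:
        redacted = pattern.sub("[REDACTED]", redacted)
    return redacted, True


# Example: an API key pasted into a prompt is replaced before the request is forwarded.
text, allowed = check_prompt("Use key sk-abcdefghijklmnopqrstuvwx to call the API")
```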

Scope and near‑term roadmap

Dorcha intentionally focuses on controls that can be deployed quickly and operated reliably: authenticated request brokerage, policy enforcement, configurable logging, and lightweight prompt/secret checks. Over the next six months we expect to deepen policy expressiveness (richer per‑caller constraints and cleaner configuration tooling), broaden tested backend coverage, and make redaction options more flexible. We are also building a management interface to make Dorcha configurations easier to manage and enforce.


Dorcha is currently in development. Early Access is intended for feedback and exploratory design pilots only.