Enterprise AI Security Gateway Features
Dorcha is a zero‑trust AI gateway that sits between enterprise applications and large language models (LLMs). Our platform provides a single, secure endpoint through which internal services and staff reach external AI providers (such as OpenAI and Anthropic) or on‑prem deployments (such as Ollama). The gateway authenticates callers, evaluates policy, normalizes requests, and records durable audit trails for compliance and security monitoring.
Core AI Security Capabilities
The sections below describe the AI security capabilities that are available today or planned for the near term. Each is designed to protect AI workflows while keeping operational overhead low.
Zero‑Trust Enforcement
Dorcha places a cryptographic contract between internal callers and any AI backend. Each request carries an HMAC signature derived from a managed key. The gateway verifies the signature and rejects unsigned or malformed traffic. This turns every interaction with a model into an authenticated, policy‑checked exchange rather than an anonymous HTTP call.
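A minimal sketch of the signing and verification step described above, using HMAC‑SHA256 over the request body. The function names and the idea of signing the raw body are illustrative assumptions, not Dorcha's actual wire format.

```python
import hmac
import hashlib

def sign_request(body: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the request body."""
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify_request(body: bytes, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time to
    avoid leaking information through timing differences."""
    expected = sign_request(body, key)
    return hmac.compare_digest(expected, signature)
```

A caller signs with the managed key before sending; the gateway verifies and rejects anything that fails the check, so unsigned or tampered traffic never reaches a backend.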
Access Policy per Caller and Service
Enterprises often need different rules for different services, teams, or environments. Dorcha evaluates a declarative policy that binds a specific internal entity to a target agentic service and direction of traffic. Policies make it explicit which service may talk to which backend and allow operations teams to enable or disable access centrally without code changes.
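The binding of a caller to a backend and traffic direction can be sketched as a default‑deny rule table. The rule shape and field names here are assumptions for illustration; the point is that access is explicit and centrally managed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyRule:
    caller: str      # internal entity, e.g. "billing-svc"
    backend: str     # target agentic service, e.g. "openai"
    direction: str   # "outbound" or "inbound"
    allow: bool

def is_allowed(rules: list[PolicyRule], caller: str,
               backend: str, direction: str) -> bool:
    """Default-deny: a request passes only if a matching rule allows it."""
    for rule in rules:
        if (rule.caller, rule.backend, rule.direction) == (caller, backend, direction):
            return rule.allow
    return False
```

Because the table is data rather than code, operations teams can grant or revoke access by editing configuration, with no application deploys.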
Secrets and Sensitive‑String Detection
Prompts and templates occasionally pick up credentials, tokens, and other sensitive strings. Dorcha's normalization pipeline can scan inbound content for common secret formats and other high‑risk patterns before the request is forwarded. Depending on policy, the gateway can block the call or redact matched tokens so that obvious mistakes never reach the model.
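The scan‑and‑redact step can be sketched with regular expressions. The two patterns below (an AWS access key ID shape and a bearer token shape) are illustrative; a production deployment would use a curated, regularly updated pattern set.

```python
import re

# Illustrative patterns only; real deployments maintain a curated set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token": re.compile(r"Bearer\s+[A-Za-z0-9\-._~+/]{20,}"),
}

def redact_secrets(text: str) -> tuple[str, list[str]]:
    """Replace matched secrets with a placeholder and report
    which pattern kinds fired, so policy can block or just redact."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(text):
            findings.append(name)
            text = pattern.sub("[REDACTED]", text)
    return text, findings
```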
Request Normalization
Downstream providers receive a consistent, well‑formed request. Dorcha canonicalizes headers and body format, strips superfluous fields, and produces a stable representation for logging. Normalization simplifies policy evaluation and produces cleaner, comparable audit records across different backends and calling services.
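A sketch of what canonicalization might look like: lower‑cased header names, an allow‑list of body fields, and a byte‑stable JSON serialization suitable for logging and signing. The field allow‑list is a hypothetical example, not Dorcha's actual schema.

```python
import json

# Hypothetical allow-list of body fields kept after normalization.
ALLOWED_FIELDS = {"model", "messages", "temperature", "max_tokens"}

def normalize_request(headers: dict, body: dict) -> tuple[dict, bytes]:
    """Lower-case header names, drop fields outside the allow-list,
    and serialize with sorted keys for a stable, comparable form."""
    norm_headers = {k.lower(): v for k, v in headers.items()}
    norm_body = {k: v for k, v in body.items() if k in ALLOWED_FIELDS}
    canonical = json.dumps(norm_body, sort_keys=True,
                           separators=(",", ":")).encode()
    return norm_headers, canonical
```

A stable byte representation means two equivalent requests always produce the same audit record, regardless of which service sent them.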
Audit Logging with Operational Context
Every decision at the gateway is logged with caller identity, target backend, timing information, and a redacted view of content where required by policy. Because logs are produced in the request path, teams do not need to retrofit each application to emit its own records. The result is a durable, searchable trail that answers who sent what to which model, and when.
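One structured line per decision is a common way to make such a trail searchable. The field names below are assumptions chosen to mirror the context described above (caller, backend, decision, timing, redacted preview), not Dorcha's actual log schema.

```python
import json
import time

def audit_record(caller: str, backend: str, decision: str,
                 started: float, redacted_preview: str) -> str:
    """Emit one JSON line per gateway decision, with latency
    measured from the request's start timestamp."""
    return json.dumps({
        "ts": started,
        "caller": caller,
        "backend": backend,
        "decision": decision,
        "latency_ms": round((time.time() - started) * 1000, 1),
        "preview": redacted_preview,
    }, sort_keys=True)
```

JSON lines of this shape flow directly into existing log pipelines and SIEM tooling without a custom parser.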
Prompt‑Injection Heuristics
Beyond obvious secret leakage, poorly formed or adversarial prompts can steer an agent outside intended behavior. Dorcha applies lightweight heuristics during request processing to flag known injection patterns. These checks are designed to be fast and conservative, useful as guardrails without adding noticeable latency to interactive use.
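Fast, conservative checks of this kind often amount to matching a small set of well‑known injection phrasings. The patterns below are a minimal illustrative sample, not an exhaustive or production ruleset.

```python
import re

# Conservative, illustrative patterns for well-known injection phrasings.
INJECTION_HINTS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
    re.compile(r"reveal .*system prompt", re.I),
]

def injection_score(prompt: str) -> int:
    """Count how many heuristic patterns match; 0 means no flag raised.
    Policy can then warn, block, or log based on the score."""
    return sum(1 for p in INJECTION_HINTS if p.search(prompt))
```

Because each check is a single regex scan, the cost is microseconds per request, keeping interactive latency unaffected.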
Backend Connectivity without Application Churn
Internal callers integrate with the gateway once; operations teams configure the actual backend. Today Dorcha forwards to commonly used providers such as OpenAI and on‑prem engines such as Ollama. The mapping from callers to backends is managed centrally, so teams can change providers or endpoints without touching application code.
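The central mapping can be pictured as a small routing table. The caller names and endpoint URLs below are hypothetical; the Ollama port shown is its conventional default, and the OpenAI URL is its public chat completions endpoint.

```python
# Hypothetical central routing table: backends and caller assignments.
BACKENDS = {
    "openai": "https://api.openai.com/v1/chat/completions",
    "ollama": "http://ollama.internal:11434/api/chat",
}

ROUTES = {
    "support-bot": "openai",
    "code-review-svc": "ollama",
}

def resolve_backend(caller: str) -> str:
    """Look up the configured backend URL for a caller; unmapped
    callers fail loudly rather than falling through to a default."""
    backend = ROUTES.get(caller)
    if backend is None:
        raise KeyError(f"no backend configured for caller {caller!r}")
    return BACKENDS[backend]
```

Swapping a team from a hosted provider to an on‑prem engine becomes a one‑line configuration change.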
Lightweight, Scalable Operational Posture
The gateway is designed to be stateless and easy to run behind a standard load balancer. Horizontal scale comes from adding instances; configuration is supplied via versioned files and environment. This keeps the operational surface small while supporting the concurrency required for production workloads.
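Supplying configuration through the environment is one way a stateless instance stays interchangeable behind a load balancer. The variable names below are assumptions for illustration, not Dorcha's documented settings.

```python
import os

def load_config() -> dict:
    """Read runtime settings from the environment so every instance
    boots identically and holds no local state worth preserving."""
    return {
        "listen_port": int(os.environ.get("DORCHA_PORT", "8080")),
        "policy_file": os.environ.get("DORCHA_POLICY_FILE",
                                      "/etc/dorcha/policy.yaml"),
        "log_level": os.environ.get("DORCHA_LOG_LEVEL", "info"),
    }
```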
Compliance and Observability Foundation
By centralizing model access, Dorcha provides a single vantage point for security review, cost analysis, and compliance reporting. The structured audit stream integrates cleanly with existing log pipelines and SIEM tooling so that compliance teams can inspect AI usage with the same rigor they apply to other production systems.