Runtime AI Security Platform

Your AI agents are acting. Do you know what they're doing?

RAXE detects prompt injection, tool-call abuse, shadow AI, and data exfiltration at runtime, inside your infrastructure. Prompts and detections do not need to leave your control boundary.

0 bytes sent to cloud · <0.15ms inference · Runs in your VPC, on-prem, or air-gapped
$ pip install raxe
Exposure & Urgency

The clock is running on AI governance.

Regulators are moving faster than most AI security programmes. The cost of getting this wrong is measurable, and the figures below come from organisations that already have AI in production.

$4.44M[1]
Global average cost of a data breach (2025)
+$670K[2]
Added cost when shadow AI is involved
97%[1]
Organisations with AI incidents that lacked AI access controls
63%[1]
Organisations with no AI governance policy

Full compliance mapping for NIST AI RMF, ISO 42001, EU AI Act, SOC 2, GDPR, and DORA on our trust centre →

Defence in Depth

Three layers. One engine.

Runtime protection at every layer of the stack. The same detection engine runs at the network perimeter, on host infrastructure, and inside your application code.

Depth 0 · Perimeter
RAXE Gateway
Reverse proxy that scans every AI API call. Virtual key management, cost controls, and threat detection before traffic reaches your providers.
API Proxy · Key Vault · 5 Providers · Cost Controls
Depth 1 · Infrastructure
RAXE Sensor [host]
Infrastructure agent that monitors AI workload execution. Detects prompt injection, jailbreak attempts, and data exfiltration at the host level.
Kubernetes · Docker · Systemd · eBPF
Depth 2 · Code
RAXE Sensor [sdk]
Embedded library that wraps your LLM calls with inline threat detection. ML classifiers and energy-based scoring run locally inside your application process.
pip install · LangChain · CrewAI · OpenAI · Anthropic
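As a sketch of what decorator-style call wrapping can look like: the `raxe` package's real API may differ, and `scan_prompt`, `guarded`, and the toy blocklist below are all invented for illustration, not the SDK's actual interface.

```python
from functools import wraps

# Toy signature set standing in for the real detection engines.
BLOCKLIST = ("ignore previous instructions",)

def scan_prompt(prompt: str) -> str:
    """Return an illustrative verdict for a prompt: 'BLOCK' or 'PASS'."""
    lowered = prompt.lower()
    return "BLOCK" if any(sig in lowered for sig in BLOCKLIST) else "PASS"

def guarded(llm_call):
    """Decorator that scans the prompt inline before the wrapped call runs."""
    @wraps(llm_call)
    def wrapper(prompt: str, **kwargs):
        if scan_prompt(prompt) == "BLOCK":
            raise PermissionError("prompt blocked by runtime policy")
        return llm_call(prompt, **kwargs)
    return wrapper

@guarded
def complete(prompt: str) -> str:
    # Placeholder for a real provider call (OpenAI, Anthropic, etc.).
    return f"model output for: {prompt}"
```

The point of the decorator shape is that detection stays inside the application process: the prompt is scanned before any network call to a provider.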
1,000+
Detection Signatures
<0.15ms
Sensor Latency
100%
On-Device Processing
Any LLM
Works With Any Model
Any Framework
OpenAI Protocol + Decorator
Live Detection

See threats as they happen

Real-time enforcement across every AI interaction. Block, flag, or log—your policy, your infrastructure.
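A block/flag/log policy like the one described above can be modelled as a simple per-agent lookup with a wildcard fallback. The agent names, threat classes, and `POLICY` structure below are illustrative only, not RAXE's actual configuration format.

```python
# Hypothetical enforcement policy: per-agent rules, "*" as the default.
POLICY = {
    "sales-copilot": {"prompt_injection": "block", "pii_output": "flag"},
    "*": {"prompt_injection": "block", "pii_output": "log"},
}

def action_for(agent: str, threat: str) -> str:
    """Resolve the action: agent-specific rule, else wildcard rule, else log."""
    rules = POLICY.get(agent, POLICY["*"])
    return rules.get(threat, POLICY["*"].get(threat, "log"))
```

This shape keeps the decision local and auditable: the same detection, different consequences per agent.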

Enforcement Log
Sample
Scans Today
24,391
Threats Blocked
47
14:32:12 GW BLOCK Prompt injection via nested instruction override — sales-copilot 0.08ms
14:32:10 SDK PASS Validated tool call: searchProducts() — sales-copilot 0.04ms
14:32:08 SEN BLOCK Unauthorised kubectl exec attempt — infra-ops-agent 0.12ms
14:32:06 GW FLAG PII exfiltration pattern: SSN regex in output — claims-processor 0.09ms
14:32:04 GW PASS Standard summarisation request — legal-review-agent 0.06ms
14:32:02 SDK BLOCK Intent mismatch: writeFile but policy allows read-only — code-review-agent 0.11ms
14:32:00 SEN REVIEW Privilege escalation via tool output injection — data-pipeline-v2 0.14ms
14:31:58 GW BLOCK Jailbreak via multi-turn persona hijack — customer-support-bot 0.07ms
14:31:56 SEN PASS Approved API call: BigQuery read on authorised dataset — analytics-agent 0.05ms
14:31:54 SDK BLOCK Recursive tool invocation loop detected (depth 12) — deploy-orchestrator 0.14ms
Detection Engine

Five engines. One verdict.

Classifiers for known attack families. Energy scoring for inputs that don't match a known class. Five domain-specific engines voting together, on-device, with configurable weights.

Prompt Guard
SLM Classifier
Injection, jailbreak, and adversarial input detection. Classifies prompt-level attacks before they reach the model.
Intent Classifier
SLM Classifier
Determines the goal behind ambiguous or dual-use agent actions. Like malware intent analysis, but purpose-built for AI agents.
Tool Policy Monitor
Analyser
Validates every tool call against allowed scope and parameters. Detects unauthorised, dangerous, or out-of-bounds tool invocations.
Behaviour Graph Analyser
Analyser
Traces multi-step agent execution paths. Detects anomalous sequences, recursive loops, and privilege escalation across tool chains.
Output Shield
SLM Classifier
Validates model responses for data leakage, PII exposure, harmful content, and policy violations before they reach end users.
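The weighted vote across the five engines can be sketched as follows. The engine names mirror the cards above, but the weights and thresholds are invented for illustration; the real presets and scoring are RAXE's own.

```python
# Illustrative configurable weights for the five engines (sum to 1.0).
ENGINE_WEIGHTS = {
    "prompt_guard": 0.3,
    "intent": 0.2,
    "tool_policy": 0.2,
    "behaviour_graph": 0.15,
    "output_shield": 0.15,
}

def verdict(scores: dict, block_at: float = 0.7, flag_at: float = 0.4) -> str:
    """Combine per-engine threat scores in [0, 1] into BLOCK / FLAG / PASS."""
    combined = sum(ENGINE_WEIGHTS[name] * score for name, score in scores.items())
    if combined >= block_at:
        return "BLOCK"
    if combined >= flag_at:
        return "FLAG"
    return "PASS"
```

Because the weights live in configuration, re-weighting an engine (or zeroing one out) changes the vote without redeploying the sensor.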
1.6 MB
classifier heads
<0.15ms
inference
4
voting presets
0 bytes
to cloud
Swap the model. Not the sensor.
Add, remove, or re-weight detection engines without redeploying. Your threat landscape changes; your detection adapts, even to threats you haven't seen yet.
RAXE Labs

Research-powered detection

Original threat research feeds directly into detection signatures. Every advisory, every technique mapping, every new signature makes the platform stronger.

RAXE-2026-061
NVIDIA BioNeMo Framework Deserialization of Untrusted Data Enables Remote Code Execution (CVE-2026-24164)
CRITICAL CVSS 9.8
Research Radar

Issue #5 — 4 Papers

Your LLM API router may be stealing your credentials and rewriting your tool calls.

2 Act Now · 2 Watch
Threat Intelligence

Monthly Threat Landscape Report

Data-driven analysis of AI threat patterns, attack techniques, and emerging vectors across the RAXE detection network.

S1 Adversarial ML · S2 Agent Security
Why RAXE

The complete runtime platform

| Capability | Gateway-Only | SDK Scanners | Cloud Inspection | RAXE |
| --- | --- | --- | --- | --- |
| AI traffic governance | Yes | No | Partial | Yes |
| Runtime execution visibility | No | Partial | No | Yes |
| Novel-behaviour signal | Varies | Varies | Varies | Yes |
| Local-first (no data transit) | No | Yes | No | Yes |
| Infrastructure deployment | No | No | No | Yes |
| Original threat research | No | No | No | Yes |
| Independent vendor | Varies | Varies | No | Yes |
See full comparison →
Ready to secure your AI?

See RAXE in your environment

Request a Demo Talk to an Engineer