70% of executives lack full AI visibility across their organisation
$140M lost annually to AI irregularities per $20B enterprise
12% of organisations have AI orchestration platforms in place today
99.97% detection accuracy · GLiNER v2 engine, production
0 bytes egressed to vendor infrastructure, ever
< 4 hr time to first scan, from credential to findings
Source — IBM Institute for Business Value: AI in Motion, April 2026 · 1,006 senior executives · 20 countries
The governance gap

Governance without enforcement
is just a PDF.

Most organisations have AI usage policies. Almost none have technical controls that enforce them. The IBM Institute for Business Value surveyed 1,006 senior executives across 20 countries. The findings are unambiguous.

SOURCE — IBM IBV: AI IN MOTION, APRIL 2026

"Governance is critical — but only if it is implemented at the beginning. If you add governance at the end, it becomes a bottleneck."
Majid Sultan AlMheiri · Chief AI Officer & Director of IT, Dubai Health Authority
70% of executives don't have full visibility into the AI their teams are using — or where it operates
$140M lost annually by a typical $20B enterprise to AI irregularities — half directly attributable to governance gaps
12% of organisations have AI orchestration platforms in place. The rest are scaling AI without a clear line of control
13× more likely to be scaling AI successfully — organisations with orchestration-led governance vs those without
Two products. One engine.

Close the loop between data at rest and data in motion.

VestraData governs what you hold. VestraShield governs what you send to AI. Both run on vestracore: one ML engine, consistent entity resolution across every surface.

VD-CORE

Privacy intelligence
for your data estate.

Scan every database, file store, and cloud bucket. Find regulated data before an auditor does. Anonymise in-place, generate synthetic exports, govern what leaves your boundary.

  • PII discovery across databases, files, and cloud storage
  • Zero-shot classification, no schema knowledge required
  • In-place anonymisation and synthetic data generation
  • Data airlock for governed partner handoffs
  • GDPR Art. 30 · HIPAA · PCI-DSS audit evidence
Explore VestraData →
VS-CORE

AI governance infrastructure
for every prompt.

Your people are already using AI with sensitive data. VestraShield is the control layer between them and every LLM endpoint: browser, IDE, API, and agentic pipelines.

  • Intercepts prompts and file uploads before they reach any LLM
  • Transforms, not blocks — users stay productive
  • Full audit log: every entity detected, every decision made
  • Covers browser, MCP tools, IDE, and API/SDK planes
  • CISO-grade compliance evidence for regulators
Explore VestraShield →
Both products run on vestracore — the same ML detection and anonymisation engine. One deployment to maintain, two control surfaces, consistent entity resolution across your entire AI governance stack.

vestracore · GLiNER v2 · HMAC surrogate vault
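The HMAC surrogate idea can be illustrated with a minimal sketch. The key derivation, digest truncation, and surrogate format below are illustrative assumptions, not vestracore's actual scheme:

```python
import hmac
import hashlib

def surrogate(entity_value: str, entity_type: str, tenant_key: bytes) -> str:
    """Derive a deterministic surrogate for a detected entity.

    The same (value, type, key) always yields the same surrogate, so an
    entity replaced in a prompt can be matched and restored in the LLM's
    response. Illustrative only: truncation length and surrogate format
    are assumptions, not the product's actual scheme.
    """
    digest = hmac.new(
        tenant_key, f"{entity_type}:{entity_value}".encode(), hashlib.sha256
    ).hexdigest()
    return f"{entity_type}_{digest[:8]}"

key = b"per-tenant-secret"
s1 = surrogate("alice@example.com", "EMAIL", key)
s2 = surrogate("alice@example.com", "EMAIL", key)
assert s1 == s2           # deterministic: restorable in the response
assert "alice" not in s1  # original value never leaves the boundary
```

Determinism is what makes restore-on-response possible: the proxy only needs the same tenant key, not a lookup round-trip to the vendor.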
VestraData

Six controls. One review queue.

Discovery, remediation, and proof across every data source your organisation uses — all feeding into a single, tamper-evident audit record.

VD-CORE-001

PII Discovery & Classification

Connect any database, file store, or cloud bucket. VestraCore samples schemas, scores fields by name, value, and context, and returns field-level findings with confidence scores and row counts. No schema knowledge required upfront.

PostgreSQL · MySQL · Snowflake · S3 · SharePoint · GCS · GLiNER zero-shot · Adaptive sampling
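Scoring a field "by name, value, and context" can be sketched in a few lines. The hint tables, weights, and regexes here are hypothetical stand-ins; the production engine uses ML-based zero-shot classification, not this rule set:

```python
import re

# Hypothetical heuristics for illustration; the real engine is ML-based.
NAME_HINTS = {"email": "EMAIL", "ssn": "SSN", "phone": "PHONE"}
VALUE_PATTERNS = {"EMAIL": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")}

def score_field(name: str, sample_values: list[str]) -> tuple[str, float]:
    """Score one column by combining a field-name signal with sampled values."""
    label = next((lab for hint, lab in NAME_HINTS.items() if hint in name.lower()), None)
    name_score = 0.5 if label else 0.0
    value_score = 0.0
    if label and label in VALUE_PATTERNS:
        hits = sum(bool(VALUE_PATTERNS[label].fullmatch(v)) for v in sample_values)
        value_score = 0.5 * hits / max(len(sample_values), 1)
    return label or "NONE", name_score + value_score

# Two of three sampled values match the pattern, so confidence < 1.0.
label, conf = score_field("customer_email", ["a@x.com", "b@y.org", "n/a"])
```

The point of combining signals is that a well-named column with messy values (or vice versa) still surfaces for review, with a confidence score rather than a binary verdict.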
VD-SYNTH-002

Synthetic Data Generation

Generate statistically faithful datasets for engineering, QA, and ML pipelines. Referential integrity preserved. Statistical distribution and correlation matched to production. Exports go directly to staging databases or object storage.

FK-preserving · Differential privacy · Parquet · S3 · direct DB · Scheduled refresh
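What "referential integrity preserved" means in practice can be shown with a toy two-table schema. The schema and value generators are hypothetical; the point is that synthetic child rows reference only synthetic parent keys, never production ones:

```python
import random

random.seed(7)  # deterministic for the example

# Hypothetical parent table: synthetic customers with fresh IDs.
customers = [{"id": i, "name": f"Synth Customer {i}"} for i in range(1, 6)]

# Child rows draw their foreign keys only from the synthetic parents,
# so every orders.customer_id resolves inside the synthetic dataset.
orders = [
    {"id": n,
     "customer_id": random.choice(customers)["id"],
     "amount": round(random.uniform(10, 500), 2)}
    for n in range(1, 21)
]

valid_ids = {c["id"] for c in customers}
assert all(o["customer_id"] in valid_ids for o in orders)  # FK integrity holds
```

A real generator also matches per-column distributions and cross-column correlations to production; the FK constraint is the part that makes joins in QA and ML pipelines keep working.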
VD-AIRLOCK-003

Data Airlock

Watch repositories for new documents and datasets. When a match is found, a governed clean copy is produced automatically so partner handoffs and AI tooling never receive raw PII. Monitors SharePoint, Drive, S3, and SFTP drops.

SharePoint · Drive · S3 · SFTP watch · Pre-cleared copies · Zero manual review
VD-SHIELD-004

AI Endpoint Protection

Intercept prompts and file uploads before they reach ChatGPT, Claude, or Copilot. Policy can warn, block, or transform without treating every employee as a privacy expert. Audit trail aligned with VestraCore review records.

Browser-level intercept · MCP proxy · Typed prompts + uploads · Policy engine
VD-AUDIT-005

Tamper-Evident Audit Log

Every action across every surface writes to a hash-chained, immutable audit record. Regulators see what was found, what changed, who approved it, and when. Evidence is generated automatically, not assembled manually before an audit.

Hash-chained records · GDPR Art. 30 · HIPAA §164 · PCI-DSS 4.0
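The hash-chaining technique itself is simple enough to sketch. This is a generic illustration of tamper evidence, not VestraCore's record format:

```python
import hashlib
import json

def append(chain: list, event: dict) -> None:
    """Append an event, chaining it to the hash of the previous record."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    chain.append({"prev": prev, "event": event,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every hash; any edited or reordered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        payload = json.dumps({"prev": prev, "event": rec["event"]}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

log = []
append(log, {"action": "scan", "source": "crm_db"})
append(log, {"action": "anonymise", "field": "email"})
assert verify(log)
log[0]["event"]["source"] = "tampered"  # edit an old record...
assert not verify(log)                  # ...and verification fails
```

Because each record commits to the hash of its predecessor, a regulator can verify the whole trail from the final hash alone.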
VD-SDK-006

SDK & Embedded Integration

Run the detection and policy layer directly inside an existing pipeline or product. Python, Node.js, Java, and .NET. The embedded SDK runs the full ML stack in-process. No server required. Event-driven for data marketplace ingestion.

Python · Node.js · Java · .NET · OpenAPI spec · Event-driven · Multi-tenant
VestraShield

A policy document is not a technical control.

Your people are already using AI with sensitive data. VestraShield is the control layer between them and every LLM endpoint.

Four intercept planes. One policy engine. Zero bytes reach any LLM without being governed first.

01
Browser Extension (Chrome)
ChatGPT, Claude.ai, Gemini, and Microsoft Copilot in the browser. Typed prompts and file uploads are intercepted before they leave the page.
02
MCP Proxy
Claude Desktop, Cursor, Windsurf: tool calls and results intercepted at the MCP protocol layer. Most governance tools stop at the browser. This plane catches the AI your browser extension cannot see.
03
Local HTTP Proxy
IDE completions, API SDK calls, and notebook LLM calls intercepted at the HTTP layer via mitmproxy. No application modification required.
04
SDK Wrappers
LangChain, programmatic API calls, and automated pipelines intercepted via Python and Node SDK wrappers. For pipelines where HTTP intercept is not applicable.
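The wrapper pattern behind this plane is a plain decorator around any prompt-taking call. The function names and the single regex rule below are illustrative, not the VestraShield SDK's actual API:

```python
import re

EMAIL = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")

def governed(llm_call):
    """Wrap any prompt-taking function so redaction runs before the call."""
    def wrapper(prompt: str) -> str:
        clean = EMAIL.sub("[EMAIL]", prompt)  # one illustrative rule
        return llm_call(clean)
    return wrapper

@governed
def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM client; echoes what it received.
    return f"model saw: {prompt}"

out = fake_llm("Summarise the ticket from alice@example.com")
assert "alice@example.com" not in out
assert "[EMAIL]" in out
```

Because the wrapper sits on the call itself, it governs pipelines that never touch a browser or an HTTP proxy, such as batch jobs and agent frameworks.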
Policy engine — four actions
transform · Replace with HMAC-seeded surrogate. Restore in response. Default for most PII.
block · Hard redact. Replace with [TYPE] placeholder. Never restore. For card numbers, SSNs, IBANs.
warn · Pass through but alert CISO dashboard. For grey-area entities requiring human review.
audit_only · Log detection and take no action. For low-risk monitoring without intervention.
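A minimal dispatch over the four actions might look like this. The entity-to-action mapping and surrogate format are hypothetical; real policies are per-tenant configuration:

```python
import hashlib
import hmac

# Illustrative mapping only; real policies are per-tenant configuration.
POLICY = {
    "EMAIL": "transform",
    "CREDIT_CARD": "block",
    "PROJECT_CODENAME": "warn",
    "CITY": "audit_only",
}

def apply_policy(entity_type: str, value: str, tenant_key: bytes):
    """Return (action, outbound text) for one detected entity."""
    action = POLICY.get(entity_type, "audit_only")
    if action == "transform":
        digest = hmac.new(tenant_key, value.encode(), hashlib.sha256).hexdigest()[:8]
        return action, f"{entity_type}_{digest}"  # deterministic, restorable
    if action == "block":
        return action, f"[{entity_type}]"         # hard redact, never restored
    return action, value                          # warn / audit_only pass through

action, out = apply_policy("CREDIT_CARD", "4111 1111 1111 1111", b"k")
assert (action, out) == ("block", "[CREDIT_CARD]")
```

The asymmetry is the point: transform is reversible so responses stay readable, while block destroys the value outright for entities that must never round-trip.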
vestrashield-ciso · acme-corp · daily summary
VESTRASHIELD CISO DASHBOARD · tenant: acme-corp · 2026-05-06
──────────────────────────────────────────────────────────────
[09:00–09:14] INTERCEPT claude.ai · 183 prompts · 0 PII egressed
[09:00–09:14] TRANSFORM 14 entities PERSON(6) EMAIL(4) C.REF(4)
[09:00–09:14] BLOCK 3 events CREDIT_CARD(2) SSN(1)
[09:00–09:14] AUDIT 196 records hash-chained · tamper-evident
──────────────────────────────────────────────────────────────
ZERO_BYTES_EGRESSED ✓ confirmed per-session
POLICY_ENFORCED ✓ transform + block + audit
GDPR_ART30 ✓ records complete · export-ready
$ export --format=regulatory-submission _
How it works

Five steps. One consistent audit trail.

Every VestraData workflow follows the same sequence — from credential to evidence — regardless of data source or deployment model.

01 Connect · Add a source. Credentials are encrypted per-tenant. Scope is defined before any scan runs. [ENCRYPTED_CREDENTIAL · TENANT_SCOPED]
02 Discover · Lightweight schema pass. Maps tables, estimates volume, and surfaces likely-sensitive fields. [SAMPLE_RATE: adaptive · NO_FULL_SCAN]
03 Scan · Deep field-level scan with confidence scores, row counts, and context evidence for review. [ENGINE: GLiNER-v2 · ZERO_SHOT: true]
04 Act · Apply the right control: mask, anonymise, generate a synthetic export, or prepare a governed copy. [AUDIT_LOG: tamper_evident · POLICY: enforced]
05 Prove · The decision trail is complete. Show regulators what was found, what changed, and who approved it. [GDPR: Art.30 · HIPAA: §164 · PCI: 4.0]
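The five-step sequence can be sketched as a pipeline. Every function here is a stub with a hypothetical signature, standing in for the real connectors and engine, purely to show how the stages compose:

```python
# Hypothetical stubs for illustration; none of these names are the real API.
def connect(dsn: str) -> dict:
    return {"source": dsn, "scoped": True}          # credential stored, scope fixed

def discover(src: dict) -> list:
    return ["customers.email", "customers.ssn"]     # schema pass, no full scan

def scan(fields: list) -> list:
    return [{"field": f, "confidence": 0.97} for f in fields]  # deep scan

def act(findings: list) -> list:
    return [{**f, "action": "anonymise"} for f in findings]    # apply control

def prove(trail: list) -> dict:
    return {"records": len(trail), "export": "GDPR Art. 30"}   # evidence out

src = connect("postgres://readonly@crm")
findings = scan(discover(src))
evidence = prove(act(findings))
assert evidence["records"] == 2
```

The composition is the claim being made in this section: every workflow, whatever the source, ends at the same evidence-producing step.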
Built for regulated industries

Six verticals. Specific obligations.

Regulated industries have specific governance requirements — not generic data privacy problems. Each vertical gets a dedicated page with the exact regulatory context, use cases, and deployment considerations that apply.

Deployment models

Runs where your data lives.

Three models. No vendor lock-in. No requirement to move data to assess it.

Model 01 · Cloud Appliance

Deploy into your own AWS, Azure, or GCP account. Your networking, your IAM, your storage. No production data routed to vendor infrastructure.

AWS Marketplace · Azure Marketplace · GCP Marketplace · Terraform · CloudFormation
Model 02 · On-Premises / Air-Gap

Run inside a private data centre or restricted network segment. No internet dependency at runtime. For teams where operational data egress is ruled out by policy.

Docker Compose · Helm / Kubernetes · LDAP · SAML · Offline license · No phone-home
Model 03 · SDK / Embedded

Embed the detection and policy layer directly into an existing pipeline or product when a standalone deployment is not the right fit.

Python · Node.js · Java · .NET · OpenAPI spec
Design partner programme

Working with a small number of organisations before the public launch.

We are working with a small number of design partners in regulated industries — organisations with a real privacy, compliance, or AI governance problem and a team willing to work closely with us to solve it well.

Design partners get hands-on support from the team, early access to new capabilities, and a shorter loop from feedback to shipped product. For legal design partners, we produce a co-branded SRA response template you can adapt for your own regulatory submissions. Terms are structured for an early partnership and a defined pilot scope.

Apply as a design partner →
Open slots by sector
Healthcare / NHS
Air-gapped hospital network or NHS trust with DSPT and HIPAA requirements.
Open
Financial Services
Bank or investment firm with PCI-DSS scope or synthetic data needs.
Open
Legal / Professional
Law firm or accountancy sharing documents with external AI tools.
Next cohort
Data Marketplace
Platform ingesting third-party datasets that require PII scanning at ingest.
Limited
Technical review

Here is exactly what happens when you book a session.

Not a slide deck. Not a sandboxed environment with fabricated data. We connect to something real in your organisation and you see actual output.

Minutes 0–5 · We start with one real source
Usually a read-only database credential, file store, or bucket representative enough to answer whether the product fits your environment.
Minutes 5–20 · We run discovery and scan
You watch it happen live. The schema map builds in real time. Findings appear as the scan progresses. No prepared screenshots.
Minutes 20–35 · We walk through the findings
What was found, where, the risk level, and the confidence score. We explain any finding you want to understand in more depth.
Minutes 35–45 · You pressure-test the fit
Deployment, controls, source coverage, air-gap requirements, and what a narrow pilot in your environment would actually involve.

After the session, you should know whether the deployment model works, whether the first workflow is meaningful, and whether a pilot is justified.

Median time to first scan: under 4 hours from credentials.
Minutes 0–10 · We map your AI tool landscape
We review which AI tools your team actively uses: ChatGPT, Claude, Copilot, Gemini, plus any IDE AI or MCP-connected tools. No credentials or access required at this stage.
Minutes 10–25 · We run a live intercept demo
We show VestraShield intercepting a real prompt. You see surrogate replacement in action: an entity goes in, the same surrogate comes back consistently in the response. No production data required.
Minutes 25–40 · We review your policy requirements
Which entity types need transform vs. hard-block? Which user groups need different rules? We map your compliance obligations to the four-action policy engine.
Minutes 40–50 · We scope the first deployment
Docker Compose, your VPC, or on-premises. LDAP or SAML if needed. We agree what the first deployment covers and what the rollout sequence looks like.

After the session, you should know which intercept planes apply to your environment, what your policy configuration looks like, and whether a 30-day pilot is the right next step.

Target: under 4 hours from deployment to first live intercept.