Security & Trust
Architecture, not adjectives.
Two recent incidents broke trust in AI compliance — one platform shipped boilerplate as audit reports, another leaked confidential drafts in a public spreadsheet. We answer with verifiable architecture: where credentials live, how evidence is signed, what Vera is allowed to do, and how every action is logged.
- 8 architectural controls
- 6 TSA servers (RFC 3161)
- 0 write scopes on GitHub
- 100% of tool calls audited
On this page
- Credential security: Azure Key Vault, retrieved at scan time, never cached.
- Evidence integrity: RSA/ECDSA signatures + RFC 3161 timestamps from six TSAs with automatic fallback.
- Infrastructure access model: GitHub App with no write scopes. Cloud APIs called read-only. Code scanned in memory, then deleted.
- Data isolation: AsyncLocalStorage execution context filters every database call by workspaceId.
- API security: BCrypt at rest, timing-safe comparison, CIDR allowlists, per-key scopes, tiered rate limits.
- AI agent governance: Role-based tool access, risk-tiered approval gates, and a full audit trail on every invocation.
- Audit logging: Logged at the boundary, redacted of secrets, structured for fieldwork.
- Encryption: Symmetric encryption for sensitive blobs. Time-bounded URLs for object storage.
Credential security
Cloud credentials never touch our application database.
Azure Key Vault, retrieved at scan time, never cached.
Your AWS, GCP, Azure, GitHub, and Okta credentials live in Azure Key Vault — keyed by workspace and provider. Vera retrieves them at scan time, holds them only as long as the scan runs, and discards them. They are never written to our application database, never logged, never persisted in memory between scans.
Technical detail
- Vault: Azure Key Vault, FIPS 140-2 validated HSM
- Key path scheme: workspace/{workspaceId}/provider/{providerId}
- Lifetime in app memory: single scan invocation, discarded on completion
- Logged values: never; audit logs reference key handles, not secrets
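The retrieval pattern above can be sketched as a scoped helper: credentials enter only through a callback and go out of scope when the scan returns. The `SecretVault` interface is hypothetical, standing in for the Azure Key Vault client; the key path follows the scheme in the table.

```typescript
// Sketch of scan-time credential retrieval. The SecretVault interface
// is hypothetical — a stand-in for an Azure Key Vault client.
interface SecretVault {
  getSecret(name: string): Promise<string>;
}

// Key path scheme: workspace/{workspaceId}/provider/{providerId}
function credentialKey(workspaceId: string, providerId: string): string {
  return `workspace/${workspaceId}/provider/${providerId}`;
}

// The credential exists only for the duration of the scan callback;
// the reference is dropped when the function returns. Nothing is
// cached, and audit entries would reference the key path, never the value.
async function withScanCredentials<T>(
  vault: SecretVault,
  workspaceId: string,
  providerId: string,
  scan: (credential: string) => Promise<T>,
): Promise<T> {
  const credential = await vault.getSecret(credentialKey(workspaceId, providerId));
  return scan(credential);
}
```

The point of the shape is that no code path outside `scan` ever sees the secret, so "never cached" is enforced by scoping rather than by policy.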
Evidence integrity
Every evidence pack is cryptographically signed and timestamped.
RSA/ECDSA signatures + RFC 3161 timestamps from six TSAs with automatic fallback.
Each evidence package ships with a per-file SHA-256 manifest, a digital signature (RSA or ECDSA), and an RFC 3161 timestamp from a third-party Time Stamping Authority. The timestamps come from legally recognized TSAs (DigiCert, Sectigo, GlobalSign, and Entrust, plus two backups), with automatic fallback if any TSA is unreachable.
Technical detail
- Signing algorithms: RSA-PSS-SHA256, ECDSA-P256-SHA256
- TSA servers: DigiCert, Sectigo, GlobalSign, Entrust + 2 fallback (6 total)
- BYOK modes: platform key (default), customer key (AES-256-GCM at rest), AWS/GCP/Azure KMS
- Open Evidence Spec: coming soon — verify any Screenata pack without an account
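The manifest-plus-signature half of this can be sketched with Node's built-in crypto (the RFC 3161 timestamp request is a separate network exchange with the TSA and is omitted here). File contents are inlined for illustration:

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// Build a per-file SHA-256 manifest, then sign it with ECDSA on
// prime256v1 (P-256). A real pack would hash files from disk and
// attach an RFC 3161 timestamp from a TSA on top of this.
function buildManifest(files: Record<string, Buffer>): string {
  const entries = Object.entries(files).map(([name, data]) => ({
    name,
    sha256: createHash("sha256").update(data).digest("hex"),
  }));
  return JSON.stringify(entries);
}

const { privateKey, publicKey } = generateKeyPairSync("ec", {
  namedCurve: "prime256v1", // P-256
});

const manifest = buildManifest({ "finding.json": Buffer.from('{"ok":true}') });
const signature = sign("sha256", Buffer.from(manifest), privateKey);

// Any holder of the public key can verify the pack offline.
const valid = verify("sha256", Buffer.from(manifest), publicKey, signature);
```

Verification needs only the manifest, the signature, and the public key, which is what makes account-free verification of a pack possible.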
Infrastructure access model
Read-only by construction. Source code is never persisted.
GitHub App with no write scopes. Cloud APIs called read-only. Code scanned in memory, then deleted.
Our GitHub App manifest does not request administration:write — Vera cannot modify your repositories. When she finds a config issue, she returns guidance with a deep link. Code scans pull source files into memory for secret and pattern matching, store the findings, and discard the source. Cloud scans use native provider SDKs — no shell execution, no CLI dependencies.
Technical detail
- GitHub App scopes: read-only across contents, metadata, organization. No administration:write.
- Source code persistence: none. In-memory during scan, dropped after findings extracted.
- Cloud API access: native SDK calls (AWS / GCP / Azure / GitHub). No subprocess shells.
- Verify yourself: inspect our GitHub App permissions during install — read-only is visible at the consent screen.
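The in-memory scan described above can be sketched as a pure function: source text in, findings out, nothing written anywhere. The patterns here are illustrative, not the production rule set:

```typescript
// Illustrative secret patterns — not the production rule set.
const PATTERNS: Record<string, RegExp> = {
  awsAccessKeyId: /AKIA[0-9A-Z]{16}/,
  privateKeyBlock: /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/,
};

interface Finding {
  pattern: string;
  line: number;
}

// Source enters as a local parameter and goes out of scope when the
// function returns; only the findings (pattern name, line number)
// survive to be stored.
function scanSource(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const [pattern, re] of Object.entries(PATTERNS)) {
      if (re.test(text)) findings.push({ pattern, line: i + 1 });
    }
  });
  return findings;
}
```

Because the findings carry locations rather than content, the stored result never reproduces the source it was extracted from.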
Data isolation
Workspace boundaries are enforced on every query — by construction.
AsyncLocalStorage execution context filters every database call by workspaceId.
Every service call runs inside an AsyncLocalStorage context that carries the active workspaceId. The data layer rejects any query that doesn't include workspace filtering — there is no path to read another tenant's data, even by mistake. Foreign-key constraints between Organization → Workspace → ComplianceProgram make the boundary structural, not a runtime hope.
Technical detail
- Enforcement layer: AsyncLocalStorage execution context, checked on every DB call
- Hierarchy: Organization → Workspace → ComplianceProgram (FK enforced)
- Required scope: services throw if no workspace context is set — no implicit global queries
- Code review rule: any new query must use the scoped client; CI enforces it
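A minimal sketch of the pattern, using Node's real `AsyncLocalStorage` with an in-memory array standing in for the database (the helper names are illustrative):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Every query must run inside a context that carries the active
// workspaceId; the scoped query helper injects that id as a
// mandatory filter and refuses to run without it.
const workspaceContext = new AsyncLocalStorage<{ workspaceId: string }>();

interface Row {
  workspaceId: string;
  name: string;
}

function scopedFind(rows: Row[]): Row[] {
  const ctx = workspaceContext.getStore();
  if (!ctx) {
    // No implicit global queries — matches the "services throw" rule.
    throw new Error("No workspace context set");
  }
  return rows.filter((r) => r.workspaceId === ctx.workspaceId);
}

const rows: Row[] = [
  { workspaceId: "ws-a", name: "policy-1" },
  { workspaceId: "ws-b", name: "policy-2" },
];

// The callback and everything it awaits inherit the workspace scope.
const visible = workspaceContext.run({ workspaceId: "ws-a" }, () => scopedFind(rows));
```

Because `AsyncLocalStorage` propagates through `await` boundaries, the scope follows the request end to end without threading a parameter through every call.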
API security
Enterprise API keys are hashed, time-safe, and IP-scoped.
BCrypt at rest, timing-safe comparison, CIDR allowlists, per-key scopes, tiered rate limits.
API keys are stored as BCrypt hashes; verification uses constant-time comparison so attackers cannot enumerate keys by timing. Each key carries a scope (read-only, signing, admin) and an optional CIDR allowlist (IPv4/IPv6) so production traffic must come from approved networks. Production error responses are sanitized — no stack traces, no Prisma errors, no internal paths.
Technical detail
- Hashing: BCrypt with workload-tuned cost factor
- Comparison: constant-time — prevents timing-based enumeration
- IP allowlists: per-key CIDR rules, IPv4 and IPv6
- Auth: TOTP 2FA, email OTP rate-limited (3/min, 10/5min), 30-day sessions
- Cookies: Secure, HttpOnly, Partitioned
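The hash-at-rest plus constant-time check can be sketched with Node's built-in crypto. Note the substitution: scrypt stands in for BCrypt here to keep the example dependency-free; the shape of the verification is the same.

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Keys are stored only as salted hashes, never plaintext.
// (scrypt used in place of BCrypt for a dependency-free sketch.)
function hashKey(apiKey: string): { salt: Buffer; hash: Buffer } {
  const salt = randomBytes(16);
  return { salt, hash: scryptSync(apiKey, salt, 32) };
}

function verifyKey(presented: string, stored: { salt: Buffer; hash: Buffer }): boolean {
  const candidate = scryptSync(presented, stored.salt, 32);
  // timingSafeEqual compares in constant time, so response latency
  // leaks nothing about how much of the hash matched.
  return timingSafeEqual(candidate, stored.hash);
}
```

A naive `===` on the hashes would short-circuit at the first differing byte, which is exactly the signal timing attacks exploit.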
AI agent governance
Vera cannot ship policy, accept risk, or dismiss findings autonomously.
Role-based tool access, risk-tiered approval gates, and a full audit trail on every invocation.
Vera's tools are gated by role (ADMIN / REVIEWER / VIEWER / TESTER / AUDITOR / EMPLOYEE) and by risk tier. The legally consequential actions — approving a policy, accepting a risk, dismissing a finding — always require an explicit human approval. Every tool call goes through createAuditedTool(), which records who triggered it, with what parameters, and what the result was.
Technical detail
- Roles: ADMIN, REVIEWER, VIEWER, TESTER, AUDITOR, EMPLOYEE
- Always human-gated: policy approval, risk acceptance, finding dismissal
- Per-tool tier: auto / ask / off — workspace admins can override
- Audit instrumentation: every tool invocation is wrapped — no exceptions
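A wrapper in the spirit of `createAuditedTool()` might look like the following. The real signature is not published; the parameter names and audit-entry shape here are illustrative.

```typescript
type Role = "ADMIN" | "REVIEWER" | "VIEWER" | "TESTER" | "AUDITOR" | "EMPLOYEE";
type Tier = "auto" | "ask" | "off";

interface AuditEntry {
  tool: string;
  actor: string;
  allowed: boolean;
}

const auditLog: AuditEntry[] = [];

// Hypothetical sketch of an audited-tool factory: role gate, tier
// gate, and an audit record on every call — allowed or not.
function createAuditedTool<T>(
  name: string,
  allowedRoles: Role[],
  tier: Tier,
  run: () => T,
) {
  return (actor: { id: string; role: Role; approved?: boolean }): T => {
    const roleOk = allowedRoles.includes(actor.role);
    // "ask"-tier tools require explicit human approval, even for ADMIN.
    const tierOk = tier === "auto" || (tier === "ask" && actor.approved === true);
    const allowed = roleOk && tierOk;
    auditLog.push({ tool: name, actor: actor.id, allowed });
    if (!allowed) throw new Error(`${name}: denied for ${actor.role}`);
    return run();
  };
}

const approvePolicy = createAuditedTool("approvePolicy", ["ADMIN"], "ask", () => "approved");
```

Denied calls are logged too, which is what makes the trail useful as evidence: it shows the gate firing, not just the successes.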
Audit logging
Every API call, every signing operation, every agent action is logged.
Logged at the boundary, redacted of secrets, structured for fieldwork.
API requests record method, path, status, duration, IP, user agent, and request ID. Signing operations record algorithm, TSA URL, serial number, and verification result. Agent actions record tool name, parameters (with secrets redacted), result, and actor attribution. The audit log is itself SOC 2 evidence — auditable proof that AI agent authority is documented and constrained.
Technical detail
- API request fields: method, path, status, duration, IP, user agent, request ID
- Signing fields: algorithm, TSA URL, serial number, verification result
- Agent fields: tool, params (redacted), result, actor, timestamp
- Anomaly detection: consecutive-failure alerts, rate-limit breach alerts
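The "params (redacted)" step can be sketched as a recursive walk that masks any field whose name suggests a secret while preserving structure. The key-name heuristic is illustrative, not the production redaction list:

```typescript
// Illustrative heuristic: field names that suggest secrets get masked.
const SECRET_KEYS = /token|secret|password|key|credential/i;

// Walks arrays and objects recursively; values are replaced, shape
// is preserved so the audit entry stays structurally queryable.
function redact(params: unknown): unknown {
  if (Array.isArray(params)) return params.map(redact);
  if (params && typeof params === "object") {
    return Object.fromEntries(
      Object.entries(params as Record<string, unknown>).map(([k, v]) =>
        SECRET_KEYS.test(k) ? [k, "[REDACTED]"] : [k, redact(v)],
      ),
    );
  }
  return params;
}
```

Preserving the shape matters: an auditor can still see which parameters a tool was called with, without the log itself becoming a secret store.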
Encryption
AES-256-GCM at rest, HTTPS in transit, signed URLs for evidence access.
Symmetric encryption for sensitive blobs. Time-bounded URLs for object storage.
Sensitive data at rest — share passwords, customer signing keys, BYOK envelopes — is encrypted with AES-256-GCM. Evidence in object storage is served via signed URLs that expire in 15 minutes and are scoped to a workspace prefix. There are no plaintext endpoints; HTTPS is required everywhere, with HSTS preload.
Technical detail
- At-rest cipher: AES-256-GCM with additional authenticated data (AAD)
- Object storage: Cloudflare R2, workspace-scoped key prefixes
- Signed URL TTL: 15 minutes
- Transport: HTTPS only, HSTS preload, modern TLS
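A minimal AES-256-GCM roundtrip with AAD, using Node's built-in crypto. Key handling and the workspace-scoped AAD value here are illustrative; the property to notice is that GCM authenticates the AAD, so a blob decrypted under the wrong workspace context fails outright:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function encrypt(plaintext: Buffer, key: Buffer, aad: Buffer) {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  cipher.setAAD(aad); // bound into the auth tag, not encrypted
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt(
  box: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  key: Buffer,
  aad: Buffer,
): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAAD(aad); // wrong AAD makes final() throw
  decipher.setAuthTag(box.tag);
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]);
}
```

Binding context (such as a workspace identifier) into the AAD means a ciphertext cannot be silently replayed under a different tenant, even with the right key.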
Operating principles
Three rules behind every architectural decision.
Prove, don't promise.
Every claim on this page maps to a code path or a vendor we depend on. If we can't show it, we don't put it here.
Read-only by default.
Vera does not request write scopes on your repos or your cloud. Guidance with deep links beats auto-fix you can't audit.
Humans for legal acts.
Approving policy, accepting risk, dismissing findings — these are legally consequential. Vera proposes; a human signs.
Due diligence
Ready for your security review.
Bring your questionnaire. We'll walk you through the architecture, share our SOC 2 report under NDA, and demonstrate the controls live. CTOs and procurement teams welcome.