Trust & Governance

Governance you can audit, evidence you can defend.

Advisedly is built so that every control, every piece of evidence, and every AI-generated artifact carries a verifiable trail. Auditors can replay it. Customers can defend it. Regulators can inspect it.

Section 1 — Frameworks

Frameworks Supported

Onboarding ships with eight of the most-requested frameworks pre-mapped. The full Advisedly catalog covers 262 frameworks end to end — control descriptions, evidence requirements, crosswalks, and audit-ready report templates.

CMMC Level 2

DoD contractor compliance — the 110 NIST SP 800-171 controls assessed for Level 2 certification.

SOC 2

Trust services criteria for SaaS and customer-facing platforms.

HIPAA Security Rule

Healthcare PHI safeguards, breach timers, OCR audit prep.

ISO 27001:2022

International ISMS standard with Annex A control set.

FedRAMP Moderate

325-control baseline for federal cloud providers.

PCI DSS v4.0

Cardholder data environment safeguards for finance + retail.

NIST 800-53 Rev 5

Federal control catalog underpinning RMF authorization.

NIST CSF 2.0

Cybersecurity Framework — Govern, Identify, Protect, Detect, Respond, Recover.

Plus 254 more — CMMC L1 / L3, FedRAMP Low / High, DFARS, ITAR, GLBA, NY DFS, NIS2, DORA, GDPR, SOX ITGC, CIS Controls v8.1, COBIT 2019, OWASP Top 10, NIST AI RMF, NIST AI 600-1, ISO/IEC 42001, EU AI Act, and the full Secure Controls Framework crosswalk catalog.

Section 2 — AI Governance

AI Governance & Provenance

AI features in Advisedly are governed end-to-end. Every prompt, completion, and downstream artifact is logged with full provenance metadata so customers can satisfy NIST AI RMF, ISO/IEC 42001, and EU AI Act Article 13 transparency obligations out of the box.

Model Cards

Every deployment publishes a model card describing model identity, training-data restrictions, data residency, and opt-out paths. The customer-facing card lives at /dashboard/ai-toolkit/model-card.

Prompt Registry

Every AI endpoint uses a versioned prompt template. Prompt IDs are embedded in the provenance record so an auditor can replay the exact prompt that generated any artifact.

Usage Logging

Token counts, generation latency, cost, and human-review status are persisted to the ai_provenance table for every call. Reports roll up to the AI Governance dashboard.

Use-Case Assessments

Before turning on an AI feature, customers complete an EU AI Act risk assessment captured in the platform — high-risk use cases automatically require human review.

Supported LLM providers (BYOAI)

Advisedly is vendor-agnostic. Customers choose their LLM provider at install time via the first-launch BYOAI wizard. Customer data is never used to train any model — every supported provider operates under enterprise terms that prohibit training use. Naming the supported providers below is a regulatory transparency requirement, not vendor branding:

  • AWS Bedrock (Anthropic, Meta, Mistral, Cohere)
  • Azure OpenAI Service (Government cloud)
  • Anthropic (direct API, commercial)
  • OpenAI (commercial)
  • Google Vertex AI
  • Mistral AI
  • Cohere
  • Groq
  • xAI
  • Any OpenAI-compatible endpoint (self-hosted gateway, third-party proxy)
  • On-prem vLLM (Llama, Qwen, Mistral open-weights)

The customer's choice of provider is recorded per organization and enforced at the API layer — Advisedly cannot route prompts to a provider the customer has not selected.
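
The enforcement described above can be sketched as a simple per-org allow-list check. All names here (`selectProvider`, `routePrompt`) are illustrative, not the platform's actual code:

```typescript
// Sketch: per-organization LLM provider enforcement (hypothetical names).
// The org's provider of record is pinned once; every outbound call is
// checked against it before any prompt leaves the platform.

type Provider = "aws-bedrock" | "azure-openai" | "anthropic" | "openai" | "vllm";

const orgProvider = new Map<string, Provider>(); // org id -> provider of record

function selectProvider(orgId: string, provider: Provider): void {
  if (orgProvider.has(orgId)) {
    throw new Error("provider already selected; change requires admin action");
  }
  orgProvider.set(orgId, provider);
}

function routePrompt(orgId: string, target: Provider, prompt: string): string {
  const selected = orgProvider.get(orgId);
  if (selected === undefined) throw new Error("no provider selected for org");
  if (selected !== target) {
    throw new Error(`refusing to route to ${target}: org is pinned to ${selected}`);
  }
  // ...dispatch to the selected provider adapter here...
  return `dispatched ${prompt.length} chars to ${target}`;
}
```

Because the check sits at the routing layer rather than in each feature, a misconfigured feature cannot bypass the customer's selection.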

Section 2b — AI Feature Inventory

Every AI Feature on the Platform

Below is the complete inventory of customer-facing AI features shipping today. For each one, we list what data is sent to the customer-selected LLM and where the feature is reached in the dashboard. The full Model Card for any feature — including training-data restrictions, retention policy, and human-review gates — is available at /dashboard/ai-toolkit/model-card inside an authenticated session.

  • Incident Response Playbook & Post-Mortem: Generates an IR playbook from incident type + severity + asset class, plus a post-mortem narrative once the incident closes. Data sent to model: incident type, severity, affected asset metadata, timeline events; no raw payloads or PII. Surface: /dashboard/incidents · /dashboard/ir-advanced
  • Breach Analysis (HIPAA): OCR-ready breach analysis covering the 4-factor risk assessment, notification timer math, and remediation guidance. Data sent to model: breach metadata (discovery date, affected record count, PHI categories); no PHI itself. Surface: /dashboard/hipaa
  • Assessment Action-Item Extractor: Reads framework gap-analysis output and emits prioritized remediation tasks with control mappings. Data sent to model: control gaps, deficiency descriptions, target framework. Surface: /dashboard/assessments
  • Automation Rule Generator: Drafts pg-boss / scheduler automation rules from a plain-language description + control hooks. Data sent to model: rule prompt, current org automation inventory (names only). Surface: /dashboard/automation
  • Compliance-as-Code Generator: Emits Terraform / Bicep / OPA Rego rule sets keyed to a framework + control + cloud provider. Data sent to model: framework ID, control ID, cloud provider, rule taxonomy. Surface: /dashboard/compliance-as-code
  • Meeting Intelligence: Summarizes audit / compliance meeting transcripts into action items, decisions, and follow-ups. Data sent to model: meeting transcript text the customer uploads. Surface: /dashboard/meeting-intelligence
  • Policy Generation: Drafts internal security policies (28 templates) tuned to org metadata, framework scope, and risk posture. Data sent to model: policy template ID, org metadata (name, scope, framework set). Surface: /dashboard/policies
  • SSP Narrative Generation: Generates System Security Plan control narratives for FedRAMP / CMMC / RMF authorization packages. Data sent to model: information system metadata, control ID, prior narrative (if any), framework target. Surface: /dashboard/cmmc · /dashboard/fedramp
  • POA&M Estimation: Suggests milestones, completion dates, and resource estimates for a Plan of Action & Milestones entry. Data sent to model: finding description, severity, affected control, asset class. Surface: /dashboard/poams
  • Plugin Quality Scoring: Reviews customer-submitted plugin manifests for safety, evidence-mapping correctness, and security posture. Data sent to model: plugin manifest, declared capabilities, declared evidence outputs. Surface: /dashboard/plugins
  • Evidence Sufficiency Scoring: Evaluates whether a piece of uploaded evidence satisfies a control requirement and emits a 0-100 score. Data sent to model: evidence metadata + extracted text excerpt, control requirement text. Surface: /dashboard/evidence
  • Audit Question Generation: Drafts auditor-style interview questions for a given control, framework, or audit scope. Data sent to model: control ID, framework, audit scope summary. Surface: /dashboard/audit-management
  • Threat Model & Architecture Review: Generates STRIDE / PASTA threat models and architecture review notes from a system description. Data sent to model: system architecture description provided by the customer. Surface: /dashboard/ai-security-advisor · /dashboard/engineering
  • TRACE Score AI Extract: Extracts CPE / vendor / product / version triples from CVE descriptions to drive reachability scoring. Data sent to model: CVE ID + CVE description text (NVD public data). Surface: background job — surfaces in /dashboard/vulnerabilities
  • Document Intake (classification, extraction, tagging): Classifies uploaded documents, extracts structured fields, generates summaries, and proposes tags. Data sent to model: customer-uploaded document text. Surface: /dashboard/documents
  • Knowledge Base Q&A: Answers customer compliance questions against the platform knowledge base + framework crosswalks. Data sent to model: customer question, retrieved KB chunks (no other org data). Surface: /dashboard/knowledge-base

Inventory current as of 2026-05-09. New AI features are added to this list before they ship to production. The full per-feature Model Card including the input/output schema, refusal taxonomy, and human-review thresholds is published inside the customer dashboard.

Section 2c — Provenance Pill

The AI-Output Label You'll See

Every AI-generated artifact in the platform renders with an inline provenance pill so the user knows, at a glance, what produced the content. The pill is the same component on every surface — list views, detail pages, exports, PDF reports — so the disclosure is consistent across the product.

Pill Anatomy
  1. Provider of record — the LLM provider the customer chose at install time via the BYOAI bootstrap wizard. Identified by name on the pill so a reviewer can satisfy NIST AI RMF MEASURE-2.8 / EU AI Act Article 13 transparency obligations.
  2. Model identifier — the specific model name + version returned by the provider adapter. Locked into the provenance row so an auditor can bind the artifact to an exact model deployment.
  3. Prompt template ID — versioned reference to the prompt the platform sent. Combined with the customer's input, an auditor can replay the exact request that produced the artifact (temperature pinned to 0).
  4. Generation timestamp — UTC ISO-8601 timestamp recorded on the ai_provenance row. Surfaces in the pill modal alongside the audit log ID.
  5. Acceptance decision — once a reviewer accepts, rejects, or modifies the output, the pill updates with the human-in-the-loop verdict and links to the reviewer + decision timestamp. Schema lives in migration 0302-ai-provenance-acceptance-fields.sql.

The component is rendered by AiOutputBadge on every surface that emits AI content. Clicking the info icon opens a provenance modal with the full detail — provider, model, prompt ID, audit-log ID, and a deep link to the AI Governance provenance browser inside the dashboard.

The pill is a regulatory transparency surface (NIST AI RMF, ISO 42001, EU AI Act Article 13). Naming the provider on it is a compliance requirement, not vendor branding.

Section 2d — Opt-Out & Audit Trail

Turning AI Off & Auditing What It Did

How to opt out of AI features

  • Self-managed (SaaS / on-prem): Org admins disable AI globally for their organization at /dashboard/settings/ai by toggling the org-level AI features enabled flag. When disabled, every AI route returns a 403 with a generic opt-out message and no provider call is ever made.
  • Compliance-as-a-Service (CaaS) customers: CaaS-managed customers can request an opt-out from their assigned compliance manager or by emailing trust@advisedly.ai. The opt-out is applied at the org level the same way; the request and effective date are recorded in the customer's audit log.
  • Per-feature opt-out: Customers can also disable individual AI features (for example, keep TRACE Score AI extraction on while disabling generative SSP narratives) via the same settings page. Granularity is per-feature, not per-user, by design — so the audit story is consistent across the org boundary.
  • No silent re-enablement. An opt-out cannot be reversed by Advisedly staff. A customer admin must explicitly re-enable the feature; the action is audit-logged with the actor's identity.
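
The opt-out behavior above amounts to a gate evaluated before any provider call. A minimal sketch, with hypothetical flag and function names (the real settings schema is not reproduced here):

```typescript
// Sketch: org-level and per-feature AI opt-out gate (hypothetical names).
// When AI is disabled, the gate short-circuits with a 403 before any
// provider call can be made.

interface OrgAiSettings {
  aiEnabled: boolean;            // org-wide kill switch
  disabledFeatures: Set<string>; // per-feature opt-outs
}

interface GateResult { status: 200 | 403; body: string }

function aiGate(settings: OrgAiSettings, feature: string): GateResult {
  if (!settings.aiEnabled || settings.disabledFeatures.has(feature)) {
    // Generic message by design: the response reveals nothing about
    // which provider or model the org had configured.
    return { status: 403, body: "AI features are disabled for this organization." };
  }
  return { status: 200, body: "ok" };
}
```

Evaluating the gate server-side, ahead of routing, is what makes "no provider call is ever made" a structural guarantee rather than a UI convention.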

Audit trail for AI activity

Every AI generation writes a row to the ai_provenance table containing the provider of record, model identifier, prompt template ID, prompt hash, output hash, temperature, token counts, and generation timestamp. The acceptance gate (migration 0302-ai-provenance-acceptance-fields.sql) extends each row with a human-in-the-loop decision — accept, reject, or modify — captured atomically with the reviewer ID, decision timestamp, and optional reviewer notes. Customers browse the trail at /dashboard/ai-governance and can filter by feature, reviewer, or decision.
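
A minimal sketch of how such a row and its acceptance decision might be shaped. Field names follow the description above; the platform's real schema lives in its migrations and may differ:

```typescript
import { createHash } from "node:crypto";

// Sketch: an ai_provenance row plus its human-in-the-loop acceptance
// fields (illustrative shape, not the production schema).

interface ProvenanceRow {
  provider: string;
  model: string;
  promptTemplateId: string;
  promptHash: string;
  outputHash: string;
  temperature: number;
  tokensIn: number;
  tokensOut: number;
  generatedAt: string; // UTC ISO-8601
  decision?: "accept" | "reject" | "modify";
  reviewerId?: string;
  decidedAt?: string;
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function recordGeneration(prompt: string, output: string, tokensIn: number, tokensOut: number): ProvenanceRow {
  return {
    provider: "example-provider", model: "example-model-v1", // example values
    promptTemplateId: "ir-playbook@3",                       // versioned template ref
    promptHash: sha256(prompt), outputHash: sha256(output),
    temperature: 0, tokensIn, tokensOut,
    generatedAt: new Date().toISOString(),
  };
}

function recordDecision(row: ProvenanceRow, decision: "accept" | "reject" | "modify", reviewerId: string): ProvenanceRow {
  // Verdict, reviewer, and timestamp are written together so the
  // decision lands atomically on the provenance row.
  return { ...row, decision, reviewerId, decidedAt: new Date().toISOString() };
}
```

Storing hashes rather than raw prompt and output text keeps the trail verifiable without duplicating sensitive content into the audit store.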

Honest disclosure on coverage

Coverage today (2026-05-09): The acceptance gate is wired into 17+ customer-facing AI surfaces, including the AI Use-Case Assessment, IR playbook generation, breach analyzer, policy generation, SSP narrative generation, automation rule generation, assessment action-item extractor, compliance-as-code generator, meeting intelligence, POA&M estimation, threat model / architecture review, risk-management drafts, AI control drafts, conmon playbooks, contextual NL queries, the AI Security Advisor, and the AI Toolkit surface.

Every generation on these surfaces returns its provenanceId in the API envelope and threads it through to the reviewer UI, so the accept / reject / modify decision is captured atomically before the artifact is treated as final. Bulk decisions are also supported at /dashboard/ai-governance for reviewers clearing a feature filter in one pass.

Remaining AI endpoints (a smaller tail of niche surfaces) are still on the wave-3 plumbing list — the underlying provenance row is already written for every feature, but the row ID is not yet threaded in every envelope, so the in-line gate cannot block release on those surfaces today. The backstop for those is the pending-review queue at /dashboard/ai-governance/pending. We claim NIST AI RMF MANAGE-1.3 coverage for the wired surfaces only; full-platform coverage lands once the tail is plumbed.

Section 3 — Sovereignty

Customer Data Sovereignty

Advisedly is designed for customers whose data cannot leave their boundary — federal agencies, defense primes, and regulated healthcare and financial institutions.

  • No training on government data. Every supported provider operates under enterprise terms that prohibit using customer prompts or completions to train any model. Army CIO and DoD policy require this; we built for it.
  • Customer chooses the LLM. The BYOAI bootstrap wizard lets the customer select their provider and credentials. Advisedly never ships with a hard-coded vendor.
  • On-prem vLLM for air-gap. For classified or fully disconnected environments, Advisedly runs a customer-selected open-weight model on customer-owned GPUs via vLLM. Customer data never leaves the enclave.
  • Data residency by tier. Commercial SaaS: Azure commercial (US East 2). DoW / federal CUI: Azure Government (USGov Virginia) targeted for the FedRAMP package. On-prem and air-gap: customer-controlled infrastructure.

Section 4 — License & Audit

License Enforcement & Audit Replay

Advisedly carries its own audit story so that auditors do not have to take our word for anything.

AI Provenance Trail

Every AI-generated artifact records: provider, model, prompt_id, and audit_log_id. An auditor can replay any output deterministically — temperature is pinned to 0 and the prompt template is versioned.
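
Deterministic replay reduces to re-running the versioned prompt at temperature 0 and comparing output hashes. A sketch under assumed interfaces (`Generate` and the record shape are illustrative):

```typescript
import { createHash } from "node:crypto";

// Sketch: audit replay of an AI artifact. With the prompt template
// versioned and temperature pinned to 0, regenerating and comparing
// hashes is enough to confirm provenance.

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Stand-in for a provider adapter call at temperature 0.
type Generate = (templateId: string, input: string, temperature: 0) => string;

function replayMatches(
  generate: Generate,
  record: { promptTemplateId: string; input: string; outputHash: string },
): boolean {
  const regenerated = generate(record.promptTemplateId, record.input, 0);
  return sha256(regenerated) === record.outputHash;
}
```

In practice replay determinism also depends on the provider serving the exact model deployment named in the record, which is why the model identifier is locked into the provenance row.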

License Enforcement

On-prem deployments use ed25519-signed license tokens. The platform shows escalation banners at T−30, T−7, and T−0, and the license itself is verifiable offline. SaaS deployments use Stripe-backed subscriptions instead.
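
Offline verification of an ed25519-signed license needs only standard crypto. A sketch using Node's built-in crypto module, with a hypothetical token layout and banner thresholds matching the escalation schedule above:

```typescript
import { generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

// Sketch: ed25519 license signing/verification and escalation banners.
// Token layout is illustrative; verification requires no network access.

interface License { org: string; expires: string } // expires: ISO-8601 date

const payload = (license: License) => Buffer.from(JSON.stringify(license));

function signLicense(license: License, privateKey: KeyObject): Buffer {
  return sign(null, payload(license), privateKey); // ed25519: algorithm is null
}

function verifyLicense(license: License, sig: Buffer, publicKey: KeyObject): boolean {
  return verify(null, payload(license), publicKey, sig); // fully offline
}

function banner(license: { expires: string }, now: Date): string | null {
  const daysLeft = (new Date(license.expires).getTime() - now.getTime()) / 86_400_000;
  if (daysLeft <= 0) return "T-0: license expired";
  if (daysLeft <= 7) return "T-7: license expires within a week";
  if (daysLeft <= 30) return "T-30: license expires within 30 days";
  return null;
}
```

Because any tampering with the token fields invalidates the signature, an air-gapped deployment can trust the license without phoning home.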

Tamper-Evident Audit Log

All privileged actions are recorded in a structured audit log chained by hash. Exports are signed and tamper-evident. eMASS-style POA&M and ATO workflows roll forward into the same log.
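
The hash-chaining can be illustrated in a few lines. The entry structure is illustrative, not the platform's schema; the point is that each entry commits to its predecessor's hash, so any in-place edit breaks every later link:

```typescript
import { createHash } from "node:crypto";

// Sketch: a tamper-evident, hash-chained audit log (illustrative shape).

interface Entry { action: string; actor: string; at: string; prevHash: string; hash: string }

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");
const GENESIS = "0".repeat(64); // chain anchor for the first entry

function append(log: Entry[], action: string, actor: string): Entry[] {
  const prevHash = log.length ? log[log.length - 1].hash : GENESIS;
  const at = new Date().toISOString();
  const hash = sha256(`${prevHash}|${action}|${actor}|${at}`);
  return [...log, { action, actor, at, prevHash, hash }];
}

function verifyChain(log: Entry[]): boolean {
  return log.every((e, i) => {
    const prev = i === 0 ? GENESIS : log[i - 1].hash;
    return e.prevHash === prev && e.hash === sha256(`${prev}|${e.action}|${e.actor}|${e.at}`);
  });
}
```

Signing an export then only has to cover the final hash: it transitively commits to every entry in the chain.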

Human-Review Gates

High-risk AI outputs (policies, SSP narratives, incident communications) require named human review before they leave the platform. Reviewer + timestamp are persisted alongside the AI provenance record.

SLSA v1.0 + OpenVEX Supply Chain

Every production container image is signed with a SLSA v1.0 DSSE provenance attestation and paired with an OpenVEX v0.2 vulnerability-disposition statement (both ed25519). Public verification endpoints publish the attestations and signing public key so customers and auditors can verify the build chain offline. Aligned to FedRAMP supply-chain expectations.
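
DSSE verification is fully offline: a verifier reconstructs the DSSE v1.0 pre-authentication encoding (PAE) of the payload and checks the ed25519 signature against the published public key. A minimal sketch (the envelope shape is simplified; real envelopes carry base64-encoded fields and key IDs):

```typescript
import { generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

// Sketch: offline DSSE envelope verification (DSSE v1.0 PAE, ed25519).
// PAE(type, body) = "DSSEv1" SP len(type) SP type SP len(body) SP body,
// with lengths as ASCII decimal byte counts.

function pae(payloadType: string, payload: Buffer): Buffer {
  const type = Buffer.from(payloadType, "utf8");
  return Buffer.concat([
    Buffer.from(`DSSEv1 ${type.length} `, "utf8"),
    type,
    Buffer.from(` ${payload.length} `, "utf8"),
    payload,
  ]);
}

function verifyEnvelope(
  env: { payloadType: string; payload: Buffer; signature: Buffer },
  publicKey: KeyObject,
): boolean {
  return verify(null, pae(env.payloadType, env.payload), publicKey, env.signature);
}
```

Signing the PAE rather than the raw payload binds the payload type into the signature, which blocks cross-protocol confusion between attestation formats.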

Section 5 — Deployment

Deployment Profiles

Advisedly ships in three deployment profiles. Pricing, feature set, and AI architecture differ across them — pick the one that matches your boundary.

SaaS

Multi-tenant Azure Container Apps, FedRAMP-aligned, ACH-default billing. Used by commercial and DIB customers without classified data.

On-Premises

Single-tenant Kubernetes Helm or Docker Compose deployment in your data center. Ed25519-signed license enforcement with T-30 / T-7 / T-0 escalation banners.

Air-Gap / Hybrid

Fully disconnected install with on-prem vLLM for AI features. Government data and AI inference both stay inside the customer enclave. eMASS bidirectional sync available on classified networks.

Detailed profile docs (environment variables, hardening guide, air-gap install steps) are provided to customers under MSA. Contact the trust team below for access.

Section 6 — Contact

Trust Team & Data Subject Requests

Two dedicated mailboxes handle trust and data-subject inquiries. Both are monitored by Advisedly personnel during US business hours.

Advisedly Compliance LLC · Prospect, TN · SAM.gov UEI XSZ6TYQM2F54 · CAGE 1Z6E9