Advisedly is built so that every control, every piece of evidence, and every AI-generated artifact carries a verifiable trail. Auditors can replay it. Customers can defend it. Regulators can inspect it.
Onboarding ships with eight of the most-requested frameworks pre-mapped. The full Advisedly catalog covers 262 frameworks end to end — control descriptions, evidence requirements, crosswalks, and audit-ready report templates.
- **NIST SP 800-171 / CMMC L2** — DoD contractor compliance: 110 NIST 800-171 controls + the 24-control CMMC delta.
- **SOC 2** — Trust services criteria for SaaS and customer-facing platforms.
- **HIPAA** — Healthcare PHI safeguards, breach timers, OCR audit prep.
- **ISO/IEC 27001** — International ISMS standard with Annex A control set.
- **FedRAMP Moderate** — 325-control baseline for federal cloud providers.
- **PCI DSS** — Cardholder data environment safeguards for finance and retail.
- **NIST SP 800-53** — Federal control catalog underpinning RMF authorization.
- **NIST CSF** — Cybersecurity Framework functions: Govern, Identify, Protect, Detect, Respond, Recover.
Plus 254 more — CMMC L1 / L3, FedRAMP Low / High, DFARS, ITAR, GLBA, NY DFS, NIS2, DORA, GDPR, SOX ITGC, CIS Controls v8.1, COBIT 2019, OWASP Top 10, NIST AI RMF, NIST AI 600-1, ISO/IEC 42001, EU AI Act, and the full Secure Controls Framework crosswalk catalog.
AI features in Advisedly are governed end-to-end. Every prompt, completion, and downstream artifact is logged with full provenance metadata so customers can satisfy NIST AI RMF, ISO/IEC 42001, and EU AI Act Article 13 transparency obligations out of the box.
Every deployment publishes a model card describing model identity, training-data restrictions, data residency, and opt-out paths. The customer-facing card lives at /dashboard/ai-toolkit/model-card.
Every AI endpoint uses a versioned prompt template. Prompt IDs are embedded in the provenance record so an auditor can replay the exact prompt that generated any artifact.
Token counts, generation latency, cost, and human-review status are persisted to the ai_provenance table for every call. Reports roll up to the AI Governance dashboard.
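As a sketch of what one of those rows could carry, the following TypeScript mirrors the fields named in this document (provider, model, prompt template ID, prompt/output hashes, temperature, token counts, timestamp). The interface shape and the `buildProvenanceRow` helper are illustrative assumptions, not the platform's actual code; persistence to Postgres is left out.

```typescript
import { createHash } from "node:crypto";

// Illustrative shape of an ai_provenance row. Field names follow the ones
// described in the text; the TypeScript interface itself is a sketch.
interface AiProvenanceRow {
  provider: string;
  model: string;
  promptTemplateId: string;
  promptHash: string;   // sha256 of the rendered prompt
  outputHash: string;   // sha256 of the completion
  temperature: number;
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;
  createdAt: string;
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Hypothetical helper: derives the content hashes and stamps the timestamp.
function buildProvenanceRow(
  provider: string, model: string, promptTemplateId: string,
  prompt: string, output: string, temperature: number,
  promptTokens: number, completionTokens: number, latencyMs: number,
): AiProvenanceRow {
  return {
    provider, model, promptTemplateId,
    promptHash: sha256(prompt),
    outputHash: sha256(output),
    temperature, promptTokens, completionTokens, latencyMs,
    createdAt: new Date().toISOString(),
  };
}
```

Hashing the prompt and output rather than storing raw text keeps the row small while still letting a later replay be checked byte-for-byte.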
Before turning on an AI feature, customers complete an EU AI Act risk assessment captured in the platform — high-risk use cases automatically require human review.
Advisedly is vendor-agnostic. Customers choose their LLM provider at install time via the first-launch BYO-AI wizard. The customer's data is never used to train any model — every supported provider operates under enterprise terms that prohibit training use. Naming the supported providers below is a regulatory transparency requirement (not vendor branding):
The customer's choice of provider is recorded per organization and enforced at the API layer — Advisedly cannot route prompts to a provider the customer has not selected.
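A minimal sketch of what that API-layer enforcement could look like. The `routePrompt` helper and the Map-backed store are hypothetical; only the per-organization selection and the hard-fail behavior come from the text.

```typescript
// orgId -> the single provider the organization selected at install time.
// In the real platform this selection is recorded per organization; the
// in-memory Map here is a stand-in for illustration.
const selectedProvider = new Map<string, string>();

function routePrompt(orgId: string, provider: string): void {
  const allowed = selectedProvider.get(orgId);
  if (allowed === undefined) {
    throw new Error("no AI provider selected for this organization");
  }
  if (provider !== allowed) {
    // Hard failure: prompts can never reach an unselected provider.
    throw new Error(`provider ${provider} not selected by org ${orgId}`);
  }
  // ...dispatch to the selected provider's client here...
}
```

Because the check throws before any client is constructed, a misconfigured caller fails closed rather than leaking a prompt to the wrong vendor.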
Below is the complete inventory of customer-facing AI features shipping today. For each one, we list what data is sent to the customer-selected LLM and where the feature is reached in the dashboard. The full Model Card for any feature — including training-data restrictions, retention policy, and human-review gates — is available at /dashboard/ai-toolkit/model-card inside an authenticated session.
| Feature | Description | Data Sent to Model | Surface |
|---|---|---|---|
| Incident Response Playbook & Post-Mortem | Generates an IR playbook from incident type + severity + asset class, plus a post-mortem narrative once the incident closes. | Incident type, severity, affected asset metadata, timeline events. No raw payloads or PII. | /dashboard/incidents · /dashboard/ir-advanced |
| Breach Analysis (HIPAA) | OCR-ready breach analysis covering 4-factor risk assessment, notification timer math, and remediation guidance. | Breach metadata: discovery date, affected record count, PHI categories. No PHI itself. | /dashboard/hipaa |
| Assessment Action-Item Extractor | Reads framework gap-analysis output and emits prioritized remediation tasks with control mappings. | Control gaps, deficiency descriptions, target framework. | /dashboard/assessments |
| Automation Rule Generator | Drafts pg-boss / scheduler automation rules from a plain-language description + control hooks. | Rule prompt, current org automation inventory (names only). | /dashboard/automation |
| Compliance-as-Code Generator | Emits Terraform / Bicep / OPA Rego rule sets keyed to a framework + control + cloud provider. | Framework ID, control ID, cloud provider, rule taxonomy. | /dashboard/compliance-as-code |
| Meeting Intelligence | Summarizes audit / compliance meeting transcripts into action items, decisions, and follow-ups. | Meeting transcript text the customer uploads. | /dashboard/meeting-intelligence |
| Policy Generation | Drafts internal security policies (28 templates) tuned to org metadata, framework scope, and risk posture. | Policy template ID, org metadata (name, scope, framework set). | /dashboard/policies |
| SSP Narrative Generation | Generates System Security Plan control narratives for FedRAMP / CMMC / RMF authorization packages. | Information system metadata, control ID, prior narrative (if any), framework target. | /dashboard/cmmc · /dashboard/fedramp |
| POA&M Estimation | Suggests milestones, completion dates, and resource estimates for a Plan of Action & Milestones entry. | Finding description, severity, affected control, asset class. | /dashboard/poams |
| Plugin Quality Scoring | Reviews customer-submitted plugin manifests for safety, evidence-mapping correctness, and security posture. | Plugin manifest, declared capabilities, declared evidence outputs. | /dashboard/plugins |
| Evidence Sufficiency Scoring | Evaluates whether a piece of uploaded evidence satisfies a control requirement and emits a 0-100 score. | Evidence metadata + extracted text excerpt, control requirement text. | /dashboard/evidence |
| Audit Question Generation | Drafts auditor-style interview questions for a given control, framework, or audit scope. | Control ID, framework, audit scope summary. | /dashboard/audit-management |
| Threat Model & Architecture Review | Generates STRIDE / PASTA threat models and architecture review notes from a system description. | System architecture description provided by the customer. | /dashboard/ai-security-advisor · /dashboard/engineering |
| TRACE Score AI Extract | Extracts CPE / vendor / product / version triples from CVE descriptions to drive reachability scoring. | CVE ID + CVE description text (NVD public data). | Background job — surfaces in /dashboard/vulnerabilities |
| Document Intake (classification, extraction, tagging) | Classifies uploaded documents, extracts structured fields, generates summaries, and proposes tags. | Customer-uploaded document text. | /dashboard/documents |
| Knowledge Base Q&A | Answers customer compliance questions against the platform knowledge base + framework crosswalks. | Customer question, retrieved KB chunks (no other org data). | /dashboard/knowledge-base |
Inventory current as of 2026-05-09. New AI features are added to this list before they ship to production. The full per-feature Model Card including the input/output schema, refusal taxonomy, and human-review thresholds is published inside the customer dashboard.
Every AI-generated artifact in the platform renders with an inline provenance pill so the user knows, at a glance, what produced the content. The pill is the same component on every surface — list views, detail pages, exports, PDF reports — so the disclosure is consistent across the product.
Each pill is backed by an ai_provenance row; the provenance ID surfaces in the pill modal alongside the audit-log ID, and the human-review acceptance fields come from migration 0302-ai-provenance-acceptance-fields.sql. The component is rendered by AiOutputBadge on every surface that emits AI content. Clicking the info icon opens a provenance modal with the full detail — provider, model, prompt ID, audit-log ID, and a deep link to the AI Governance provenance browser inside the dashboard.
The pill is a regulatory transparency surface (NIST AI RMF, ISO 42001, EU AI Act Article 13). Naming the provider on it is a compliance requirement, not vendor branding.
Customers can disable AI platform-wide at /dashboard/settings/ai by toggling the org-level AI features enabled flag. When disabled, every AI route returns a 403 with a generic opt-out message and no provider call is ever made.

Every AI generation writes a row to the ai_provenance table containing the provider of record, model identifier, prompt template ID, prompt hash, output hash, temperature, token counts, and generation timestamp. The acceptance gate (migration 0302-ai-provenance-acceptance-fields.sql) extends each row with a human-in-the-loop decision — accept, reject, or modify — captured atomically with the reviewer ID, decision timestamp, and optional reviewer notes. Customers browse the trail at /dashboard/ai-governance and can filter by feature, reviewer, or decision.
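The accept / reject / modify capture described above can be sketched as a write-once update keyed by the provenance row. The in-memory store stands in for the ai_provenance table, and `recordDecision` is a hypothetical helper; the field names mirror the acceptance columns named in the text.

```typescript
type Decision = "accept" | "reject" | "modify";

interface AcceptanceFields {
  decision: Decision;
  reviewerId: string;
  decidedAt: string;
  reviewerNotes?: string;
}

// provenanceId -> acceptance fields (null until a reviewer decides).
// A Map stands in for the ai_provenance table in this sketch.
const provenanceRows = new Map<string, AcceptanceFields | null>();

function recordDecision(
  provenanceId: string, decision: Decision,
  reviewerId: string, reviewerNotes?: string,
): AcceptanceFields {
  if (!provenanceRows.has(provenanceId)) {
    throw new Error(`unknown provenance row ${provenanceId}`);
  }
  if (provenanceRows.get(provenanceId) !== null) {
    throw new Error("decision already captured"); // write-once semantics
  }
  const fields: AcceptanceFields = {
    decision, reviewerId, decidedAt: new Date().toISOString(), reviewerNotes,
  };
  provenanceRows.set(provenanceId, fields); // a single write stands in for atomicity
  return fields;
}
```

The write-once check is what makes the decision trustworthy as audit evidence: a captured decision cannot be silently replaced later.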
Coverage today (2026-05-09): the acceptance gate is wired into 17+ customer-facing AI surfaces, including the AI Use-Case Assessment, IR playbook generation, breach analyzer, policy generation, SSP narrative generation, automation rule generation, assessment action-item extractor, compliance-as-code generator, meeting intelligence, POA&M estimation, threat model / architecture review, risk-management drafts, AI control drafts, ConMon playbooks, contextual NL queries, the AI Security Advisor, and the AI Toolkit surface.

Every generation on these surfaces returns its provenanceId in the API envelope and threads it through to the reviewer UI, so the accept / reject / modify decision is captured atomically before the artifact is treated as final. Bulk decisions are also supported at /dashboard/ai-governance for reviewers clearing a feature filter in one pass.

Remaining AI endpoints (a smaller tail of niche surfaces) are still on the wave-3 plumbing list: the underlying provenance row is already written for every feature, but the row ID is not yet threaded through every envelope, so the inline gate cannot block release on those surfaces today. The backstop is the pending-review queue at /dashboard/ai-governance/pending. We claim NIST AI RMF MANAGE-1.3 coverage for the wired surfaces only; full-platform coverage lands once the tail is plumbed.
Advisedly is designed for customers whose data cannot leave their boundary — federal agencies, defense primes, and regulated healthcare and financial institutions.
Advisedly carries its own audit story so that auditors do not have to take our word for anything.
Every AI-generated artifact records: provider, model, prompt_id, and audit_log_id. An auditor can replay any output deterministically — temperature is pinned to 0 and the prompt template is versioned.
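The replay check an auditor could run looks roughly like this: re-render the versioned prompt template, regenerate at temperature 0, and compare hashes against the stored record. `renderTemplate`, the `{var}` placeholder syntax, and the record shape are assumptions for illustration, not the platform's actual template engine.

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Hypothetical template renderer: substitutes {name}-style placeholders.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_, k) => vars[k] ?? "");
}

// Replay succeeds only if both the re-rendered prompt and the regenerated
// output hash to the values stored in the provenance record.
function verifyReplay(
  record: { promptHash: string; outputHash: string },
  template: string,
  vars: Record<string, string>,
  regeneratedOutput: string,
): boolean {
  return (
    sha256(renderTemplate(template, vars)) === record.promptHash &&
    sha256(regeneratedOutput) === record.outputHash
  );
}
```

Pinning temperature to 0 is what makes the second hash comparison meaningful: with sampling disabled, the same prompt against the same model version should reproduce the same completion.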
On-prem deployments use ed25519-signed license tokens. The platform shows escalation banners at T−30, T−7, and T−0, and the license itself is verifiable offline. SaaS deployments use Stripe-backed subscriptions instead.
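An offline verification sketch, assuming Node's one-shot Ed25519 API: the license body is checked against a public key with no network call, and the banner thresholds mirror the T−30 / T−7 / T−0 schedule above. The helper names are hypothetical, and the keypair generated here stands in for the real pinned signing key.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

const DAY_MS = 24 * 60 * 60 * 1000;

// Which escalation banner to show, given the license expiry timestamp.
function bannerLevel(expiresAt: number, now: number): "none" | "T-30" | "T-7" | "T-0" {
  const daysLeft = (expiresAt - now) / DAY_MS;
  if (daysLeft <= 0) return "T-0";
  if (daysLeft <= 7) return "T-7";
  if (daysLeft <= 30) return "T-30";
  return "none";
}

// Demo keypair for the sketch; a production install verifies against a
// pinned, pre-distributed public key so the check works fully offline.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function signLicense(payload: object): { body: string; sig: Buffer } {
  const body = JSON.stringify(payload);
  return { body, sig: sign(null, Buffer.from(body), privateKey) };
}

function verifyLicense(body: string, sig: Buffer): boolean {
  return verify(null, Buffer.from(body), publicKey, sig);
}
```

Passing `null` as the algorithm is how Node selects Ed25519's built-in digest; any single-byte change to the body invalidates the signature.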
All privileged actions are recorded in a structured audit log chained by hash. Exports are signed and tamper-evident. eMASS-style POA&M and ATO workflows roll forward into the same log.
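The hash-chaining idea can be sketched in a few lines: each entry commits to the previous entry's hash, so editing any historical record breaks every later link. The entry shape and helper names are illustrative, not the platform's schema.

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  action: string;
  actor: string;
  prevHash: string; // hash of the previous entry (zeros for the first)
  hash: string;     // hash over prevHash + this entry's fields
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function appendEntry(log: AuditEntry[], action: string, actor: string): AuditEntry {
  const prevHash = log.length ? log[log.length - 1].hash : "0".repeat(64);
  const entry: AuditEntry = {
    action, actor, prevHash,
    hash: sha256(`${prevHash}|${action}|${actor}`),
  };
  log.push(entry);
  return entry;
}

// Walk the chain from genesis; any edited entry (or broken link) fails.
function verifyChain(log: AuditEntry[]): boolean {
  let prev = "0".repeat(64);
  for (const e of log) {
    if (e.prevHash !== prev) return false;
    if (e.hash !== sha256(`${e.prevHash}|${e.action}|${e.actor}`)) return false;
    prev = e.hash;
  }
  return true;
}
```

This is what makes a signed export tamper-evident: a verifier needs only the entries themselves, not trust in the database they came from.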
High-risk AI outputs (policies, SSP narratives, incident communications) require named human review before they leave the platform. Reviewer + timestamp are persisted alongside the AI provenance record.
Every production container image is signed with a SLSA v1.0 DSSE provenance attestation and paired with an OpenVEX v0.2 vulnerability-disposition statement (both ed25519). Public verification endpoints publish the attestations and signing public key so customers and auditors can verify the build chain offline. Aligned to FedRAMP supply-chain expectations.
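For readers unfamiliar with DSSE, the verification step reduces to checking an Ed25519 signature over the envelope's pre-authentication encoding (PAE), which binds the payload type to the payload bytes. The PAE format below follows the DSSE v1 specification; the demo keypair and helper names stand in for the published signing key and real tooling.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// DSSE v1 pre-authentication encoding:
//   "DSSEv1" SP len(type) SP type SP len(body) SP body
function pae(payloadType: string, payload: Buffer): Buffer {
  return Buffer.concat([
    Buffer.from(`DSSEv1 ${Buffer.byteLength(payloadType)} ${payloadType} ${payload.length} `),
    payload,
  ]);
}

// Demo keypair; real verification uses the published signing public key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function signEnvelope(payloadType: string, payload: Buffer) {
  return { payloadType, payload, sig: sign(null, pae(payloadType, payload), privateKey) };
}

function verifyEnvelope(env: { payloadType: string; payload: Buffer; sig: Buffer }): boolean {
  return verify(null, pae(env.payloadType, env.payload), publicKey, env.sig);
}
```

Because the signature covers the PAE rather than the raw payload, an attacker cannot re-label a benign payload under a different payload type without invalidating the signature.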
Advisedly ships in three deployment profiles. Pricing, feature set, and AI architecture differ across them — pick the one that matches your boundary.
- **SaaS (cloud)** — Multi-tenant Azure Container Apps, FedRAMP-aligned, ACH-default billing. Used by commercial and DIB customers without classified data.
- **Self-hosted (on-prem)** — Single-tenant Kubernetes Helm or Docker Compose deployment in your data center. Ed25519-signed license enforcement with T-30 / T-7 / T-0 escalation banners.
- **Air-gapped** — Fully disconnected install with on-prem vLLM for AI features. Government data and AI inference both stay inside the customer enclave. eMASS bidirectional sync available on classified networks.
Detailed profile docs (environment variables, hardening guide, air-gap install steps) are provided to customers under MSA. Contact the trust team below for access.
Two dedicated mailboxes handle trust and data-subject inquiries. Both are monitored by Advisedly personnel during US business hours.
- Vulnerability disclosures, security questionnaires, third-party audit requests, AI governance inquiries.
- Data Subject Access Requests (GDPR / CCPA), DPAs, contract questions, subpoenas, data deletion requests.
Advisedly Compliance LLC · Prospect, TN · SAM.gov UEI XSZ6TYQM2F54 · CAGE 1Z6E9