The Reality of "Shadow AI" in 2026

Most organizations believe they have three or four "official" AI projects. In reality, recent audits across global enterprises suggest that up to 40% of departments are using unauthorized AI tools for tasks ranging from code generation to HR screening.

In a world of fragmented data sovereignty, an undocumented AI tool used by a team in Singapore could inadvertently trigger a compliance failure in the UK. Governance cannot exist without visibility.

Step 1: Building Your AI-BOM (AI Bill of Materials)

Just as software supply chains require an SBOM (Software Bill of Materials), your AI ecosystem requires an AI-BOM. This is a machine-readable record of every component in your AI stack.

To build a defensible inventory, your AI-BOM must track the following (a minimal record sketch in code follows the list):

  • Model Provenance: Is it an in-house model, a fine-tuned open-source model, or a third-party SaaS API?
  • Data Lineage: Where did the training data come from? Does it include personal data from any of the 120+ jurisdictions you operate in?
  • Deployment Context: Who is using it, and for what? (e.g., Is it used for "High-Risk" decisions like hiring or credit scoring?)
  • The "Role" Factor: Under frameworks like the EU AI Act, are you the Provider (the one who created it) or the Deployer (the one using it)? Your legal liability changes significantly based on this distinction.

Step 2: Risk Classification

Once your inventory is built, you must categorize each tool into a risk tier. In 2026, we recommend a four-tier approach that mirrors the EU AI Act's risk pyramid and can be operationalized within an ISO/IEC 42001 management system (a triage sketch in code follows the list):

  • Prohibited: Systems that contravene local ethics or laws (e.g., certain biometric surveillance).
  • High-Risk: Systems affecting human rights, safety, or legal status.
  • Limited Risk: Systems such as chatbots or synthetic-content generators that require transparency labels.
  • Minimal Risk: Back-end optimization tools that do not touch personal data.
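As a rough illustration of that triage, the sketch below assigns hypothetical inventory entries to the four tiers. The rule sets and use-case strings are invented placeholders; a real classification is a legal determination, not a lookup table.

```python
# Illustrative triage of inventoried systems into the four risk tiers.
# The rule sets below are invented placeholders; real classification
# requires legal review, not a lookup table.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"


PROHIBITED_USES = {"real-time biometric surveillance", "social scoring"}
HIGH_RISK_USES = {"hiring", "credit scoring", "HR screening of job applicants"}
TRANSPARENCY_USES = {"customer chatbot", "synthetic content generation"}


def classify(use_case: str, touches_personal_data: bool) -> RiskTier:
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    if use_case in TRANSPARENCY_USES or touches_personal_data:
        return RiskTier.LIMITED_RISK
    # Only back-end tooling with no personal data lands in the lowest tier;
    # anything ambiguous should be escalated, not defaulted.
    return RiskTier.MINIMAL_RISK


print(classify("HR screening of job applicants", touches_personal_data=True))
# -> RiskTier.HIGH_RISK
print(classify("batch log compression", touches_personal_data=False))
# -> RiskTier.MINIMAL_RISK
```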

Formiti Pro-Tip: Don't wait for a regulatory audit to find your gaps. Use an automated discovery tool to scan your network for "AI signatures" and compare them against your official registry.
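At its simplest, that comparison is a set difference between what discovery finds and what the registry contains. Everything in this snippet is hypothetical: the tool names are invented, and in practice the "discovered" set would come from network or CASB logs rather than hard-coded values.

```python
# Reconcile a discovery scan against the official AI-BOM registry.
# All names here are invented; in practice the "discovered" set would
# come from network or CASB logs, and "registered" from your AI-BOM.
registered = {"resume-screener-saas", "internal-summarizer"}
discovered = {"resume-screener-saas", "internal-summarizer",
              "marketing-image-gen", "unknown-llm-proxy"}

shadow_ai = discovered - registered   # in use, but missing from the AI-BOM
orphaned = registered - discovered    # registered, but no longer observed

print("Shadow AI to triage:", sorted(shadow_ai))
print("Registry entries to review:", sorted(orphaned))
```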

AI-BOM Q&A 

Q: What is the first step in introducing an AI compliance framework?

A: The first step is creating a comprehensive Global AI Inventory. Organizations must identify all AI systems, including "Shadow AI," and document them in an AI Bill of Materials (AI-BOM) to ensure visibility across the 120+ jurisdictions in which they operate.

Q: Why is an AI-BOM necessary for 2026 data privacy compliance?

A: An AI-BOM provides the technical and legal lineage of an AI system. It allows compliance teams to track data sources, model versions, and third-party dependencies, which is essential for meeting the transparency and auditability requirements of modern global privacy laws.

Q: How do you classify AI risk in a multinational organization?

A: AI risk should be classified by assessing the system's impact on individuals, applying the EU AI Act's risk tiers and managing the process under international standards such as ISO/IEC 42001. Systems are typically categorized as Prohibited, High-Risk, Limited Risk, or Minimal Risk depending on their use case and the data they process.

Next Step: In our next article, we will move from Discovery to Design. We'll look at The Framework – Mapping 120+ Jurisdictions to One Global Standard.

Get your Institutional AI Governance roadmap here