Governance and Compliance Strategies for Hybrid AI-Conventional Systems
Abstract
Most organizations don’t operate in a clean, AI-native environment. They operate at the intersection of decades-old infrastructure and rapidly evolving artificial intelligence, a condition broadly referred to as the hybrid AI-conventional, or hybrid AI-IT, environment.
This whitepaper examines how organizations can govern and remain compliant within these hybrid environments, drawing on principles of Responsible AI, established IT governance frameworks, and the emerging legal landscape reshaping institutional accountability.
Defining the Hybrid AI-Conventional Environment
A hybrid AI-conventional system refers to any operational environment in which modern artificial intelligence capabilities, such as machine learning, natural language processing (NLP), intelligent automation, and predictive analytics, are integrated with or layered onto existing conventional IT infrastructure. That infrastructure typically includes relational databases, ERP systems, legacy mainframes, on-premises servers, and purpose-built applications that may be years or even decades old. The hybrid nature of the environment is not a temporary condition on the way to a fully AI-native future. For the foreseeable future, it is the dominant operating reality for most enterprises, government agencies, and regulated institutions.
Conventional systems were built for deterministic processing; AI systems, by contrast, are probabilistic. They learn from patterns in data, generate outputs that may not be directly traceable to a single input, and evolve over time in ways that legacy architectures were never designed to accommodate. When these two paradigms are joined, for example through APIs, middleware, or data pipelines, questions of data integrity, model accountability, audit trails, and regulatory responsibility become substantially more complex than they were in a purely conventional environment.
Understanding this fundamental tension is the starting point for any serious governance strategy. Organizations that ignore it risk deploying AI systems whose decisions cannot be explained, audited, or defended legally or operationally.
The Governance Imperative
Governance in hybrid AI-IT environments encompasses the policies, structures, roles, and processes that direct how AI systems are developed, deployed, monitored, and retired. Governance is not only an IT concern. Effective governance requires engagement from legal, compliance, operations, finance, and executive leadership. When AI is embedded in legacy infrastructure, governance needs to account for both the inherited constraints of that infrastructure and the novel risks introduced by the AI layer.
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), published in 2023, provides a widely adopted structure for organizations seeking to govern AI responsibly.[1] The framework organizes governance activity into four core functions: Govern, Map, Measure, and Manage, each of which has specific implications in a hybrid environment. The Govern function, for instance, needs to explicitly address how responsibility is divided between AI decision-making and the conventional systems that provide inputs to, or act on outputs from, those models. The absence of clear ownership at this boundary is one of the most common governance failures in hybrid deployments.
ISO/IEC 42001, also published in 2023, establishes an AI management system standard that complements the NIST framework and provides internationally recognized criteria for AI governance.[2] Organizations operating in regulated industries, such as financial services, healthcare, and critical infrastructure, should treat both frameworks not as aspirational guidance but as practical baselines.
Governance frameworks also need to address what happens when legacy systems are updated, decommissioned, or replaced. AI models trained on data from a legacy system can produce unreliable outputs once that system changes. A governance program that lacks version control, change management protocols, and revalidation procedures for AI models operating alongside evolving infrastructure is incomplete and potentially dangerous.
Compliance Obligations in a Hybrid World
Compliance in hybrid AI-IT environments is a layered challenge. Organizations need to simultaneously satisfy existing regulatory requirements, such as sector-specific rules that predate AI, and prepare for new obligations specifically designed to address AI risks. These two layers interact in ways that require careful legal and technical coordination.
The Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), the Fair Credit Reporting Act (FCRA), and sector-specific regulations enforced by the Federal Trade Commission (FTC), the Office of the Comptroller of the Currency (OCC), and other agencies all have provisions that touch AI when it processes regulated data or influences decisions affecting individuals.[3] In many cases, these regulations were written without AI in mind, but their core provisions, such as data accuracy, decision accountability, and consumer rights, apply when an AI system is involved in a covered function.
The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, signed in October 2023, marked a significant shift in the federal approach to AI governance, directing agencies to develop AI standards and requiring safety testing disclosures for AI systems.[4] Federal agencies have since been developing specific guidance across sectors. The FTC has made clear that algorithmic accountability falls within its consumer protection mandate, and the Consumer Financial Protection Bureau (CFPB) has stated that existing fair lending laws apply to AI-driven credit decisions.
State-level activity has also accelerated: several states have enacted comprehensive AI legislation that imposes transparency, impact assessment, and human oversight requirements on automated decision systems. Organizations operating across multiple states need to map their compliance obligations carefully, as the patchwork of state requirements creates significant complexity, especially for legacy systems that were not designed with state-by-state configurability from the start.
For compliance purposes, organizations should treat AI systems operating within legacy environments as subject to the full scope of existing regulatory obligations, while simultaneously building capacity to respond to new and emerging requirements. Waiting for regulatory certainty before investing in compliance infrastructure is a high-risk strategy.
Responsible AI Principles Applied to Legacy Integration
Responsible AI (RAI) rests on a set of principles established in both academic literature and institutional frameworks: transparency, accountability, safety, reliability, and human oversight. In hybrid environments, applying these principles requires translating abstract values into operational practices that accommodate the technical constraints of legacy infrastructure.
Transparency means that AI decisions must be explainable to the humans affected by them and to the regulators responsible for oversight. Legacy systems frequently present a transparency barrier: the data they produce often lacks reliable provenance documentation, contains inconsistencies accumulated over years of use, or is fed into AI models without sufficient documentation of transformation steps. Organizations integrating AI into legacy environments should establish data lineage practices that trace the journey of every consequential input from its legacy source through the AI model to the output decision.
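To make this concrete, the following is a minimal sketch, in Python, of a data lineage record for a hybrid pipeline. The structure and field names (LineageRecord, TransformationStep, and so on) are illustrative assumptions, not the schema of any particular lineage product.

```python
# Minimal sketch of a data lineage record for a hybrid pipeline.
# All names and fields here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TransformationStep:
    """One documented transformation applied between legacy source and model."""
    description: str   # e.g., "normalized currency codes to ISO 4217"
    performed_by: str  # job or system that applied the transformation
    timestamp: datetime


@dataclass
class LineageRecord:
    """Traces one consequential input from legacy source to output decision."""
    source_system: str   # legacy system of record, e.g., "ERP-MAINFRAME-01"
    source_field: str    # field name as it appears in the legacy schema
    extracted_at: datetime
    model_id: str        # model that consumed the value
    model_version: str
    decision_id: str     # the output decision this input contributed to
    transformations: list[TransformationStep] = field(default_factory=list)

    def add_step(self, description: str, performed_by: str) -> None:
        """Record each transformation so the full journey remains auditable."""
        self.transformations.append(
            TransformationStep(description, performed_by, datetime.now(timezone.utc))
        )
```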
Accountability requires that a human or institutional actor can be identified as responsible for any AI-driven decision, particularly when that decision affects an individual's rights, finances, health, or employment.[5] In hybrid environments, diffuse accountability is a real and persistent risk. When an AI system makes a recommendation based on outputs from a legacy ERP system, and a human acts on that recommendation, who is accountable if the recommendation proves harmful? Governance frameworks need to assign clear, documented accountability at each stage of the decision chain; accountability must not be left to inference.
Safety and reliability require that AI systems operating within legacy environments be tested under conditions that reflect the actual variability of legacy data inputs. Models validated on clean, curated data frequently degrade when exposed to the noisy, inconsistent, or incomplete data characteristic of legacy systems. Organizations need to implement continuous monitoring programs that track model performance against baseline benchmarks, with defined thresholds that trigger human review when performance degrades.
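As one illustration, here is a minimal Python sketch of threshold-based performance monitoring. It assumes the organization has a single validated baseline metric (for example, accuracy on a curated holdout set); the threshold values and escalation labels are illustrative assumptions.

```python
# Minimal sketch of threshold-based model monitoring.
# Baseline metric and threshold values are illustrative assumptions.
def check_model_performance(current_metric: float,
                            baseline_metric: float,
                            warn_drop: float = 0.05,
                            review_drop: float = 0.10) -> str:
    """Compare live performance against the validated baseline and escalate."""
    drop = baseline_metric - current_metric
    if drop >= review_drop:
        # Degradation past the review threshold: suspend automated action
        # and route the model to human review and revalidation.
        return "TRIGGER_HUMAN_REVIEW"
    if drop >= warn_drop:
        # Early warning: surface to the monitoring cycle, no suspension yet.
        return "WARN"
    return "OK"


# A model validated at 0.92 accuracy now measures 0.81 on live,
# legacy-sourced data, crossing the review threshold.
print(check_model_performance(current_metric=0.81, baseline_metric=0.92))
# -> TRIGGER_HUMAN_REVIEW
```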
Human oversight is the most operationally important RAI principle in hybrid environments. Legacy infrastructure generally embeds automation that predates modern AI, such as rules engines, batch processing workflows, and automated transaction processing, and that automation interacts with AI systems in ways that can reduce human oversight. A disciplined review of automation touchpoints is essential to ensure that critical decisions receive meaningful human review, especially decisions with legal or regulatory significance.
Best Practices for Governance and Compliance
Several best practices can be distilled from organizations that are successfully managing governance and compliance in hybrid AI-IT environments. These practices are not theoretical; they reflect what effective institutions are doing today.
Establish an AI Governance Council
Organizations should convene a standing governance body with cross-functional representation: technology, legal, compliance, operations, and business leadership. This council should own the AI governance framework, review high-risk deployments, monitor regulatory developments, and ensure that AI systems operating in legacy environments are subject to formal risk assessment before deployment and on a regular cycle thereafter.
Conduct Integration Risk Assessments
Before deploying any AI system that interacts with legacy infrastructure, organizations need to conduct a formal integration risk assessment. The assessment should evaluate data quality and provenance, identify accountability gaps at system boundaries, assess the potential for cascading failures, and document the human oversight mechanisms in place. Risk assessments need to be refreshed whenever the legacy environment changes in ways that could affect the AI model's inputs or outputs.
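A structured record keeps these assessments consistent and auditable. The Python sketch below mirrors the dimensions named above; the fields, the five-point data quality scale, and the blocker rules are illustrative assumptions, not a standardized instrument.

```python
# Minimal sketch of an integration risk assessment record.
# Fields, scoring scale, and blocker rules are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class IntegrationRiskAssessment:
    system_name: str
    legacy_dependencies: list[str]  # legacy systems feeding or consuming the model
    data_quality_score: int         # 1 (unknown provenance) .. 5 (fully documented)
    accountability_gaps: list[str] = field(default_factory=list)
    cascading_failure_paths: list[str] = field(default_factory=list)
    human_oversight_mechanisms: list[str] = field(default_factory=list)

    def deployment_blockers(self) -> list[str]:
        """Return unresolved issues that should block deployment sign-off."""
        blockers = []
        if self.data_quality_score < 3:
            blockers.append("data quality and provenance below acceptable threshold")
        if self.accountability_gaps:
            blockers.append("unassigned accountability at system boundaries")
        if not self.human_oversight_mechanisms:
            blockers.append("no documented human oversight mechanism")
        return blockers
```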
Implement Model Documentation and Version Control
Every AI model operating in production should have a corresponding model card or similar documentation that describes its purpose, training data characteristics, validated performance parameters, known limitations, and any legacy system interactions. Version control for models needs to be integrated with change management processes for the legacy systems they depend on. When legacy systems change, the models that depend on them should be revalidated before returning to production.
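One way to enforce that link is to pin, in the model's documentation, the versions of the legacy interfaces it was validated against. The Python sketch below shows the idea; the model card fields, identifiers, and version labels are illustrative assumptions.

```python
# Minimal sketch tying a model card to pinned legacy dependency versions.
# All identifiers and version labels are illustrative assumptions.
MODEL_CARD = {
    "model_id": "credit-triage",
    "model_version": "2.3.1",
    "purpose": "prioritize loan applications for manual underwriting review",
    "training_data": "2019-2023 application records extracted from a legacy ERP",
    "validated_performance": {"auc": 0.87, "validated_on": "2024-06-01"},
    "known_limitations": ["degrades on records missing income verification"],
    # Versions of the legacy interfaces this model was validated against.
    "legacy_dependencies": {"ERP-MAINFRAME-01": "schema-v14"},
}


def requires_revalidation(model_card: dict, current_versions: dict) -> bool:
    """True if any pinned legacy dependency has changed since validation."""
    return any(
        current_versions.get(system) != pinned
        for system, pinned in model_card["legacy_dependencies"].items()
    )


# The ERP schema migrated to v15, so the model must be revalidated
# before returning to production.
print(requires_revalidation(MODEL_CARD, {"ERP-MAINFRAME-01": "schema-v15"}))  # True
```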
Audit Trails and Explainability Infrastructure
Compliance in regulated industries generally requires the ability to explain a decision after it has been made. AI systems operating within legacy environments need to produce structured logs of decisions, including the inputs received from legacy systems, the model version that processed them, and the output generated. These logs need to be retained according to the same regulations and schedules that apply to the records of the legacy systems. Explainability infrastructure, that is, tools that generate human-readable rationales for model outputs, needs to be treated as a compliance requirement, not an optional enhancement.
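The following is a minimal Python sketch of such a structured decision log entry, using the standard library's logging and json modules; the record layout and field names are illustrative assumptions.

```python
# Minimal sketch of a structured decision log for AI systems in legacy
# environments. The record layout is an illustrative assumption.
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("ai_decision_audit")


def log_decision(decision_id: str, legacy_inputs: dict,
                 model_id: str, model_version: str, output: dict) -> None:
    """Emit one retention-ready record per consequential AI decision."""
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "legacy_inputs": legacy_inputs,  # values received from legacy systems
        "model_id": model_id,
        "model_version": model_version,  # ties the decision to a validated model
        "output": output,
    }
    # Route to storage governed by the same retention schedule that
    # applies to the corresponding legacy records.
    audit_logger.info(json.dumps(record))
```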
Vendor and Third-Party Risk Management
Most organizations don’t build proprietary AI systems internally; they integrate AI through third-party vendors or cloud service providers. In these arrangements, however, the organization remains legally and regulatorily responsible for the outcomes of AI systems operating within its environment, regardless of who built or hosts the model.[6] Contracts with AI vendors need to explicitly articulate data governance requirements, audit rights, explainability obligations, and procedures for incident notification and remediation. Vendor risk assessments should be conducted before integration and on a regular cycle after the AI functionality is deployed.
Practical Guidance for Leaders
Here are a few practical guidelines for executives and governance professionals dealing with the complexities of hybrid AI-IT environments:
First, resist the temptation to treat legacy modernization as a prerequisite for AI governance. Many organizations delay building AI governance infrastructure because they intend to replace legacy systems “soon.” That delay is itself a governance failure. AI systems operating today in legacy environments need governance today. The timeline for infrastructure modernization is almost always longer than anticipated, and the legal and reputational risks accumulate in the meantime.
Second, build compliance incrementally. Organizations that try to implement comprehensive AI governance in a single stage generally fail. A more effective approach is to start with the highest-risk AI applications, those whose recommendations and decisions carry legal consequences, financial impact, or potential for harm, and then build governance outward from that core. Compliance infrastructure designed for high-risk applications can be adapted for lower-risk ones; infrastructure built for lower-risk applications generally can’t be stretched to cover higher-risk ones.
Third, invest in human capital as deliberately as in the technology. Governance frameworks are only as effective as the people responsible for implementing them. Organizations need professionals who understand both the technical properties of AI systems and the regulatory environments in which they operate. This combination of skills remains genuinely scarce and should be treated as a strategic priority for talent acquisition and development.
Fourth, engage legal counsel proactively. The legal landscape for AI is evolving with unusual speed. Litigation involving AI-influenced decisions in employment, credit, healthcare, and criminal justice is increasing. Organizations that engage legal counsel as strategic advisors in AI governance, rather than as reactive responders to incidents, are better positioned to identify and address legal exposure before it becomes litigation.
Legal Trends and Emerging Liability Landscape
The AI legal landscape is moving from an era of voluntary standards and aspirational frameworks to one of enforceable obligations and active litigation. Several trends require attention from organizations operating hybrid AI-IT environments.[7]
Algorithmic accountability legislation is advancing at both the federal and state levels. The proposed Algorithmic Accountability Act, versions of which have been introduced in Congress, would require impact assessments for automated decision systems used in consequential contexts.[8] Even where legislation has not yet been enacted, the FTC's enforcement posture, grounded in its Section 5 authority over unfair or deceptive acts and practices, provides a basis for liability when AI systems produce harmful outcomes that were foreseeable and preventable.
Product liability theories are being tested in AI contexts. Where AI systems cause harm, plaintiffs and their counsel are exploring theories of defective design, failure to warn, and negligence that adapt traditional tort frameworks to AI-specific facts.[9] In hybrid environments, the presence of legacy systems complicates causation analysis: determining whether a harmful AI output resulted from a model defect, corrupted legacy data, or an integration failure may require forensic investigation that most organizations are not currently equipped to conduct. Building that capacity proactively is both a governance necessity and a legal risk management strategy.
Regulatory enforcement actions targeting AI are increasing. Financial regulators have tightened scrutiny of AI use in credit decisions, fraud detection, and customer communications. Healthcare regulators are examining AI tools that influence clinical decision-making. Employment regulators are investigating AI systems used in hiring and performance evaluation. Organizations in regulated industries should anticipate that AI governance documentation, such as model cards, risk assessments, audit logs, and governance council minutes, will be subject to examination by regulators and, in litigation, by opposing counsel through discovery.
Intellectual property questions involving AI trained on proprietary legacy data are also emerging as a source of legal complexity. Where AI models are trained on data that resides in legacy systems, questions of data ownership, licensing, and permissible use require careful legal analysis. This is particularly relevant for organizations that have undergone mergers, acquisitions, or divestitures, where the provenance of legacy data may not be fully documented.
Trends Shaping the Future of Hybrid AI Governance
Several technology and governance trends are reshaping hybrid AI-IT governance.
The rise of agentic AI, systems that can take actions autonomously and interact with other software systems, creates new governance challenges in hybrid environments.[10] When an AI agent initiates a transaction, modifies a record, or communicates with external or legacy infrastructure, the accountability frameworks designed for advisory AI systems are generally insufficient. Governance programs need specific protocols for agentic AI that address authorization boundaries, audit logging requirements, and human override mechanisms.
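A minimal Python sketch of such an authorization boundary follows. The action names, the allowlist, and the monetary escalation threshold are illustrative assumptions; a production policy would be far richer.

```python
# Minimal sketch of an authorization boundary for agentic actions.
# Action names, allowlist, and threshold are illustrative assumptions.
ALLOWED_AUTONOMOUS_ACTIONS = {"read_record", "draft_communication"}
AUDITED_ACTIONS = {"initiate_transaction", "modify_record"}
ESCALATION_THRESHOLD_USD = 10_000


def authorize_agent_action(action: str, amount_usd: float = 0.0) -> str:
    """Decide whether an agent action proceeds, escalates, or is denied."""
    if action in ALLOWED_AUTONOMOUS_ACTIONS:
        return "ALLOW"                       # within the agent's boundary
    if action in AUDITED_ACTIONS:
        if amount_usd >= ESCALATION_THRESHOLD_USD:
            return "REQUIRE_HUMAN_APPROVAL"  # human override point, logged
        return "ALLOW_WITH_AUDIT_LOG"        # permitted, fully logged
    return "DENY"                            # undefined actions denied by default


print(authorize_agent_action("modify_record", amount_usd=25_000))
# -> REQUIRE_HUMAN_APPROVAL
```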
Regulatory convergence is likely but remains in its early stages. Federal agencies are developing sector-specific AI guidance at different speeds and with different priorities. The result is a fragmented regulatory landscape that creates compliance complexity for organizations operating across sectors or jurisdictions. Organizations need to monitor regulatory developments continuously, invest in governance infrastructure that is flexible enough to accommodate evolving requirements, and avoid building compliance programs that are tightly coupled to any single regulatory framework.
The increasing availability of AI governance tooling, such as automated model monitoring platforms, bias detection tools, and explainability frameworks, is lowering the technical barriers to building robust governance infrastructure. Organizations that have deferred governance investment on grounds of technical complexity should reconsider that position. The tooling available today is substantially more accessible than it was even two years ago.
Conclusion
Governing AI in hybrid AI-conventional environments is not a future challenge. It is a present operational and legal reality for virtually every organization that has deployed AI capabilities.
Organizations that deal with this reality effectively are building governance infrastructure deliberately, applying Responsible AI principles with operational specificity, engaging with the evolving legal and regulatory landscape proactively, and investing in the human capacity to sustain governance practices over time.
The hybrid environment is complex, but governance is not beyond reach: the frameworks exist, and the best practices are reasonably well established. The legal obligations are still evolving, but they are sufficiently clear to guide action.
Organizations need to treat AI governance as a core institutional responsibility: not a compliance exercise, but a foundation for trustworthy, durable, and defensible AI deployment.
FOOTNOTES
[1] National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce. https://doi.org/10.6028/NIST.AI.100-1
[2] International Organization for Standardization. (2023). ISO/IEC 42001:2023 — Information Technology — Artificial Intelligence — Management System. Geneva: ISO.
[3] Federal Trade Commission. (2021). Aiming for Truth, Fairness, and Equity in Your Company's Use of AI. FTC Business Blog. https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai
[4] The White House. (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Executive Order 14110. October 30, 2023.
[5] Doshi-Velez, F., et al. (2017). Accountability of AI Under the Law: The Role of Explanation. Berkman Klein Center for Internet and Society, Harvard University.
[6] Office of the Comptroller of the Currency. (2021). Model Risk Management: Supervisory Guidance on Model Risk Management (SR 11-7). Board of Governors of the Federal Reserve System and OCC.
[7] Brookings Institution. (2023). A New Era for AI Regulation in the United States. Brookings Governance Studies. https://www.brookings.edu/articles/a-new-era-for-ai-regulation-in-the-united-states/
[8] Algorithmic Accountability Act of 2023, S. 3572, 118th Congress (2023). Introduced by Sen. Ron Wyden et al.
[9] Vladeck, D.C. (2014). Machines Without Principals: Liability Rules and Artificial Intelligence. Washington Law Review, 89(1), 117–150.
[10] Anthropic. (2024). Claude's Model Specification and the Emergence of Agentic AI. Published guidance on agentic AI behavior and accountability structures.

Copyright © 2026 The Institute for Responsible AI / MTI - All Rights Reserved.
Version 1.0