International Journal of Innovation and Economic Development
Volume 11, Issue 5, December 2025, Pages 21-41
Synthetic Artefact Governance Theory: Governing Synthetic Reality
in the Age of AI-Generated Artefacts and Artificial Humans
URL: https://doi.org/10.18775/ijied.1849-7551-7020.2015.115.2003
DOI: 10.18775/ijied.1849-7551-7020.2015.115.2003
Suresh Sood1,2
1Industry/Professional Fellow, Australian Artificial Intelligence Institute, University of Technology Sydney
2Adjunct Fellow, Frontier AI Research Centre, Macquarie University, Sydney
Abstract: Artificial intelligence (AI) governance is shifting from voluntary ethical principles toward binding, risk-based regulatory regimes across jurisdictions. While these developments strengthen accountability for automated decision-making systems, they do not fully address the governance challenges posed by generative AI systems that produce synthetic artefacts indistinguishable from human outputs and simulate human social presence. Generative AI destabilizes traditional distinctions between data, content, and action, raising regulatory questions about authenticity, institutional trust, and relational integrity rather than automation alone. This article examines whether contemporary AI policy is equipped to govern these developments. Drawing on an interdisciplinary synthesis of legal scholarship, governance theory, technical research on provenance and identity systems, and comparative policy analysis, the paper develops Synthetic Artefact Governance Theory (SAGT) as a meta-governance framework. The analysis evaluates major AI policy regimes across jurisdictions, with focused assessment of the European Union Artificial Intelligence Act, and maps existing instruments against artefact-level, interaction-level, accountability, and ecosystem dimensions. The findings indicate that current regimes remain predominantly model-centric. Synthetic artefacts and artificial humans are typically addressed indirectly through transparency obligations, content moderation rules, or sectoral law rather than as first-order regulatory objects. Governance responses show partial convergence across legal, technical, and institutional domains, yet significant gaps persist at the levels of relational manipulation, artefact legitimacy, and lifecycle accountability. The article concludes that effective AI governance must evolve from regulating intelligent systems toward governing synthetic reality.
This transition requires integrating artefact-level regulation, interaction safeguards, provenance infrastructure, and cross-sector coordination across AI law, consumer protection, competition policy, and content governance.
Keywords: Artificial Intelligence Governance; Generative AI; Synthetic Artefacts; Artificial Humans; Digital Identity; Provenance; EU AI Act
1. Introduction
Artificial intelligence (AI) governance stands at a major inflection point. For over a decade, regulatory and scholarly attention focused on algorithmic decision-making: how AI systems classify, predict, recommend, and automate outcomes across finance, employment, healthcare, policing, and public administration. This emphasis has driven a global proliferation of ethical frameworks and binding regulatory regimes foregrounding fairness, transparency, accountability, and human oversight. Collectively, these developments mark a maturation of AI governance, signalling a shift from voluntary principles toward enforceable legal obligations.
The rapid diffusion of generative AI systems fundamentally alters the nature of AI-related risk. Contemporary AI systems no longer operate solely as decision-support or decision-replacement tools. Instead, they function as generative infrastructures, producing synthetic artefacts (texts, images, videos, voices, documents, software, and interactive personas) that circulate widely across social, economic, and institutional contexts. These artefacts increasingly function as evidence, identity proxies, and social signals, shaping beliefs, behaviors, and institutional outcomes without triggering traditional notions of algorithmic harm. At the same time, advances in multimodal and agentic AI systems enable the emergence of artificial human presence. Digital clones, synthetic spokespersons, conversational agents, and avatar-based systems connected to large language models (LLMs) now simulate human likeness, behavior, and relational engagement. Unlike earlier AI applications, these systems do not merely optimize decisions. They interact socially, persuade emotionally, and operate within domains historically governed by norms of authenticity, consent, and trust.
Artificial humans introduce governance challenges related to digital identity rights, consent management, monetization of likeness, and behavioral manipulation, dimensions that remain weakly addressed by existing AI policy frameworks (Atta et al., 2025; Towne, 2024). This transformation exposes a structural mismatch between what AI governance currently regulates and where AI-generated harm increasingly arises. Most binding AI policies remain model-centric, conceptualizing AI primarily as a system processing input to generate decisions or recommendations. Regulation therefore concentrates on system classification, provider and deployer obligations, and deployment contexts. Even the most comprehensive regime to date, the European Union Artificial Intelligence Act, focuses predominantly on AI systems rather than on the synthetic artefacts, identities, and social interactions those systems generate.
As a result, governance of synthetic media, digital clones, and artificial human presence is fragmented across transparency and disclosure requirements, platform content policies, privacy law, intellectual property regimes, and sector-specific standards. This fragmentation produces three systemic effects. First, it shifts the burden of verification onto institutions and individuals, increasing the cost of establishing authenticity and truth in environments saturated with synthetic content. Second, relational harms, such as undisclosed social influence by artificial humans, remain largely unregulated. Third, accountability is obscured when synthetic artefacts are reused, modified, or amplified across platforms and jurisdictions. Existing AI governance architectures therefore struggle to preserve epistemic trust, relational integrity, and institutional accountability in the age of synthetic reality. To address this gap, this paper advances Synthetic Artefact Governance Theory (SAGT).
SAGT reframes AI governance by treating synthetic artefacts and artificial human presence as first-order regulatory objects, rather than incidental by-products of intelligent systems. Drawing on interdisciplinary research, the theory conceptualizes governance as a layered architecture integrating legal, technical, and institutional mechanisms across artefact provenance, interactional transparency, human accountability, and ecosystem-level amplification control.
1.1 SAGT Conceptual Definitions and Boundary Conditions
Because the central constructs of this theory shape the unit of governance analysis, they require explicit definition.
Synthetic artefact refers to an AI-generated or AI-altered digital output plausibly functioning as socially interpretable content within institutional or public contexts. The defining criterion is substitutability. The artefact can reasonably be mistaken for, or operate equivalently to, content authored, recorded, or performed by a human or institutional actor. This includes text, images, audio, video, documents, digital identities, and embodied or conversational outputs.
Artificial human refers to an AI-generated or AI-mediated agent simulating human identity, likeness, voice, or relational behaviour in interactive settings. The term applies where the system performs social roles such as spokesperson, advisor, companion, or representative in ways that may be perceived as authentically human.
Synthetic reality describes environments in which synthetic artefacts and artificial humans circulate at sufficient scale or plausibility to alter baseline assumptions about authenticity, attribution, or evidentiary reliability. The term does not imply that all content is artificial, but rather that authenticity can no longer be presumed without verification.
Relational manipulation refers to influence exerted through simulated social presence rather than through false facts alone. Such manipulation arises when artificial agents leverage perceived authority, intimacy, or identity to shape behaviour without transparent disclosure of their artificial status.
Epistemic trust denotes the background confidence that socially circulating information is authentic, attributable, and accountable. Erosion of epistemic trust occurs not only through deception, but also through uncertainty regarding provenance and authorship.
In terms of boundary conditions, SAGT does not treat all synthetic content as inherently harmful. The following forms of content fall outside the core governance domain of SAGT:
- Satire, parody, and obvious fiction
- Clearly disclosed entertainment media
- Transparent human–AI collaborative content where human authorship and accountability remain intact
The theory is primarily concerned with synthetic artefacts and artificial agents plausibly substituting for authentic institutional, evidentiary, or relational signals without adequate transparency or accountability safeguards.
1.2 Theoretical Contribution
This paper contributes to AI governance theory by developing Synthetic Artefact Governance Theory (SAGT) as a meta-governance framework reconceptualizing the object, mechanisms, and locus of AI regulation in the generative era. Existing AI governance theories remain largely system-centric, assuming harms arise primarily from biased, opaque, or unsafe algorithmic decisions. SAGT departs from this orientation by theorizing synthetic artefacts and artificial humans as primary sources of epistemic, relational, and ecosystemic risk, independent of algorithmic decision quality. By integrating recent advances in harmonized regulatory scaffolds, digital identity rights frameworks, cryptographic provenance systems, and decentralized verification infrastructures, SAGT explains the observed partial convergence of governance responses that cannot be fully justified within existing risk-based regulatory models. Importantly, SAGT does not replace system-centric AI regulation but extends and complements it, offering a generalizable theoretical lens for governing synthetic reality, the socio-technical environments produced by generative AI systems.
1.3 Theoretical Separation from Model-Centric Risk Governance
Existing AI governance frameworks conceptualize risk primarily at the level of system design, deployment context, and decision outputs. In contrast, SAGT shifts the unit of analysis from system performance to artefact circulation and relational interaction. The theoretical innovation lies not merely in expanding the scope of regulation, but in redefining the object of governance. Under model-centric governance, harm originates in flawed system outputs arising from the model in use and its training data. Under SAGT, harm emerges from artefact legitimacy, identity simulation, amplification dynamics, and relational substitution, even when systems perform as designed.
SAGT therefore differs from risk-based governance theory in three respects:
- Object shift from systems to artefacts and artificial actors
- Causal mechanism shift from decision error to trust distortion
- Institutional locus shift from provider compliance to ecosystem trust infrastructure
This reframing positions SAGT not as a supplement to model-centric governance, but as a meta-theoretical extension necessary in environments pervaded by synthetic outputs.
1.4 AI Policy Development Cycle
This paper follows the stages of an AI policy development cycle (Figure 1), commencing with a Section 2 review of cross-jurisdictional AI policy instruments and developments, highlighting convergences and gaps in current governance approaches. Section 3 reviews the evolution of AI policy from ethical principles to risk-based regulation and examines the limited treatment of synthetic artefacts and artificial humans in existing frameworks. Section 4 introduces Synthetic Artefact Governance Theory (SAGT) and articulates four governance layers comprising artefact, interaction, agency, and ecosystem. Section 5 develops a set of testable hypotheses derived from the theory, while Section 6 outlines a multi-method empirical research agenda for evaluating governance mechanisms in practice. Section 7 analytically assesses the European Union Artificial Intelligence Act against SAGT, supported by a full article-by-article mapping (Appendix B). Section 8 discusses policy design implications and proposes pathways for operationalising SAGT in law. Section 9 takes an early look at emerging artefact-level legislative responses. Section 10 concludes by summarising contributions and outlining directions for future research and policy development.

Figure 1: AI Policy Development Cycle
Note. Policy Development Cycle as Envisaged by Author (own work)
2. Cross-Jurisdictional AI Policy Landscape
2.1 AI Policy - Clarifying Governance and Coverage Criteria
In this study, governance refers to one or more of the following mechanisms:
- Statutory obligations (binding legislative requirements)
- Enforcement practice (regulatory oversight, penalties, or adjudication)
- Standards and soft-law instruments (guidelines, codes of conduct, or principles)
- Platform-level operational controls (labelling systems, moderation rules, or provenance mechanisms).
Coverage is assessed according to whether a regulatory instrument explicitly addresses synthetic artefacts or artificial human interactions as governance objects, rather than addressing AI systems generically.
When this paper states that no regime explicitly governs a phenomenon, the claim refers specifically to the absence of artefact-level statutory obligations or relational safeguards as first-order regulatory categories. It does not imply that no legal tools apply indirectly through related domains such as fraud, consumer protection, or platform policy.
2.2 The Policy Landscape
Across jurisdictions, AI governance is converging toward risk-based regulatory architectures, although legal form, scope, and enforcement intensity vary substantially. The European Union pursues comprehensive binding regulation through the Artificial Intelligence Act, complemented by the Digital Services Act and established data protection law. The United States relies on executive guidance, voluntary commitments, and sector-specific enforcement. China has targeted rules governing generative AI content and provider responsibilities, while jurisdictions such as the United Kingdom and Australia favour principles-based or regulator-led approaches. Despite these differences, a common structural pattern emerges with synthetic artefacts and artificial humans rarely governed directly as regulatory objects, even as they become central to AI-related harm.
Table 1 illustrates the diversity of approaches taken across countries and regions, reflecting different institutional traditions, political economies, and levels of technological maturity. A clear shift is now evident. The European Union and China have moved decisively toward binding regulation, while others (e.g., the United States, United Kingdom, Japan, Singapore) rely on hybrid models combining soft law, standards, and sectoral enforcement. The result is a layered governance ecosystem in which ethics no longer stands alone but is increasingly embedded in risk controls, compliance obligations, audits, reporting duties, and penalties.
Table 1: Cross-Jurisdictional AI Policy Instruments
| Country / Region | AI Policy / Instrument | Key Characteristics | Year |
| European Union | Artificial Intelligence Act | Binding, risk-based AI regulation | 2024 |
| United States | Executive Order on AI | Federal coordination and safety testing | 2023 |
| China | Generative AI Measures | Content governance and provider obligations | 2023 |
| United Kingdom | Pro-innovation AI Framework | Principles-based, regulator-led approach | 2023 |
| Australia | AI Ethics Principles | Voluntary national principles | 2019 |
2.3 European Union: Partial and Indirect Coverage
The European Union Artificial Intelligence Act (European Parliament and Council, 2024) represents the most comprehensive binding AI framework currently enacted. The Regulation entered into force in 2024, but its substantive provisions apply on a staged timeline. Article 50 introduces transparency obligations when individuals interact with an AI system or when content is synthetically generated or manipulated (including deepfakes). These obligations are scheduled to apply from 2 August 2026 under the implementation timetable of the Act. Accordingly, while the transparency architecture is legislatively established, full operational enforcement of Article 50 remains prospective at the time of writing.
The Act addresses generative AI primarily through transparency, manipulation prevention, and systemic risk controls. Article 5 prohibits certain manipulative AI practices that exploit vulnerabilities or materially distort behaviour. In addition, large general-purpose AI models designated as posing systemic risk are subject to obligations designed to mitigate downstream misuse. These provisions collectively strengthen oversight of AI systems and impose compliance duties on providers and deployers.
However, the Act approaches synthetic artefacts principally through transparency and system-based obligations rather than through artefact-level governance. Synthetic outputs such as fabricated reports, synthetic research materials, digital identities, or AI-generated evidentiary materials are not treated as distinct regulatory categories with dedicated provenance or lifecycle accountability requirements. Similarly, artificial humans (e.g., avatars or AI personas) are not explicitly recognized as a separate governance object beyond disclosure requirements in interactive contexts.
From a Synthetic Artefact Governance Theory (SAGT) perspective, the EU framework therefore remains predominantly system centric. While it meaningfully advances disclosure and systemic oversight, it does not yet establish artefact-level provenance obligations, relational safeguards, or ecosystem accountability mechanisms directed specifically at the circulation and institutional effects of synthetic artefacts or artificial human presence. This distinction does not suggest that the Act is incomplete, but rather that its architecture prioritizes system regulation over artefact-centered governance.
2.4 China: Strongest Direct Control of Synthetic Humans Without a Normative Framework
China has developed some of the most explicit regulatory instruments addressing synthetic media and AI-generated content, though these measures are embedded within a broader content governance architecture rather than a unified AI safety statute. Key instruments include the Algorithmic Recommendation Provisions (2022), the Deep Synthesis Provisions (2023), and the Interim Measures for Generative AI Services (2023).
These regulations impose operational obligations on providers, including mandatory labelling of synthetic content, registration and licensing requirements for certain deep synthesis services, and liability exposure where impersonation or social harm results. In practice, China’s framework directly addresses synthetic personas, voices, and manipulated images more explicitly than many Western AI statutes.
However, this coverage operates primarily through content control and platform compliance mechanisms rather than through a rights-based or artefact-centered governance theory. Synthetic artefacts are regulated insofar as they implicate social stability, misinformation, or impersonation risks, but they are not framed as first-order institutional objects requiring provenance infrastructure or lifecycle accountability safeguards. Likewise, while artificial humans are subject to disclosure and impersonation constraints, there is no articulated governance regime addressing relational deception, emotional substitution, or broader epistemic trust dynamics.
From a SAGT perspective, China’s framework provides comparatively strong operational controls at the artefact surface layer (e.g., labelling and provider liability), yet it does not establish a comprehensive artefact-level accountability architecture grounded in institutional trust preservation. The regulatory emphasis remains content stability and platform discipline rather than ecosystem-level trust governance.
2.5 United States: Sectoral and Fragmented Governance
The United States does not currently operate under a comprehensive federal AI statute. Instead, governance emerges through a combination of executive action, sector-specific regulation, state-level legislation, and enforcement practice. Instruments such as Executive Order 14110 (2023), the NIST AI Risk Management Framework (2023), and the Blueprint for an AI Bill of Rights (2022) articulate principles and risk management expectations but do not establish artefact-level statutory obligations.
At the state level, several states enact targeted deepfake election laws, and regulatory agencies such as the Federal Trade Commission pursue enforcement actions against deceptive synthetic endorsements and impersonation practices. These measures address synthetic artefacts primarily through fraud, consumer protection, or electoral integrity frameworks.
However, coverage remains fragmented. Synthetic artefacts are governed indirectly through deception or misrepresentation doctrines rather than through dedicated artefact-centered statutes. Artificial humans, outside of impersonation or advertising contexts, are not recognized as a distinct regulatory category. No unified federal provenance infrastructure or statutory framework assigns lifecycle accountability to circulating synthetic artefacts.
From a SAGT perspective, the U.S. approach reflects strong enforcement capacity in specific domains (e.g., fraud, elections) but lacks an integrated artefact-layer governance architecture. Governance is reactive and sectoral rather than systemic, leaving broader questions of relational manipulation, artificial social agents, and institutional epistemic trust largely unaddressed in statutory form.
2.6 Generative AI, Platform Governance, and Epistemic Risk
Generative AI introduces what may be described as epistemic risk: the risk to shared understandings of truth, authorship, authenticity, and evidentiary reliability (Rini, 2020). Synthetic artefacts can be factually accurate yet misleading, procedurally compliant yet deceptive, and legally permissible yet corrosive to institutional trust. Unlike traditional misinformation, which is often adversarial and demonstrably false, synthetic artefacts may be produced by legitimate actors using approved systems. This complicates intent-based regulatory approaches and shifts the governance problem from falsity alone to questions of authenticity, provenance, and relational transparency (Chesney & Citron, 2019).
In response, platform-level governance has emerged as one of the most operationalized mechanisms addressing artefact circulation in practice. Major digital platforms have introduced disclosure requirements, labelling systems, and moderation policies specifically targeting AI-generated or AI-altered content. These mechanisms function as embedded trust controls within content distribution infrastructures rather than as abstract regulatory principles.
YouTube provides a particularly important example. The platform requires creators to disclose when content is meaningfully altered or synthetically generated in ways that appear realistic, including deepfake-like depictions, synthetic voices, or altered scenes that could mislead viewers (Google Support, 2025). Disclosure occurs at the point of upload through built-in signalling tools, integrating transparency obligations directly into user workflows. Failure to comply may result in removal, demonetisation, or other enforcement actions. Similar disclosure regimes are emerging across major platforms as generative AI tools become mainstream.
These policies represent a form of private governance that parallels public-law transparency obligations, such as Article 50 of the EU AI Act. In practice, platforms currently provide some of the most direct artefact-level interventions, governing synthetic content at the moment of distribution, applying penalties for non-disclosure, and operationalizing audience-facing labelling at scale. From a Synthetic Artefact Governance Theory (SAGT) perspective, platform governance addresses the artefact layer and partially engages the interaction layer by making artificial involvement visible at the point of consumption.
However, platform governance differs structurally from statutory regulation. It operates through private rulemaking, contractual enforcement, and internal moderation systems rather than through democratically enacted public law. Coverage varies across platforms, enforcement standards are not uniform, cross-platform interoperability is limited, and due process protections are comparatively narrow. Moreover, platform rules focus primarily on disclosure and content moderation rather than on formal provenance infrastructures, lifecycle accountability chains, or institutional-grade verification mechanisms.
Platform governance therefore demonstrates the feasibility of artefact-level controls in practice, while also revealing their limitations. It operationalizes disclosure and labelling at scale, yet lacks the public legitimacy, harmonized standards, and systemic accountability mechanisms required for comprehensive synthetic artefact governance. In this sense, platforms provide practical micro-level governance of epistemic risk, but not a fully institutionalized artefact-centered regulatory regime.
2.7 International and Soft Law: Recognition Without Enforcement
International and soft-law frameworks, including those developed by UNESCO (2021) and the OECD (2019), increasingly acknowledge risks posed by deepfakes and synthetic media. These instruments emphasize transparency, human dignity, and responsible AI use. However, they stop short of imposing binding obligations or establishing enforcement mechanisms. As such, they function primarily as normative recognition, not operational governance.
2.8 Comparative Synthesis: What Exists and What Is Missing
Taken together, existing AI governance regimes reveal an asymmetry between what is formally regulated and where AI-generated harm increasingly manifests. While multiple jurisdictions and platforms address disclosure, manipulation, and content moderation, coverage thins markedly when moving toward artefact legitimacy, relational integrity, and institutional trust. Table 2 synthesizes current coverage and gaps across key governance dimensions. The comparative analysis below evaluates instruments against consistent criteria: (1) whether synthetic artefacts are explicitly defined, (2) whether artificial human interactions are regulated, (3) whether provenance or accountability chains are mandated, and (4) whether enforcement mechanisms attach directly to artefact circulation rather than to system classification alone.
As Table 2 indicates, governance concentrates heavily on disclosure, particularly for visible or politically sensitive synthetic media. Beyond disclosure, however, regulation becomes sparse. Synthetic documents and AI-generated evidence remain unregulated as artefacts, despite growing use in professional, legal, and administrative settings. Artificial social agents are not recognized as a distinct category in any regime examined. Most notably, no regime explicitly governs relational manipulation: the sustained capacity of artificial humans to influence emotions, build trust, simulate intimacy, or substitute for human social interaction without disclosure or consent.
Table 2: Comparative Synthesis of Synthetic Artefact Governance Across Regimes
| Governance Dimension | Explicit Statutory Recognition as Regulatory Object? | Operational Coverage in Practice? | Illustrative Sources |
| Synthetic media disclosure (deepfakes, altered content) | Partial – treated as transparency obligation rather than independent artefact category | Yes – disclosure and labelling mechanisms increasingly implemented | EU AI Act (Art. 50); China Deep Synthesis Provisions; major platforms |
| Artificial humans (avatars, AI personas, conversational agents) | Limited – addressed indirectly via disclosure rules; not recognized as distinct regulatory class | Emerging – some platform and national labelling rules | EU AI Act (interaction disclosure); China; platform policies |
| Synthetic documents / evidence (reports, research outputs, institutional artefacts) | No – not governed as artefact-level institutional objects | Minimal – reliant on general fraud, misrepresentation, or sectoral law | No dedicated AI-specific regime identified |
| Artificial social agents engaging in relational interaction | No – no regime defines or regulates artificial humans as social substitutes | Minimal – interaction disclosure may apply, but no relational safeguards | Indirect via EU Art. 50; limited platform practice |
| Relational manipulation (authority simulation, emotional influence) | Partial – manipulation prohibitions exist, but framed at system level | Limited – addressed via general anti-manipulation clauses | EU Art. 5; sectoral consumer law |
| Provenance infrastructure (traceability, authentication standards) | Emerging – not yet comprehensively mandated across jurisdictions | Emerging – pilot registries, watermarking, voluntary standards | Platform initiatives; Web3 pilots; fragmented national efforts |
| Artefact-linked liability (accountability follows artefact lifecycle) | No – liability triggered at system or actor level, not artefact circulation | No systematic coverage | Absent as a structured governance model |
Efforts to establish trust infrastructure, including provenance tracking, watermarking, and cryptographic verification, are emerging primarily through platforms and pilots, and remain voluntary and non-standardised. Likewise, artefact-level liability (who is responsible as synthetic artefacts are reused, modified, or amplified across contexts) remains unaddressed in formal law.
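To make the provenance mechanisms discussed above concrete, the sketch below illustrates, in greatly simplified form, how a generator might bind a synthetic artefact to a signed provenance manifest and how a downstream verifier could detect tampering. All names and fields are illustrative, not drawn from any deployed system; a shared-key HMAC stands in for the asymmetric signatures used by real provenance standards such as C2PA.

```python
import hashlib
import hmac
import json

def create_provenance_record(artefact: bytes, generator_id: str, key: bytes) -> dict:
    """Bind an artefact's content hash to its declared generator in a signed manifest."""
    digest = hashlib.sha256(artefact).hexdigest()
    manifest = {"sha256": digest, "generator": generator_id, "synthetic": True}
    payload = json.dumps(manifest, sort_keys=True).encode()
    # HMAC stands in for an asymmetric signature (e.g. Ed25519) in this sketch.
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_provenance_record(artefact: bytes, record: dict, key: bytes) -> bool:
    """Check artefact integrity (hash match) and manifest authenticity (signature)."""
    manifest = record["manifest"]
    if hashlib.sha256(artefact).hexdigest() != manifest["sha256"]:
        return False  # artefact was modified after the manifest was signed
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

key = b"demo-signing-key"
artefact = b"synthetic press release text"
record = create_provenance_record(artefact, "model-x", key)
print(verify_provenance_record(artefact, record, key))          # True
print(verify_provenance_record(b"tampered text", record, key))  # False
```

Even this toy example surfaces the governance gap the paper identifies: the cryptography can establish what was generated and by whom, but only institutional rules can determine who is accountable once the artefact is reused or amplified.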
Across jurisdictions, synthetic artefacts and artificial human presence are addressed indirectly, unevenly, and incompletely. Transparency rules and platform moderation provide partial coverage, but no jurisdiction governs synthetic artefacts or artificial humans as first-order institutional objects. This fragmentation weakens cross-border enforcement, enables regulatory arbitrage, and shifts verification costs onto individuals and institutions, underscoring SAGT's core empirical claim: contemporary AI governance remains model-centric, while the most consequential risks arise at the level of synthetic reality itself.
Across jurisdictions, AI governance converges toward risk-based regulatory architectures, though legal form and enforcement intensity vary substantially. The European Union has pursued binding regulation through the AI Act, complemented by the Digital Services Act and data protection law. The United States relies on executive guidance, voluntary commitments, and sector-specific oversight. China uses targeted rules governing generative AI content and provider responsibilities, while jurisdictions such as the United Kingdom and Australia favour principles-based or regulator-led approaches.
Despite these differences, a common pattern emerges: synthetic artefacts and artificial humans, as outputs of generative AI, are typically addressed indirectly rather than through dedicated artefact-level statutory frameworks. Instead, they are governed through transparency obligations, platform moderation rules, intellectual property law, and privacy regulation. This fragmentation weakens cross-border enforcement and enables regulatory arbitrage, particularly as synthetic content circulates globally at little cost.
Proposals for harmonized generative AI governance frameworks, such as the Generative AI Governance Framework v1.0 (Szarmach, 2025), reflect growing recognition of the challenge. Such frameworks seek to align regional governance processes into shared regulatory scaffolds, mapping risk-based provisions across modalities and jurisdictions to enable interoperability while preserving local legal autonomy (Calzada et al., 2025).
In addition to formal statutory regimes, platform governance functions as a parallel layer of synthetic artefact oversight. As discussed in Section 2.6, major platforms now impose structured disclosure requirements for realistic AI-generated or materially altered content. Rather than revisiting the operational details, the key point is that platforms increasingly treat synthetic artefacts as audience trust risks requiring proactive signalling mechanisms. These disclosure systems are embedded into upload workflows and enforced through moderation penalties, operating as de facto artefact-level governance tools even though they lack statutory authority or cross-platform standardisation.
This private mechanism creates a hybrid governance environment. On the one hand, platform rules operationalise transparency at scale, often more rapidly than legislatures. On the other hand, enforcement remains contractual and discretionary, without the due process guarantees, interoperability mandates, or uniform evidentiary standards associated with public law. From a Synthetic Artefact Governance Theory (SAGT) perspective, platform policies partially occupy the artefact and interaction layers, but they do so in a fragmented and non-harmonised manner.
Empirically, these environments provide a valuable testing ground for SAGT hypotheses (Section 5). Comparing contexts in which synthetic content disclosure is platform-enforced versus legally mandated allows assessment of governance effectiveness across architectures. Such comparisons are particularly relevant to hypotheses concerning deception mitigation (H4) and potential habituation effects arising from repeated exposure to labelling regimes (H6). Platform governance thus offers an observable intermediary stage between voluntary transparency and fully institutionalised artefact-level regulation.
2.9 Artificial Humans & Relational Governance
AI avatars, conversational agents, and embodied systems increasingly perform roles once reserved for humans, including customer service representatives, educators, influencers, financial advisors, sales account managers, and even experimental participants. This raises profound governance questions:
- Should artificial agents be explicitly disclosed as non-human?
- Who bears responsibility for harm caused by persuasive or deceptive AI personas?
- How should consent operate when humans interact with synthetic social actors?
- Can existing consumer protection, labour, and discrimination laws cope with artificial humans?
A related strand of research highlights the rise of artificial agents designed to engage users socially and emotionally. These systems blur the boundary between tool and actor, raising questions traditionally associated with social psychology, consumer protection, and labour law (Malhotra et al., 2024). Existing AI governance frameworks rarely conceptualize these systems as a distinct policy object, instead subsuming them under general transparency or consumer information requirements.
3. Literature Review
3.1 Governing Synthetic Artefacts and Artificial Humans
3.1.1 From Model-Centric Regulation to Artefact-Centric Risk
Traditional AI governance assumes harm arises from incorrect or biased system behavior. Generative AI challenges this assumption by producing artefacts that may be factually accurate yet socially misleading, lawful yet corrosive to trust. Recent scholarship increasingly emphasizes that artefacts, not models, are the primary sites of harm, particularly when synthetic content is reused, edited, or amplified beyond its original context (Raza et al., 2025).
3.1.2 Accountability and Cryptographic Provenance
A major research stream focuses on cryptographic accountability mechanisms. Blockchain anchoring enables immutable recording of content metadata and consent assertions, creating verifiable chains of custody (Vetrivel et al., 2025). Zero-knowledge proofs allow provenance verification without exposing sensitive data, balancing transparency and privacy. AI-powered watermarking embeds robust origin signals within media, while federated detection systems distribute verification capacity without centralizing data (Aarthi et al., 2025). The literature consistently emphasizes that effective governance requires layered technical enforcement, not single mechanisms.
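To make the chain-of-custody idea concrete, the following is a minimal sketch of a hash-linked provenance record of the kind this literature describes. The field names and record structure are illustrative assumptions, not a published standard such as C2PA; a production system would additionally use digital signatures and anchoring.

```python
import hashlib
import json

def _digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_record(chain: list, content: bytes, metadata: dict) -> list:
    """Append a provenance record whose hash covers the content hash,
    the metadata, and the previous record's hash (a chain of custody)."""
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    body = {
        "content_hash": _digest(content),
        "metadata": metadata,  # e.g. creator, generating tool, consent assertion
        "prev_hash": prev_hash,
    }
    body["record_hash"] = _digest(json.dumps(body, sort_keys=True).encode())
    return chain + [body]

def verify_chain(chain: list) -> bool:
    """Recompute every record hash; any tampering breaks verification."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("content_hash", "metadata", "prev_hash")}
        if rec["prev_hash"] != prev_hash:
            return False
        if rec["record_hash"] != _digest(json.dumps(body, sort_keys=True).encode()):
            return False
        prev_hash = rec["record_hash"]
    return True

# A two-step lifecycle: generation, then downstream editing.
chain = append_record([], b"synthetic image bytes", {"creator": "model-x", "step": "generate"})
chain = append_record(chain, b"edited image bytes", {"creator": "editor-y", "step": "edit"})
assert verify_chain(chain)
```

Because each record's hash commits to its predecessor, altering any earlier metadata or content hash invalidates every subsequent record, which is the property that makes artefact-level audit trails tamper-evident.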
3.1.3 Governing Artificial Human Presence
Artificial humans introduce governance challenges extending beyond content authenticity. The Digital Identity Rights Framework (DIRF) represents one of the most comprehensive responses, defining nine governance domains and 63 operational controls for managing consent, traceability, monetization, and enforcement related to digital likenesses (Atta et al., 2025). Legal role taxonomies further clarify accountability by distinguishing AI-generated content (AIGC), human–AI collaborative content (HAIC), and AI-mediated communication (AI-MC), assigning liability to human actors rather than AI systems (Towne, 2024).
3.1.4 Do Any Existing AI Policies Govern Synthetic Artefacts or Artificial Humans?
Despite the rapid proliferation of generative AI systems capable of producing synthetic text, images, video, audio, documents, and socially interactive agents, no major AI law to date appears to treat synthetic artefacts as independent, first-order regulatory objects. Instead, current governance approaches address these phenomena indirectly through a patchwork of transparency requirements, content controls, and misuse prohibitions, remaining fundamentally model-centric rather than artefact-centric.
Most binding AI regulations, including comprehensive frameworks such as the EU Artificial Intelligence Act, conceptualize risk primarily in relation to system functionality and use context, rather than the downstream institutional role of AI-generated outputs. Synthetic artefacts are addressed only insofar as they create immediate deception risks, most commonly through disclosure or labelling obligations for manipulated or AI-generated content. These measures are designed to inform users that content has been altered or that they are interacting with an AI system, but they do not regulate the artefacts themselves as objects potentially functioning as evidence, records, identities, or social actors.
Similarly, policies addressing manipulative or exploitative AI practices focus on intent and behavioural distortion, rather than on the cumulative effects of large-scale artefact production on epistemic trust. Artificial humans, such as AI avatars, conversational agents, or synthetic personas, are not recognized as a distinct regulatory category in most jurisdictions. Where they are addressed at all, they are treated as a subset of content generation or user interaction rather than as relational agents capable of persuasion, emotional influence, or social substitution.
The most explicit governance of synthetic artefacts currently occurs not in public law but in platform-level policies, where private companies require disclosure of realistic AI-generated or altered content and impose penalties for non-compliance. While these measures provide practical safeguards against deception, they lack the legitimacy, consistency, and due-process protections associated with statutory regulation. As a result, governance of synthetic artefacts today remains fragmented, reactive, and normatively thin. This gap supports the core claim of Synthetic Artefact Governance Theory: contemporary AI policy regimes govern how models are built and used, but not how synthetic artefacts reshape institutional trust, social interaction, and accountability once they enter circulation.
3.1.5 Institutional Trust and Verification Infrastructures
Institutions such as newsrooms, courts, and electoral bodies face acute challenges in verifying synthetic content. Recent research documents the emergence of institutionalized verification infrastructures, including mandatory labelling, standardized authentication pipelines, certified detection tools, and decentralized verification models using Decentralized Autonomous Organizations (DAOs; Panagopoulos & Davalas, 2025; Fabuyi et al., 2024). These developments signal a shift from ad hoc fact-checking toward formal trust infrastructures embedded in governance processes.
3.2 Synthetic Artefact Governance Theory (SAGT)
3.2.1 Core Theoretical Claim
Synthetic Artefact Governance Theory (SAGT) posits that the primary governance challenge of advanced AI lies in the production and circulation of synthetic artefacts and artificial social agents, rather than in algorithmic decision-making alone. SAGT therefore applies most strongly in contexts where AI systems generate artefacts functioning as evidence, identity proxies, institutional records, or socially interactive agents. It is less applicable to purely internal optimization systems, such as logistics routing and predictive maintenance, whose outputs do not circulate as socially interpretable artefacts. The theory primarily governs generative, multimodal, and agentic AI systems operating in epistemically sensitive domains, especially law, finance, journalism, education, governance, and social media ecosystems. Under SAGT, AI governance is evaluated according to how well it preserves epistemic trust, relational integrity, and institutional accountability in environments where synthetic outputs are presented.
Table 3: Model-Centric vs Artefact-Centric Governance
| Model-Centric | Artefact-Centric (SAGT) |
| Regulates system risk | Regulates output legitimacy |
| Focus on decision harm | Focus on trust distortion |
| Provider compliance | Ecosystem accountability |
| Disclosure as notice | Disclosure as relational safeguard |
3.2.2 Governance Layers - Explanatory & Design
- Artefact layer (provenance and authenticity): Focuses on how synthetic artefacts are created, tagged, and integrated into evidentiary and archival systems, including watermarking, content provenance standards, and authenticity verification for texts, images, audio, and avatars.
- Interaction layer (disclosure and relational influence): Governs interactions between humans and artificial humans, including disclosure duties for AI agents, consent mechanisms, limits on dark patterns, and safeguards against exploitative or covert persuasion.
- Agency layer (responsibility and liability): Clarifies how accountability is allocated across model developers, platform operators, deployers, and end users when synthetic artefacts cause harm, especially in contexts where AI avatars act on behalf of individuals or organizations.
- Ecosystem layer (amplification and systemic risk): Addresses platform-level amplification, recommendation dynamics, cross-platform diffusion, and the role of synthetic media operations functions in monitoring, mitigating, and learning from incidents at scale.
SAGT is both explanatory and design-oriented. From an explanatory perspective, SAGT identifies structural governance gaps in model-centric AI regulation. Normatively, SAGT proposes layered institutional responses aligned with artefact circulation and artificial human interaction dynamics. The theory therefore operates at the intersection of governance analysis and institutional design theory.
4. Hypotheses Development
Drawing on SAGT, this paper advances hypotheses linking artefact-level provenance, identity rights enforcement, cryptographic accountability, and ecosystem oversight to outcomes such as epistemic trust, reduced manipulation, accountability clarity, and institutional resilience.
SAGT generates a set of testable hypotheses spanning artefact, interaction, agency, and ecosystem layers.
H1: Stronger provenance mechanisms reduce misclassification of synthetic artefacts as human authored.
H2: Exposure to unlabelled synthetic artefacts increases verification costs and reduces epistemic trust.
H3: Trust erosion is greater in evidence-intensive domains (e.g., law, science, public administration) than in entertainment contexts.
H4: Salient disclosure reduces deception and improves informed consent in encounters with synthetic artefacts and artificial humans.
H5: Artificial humans increase persuasion but also perceived manipulation risk.
H6: Disclosure effectiveness diminishes over time without technical reinforcement (e.g., watermarking, provenance tools).
H7: Accountability clarity decreases as AI value chains lengthen and involve more intermediaries.
H8: Documentation and logging reduce incident resolution time when harms are linked to specific synthetic artefacts.
H9: Platform amplification increases downstream harm non-linearly as synthetic artefacts are promoted and recombined.
H10: Strong general-purpose AI (GPAI) obligations reduce severe downstream incidents involving synthetic artefacts and artificial humans.
H11: Institutional trust erosion in one domain (e.g., synthetic legal evidence) likely produces spillover reductions in trust across unrelated institutional domains.
H12: A threshold effect exists whereby increasing saturation of synthetic artefacts produces non-linear declines in baseline epistemic trust, even when disclosure compliance remains constant.
These hypotheses can be examined through controlled experiments, field audits, computational diffusion analyses, and quasi-experimental policy evaluation designs that leverage platform governance capabilities, as highlighted by the research agenda.
5. Empirical Research Agenda
The hypotheses motivate a multi-method empirical agenda. Controlled experiments can test how provenance signals and disclosure labels affect user trust, detection accuracy, and behavioral responses to synthetic media. Field studies and platform A/B tests can evaluate the impact of labelling, watermarking, and interaction rules on real-world user behavior and harm incidence.
Computational diffusion analyses trace how synthetic artefacts spread across platforms, identifying amplification patterns, cross-lingual transfers, and emergent narrative clusters. Quasi-experimental designs, such as difference-in-differences, can assess how regulatory changes or platform interventions affect the prevalence and impact of synthetic artefacts and artificial humans over time. Taken together, these methods provide an empirical foundation for calibrating SAGT-informed policies.
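To make the diffusion component of this agenda concrete, the following sketch runs an independent-cascade simulation, a standard model for content spread, on a small random directed graph. The graph structure, parameters, and seed choice are illustrative assumptions, not estimates from platform data; the point is only that small increases in per-edge amplification probability can produce disproportionate, non-linear growth in reach, consistent with H9.

```python
import random

def independent_cascade(adj: dict, seeds: list, p: float, rng: random.Random) -> int:
    """One run of the independent cascade model: each newly activated
    node gets a single chance to activate each neighbour with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def mean_reach(n: int, degree: int, p: float, runs: int = 200, seed: int = 1) -> float:
    """Average cascade size from a single seed node, over repeated runs."""
    rng = random.Random(seed)
    # Random directed graph with fixed out-degree (a crude platform proxy).
    adj = {u: rng.sample([v for v in range(n) if v != u], degree) for u in range(n)}
    return sum(independent_cascade(adj, [0], p, rng) for _ in range(runs)) / runs

low = mean_reach(300, 4, 0.10)   # subcritical: cascades die out quickly
high = mean_reach(300, 4, 0.30)  # supercritical: cascades can reach much of the graph
print(low, high)
```

Tripling the amplification probability here yields far more than triple the average reach, which is the kind of threshold behaviour that artefact-level trace analyses on real platform data would seek to detect.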
The power of SAGT lies in its capacity to generate empirically testable research programs. The theory motivates a multi-method research agenda, including experiments on labelling and trust calibration, field audits of provenance systems, computational diffusion analysis, and quasi-experimental evaluations of regulatory interventions.
The proposed hypotheses can be evaluated through multiple complementary approaches:
- Controlled experiments isolate micro-level causal effects of disclosure, embodiment, and provenance on trust and behaviour.
Goal: Test H1, H4–H6. Methods: randomized online experiments; lab studies; A/B tests. Outcomes: deception detection accuracy; trust ratings; consent comprehension; behavioral choices (e.g., share/not share). Design example: conditions (i) no label, (ii) label only, (iii) label + watermark indicator, (iv) label + cryptographic verification UI.
- Field studies and compliance audits examine meso-level governance performance: how organizations implement documentation and accountability requirements. Goal: Test H7–H8. Methods: compliance audits; interviews with compliance/legal teams; document analysis; enforcement case coding. Outcomes: documentation completeness; incident response time; accountability clarity; remediation quality.
- Computational social science methods model artefact diffusion and amplification across platforms. Goal: Test H2–H3, H9. Methods: network diffusion models; platform data partnerships; synthetic artefact trace analysis; event studies. Outcomes: virality; cross-platform propagation; economic impacts (fraud losses, moderation costs); trust sentiment time series.
- Policy evaluation designs apply macro-level causal inference, including difference-in-differences and synthetic control methods, to assess the impact of regulatory interventions over time. Goal: Test H10 and policy effectiveness.
Methods: difference-in-differences around enforcement dates; synthetic control across jurisdictions; interrupted time series.
Outcomes: incident rates, reporting frequency, fraud/misinformation metrics, compliance costs, innovation indicators.
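As a minimal illustration of the difference-in-differences logic named above, the following sketch recovers a known policy effect from simulated jurisdiction-level data. All numbers are assumptions for illustration (a common time trend plus a stipulated treatment effect), not empirical findings.

```python
import random

def did_estimate(data: list) -> float:
    """Difference-in-differences on unit-level outcomes:
    (treated_post - treated_pre) - (control_post - control_pre)."""
    def mean(group: str, period: str) -> float:
        vals = [y for g, t, y in data if g == group and t == period]
        return sum(vals) / len(vals)
    return (mean("treated", "post") - mean("treated", "pre")) - \
           (mean("control", "post") - mean("control", "pre"))

# Simulated incident rates around an enforcement date. Assumption for
# illustration: disclosure enforcement cuts incidents by 5 points.
rng = random.Random(0)
data = []
for g in ("treated", "control"):
    for t in ("pre", "post"):
        base = 50 + (10 if t == "post" else 0)              # common time trend
        effect = -5 if (g == "treated" and t == "post") else 0
        data += [(g, t, base + effect + rng.gauss(0, 1)) for _ in range(100)]

print(round(did_estimate(data), 1))  # recovers roughly the -5 policy effect
```

The design differences out the shared time trend, so the estimate isolates the treatment effect; in practice a regression implementation with covariates and clustered standard errors would replace this bare comparison of means.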
Platform governance mechanisms operate across all analytical levels in this research design, serving respectively as experimental treatments (micro), organizational accountability systems (meso), structural moderators of artefact diffusion (network), and policy interventions subject to causal evaluation (macro). For example, leveraging platform governance mechanisms such as YouTube's disclosure policies enables large-scale natural experiments comparing user responses across environments with and without enforced disclosure.
6. Analytical Assessment: The EU AI Act
When assessed through SAGT, the EU AI Act appears as a sophisticated but predominantly model-centric regulatory regime. The Act devotes extensive attention to agency and ecosystem governance through provider obligations, conformity assessment, and systemic-risk controls for general-purpose AI, thereby strengthening accountability for high-risk decision-making systems.
The Act does contain transparency obligations relevant to synthetic artefacts such as duties to inform users when content is AI-generated or significantly manipulated, and specific provisions for deepfake disclosure in many contexts. However, these rules typically treat artefacts as by-products of systems, focusing on user notification rather than on artefact status as evidence, records, or identities.
From a SAGT perspective, synthetic artefacts and artificial humans are addressed mainly through transparency and manipulation provisions, not as primary regulatory categories with dedicated governance infrastructures. While the Act meaningfully advances transparency and systemic-risk governance, its treatment of synthetic artefacts and artificial humans as independent regulatory categories, with dedicated provenance, relational safeguards, and lifecycle accountability mechanisms, remains open to further refinement. Ecosystem-level oversight of synthetic reality, such as monitoring synthetic artefact circulation across platforms or requiring synthetic media operations within large intermediaries and enterprises, remains underdeveloped. This supports the SAGT claim that current law under-regulates epistemic and relational harms.
The EU AI Act represents the most advanced system-centric AI regulation to date. However, assessed through the SAGT lens, the Act governs synthetic artefacts primarily indirectly, through transparency obligations. Artificial humans are not treated as a distinct governance category, and artefact-level provenance enforcement remains underdeveloped.
The European Union Artificial Intelligence Act represents the most extensive binding regulatory regime for AI to date. As shown in Appendix A, the Act devotes substantial regulatory effort to agency- and ecosystem-level governance through provider obligations, conformity assessments, enforcement institutions, and systemic risk controls for general-purpose AI models. However, synthetic artefacts and artificial humans are not explicitly governed as independent objects; instead, artefact transparency is addressed mainly through disclosure provisions such as Article 50.
This analytical assessment confirms the SAGT claim that contemporary AI regulation remains anchored in model-centric risk categories, addressing artefact and interaction risks only partially. For example, transparency duties are necessary but may not sufficiently mitigate epistemic harms without integrated provenance infrastructure and interactive consent safeguards.
7. Discussion and Policy Implications
The integration of platform governance, such as YouTube's synthetic-content disclosure rules, into the AI policy landscape illustrates a broader normative shift, with platforms becoming de facto regulators of synthetic artefacts. While platforms operate under commercial incentives, their policies often anticipate or complement public sector regulatory goals, especially around disclosure and trust. This dynamic has important implications. First, platform-level governance can provide empirical evidence about what works in practice, helping policymakers calibrate statutory obligations. Second, the lack of harmonization between platform policies and public law creates fragmented governance regimes, potentially weakening systemic accountability. Harmonizing platform obligations with statutory transparency duties, especially around synthetic media and artificial humans, would strengthen the trust infrastructure across ecosystems.
The analysis also highlights the need for governance mechanisms addressing relational harms and systemic diffusion, not just model risk. This requires investing in public trust infrastructure such as authentication standards, cross-platform reporting protocols, and institutional reporting channels for artefact-level incidents.
7.1 Policy Design: Operationalising SAGT in Law
Operationalising Synthetic Artefact Governance Theory (SAGT) requires complementing system-centric AI regulation with artefact-centric and interaction-aware governance instruments. This does not imply abandoning existing risk-based AI safety regimes but rather extending them to address the governance of synthetic reality: the artefacts, identities, and social interactions generated by AI systems. Core instruments include mandatory provenance standards for synthetic outputs, rights-based governance of digital identity and artificial human presence, cryptographically enforced accountability mechanisms, and institutional verification infrastructures embedded within regulatory compliance workflows.
The European Union Artificial Intelligence Act provides a useful benchmark for assessing this shift. As illustrated in Appendix B, the Act concentrates regulatory effort at the agency and ecosystem layers, through provider and deployer obligations, conformity assessments, enforcement authorities, and systemic risk controls for general-purpose AI. Transparency provisions, particularly those addressing user interaction with AI systems and disclosure of synthetic content, partially engage the artefact and interaction layers. However, synthetic artefacts are not treated as independent governance objects, and artificial humans are not explicitly regulated as a distinct category. This confirms the central theoretical claim of SAGT: contemporary AI regulation, while comprehensive, remains anchored in a model-centric paradigm.
A SAGT-aligned policy architecture would therefore introduce legal recognition of synthetic artefacts as regulatory objects, particularly where such artefacts function as documents, evidence, identities, or institutional representations. This entails mandatory provenance and authenticity markers for high-impact synthetic outputs, verifiable audit trails linking artefacts to their generating systems, and evidentiary standards governing the admissibility and reuse of AI-generated materials. Such measures would reduce verification costs and strengthen institutional trust without constraining generative innovation.
Beyond artefacts themselves, governance must address human–AI interaction as a site of risk. Artificial humans and socially persuasive agents require explicit safeguards governing disclosure, consent, and relational integrity, particularly in contexts involving emotional influence or vulnerable populations. These measures move beyond generic transparency to regulate relational manipulation, recognizing that harm can arise even when content is factually accurate but socially deceptive.
SAGT further requires artefact-linked accountability infrastructure. Responsibility should follow synthetic artefacts across their lifecycle (creation, deployment, reuse, and amplification) through shared responsibility models involving providers, deployers, and distributors. Artefact-based liability triggers, supported by logging and documentation duties, directly address the diffusion of responsibility that characterizes current AI governance regimes.
Finally, operationalising SAGT necessitates ecosystem-level oversight. Regulators must monitor the aggregate effects of synthetic artefact circulation, including cross-platform diffusion, amplification dynamics, and systemic erosion of epistemic trust. Public trust infrastructures (registries, verification services, and standardized authentication protocols), combined with reporting obligations for large-scale deployment, would enable adaptive intervention where artefact-driven harms become systemic.
Taken together, these policy design implications underscore a fundamental shift in AI governance: from regulating what AI systems do to governing what AI produces and how society interacts with it. Such a shift is essential if AI law is to remain effective in an era where synthetic artefacts and artificial humans increasingly shape social, economic, and institutional life.
Table 4: Mapping Policy Design Levers to Synthetic Artefact Governance Theory Layers
| SAGT layer | Governance focus | Policy lever | Regulatory purpose |
| Artefact | Authenticity, provenance, evidentiary integrity | Mandatory provenance markers; authenticity labels; artefact audit trails | Establish trust in AI-generated artefacts used as documents, evidence, or representations |
| Interaction | Disclosure, consent, relational transparency | Disclosure of artificial identity; consent standards for prolonged or affective interaction; prohibitions on undisclosed relational manipulation | Protect users from deceptive or manipulative human–AI interaction |
| Agency | Accountability, liability allocation | Artefact-linked liability triggers; shared responsibility across providers, deployers, and distributors; logging and documentation duties | Prevent responsibility diffusion and ensure traceable accountability |
| Ecosystem | Systemic risk, amplification, institutional trust | Public trust infrastructure (registries, verification services); reporting obligations for large-scale artefact deployment; adaptive regulatory powers | Monitor and mitigate systemic trust erosion and cross-platform artefact amplification |
7.2 Emerging Artefact-Level Legislative Responses
Recent state legislative initiatives in the United States highlight the early emergence of governance approaches explicitly treating synthetic media as an artefact-centric harm, distinct from model- or system-focused regulation. In January 2026, the New Mexico Attorney General announced proposed legislation to protect residents from deceptive synthetic media generated using artificial intelligence. The draft law would require digital markers on AI-generated images, audio, and video to enable provenance tracking, grant the state Attorney General enforcement authority over malicious dissemination, and impose penalties for harmful synthetic content distribution (New Mexico Department of Justice, 2026). By directly regulating the production, disclosure, and accountability of synthetic media artefacts, this initiative operates substantively at the artefact and interaction layers of governance rather than solely at the model level.
Parallel developments in Mexico illustrate a regional trend toward formalising AI governance in ways that extend beyond traditional system-centric regulation. While Mexico does not yet have a comprehensive AI statute, emerging regulatory proposals, including frameworks for ethical, sovereign, and inclusive AI development, envisage national authorities responsible for system registration, risk oversight, transparency obligations, and enforcement action against unsafe practices (Nemko Digital, 2025). Such proposals signal an integration of artefact, oversight, and systemic governance considerations, even in the absence of a fully enacted law.
These policy developments resonate with civil society analyses emphasising the complexity and social impact of synthetic media and deepfakes. The Center for News, Technology & Innovation (CNTI) frames deepfakes as a form of synthetic media that undermines journalists, fact-based news, and public trust, noting that most countries still lack laws expressly targeting this class of content (CNTI, 2025). Detection technologies and provenance tools are evolving but cannot alone address all harms. Clear, context-sensitive policy responses are needed to balance freedom of expression, safety, and media independence.
Together, these signals reinforce that, while formal AI law continues to prioritise models and deployment contexts, real-world governance responses are already converging on artefact-level harms. In the absence of a unifying theoretical framework, such interventions remain fragmented and incremental. SAGT provides the conceptual structure needed to integrate emerging artefact-oriented initiatives into a coherent, multi-layered governance architecture capable of addressing synthetic reality at institutional scale.
8. Conclusion
AI governance is entering a new phase in which the central challenge is no longer merely regulating intelligent systems but governing synthetic reality itself, so as to preserve social trust, accountability, and meaning in environments increasingly shaped by AI-generated artefacts and artificial human presence. This paper argues that, while contemporary AI regulation, exemplified by the European Union Artificial Intelligence Act, represents a significant milestone in the maturity of system-centric, risk-based governance, it remains structurally ill-equipped to address harms arising from the production, circulation, and amplification of synthetic artefacts and artificial social actors. By integrating recent advances in regulatory design, cryptographic provenance, digital identity rights, and platform governance, Synthetic Artefact Governance Theory (SAGT) provides both a conceptual lens and a practical foundation for addressing these gaps. SAGT reframes governance away from models alone toward artefacts, interactions, accountability chains, and ecosystem dynamics, highlighting the need for next-generation AI policy: one that explicitly governs synthetic artefacts and artificial human interactions while integrating public law with the platform-level mechanisms, such as disclosure, labelling, and provenance controls, that already operationalize trust in practice.
This governance shift also necessitates a redefinition of what is meant by human-centric AI. Rather than focusing narrowly on keeping humans in the loop of automated decision-making, future governance must address how humans coexist with artificial agents psychologically, socially, and economically. Safeguarding human autonomy, dignity, and epistemic trust in environments saturated with synthetic actors requires recognising artificial humans and synthetic artefacts as first-order institutional objects of governance. Without such a shift, AI policy risks remaining misaligned with the realities of synthetic media, artificial social presence, and trust formation in digital societies. Governing synthetic reality, rather than intelligent systems alone, therefore represents the critical frontier for AI governance in the coming decade.
References
- Aarthi, S., Ravikumar, R. N., & Pardaev, J. (2025). Synergizing multimodal generative AI and blockchain for the future of digital media. In Advances in computational intelligence and robotics. https://doi.org/10.4018/979-8-3373-1504-1.ch001
- Atta, H., Baig, M., Mehmood, Y., et al. (2025). DIRF: A framework for digital identity protection and clone governance in agentic AI systems. arXiv. https://doi.org/10.48550/arxiv.2508.01997
- Calzada, I., Németh, G., & Al-Radhi, M. S. (2025). Trustworthy AI for whom? GenAI detection techniques of trust through decentralized Web3 ecosystems. Preprints. https://doi.org/10.20944/preprints202501.2018.v2
- Center for News, Technology & Innovation. (2025). Synthetic media and deepfakes: Issue primer. https://cnti.org/issue-primers/synthetic-media-deepfakes/
- Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war. Foreign Affairs, 98(1), 147–155.
- European Parliament and Council. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
- Fabuyi, J. A., Olaniyi, O. O., Olateju, O. O., et al. (2024). Deepfake regulations and their impact on content creation in the entertainment industry. Archives of Current Research International, 24(12). https://doi.org/10.9734/acri/2024/v24i12997
- Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—An ethical framework for a good AI society. Minds and Machines, 28(4), 689–707.
- Future of Life Institute. (2026). AI Act Explorer. Retrieved January 12, 2026, from https://artificialintelligenceact.eu/ai-act-explorer/
- Google Support. (2025). Disclosing use of altered or synthetic content. Google Help Center. https://support.google.com/youtube/answer/14328491
- Malhotra, A., et al. (2024). Human–AI interaction and social agency. Journal of Management. Advance online publication.
- Nemko Digital. (2025). AI regulation in Mexico: Legal framework & compliance insights. https://digital.nemko.com/regulations/mexico-ai-regulation
- New Mexico Department of Justice. (2026, January 15). Attorney General Raúl Torrez announces proposed legislation to protect New Mexicans from deceptive synthetic media generated using artificial intelligence. https://nmdoj.gov/press-release/attorney-general-raul-torrez-announces-proposed-legislation-to-protect-new-mexicans-from-deceptive-synthetic-media-generated-using-artificial-intelligence/
- Organisation for Economic Co-operation and Development (OECD). (2019). OECD principles on artificial intelligence. OECD Publishing.
- Panagopoulos, A. M., & Davalas, A. (2025). Deepfakes, the EU AI Act, and newsroom implementation. International Journal of Social Science and Economic Research, 10(8). https://doi.org/10.46609/ijsser.2025.v10i08.018
- Park, J. S., et al. (2023). Generative agents: Interactive simulacra of human behavior. In Proceedings of the CHI Conference on Human Factors in Computing Systems.
- Raza, S., Qureshi, R., Zahid, A., et al. (2025). Who is responsible? Responsible generative AI for a sustainable future. TechRxiv. https://doi.org/10.36227/techrxiv.173834932.29831105
- Rini, R. (2020). Deepfakes and the epistemic backstop. Philosophy & Technology, 33(3), 1–22.
- Szarmach, J. (2025). Generative AI governance framework (v1.0). Artificial Intelligence Governance Blog. https://www.aigl.blog/generative-ai-governance-framework
- Towne, B. P. (2024). Reconceptualizing authorship and accountability in the age of AI. Open Science Framework. https://doi.org/10.31219/osf.io/teymn
- UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO.
- Vetrivel, S. C., Vidhyapriya, P., Arun, V. P., et al. (2025). Ethical and legal considerations in AI-generated media. In Advances in computational intelligence and robotics. https://doi.org/10.4018/979-8-3373-6481-0.ch001