EU AI Act (2024) - Machine-Readable Summary

This page contains a condensed, machine-readable summary of the EU AI Act (Regulation (EU) 2024/1689) in a simple JSON format for analysis and processing. Recitals are paraphrased and only selected chapters, articles and annexes are included. Illustrative Python snippets for working with the data follow the JSON.


{
  "regulation_metadata": {
    "official_title": "REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).",
    "publication_date": "12.7.2024.",
    "legal_basis": [
      "Treaty on the Functioning of the European Union, in particular Articles 16 and 114 thereof.",
      "Proposal from the European Commission.",
      "Opinion of the European Economic and Social Committee.",
      "Opinion of the European Central Bank.",
      "Opinion of the Committee of the Regions."
    ],
    "procedure": "Acting in accordance with the ordinary legislative procedure."
  },
  "recitals": {
    "recital_1_purpose": "The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.",
    "recital_2_values": "This Regulation should be applied in accordance with the values of the Union enshrined as in the Charter, facilitating the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI.",
    "recital_3_internal_market": "AI systems can be easily deployed in a large variety of sectors of the economy and many parts of society, including across borders, and can easily circulate throughout the Union. Diverging national rules may lead to the fragmentation of the internal market and may decrease legal certainty for operators that develop, import or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and the uptake of AI systems and related products and services within the internal market should be prevented by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market on the basis of Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for remote biometric identification for the purpose of law enforcement, of the use of AI systems for risk assessments of natural persons for the purpose of law enforcement and of the use of AI systems of biometric categorisation for the purpose of law enforcement, it is appropriate to base this Regulation, in so far as those specific rules are concerned, on Article 16 TFEU.",
    "recital_4_benefits": "AI is a fast evolving family of technologies that contributes to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of AI can provide key competitive advantages to undertakings and support socially and environmentally beneficial outcomes.",
    "recital_5_risks": "Depending on the circumstances regarding its specific application, use, and level of technological development, AI may generate risks and cause harm to public interests and fundamental rights that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm.",
    "recital_6_human_centric": "Given the major impact that AI can have on society and the need to build trust, it is vital for AI and its regulatory framework to be developed in accordance with Union values as enshrined in Article 2 of the Treaty on European Union (TEU), the fundamental rights and freedoms enshrined in the Treaties and, pursuant to Article 6 TEU, the Charter. As a prerequisite, AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.",
    "recital_7_common_rules": "In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common rules for high-risk AI systems should be established. Those rules should be consistent with the Charter, non-discriminatory and in line with the Union’s international trade commitments.",
    "recital_8_innovation_and_smes": "A Union legal framework laying down harmonised rules on AI is needed to foster the development, use and uptake of AI in the internal market that meets a high level of protection of public interests. By laying down these rules as well as measures in support of innovation with a particular focus on small and medium enterprises (SMEs), including startups, this Regulation supports the objective of promoting the European human-centric approach to AI.",
    "recital_9_complementarity": "Harmonised rules applicable to the placing on the market, the putting into service and the use of high-risk AI systems should be laid down consistently with the New Legislative Framework. The harmonised rules laid down in this Regulation should apply across sectors and should be without prejudice to existing Union law, in particular on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety, to which this Regulation is complementary. This Regulation should not affect Union law on social policy and national labour law concerning employment and working conditions.",
    "recital_10_personal_data_protection": "The fundamental right to the protection of personal data is safeguarded by Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680. This Regulation does not seek to affect the application of existing Union law governing the processing of personal data. Data subjects continue to enjoy all the rights and guarantees awarded to them by such Union law, including the rights related to solely automated individual decision-making, including profiling.",
    "recital_11_intermediary_liability": "This Regulation should be without prejudice to the provisions regarding the liability of providers of intermediary services as set out in Regulation (EU) 2022/2065.",
    "recital_12_ai_system_definition": "The notion of ‘AI system’ should be clearly defined and closely aligned with the work of international organisations to ensure legal certainty. The definition should be based on key characteristics that distinguish it from simpler traditional software systems and should not cover systems based on rules defined solely by natural persons to automatically execute operations. A key characteristic is the capability to infer, referring to the process of obtaining outputs like predictions, content, recommendations, or decisions. The techniques include machine learning and logic- and knowledge-based approaches. AI systems are designed to operate with varying levels of autonomy.",
    "recital_13_deployer_definition": "The notion of ‘deployer’ should be interpreted as any natural or legal person, including a public authority, agency or other body, using an AI system under its authority, except where used in a personal non-professional activity.",
    "recital_14_biometric_data_definition": "The notion of ‘biometric data’ should be interpreted in light of existing EU data protection regulations. Biometric data can allow for authentication, identification, categorisation, and recognition of emotions of natural persons.",
    "recital_15_biometric_identification_definition": "Biometric identification is defined as the automated recognition of human features for establishing an individual's identity by comparing data to a reference database, irrespective of consent. This excludes AI systems used for biometric verification/authentication for the sole purpose of confirming identity for access.",
    "recital_16_biometric_categorisation_definition": "Biometric categorisation is defined as assigning natural persons to specific categories (e.g., sex, age, religion) based on biometric data. This does not include purely ancillary features linked to other commercial services, such as facial filters on marketplaces or social networks.",
    "recital_17_remote_biometric_identification": "Remote biometric identification systems identify persons without their active involvement, typically at a distance. 'Real-time' systems identify instantaneously or with minor delay using 'live' material, while 'post' systems involve a significant delay and pre-captured material.",
    "recital_18_emotion_recognition_definition": "Emotion recognition systems infer emotions or intentions based on biometric data. This does not include detecting physical states like pain or fatigue for safety (e.g., for pilots) or mere detection of expressions unless used for inferring emotions.",
    "recital_19_publicly_accessible_space": "A 'publicly accessible space' refers to any physical space accessible to an undetermined number of persons, regardless of ownership or activity. Online spaces are not covered.",
    "recital_20_ai_literacy": "AI literacy should equip providers, deployers, and affected persons with necessary notions to make informed decisions. The European Artificial Intelligence Board should support the Commission in promoting AI literacy tools.",
    "recital_21_geographic_scope": "Rules should apply to providers irrespective of whether they are established in the Union or a third country, and to deployers established within the Union.",
    "recital_22_extraterritorial_effect": "This Regulation applies to providers and deployers in third countries if the output produced by their systems is intended to be used in the Union. It does not apply to third-country public authorities acting under international agreements for law enforcement/judicial cooperation with adequate safeguards.",
    "recital_23_union_institutions": "This Regulation applies to Union institutions, bodies, offices, and agencies acting as providers or deployers.",
    "recital_24_military_and_national_security_exclusion": "AI systems used exclusively for military, defence, or national security purposes are excluded from the scope, regardless of the entity type. If a system developed for these purposes is used for other purposes (e.g., civilian or humanitarian), it falls within the scope.",
    "recital_25_research_and_development_exclusion": "AI systems specifically developed and put into service for the sole purpose of scientific research and development are excluded. This exclusion also applies to research, testing, and development activities prior to being placed on the market or put into service.",
    "recital_26_risk_based_approach": "The Regulation follows a risk-based approach, tailoring rules to the intensity and scope of risks. It prohibits unacceptable practices and lays down requirements for high-risk systems.",
    "recital_27_trustworthy_ai_principles": "The Regulation takes into account the seven non-binding ethical principles for trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.",
    "recital_28_manipulative_practices": "AI-enabled manipulative techniques that subvert autonomy and impair decision-making are prohibited.",
    "recital_29_vulnerability_exploitation": "Prohibitions apply to AI systems that exploit vulnerabilities of persons due to age, disability, or social/economic situations.",
    "recital_30_biometric_categorisation_prohibition": "Biometric categorisation systems inferring sensitive traits like political opinions, race, or sexual orientation are prohibited.",
    "recital_31_social_scoring_prohibition": "AI systems providing social scoring that leads to detrimental or unjustified treatment are prohibited.",
    "recital_32_real_time_rbi_risks": "The use of 'real-time' remote biometric identification (RBI) in publicly accessible spaces for law enforcement is particularly intrusive and carries risks of constant surveillance and discrimination.",
    "recital_33_rbi_exceptions": "RBI for law enforcement is prohibited except in narrowly defined situations like searching for victims, preventing threats to life/terrorist attacks, or localising suspects for serious crimes.",
    "recital_34_rbi_safeguards": "Use of real-time RBI requires taking into account the seriousness of the situation, consequences for rights, and must be limited in time and geographic scope.",
    "recital_35_rbi_authorisation": "Each use of real-time RBI must be subject to prior express and specific authorisation by a judicial or independent administrative authority, except in cases of extreme urgency.",
    "recital_36_rbi_notification": "Market surveillance and national data protection authorities must be notified of each use of real-time RBI.",
    "recital_37_national_rbi_rules": "Member States must expressly provide for the possibility to authorise RBI use in national law and notify the Commission.",
    "recital_38_lex_specialis": "The RBI rules in this Regulation act as lex specialis to the Data Protection Directive for Law Enforcement (2016/680).",
    "recital_39_biometric_processing_non_law_enforcement": "Biometric processing for purposes other than law enforcement remains subject to GDPR and Regulation (EU) 2018/1725.",
    "recital_40_ireland_position": "Ireland is not bound by certain specific rules related to personal data processing for police and judicial cooperation in criminal matters under Article 16 TFEU where it is not bound by the underlying rules.",
    "recital_41_denmark_position": "Denmark is not bound by specific rules related to personal data processing for activities falling within police and judicial cooperation in criminal matters.",
    "recital_42_predictive_policing_prohibition": "AI-predicted behavior for assessing the likelihood of offending based solely on profiling or personality traits is prohibited, as individuals should be judged on actual behavior.",
    "recital_43_untargeted_scraping_prohibition": "AI systems creating facial recognition databases through untargeted scraping of images from the internet or CCTV are prohibited.",
    "recital_44_workplace_education_emotion_recognition": "AI systems used to detect emotional states in workplace and education situations are prohibited, except for medical or safety reasons.",
    "recital_45_other_prohibited_practices": "Practices prohibited by other Union laws like data protection, non-discrimination, and consumer protection are not affected.",
    "recital_46_high_risk_compliance": "High-risk AI systems must comply with mandatory requirements to ensure they do not pose unacceptable risks to public interests.",
    "recital_47_product_safety": "Safety risks generated by products with digital components, including AI, must be prevented and mitigated.",
    "recital_48_fundamental_rights_impact": "Classification as high-risk considers impact on fundamental rights like human dignity, privacy, and non-discrimination.",
    "recital_49_sectoral_amendments": "Certain existing Union acts are amended to ensure Commission takes high-risk AI requirements into account when adopting delegated acts.",
    "recital_50_product_conformity": "AI systems that are safety components or products themselves under certain harmonisation legislation are high-risk if they undergo third-party conformity assessment.",
    "recital_51_risk_criteria_consistency": "Classification as high-risk under this Regulation does not automatically mean a product is high-risk under other sectoral laws.",
    "recital_52_stand_alone_high_risk": "Stand-alone AI systems are high-risk if they pose a high risk of harm to health/safety or fundamental rights and are used in pre-defined areas.",
    "recital_53_non_high_risk_derogations": "AI systems in pre-defined high-risk areas might not be high-risk if they do not materially influence decision-making. This includes narrow procedural tasks, improving human activity results, detecting patterns for review, or preparatory tasks.",
    "recital_54_biometric_high_risk": "Several critical use cases of biometric systems are classified as high-risk, such as remote biometric identification, sensitive biometric categorisation, and non-prohibited emotion recognition.",
    "recital_55_critical_infrastructure": "AI systems used as safety components in management of critical digital infrastructure, road traffic, and supply of water, gas, heating, and electricity are high-risk.",
    "recital_56_education_high_risk": "AI systems for determining access, assignment, evaluation, or monitoring behavior during tests in education are high-risk.",
    "recital_57_employment_high_risk": "AI systems for recruitment, promotion, termination, task allocation, and monitoring in employment are high-risk.",
    "recital_58_essential_services_high_risk": "AI systems used for public assistance benefits, creditworthiness assessment (except fraud detection), and emergency response/triage are high-risk.",
    "recital_59_law_enforcement_high_risk": "High-risk AI in law enforcement includes risk assessments for victims, polygraphs, evidence reliability evaluation, and profiling for crime detection/prosecution.",
    "recital_60_migration_asylum_high_risk": "High-risk systems in migration/asylum include polygraphs, risk assessments for entry, and examination of applications/evidence reliability.",
    "recital_61_administration_of_justice": "AI systems assisting judicial authorities in research and interpretation of facts/law are high-risk.",
    "recital_62_democratic_processes": "AI systems intended to influence the outcome of an election or referendum or voting behavior are high-risk.",
    "recital_63_lawful_use_under_other_law": "Classification as high-risk does not imply the use is otherwise lawful under other acts.",
    "recital_64_mandatory_requirements": "High-risk AI systems must meet requirements for risk management, data quality, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.",
    "recital_65_risk_management_lifecycle": "The risk management system must be a continuous iterative process throughout the entire lifecycle.",
    "recital_66_requirement_summary": "Requirements are necessary to mitigate risks to health, safety, and fundamental rights.",
    "recital_67_data_governance": "High-quality data sets are vital for high-risk AI performance and to prevent discrimination. Data should be relevant, representative, and to the best extent possible free of errors.",
    "recital_68_common_data_spaces": "Common European data spaces will be instrumental in providing access to high-quality data.",
    "recital_69_privacy_principles": "Data minimisation and protection by design apply throughout the lifecycle.",
    "recital_70_bias_detection_special_categories": "Providers may exceptionally process special categories of personal data to ensure bias detection and correction in high-risk systems.",
    "recital_71_traceability_documentation": "Traceability and compliance verification require record-keeping and availability of technical documentation.",
    "recital_72_transparency_obligations": "High-risk AI systems must be designed for transparency, allowing deployers to understand functionality and limitations. Instructions for use must accompany the system.",
    "recital_73_human_oversight_measures": "Systems must be designed for effective human oversight, including in-built constraints and mechanisms for intervention. Enhanced oversight (two natural persons) is required for certain biometric systems.",
    "recital_74_performance_metrics": "High-risk AI must meet appropriate levels of accuracy, robustness, and cybersecurity.",
    "recital_75_technical_robustness": "Systems should be resilient against errors and unexpected situations, using mechanisms like fail-safe plans.",
    "recital_76_cybersecurity_protection": "Providers must take measures against cyberattacks leveraging AI-specific assets like training data (poisoning) or models (adversarial attacks).",
    "recital_77_horizontal_cybersecurity": "High-risk AI complying with horizontal cybersecurity requirements for digital products may demonstrate compliance with this Regulation's cybersecurity requirements.",
    "recital_78_cybersecurity_conformity": "Conformity assessments for cybersecurity may build on knowledge from ENISA.",
    "recital_79_provider_responsibility": "A specific natural or legal person must take responsibility as the provider for placing high-risk AI on the market.",
    "recital_80_accessibility": "Providers must ensure compliance with accessibility requirements for persons with disabilities.",
    "recital_81_quality_management_system": "Providers must establish a quality management system and post-market monitoring.",
    "recital_82_authorised_representative": "Providers established in third countries must appoint an authorised representative in the Union.",
    "recital_83_value_chain_obligations": "Specific obligations apply to relevant operators along the value chain, such as importers and distributors.",
    "recital_84_third_party_as_provider": "Parties making substantial modifications or changing the intended purpose of an AI system to make it high-risk are considered providers.",
    "recital_85_gpai_as_high_risk": "General-purpose AI (GPAI) may be used as or in high-risk AI systems.",
    "recital_86_cooperation_with_former_providers": "If an initial provider is no longer considered the provider, they must still cooperate and provide information for compliance.",
    "recital_87_product_manufacturer_responsibility": "Manufacturers of products with embedded high-risk AI components must ensure compliance.",
    "recital_88_supplier_cooperation": "Suppliers of AI components or processes must provide necessary information and technical access to the high-risk AI provider.",
    "recital_89_open_source_exception": "Free and open-source AI components (not GPAI models) are generally excluded from value-chain requirements.",
    "recital_90_model_contractual_terms": "The Commission may develop voluntary model contractual terms for cooperation along the value chain.",
    "recital_91_deployer_responsibilities": "Deployers must use systems according to instructions and monitor functioning.",
    "recital_92_worker_information": "Employers must inform workers or representatives about putting high-risk AI into service at the workplace.",
    "recital_93_deployer_context_knowledge": "Deployers are best placed to identify risks in specific contexts and must inform natural persons if they are subject to high-risk AI.",
    "recital_94_biometric_data_law_enforcement_compliance": "Biometric processing in law enforcement must comply with purpose limitation, accuracy, and storage limitation principles.",
    "recital_95_post_remote_biometric_safeguards": "Post-remote biometric identification must be targeted, strictly necessary, and proportionate.",
    "recital_96_fundamental_rights_impact_assessment": "Public bodies and certain private entities (e.g., banking/insurance) must perform a fundamental rights impact assessment before deploying high-risk AI.",
    "recital_97_gpai_model_definition": "General-purpose AI models are trained on large amounts of data and can perform a wide range of distinct tasks. They are components of AI systems but not systems themselves.",
    "recital_98_gpai_parameter_threshold": "Models with at least a billion parameters and trained at scale are considered to display significant generality.",
    "recital_99_generative_ai_example": "Large generative AI models are typical examples of GPAI.",
    "recital_100_gpai_system_definition": "A GPAI system results when a GPAI model is integrated into a system, enabling a variety of purposes.",
    "recital_101_gpai_transparency": "GPAI providers must provide technical documentation to the AI Office and information to downstream providers.",
    "recital_102_open_source_gpai": "GPAI models released under free and open-source licenses should ensure high levels of transparency regarding parameters and architecture.",
    "recital_103_gpai_monetisation": "Exceptions for open-source do not apply if the model is monetized or used with personal data for reasons other than security.",
    "recital_104_open_source_transparency_exception": "Open-source GPAI models have exceptions for transparency requirements unless they present a systemic risk. They must still provide summaries and comply with copyright law.",
    "recital_105_copyright_compliance": "GPAI providers must respect rightsholders' opt-outs for text and data mining under Directive (EU) 2019/790.",
    "recital_106_level_playing_field_copyright": "Copyright policy requirements apply regardless of the jurisdiction where training occurs.",
    "recital_107_training_content_summary": "Providers must draw up a detailed summary of content used for training.",
    "recital_108_ai_office_monitoring": "The AI Office monitors fulfillment of copyright and summary obligations without work-by-work assessment.",
    "recital_109_gpai_proportionality": "Obligations are proportionate to provider size, with simplified compliance for SMEs and start-ups.",
    "recital_110_systemic_risk_definition": "Systemic risks include negative effects on public health, safety, democratic processes, and critical sectors.",
    "recital_111_systemic_risk_classification": "GPAI models are classified as having systemic risk based on high-impact capabilities or significant market impact. A threshold of 10^25 floating point operations for training leads to a presumption of systemic risk.",
    "recital_112_notification_procedure": "Providers must notify the AI Office within two weeks of meeting systemic risk criteria.",
    "recital_113_commission_designation": "The Commission can designate models as having systemic risk ex officio or based on alerts.",
    "recital_114_gpai_systemic_risk_obligations": "Providers of systemic risk GPAI must perform model evaluations, adversarial testing, and mitigate risks throughout the lifecycle.",
    "recital_115_incident_reporting_gpai": "Providers must report serious incidents and corrective measures and ensure cybersecurity for the model.",
    "recital_116_codes_of_practice_gpai": "The AI Office facilitates the drawing up of codes of practice for GPAI obligations.",
    "recital_117_reliance_on_codes": "Providers can rely on codes of practice to demonstrate compliance until harmonised standards are available.",
    "recital_118_presumption_of_fulfillment": "If GPAI models are embedded in designated very large online platforms (VLOPs), obligations under this Regulation are presumed fulfilled if the DSA framework covers the risks.",
    "recital_119_chatbot_as_intermediary": "AI systems like chatbots may be provided as intermediary services.",
    "recital_120_detection_of_synthetic_content": "Detection and disclosure of AI-generated content are relevant for the Digital Services Act.",
    "recital_121_standardisation_role": "Standardisation provides technical solutions for compliance. The Commission can establish common specifications if standards are delayed or inadequate.",
    "recital_122_presumption_from_data": "Presumption of compliance with data governance exists if high-risk AI is tested on data reflecting specific settings.",
    "recital_123_pre_market_conformity": "High-risk AI systems must undergo conformity assessment prior to being placed on the market.",
    "recital_124_minimising_duplication": "For products covered by other laws, AI compliance should be assessed as part of the existing conformity assessment.",
    "recital_125_third_party_assessment_limit": "Third-party assessment for high-risk AI is initially limited to biometric systems.",
    "recital_126_notified_body_requirements": "Notified bodies must meet requirements for independence, competence, and absence of conflicts.",
    "recital_127_mutual_recognition": "The Union should pursue mutual recognition of conformity assessment results with third countries.",
    "recital_128_substantial_modification": "Changes affecting compliance or intended purpose constitute a new AI system requiring new assessment. Learning-based adaptations are not substantial modifications if pre-determined.",
    "recital_129_ce_marking": "High-risk AI systems must bear the CE marking.",
    "recital_130_emergency_authorisation": "Market surveillance authorities can authorise systems that haven't undergone assessment in exceptional circumstances.",
    "recital_131_eu_database_registration": "Providers of high-risk AI and public authority deployers must register in an EU database. Law enforcement and migration systems are registered in a secure non-public section.",
    "recital_132_impersonation_transparency": "Transparency obligations apply to AI systems interacting with persons or generating content to prevent impersonation/deception.",
    "recital_133_synthetic_content_marking": "Providers of systems generating large quantities of synthetic content must embed technical solutions for marking and detection.",
    "recital_134_deep_fake_disclosure": "Deployers must disclose deep fakes unless part of artistic/satirical works (subject to safeguards). AI-generated text published for public interest must be disclosed unless human-reviewed.",
    "recital_135_detection_codes": "The Commission may encourage codes of practice for detection and labeling of AI content.",
    "recital_136_dsa_complementarity": "Labeling requirements complement the Digital Services Act.",
    "recital_137_compliance_not_legality": "Transparency compliance does not automatically mean the AI's use is otherwise lawful.",
    "recital_138_regulatory_sandboxes": "Member States must establish at least one AI regulatory sandbox to facilitate innovation under oversight.",
    "recital_139_sandbox_objectives": "Sandboxes aim to enhance legal certainty, facilitate regulatory learning, and accelerate market access for SMEs.",
    "recital_140_personal_data_in_sandboxes": "Regulation provides a legal basis for using personal data collected for other purposes in sandboxes for public interest AI development.",
    "recital_141_real_world_testing_regime": "Providers can benefit from a regime for testing systems in real world conditions outside sandboxes with sufficient guarantees.",
    "recital_142_beneficial_outcomes": "Member States should support AI research for socially and environmentally beneficial outcomes.",
    "recital_143_sme_support_measures": "Support includes priority sandbox access, awareness-raising, and standardized templates.",
    "recital_144_union_funding": "Union funding programs like Digital Europe should contribute to the Regulation's objectives.",
    "recital_145_ai_on_demand_platform": "Platforms and hubs established by the Commission should contribute to implementation.",
    "recital_146_microenterprise_simplified_qms": "Microenterprises can fulfill quality management system requirements in a simplified manner.",
    "recital_147_expert_panels": "The Commission facilitates access to testing facilities for bodies involved in medical device conformity.",
    "recital_148_governance_framework": "Effective enforcement requires a framework with the AI Office, a Board, a scientific panel, and an advisory forum.",
    "recital_149_ai_board_tasks": "The Board provides advisory tasks, issues opinions, and facilitates cooperation among market surveillance authorities.",
    "recital_150_advisory_forum_membership": "The forum includes industry, SMEs, start-ups, academia, and civil society.",
    "recital_151_scientific_panel_tasks": "Independent experts support AI Office monitoring of GPAI models.",
    "recital_152_union_testing_structures": "Testing support structures reinforce Member State capacities.",
    "recital_153_national_competent_authorities": "Each Member State designates a notifying authority and a market surveillance authority (single point of contact).",
    "recital_154_authority_independence": "Authorities must exercise powers independently and impartially.",
    "recital_155_post_market_monitoring_system": "Providers must have a system to collect experience from use and report serious incidents.",
    "recital_156_market_surveillance_powers": "The system established by Regulation (EU) 2019/1020 applies. The European Data Protection Supervisor is the authority for Union bodies.",
    "recital_157_safeguard_procedure": "Procedures ensure enforcement against systems presenting risks to health, safety, or fundamental rights.",
    "recital_158_financial_services_supervision": "Existing financial supervisory authorities act as market surveillance authorities for AI in the financial sector.",
    "recital_159_biometric_surveillance_powers": "Authorities supervising biometric high-risk AI in law enforcement and justice need effective investigative powers.",
    "recital_160_joint_investigations": "The AI Office provides coordination for joint investigations across Member States.",
    "recital_161_gpai_supervision_split": "The AI Office supervises AI systems based on GPAI models from the same provider; otherwise, national authorities remain responsible.",
    "recital_162_commission_gpai_competence": "Monitoring and enforcement for GPAI model providers is a competence of the Commission.",
    "recital_163_qualified_alerts_gpai": "The scientific panel provides qualified alerts to trigger AI Office follow-up/investigations.",
    "recital_164_gpai_enforcement_measures": "The AI Office can request documents, conduct evaluations, and request risk mitigation measures or market restrictions.",
    "recital_165_voluntary_codes_of_conduct": "Providers of non-high-risk systems are encouraged to voluntarily apply high-risk requirements via codes of conduct.",
    "recital_166_product_safety_net": "Regulation (EU) 2023/988 applies as a safety net for non-high-risk AI products.",
    "recital_167_confidentiality": "Parties must respect confidentiality of data and intellectual property.",
    "recital_168_penalties_and_fines": "Member States lay down rules for effective, proportionate, and dissuasive penalties.",
    "recital_169_gpai_fines": "The Commission can impose fines on GPAI model providers.",
    "recital_170_complaint_right": "Persons can lodge complaints with market surveillance authorities.",
    "recital_171_right_to_explanation": "Affected persons have a right to a clear explanation for decisions based on certain high-risk AI.",
    "recital_172_whistleblower_protection": "Directive (EU) 2019/1937 applies to reporting infringements.",
    "recital_173_delegated_powers": "The Commission can adopt delegated acts to amend lists and technical documentation provisions.",
    "recital_174_evaluation_review_cycles": "The Commission evaluates the Regulation periodically starting 2028/2029.",
    "recital_175_implementing_powers": "Conferred on the Commission according to Regulation (EU) No 182/2011.",
    "recital_176_subsidiarity_proportionality": "Objectives are better achieved at Union level.",
    "recital_177_transitional_period": "Applies to systems placed on the market before application only if subject to significant design/purpose changes.",
    "recital_178_voluntary_compliance": "Providers are encouraged to comply early.",
    "recital_179_application_dates": "Applies from 2 August 2026; prohibitions and general provisions from 2 February 2025; GPAI and governance rules from 2 August 2025.",
    "recital_180_consultation": "EDPS and EDPB were consulted and delivered a joint opinion."
  },
  "chapters": [
    {
      "chapter_number": "I",
      "chapter_title": "GENERAL PROVISIONS",
      "articles": [
        {
          "article_number": 1,
          "title": "Subject matter",
          "content": "The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, and fundamental rights. It lays down: harmonised rules for placing AI systems on the market, prohibitions of certain practices, requirements for high-risk AI, transparency rules, rules for GPAI models, and measures supporting innovation."
        },
        {
          "article_number": 2,
          "title": "Scope",
          "content": "Applies to providers placing AI systems or GPAI models on the Union market; deployers located in the Union; providers/deployers in third countries if output is used in the Union; importers/distributors; and affected persons in the Union. Does not apply to: military, defense, or national security purposes; international cooperation for law enforcement with adequate safeguards; scientific research and development; or purely personal non-professional activity. Does not apply to free and open-source AI unless high-risk or prohibited."
        },
        {
          "article_number": 3,
          "title": "Definitions",
          "definitions_found": [
            "(1) ‘AI system’ means a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs like predictions, content, recommendations, or decisions.",
            "(2) ‘risk’ means the combination of probability of harm and its severity.",
            "(3) ‘provider’ means a person/body that develops an AI system or GPAI model and places it on the market under its own name/trademark.",
            "(4) ‘deployer’ means a person/body using an AI system under its authority, except for personal non-professional use.",
            "(5) ‘authorised representative’ means a person in the Union with a written mandate from a third-country provider.",
            "(6) ‘importer’ means a person in the Union placing a system from a third country on the market.",
            "(7) ‘distributor’ means a person in the supply chain making a system available on the market.",
            "(8) ‘operator’ means any of the above entities.",
            "(9) ‘placing on the market’ means the first making available in the Union.",
            "(10) ‘making available on the market’ means the supply for distribution/use in a commercial activity.",
            "(11) ‘putting into service’ means supply for first use directly to the deployer or for own use in the Union.",
            "(12) ‘intended purpose’ means the use specified by the provider in instructions or documentation.",
            "(13) ‘reasonably foreseeable misuse’ means use not in accordance with intended purpose resulting from predictable human behavior.",
            "(14) ‘safety component’ means a component whose failure endangers health/safety.",
            "(15) ‘instructions for use’ means info provided to the deployer.",
            "(16) ‘recall’ means measures to achieve the return or disabling of a system.",
            "(17) ‘withdrawal’ means measures to prevent making available on the market.",
            "(18) ‘performance’ means ability to achieve intended purpose.",
            "(19) ‘notifying authority’ means the national authority for conformity assessment bodies.",
            "(20) ‘conformity assessment’ means the process of fulfilling requirements.",
            "(21) ‘conformity assessment body’ means a body performing testing/certification.",
            "(22) ‘notified body’ means a body notified in accordance with this Regulation.",
            "(23) ‘substantial modification’ means an unplanned change affecting compliance or intended purpose.",
            "(24) ‘CE marking’ indicates conformity with requirements.",
            "(25) ‘post-market monitoring system’ means activities to collect experience and identify corrective actions.",
            "(26) ‘market surveillance authority’ means the national authority carrying out Regulation (EU) 2019/1020 activities.",
            "(27) ‘harmonised standard’ as defined in Regulation (EU) No 1025/2012.",
            "(28) ‘common specification’ means technical specifications providing means to comply.",
            "(29) ‘training data’ means data used for fitting learnable parameters.",
            "(30) ‘validation data’ means data for evaluation and tuning non-learnable parameters.",
            "(31) ‘validation data set’ means a separate or split data set.",
            "(32) ‘testing data’ means data for independent evaluation before placing on market.",
            "(33) ‘input data’ means data on which the system produces output.",
            "(34) ‘biometric data’ as defined in GDPR.",
            "(35) ‘biometric identification’ means automated recognition for establishing identity.",
            "(36) ‘biometric verification’ means one-to-one verification.",
            "(37) ‘special categories of personal data’ as referred to in GDPR Article 9(1).",
            "(38) ‘sensitive operational data’ means data whose disclosure jeopardises criminal proceedings.",
            "(39) ‘emotion recognition system’ means an AI system for inferring emotions/intentions.",
            "(40) ‘biometric categorisation system’ means an AI system assigning persons to categories based on biometric data.",
            "(41) ‘remote biometric identification system’ (RBI) means identification at a distance without active involvement.",
            "(42) ‘real-time RBI’ means identification without significant delay.",
            "(43) ‘post-RBI’ means systems other than real-time.",
            "(44) ‘publicly accessible space’ means a place accessible to an undetermined number of natural persons.",
            "(45) ‘law enforcement authority’ means public authorities competent for crime prevention/prosecution.",
            "(46) ‘law enforcement’ means activities for crime prevention/prosecution.",
            "(47) ‘AI Office’ means the Commission's function.",
            "(48) ‘national competent authority’ means notifying or market surveillance authority.",
            "(49) ‘serious incident’ means a malfunction leading to death, health harm, infrastructure disruption, or fundamental rights infringement.",
            "(50) ‘personal data’ as defined in GDPR.",
            "(51) ‘non-personal data’ means other data.",
            "(52) ‘profiling’ as defined in GDPR.",
            "(53) ‘real-world testing plan’ means a document describing testing objectives and methodology.",
            "(54) ‘sandbox plan’ means an agreement for sandbox activities.",
            "(55) ‘AI regulatory sandbox’ means a controlled framework for innovative AI development.",
            "(56) ‘AI literacy’ means skills to make informed deployment and awareness of risks.",
            "(57) ‘testing in real-world conditions’ means temporary testing outside a lab.",
            "(58) ‘subject’ means a natural person participating in real-world testing.",
            "(59) ‘informed consent’ means voluntary expression of willingness to participate.",
            "(60) ‘deep fake’ means manipulated content that falsely appears authentic.",
            "(61) ‘widespread infringement’ means acts harming collective interests in multiple Member States.",
            "(62) ‘critical infrastructure’ as defined in Directive (EU) 2022/2557.",
            "(63) ‘general-purpose AI model’ means a model displaying significant generality capable of distinct tasks.",
            "(64) ‘high-impact capabilities’ means capabilities matching advanced models.",
            "(65) ‘systemic risk’ means risk propagated at scale across the value chain.",
            "(66) ‘general-purpose AI system’ (GPAI) means a system based on a GPAI model.",
            "(67) ‘floating-point operation’ means mathematical operations on real numbers.",
            "(68) ‘downstream provider’ means a provider integrating an AI model."
          ]
        },
        {
          "article_number": 4,
          "title": "AI literacy",
          "content": "Providers and deployers must ensure staff have a sufficient level of AI literacy, taking into account their knowledge and the context of use."
        }
      ]
    },
    {
      "chapter_number": "II",
      "chapter_title": "PROHIBITED AI PRACTICES",
      "articles": [
        {
          "article_number": 5,
          "title": "Prohibited AI practices",
          "prohibitions": [
            "(a) Subliminal or manipulative techniques that materially distort behavior and cause significant harm.",
            "(b) Exploiting vulnerabilities due to age, disability, or social/economic situation to cause significant harm.",
            "(c) Social scoring leading to detrimental/unjustified treatment in unrelated contexts.",
            "(d) Predictive policing/risk assessments based solely on profiling or personality traits.",
            "(e) Untargeted scraping of facial images from the internet or CCTV for recognition databases.",
            "(f) Emotion recognition in workplace and education, except for medical/safety reasons.",
            "(g) Biometric categorisation based on sensitive data (race, politics, religion, etc.).",
            "(h) Real-time RBI in publicly accessible spaces for law enforcement, unless strictly necessary for searching for victims, preventing specific threats/terror attacks, or localising serious crime suspects."
          ],
          "rbi_conditions": "Real-time RBI use must be targeted, limited in time/geography, and subject to prior judicial or independent administrative authorisation. Decisions producing adverse legal effects cannot be based solely on RBI output. Notification to market surveillance and data protection authorities is required."
        }
      ]
    },
    {
      "chapter_number": "III",
      "chapter_title": "HIGH-RISK AI SYSTEMS",
      "sections": [
        {
          "section_number": 1,
          "section_title": "Classification of AI systems as high-risk",
          "articles": [
            {
              "article_number": 6,
              "title": "Classification rules for high-risk AI systems",
              "content": "High-risk if: (a) system is a safety component or product under Annex I legislation AND (b) required to undergo third-party conformity assessment. Systems in Annex III are also high-risk. Derogation: Annex III systems are not high-risk if they don't pose significant risk (e.g., narrow procedural tasks), UNLESS they perform profiling. Providers must document non-high-risk assessments."
            },
            {
              "article_number": 7,
              "title": "Amendments to Annex III",
              "content": "Commission empowered to add/modify use-cases in Annex III if they pose equivalent or greater risk. Criteria include intended purpose, extent of use, data nature, autonomy, and imbalance of power."
            }
          ]
        },
        {
          "section_number": 2,
          "section_title": "Requirements for high-risk AI systems",
          "articles": [
            {
              "article_number": 8,
              "title": "Compliance with the requirements",
              "content": "High-risk AI must comply with this Section, taking into account intended purpose and state of the art."
            },
            {
              "article_number": 9,
              "title": "Risk management system",
              "content": "Continuous iterative process throughout the lifecycle. Includes identification, evaluation of foreseeable risks/misuse, and adoption of mitigation measures. Residual risk must be judged acceptable."
            },
            {
              "article_number": 10,
              "title": "Data and data governance",
              "content": "Systems using training models must use high-quality data sets meeting criteria for relevance, representativeness, and error-correction. Special categories of data may be processed exceptionally for bias detection/correction."
            },
            {
              "article_number": 11,
              "title": "Technical documentation",
              "content": "Must be drawn up before market entry and kept up-to-date. SMEs/start-ups can provide info in simplified manner."
            },
            {
              "article_number": 12,
              "title": "Record-keeping",
              "content": "Technically allow for automatic recording of events (logs) over lifetime for traceability."
            },
            {
              "article_number": 13,
              "title": "Transparency and provision of information to deployers",
              "content": "Operation must be sufficiently transparent for deployers to interpret output. Accompanied by instructions for use including identity, characteristics, and limitations."
            },
            {
              "article_number": 14,
              "title": "Human oversight",
              "content": "Designed to be effectively overseen by natural persons during use to minimise risks. Enhanced oversight (two persons) required for certain biometric identification."
            },
            {
              "article_number": 15,
              "title": "Accuracy, robustness and cybersecurity",
              "content": "Achievement of appropriate levels throughout the lifecycle. Resilience against errors and unauthorized third-party attempts to alter performance."
            }
          ]
        },
        {
          "section_number": 3,
          "section_title": "Obligations of providers and deployers of high-risk AI systems and other parties",
          "articles": [
            {
              "article_number": 16,
              "title": "Obligations of providers of high-risk AI systems",
              "obligations": [
                "(a) Ensure compliance with Section 2.",
                "(b) Indicate contact details on system/packaging.",
                "(c) Have a quality management system.",
                "(d) Keep documentation.",
                "(e) Keep automatically generated logs.",
                "(f) Undergo conformity assessment.",
                "(g) Draw up EU declaration of conformity.",
                "(h) Affix CE marking.",
                "(i) Comply with registration.",
                "(j) Take corrective actions.",
                "(k) Demonstrate conformity to authorities.",
                "(l) Ensure accessibility compliance."
              ]
            },
            {
              "article_number": 17,
              "title": "Quality management system",
              "content": "Documented strategy for compliance, design control, data management, and risk management. Proportionate to organisation size."
            },
            {
              "article_number": 20,
              "title": "Corrective actions and duty of information",
              "content": "Immediately take actions to bring non-compliant systems into conformity, withdraw, or recall them."
            },
            {
              "article_number": 22,
              "title": "Authorised representatives",
              "content": "Third-country providers must appoint a representative in the Union by written mandate."
            },
            {
              "article_number": 25,
              "title": "Responsibilities along the AI value chain",
              "content": "Distributors, importers, or deployers become providers if they put their name on a system, modify it substantially, or change the intended purpose to make it high-risk."
            },
            {
              "article_number": 26,
              "title": "Obligations of deployers of high-risk AI systems",
              "content": "Use systems according to instructions; assign human oversight to competent persons; monitor operation; keep logs for at least 6 months. Employers must inform workers."
            },
            {
              "article_number": 27,
              "title": "Fundamental rights impact assessment for high-risk AI systems",
              "content": "Public bodies and certain private entities (banking/insurance) must perform FRIA before deployment. Must describe processes, categories of persons affected, risks, and mitigation measures."
            }
          ]
        }
      ]
    },
    {
      "chapter_number": "V",
      "chapter_title": "GENERAL-PURPOSE AI MODELS",
      "sections": [
        {
          "section_number": 1,
          "section_title": "Classification rules",
          "articles": [
            {
              "article_number": 51,
              "title": "Classification of GPAI models with systemic risk",
              "content": "Classified if: (a) high impact capabilities or (b) Commission decision/alert. Presumed high impact if training computation > 10^25 FLOPs."
            }
          ]
        },
        {
          "section_number": 2,
          "section_title": "Obligations for providers of GPAI models",
          "articles": [
            {
              "article_number": 53,
              "title": "Obligations",
              "content": "Draw up technical documentation and information for downstream providers; put in place copyright compliance policy; publish training content summary. Open-source exceptions apply to certain transparency rules unless systemic risk."
            }
          ]
        },
        {
          "section_number": 3,
          "section_title": "Obligations for providers of GPAI models with systemic risk",
          "articles": [
            {
              "article_number": 55,
              "title": "Obligations",
              "content": "In addition to Article 53: model evaluation (adversarial testing); systemic risk assessment/mitigation; serious incident reporting; cybersecurity protection."
            }
          ]
        }
      ]
    },
    {
      "chapter_number": "VI",
      "chapter_title": "MEASURES IN SUPPORT OF INNOVATION",
      "articles": [
        {
          "article_number": 57,
          "title": "AI regulatory sandboxes",
          "content": "Member States must establish at least one sandbox by August 2026. Provides controlled environment for development/testing innovative AI. Participants remain liable but administrative fines are generally not imposed for sandbox activity."
        },
        {
          "article_number": 60,
          "title": "Testing of high-risk AI systems in real world conditions outside sandboxes",
          "content": "Requires real-world testing plan submitted to market surveillance authority; informed consent from subjects; max 6 months duration."
        }
      ]
    },
    {
      "chapter_number": "XII",
      "chapter_title": "PENALTIES",
      "articles": [
        {
          "article_number": 99,
          "title": "Penalties",
          "fines": [
            "Prohibited practices: up to EUR 35,000,000 or 7% of annual turnover.",
            "Non-compliance with other obligations: up to EUR 15,000,000 or 3%.",
            "Supplying incorrect info: up to EUR 7,500,000 or 1%.",
            "For SMEs, the lower percentage/amount applies."
          ]
        }
      ]
    }
  ],
  "annexes": [
    {
      "annex_number": "I",
      "title": "List of Union harmonisation legislation",
      "content": "Includes Section A (New Legislative Framework like Machinery, Toys, Medical Devices) and Section B (Other like Civil Aviation, Vehicles)."
    },
    {
      "annex_number": "II",
      "title": "List of criminal offences referred to in Article 5(1), first subparagraph, point (h)(iii)",
      "content": "Includes terrorism, trafficking, sexual exploitation, drugs, weapons, murder, kidnapping, etc."
    },
    {
      "annex_number": "III",
      "title": "High-risk AI systems referred to in Article 6(2)",
      "areas": [
        "1. Biometrics (Remote identification, categorisation, emotion recognition).",
        "2. Critical infrastructure (Digital, traffic, water, gas, electricity).",
        "3. Education and vocational training (Access, admission, evaluation).",
        "4. Employment, workers management (Recruitment, task allocation, performance monitoring).",
        "5. Access to essential private/public services (Benefits eligibility, creditworthiness, life/health insurance, emergency dispatch).",
        "6. Law enforcement (Risk assessments for victims, polygraphs, evidence reliability, profiling).",
        "7. Migration, asylum and border control (Polygraphs, risk assessment, examination of applications).",
        "8. Administration of justice and democratic processes (Judicial assistance, influencing elections)."
      ]
    },
    {
      "annex_number": "VIII",
      "title": "Information to be submitted upon registration",
      "sections": [
        "Section A: Info by providers (Name, intended purpose, status, copy of declaration).",
        "Section B: Info by providers of non-high-risk AI under Article 6(3).",
        "Section C: Info by deployers (FRIA summary, DPIA summary)."
      ]
    }
  ]
}
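
For the analysis and processing mentioned above, here is a minimal sketch of loading and querying the JSON. The filename eu_ai_act.json is a hypothetical placeholder for wherever the blob above is saved.

import json

# Load the JSON document above (assumed saved as "eu_ai_act.json").
with open("eu_ai_act.json", encoding="utf-8") as f:
    act = json.load(f)

# Print the Article 5 prohibited practices (Chapter II).
for chapter in act["chapters"]:
    for article in chapter.get("articles", []):
        if article["article_number"] == 5:
            print("\n".join(article["prohibitions"]))

# Keyword lookup against the Article 3 definitions.
def find_definitions(act, keyword):
    for chapter in act["chapters"]:
        for article in chapter.get("articles", []):
            if "definitions_found" in article:
                return [d for d in article["definitions_found"]
                        if keyword.lower() in d.lower()]
    return []

print(find_definitions(act, "deep fake"))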
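
The Article 6 classification rules summarised in the JSON reduce to a small decision procedure. The sketch below encodes that summary; the boolean parameters are hypothetical inputs for illustration, and a real classification requires legal assessment rather than flags.

# Illustrative decision logic for Article 6, as summarised above.
def is_high_risk(annex_i_safety_component, third_party_assessment_required,
                 annex_iii_use_case, poses_significant_risk, performs_profiling):
    # Article 6(1): safety component/product under Annex I legislation
    # that must undergo third-party conformity assessment.
    if annex_i_safety_component and third_party_assessment_required:
        return True
    # Article 6(2)-(3): Annex III use cases are high-risk unless the
    # derogation applies (no significant risk), but systems performing
    # profiling of natural persons remain high-risk.
    if annex_iii_use_case:
        return poses_significant_risk or performs_profiling
    return False

# Example: an Annex III system doing only a narrow procedural task, no profiling.
print(is_high_risk(False, False, True, False, False))  # False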
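
Article 51 presumes high-impact capabilities (and hence systemic risk) when cumulative training compute exceeds 10^25 floating-point operations. The sketch below checks that threshold using the common "6 x parameters x training tokens" estimate for dense transformer training compute; that estimate is an assumption of this example, not something the Regulation prescribes.

# Article 51(2): training compute above 10**25 FLOPs triggers the presumption.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def estimated_training_flops(parameters, training_tokens):
    # Rule-of-thumb estimate for dense transformers (assumption, not from the Act).
    return 6.0 * parameters * training_tokens

# Example: a hypothetical 70e9-parameter model trained on 15e12 tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> presumption triggered: {flops > SYSTEMIC_RISK_FLOP_THRESHOLD}")
# 6.30e+24 FLOPs -> presumption triggered: False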
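
The Article 99 ceilings combine a fixed amount with a turnover percentage: the applicable cap is whichever is higher, except for SMEs and start-ups, where it is whichever is lower. A sketch of that arithmetic:

# Fine ceilings per Article 99, as summarised above (amounts in EUR,
# percentages of total worldwide annual turnover).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def fine_ceiling(tier, annual_turnover_eur, is_sme=False):
    fixed_cap, pct = FINE_TIERS[tier]
    turnover_cap = pct * annual_turnover_eur
    # Higher of the two for most undertakings; lower of the two for SMEs.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

print(fine_ceiling("incorrect_information", 2_000_000_000))               # 20000000.0
print(fine_ceiling("incorrect_information", 2_000_000_000, is_sme=True))  # 7500000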
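
Recital 179 staggers the application dates. A small helper, assuming only the three dates summarised above, shows which rule sets already apply on a given day:

from datetime import date

# Application dates per recital 179 (as summarised in the JSON above).
APPLICATION_DATES = {
    "prohibitions_and_general_provisions": date(2025, 2, 2),
    "gpai_and_governance": date(2025, 8, 2),
    "general_application": date(2026, 8, 2),
}

def applicable_rule_sets(on):
    return [name for name, start in APPLICATION_DATES.items() if on >= start]

print(applicable_rule_sets(date(2025, 9, 1)))
# ['prohibitions_and_general_provisions', 'gpai_and_governance']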