Governing AI in the EU: Regulatory Leadership and Its Consequences for Investment Flow and Innovation Capacity
- Arina Nesterenko and Valentino Francesco Lacapria

As the European Union seeks to regulate artificial intelligence through the AI Act and related digital rules, it is shaping the global debate on how to balance technological progress with fundamental rights and social stability. This has far-reaching consequences for investment flows and Europe's own innovation capacity, creating a complex mix of opportunities and risks.
EU Regulatory Leadership in AI
The AI Act is the world's first comprehensive horizontal law on artificial intelligence, designed to apply uniformly across all 27 EU member states. It represents the EU's longstanding human-centric digital strategy by treating AI not just as an economic asset, but as a technology that must respect fundamental rights, safety, and democratic values.
At its core, the Act adopts a risk-based approach that categorises AI systems as prohibited, high-risk, limited-risk, or minimal-risk, with corresponding obligations at each level. It is supported by a governance ecosystem that includes a European AI Office, national supervisory authorities, and expert bodies to coordinate enforcement, standardisation, and guidance across the single market.
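The tiered structure can be sketched as a simple lookup table. This is a purely illustrative sketch, not the Act's legal text: the tier names follow the article, while the example obligations attached to each tier are paraphrased assumptions for demonstration.

```python
# Illustrative sketch of the AI Act's four risk tiers and the kind of
# obligations attached to each. The obligation lists are paraphrased
# examples for illustration, not a legal summary of the Act.
RISK_TIERS = {
    "prohibited": ["may not be placed on the EU market"],
    "high": ["risk management system", "technical documentation",
             "human oversight", "post-market monitoring"],
    "limited": ["transparency duties, e.g. disclosing AI interaction"],
    "minimal": ["no specific obligations beyond existing law"],
}

def obligations_for(tier: str) -> list[str]:
    """Return the illustrative obligations mapped to a risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown tier: {tier!r}")
    return RISK_TIERS[tier]

# A high-risk system carries the heaviest compliance load.
print(obligations_for("high"))
```

The point of the sketch is the asymmetry it makes visible: almost all substantive duties concentrate in the high-risk tier, which is why compliance cost debates focus there.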
Brussels Effect and Global Influence
The AI Act reinforces the Brussels effect, a term that describes how EU rules often become de facto global standards because access to the European market is commercially indispensable. Large technology companies and international suppliers usually design their products and internal governance to comply with EU requirements, then roll those standards out globally to avoid maintaining multiple versions.
This dynamic means that the EU regulates its internal market and also shapes global norms on transparency, accountability, and human oversight of AI systems. Policymakers in other jurisdictions, including in North America and parts of Asia, are drawing on EU concepts such as risk tiers, documentation duties, and limits on biometric surveillance when designing their own frameworks, even where legislation remains less comprehensive.
Investment Attraction and Legal Certainty
From an investment perspective, the AI Act creates a more predictable legal environment across a large market. By replacing a potential patchwork of national rules, it reduces regulatory fragmentation and gives firms clearer expectations about compliance, liability, and permissible business models, especially in highly regulated sectors such as healthcare, finance, and critical infrastructure, where investors are sensitive to risk and reputational exposure.
According to some institutional investors and corporates, alignment with EU standards can serve as a positive signal of responsible AI governance, complementing environmental, social, and governance criteria. By designing products and services that meet EU benchmarks, companies may enjoy competitive advantages in procurement, cross-border deployment, and partnerships with risk-averse clients such as public authorities, hospitals, and banks.
Investment Gap and Relocation Risks
Despite these potential advantages, the EU starts from a structurally weaker position in AI funding than the United States and parts of Asia. Recent data show that European AI firms have attracted far less private investment over the last several years than US companies, reflecting a persistent transatlantic funding gap. This gap stems from deeper issues such as fragmented capital markets, a relative lack of late-stage growth funding, and dependence on foreign cloud and semiconductor providers.
Critics have warned that the AI Act's complex obligations, especially for high-risk systems, could interact with these structural weaknesses and push capital and talent abroad. Startups have cautioned against a jurisdiction-shopping effect in which companies locate core R&D, model training, and high-risk experimentation in more permissive environments, such as the US or Asian hubs, while keeping EU entities only for commercialisation and deployment under EU rules.
Influence of Regulation on Innovation Capacity
The impact on innovation capacity is ambivalent. On the one hand, the stringent obligations for high-risk AI, covering risk management, technical documentation, robustness testing, human oversight, and post-market monitoring, are designed to reduce harms and build societal trust, potentially easing adoption in the long run. Such trust can be a precondition for scaling AI solutions in critical use cases such as health, transport, or employment, aligning with Europe's preference for socially embedded innovation.
On the other hand, these obligations can impose high fixed costs, particularly for small and medium-sized enterprises (SMEs) and research spinouts that lack dedicated compliance teams. The risk is a skewed innovation landscape in which large incumbents and big tech platforms absorb regulatory burdens while smaller innovators struggle to bring high-risk products to market, or pivot to lower-risk niches with less transformative potential.
Innovation Safeguards
EU policymakers have sought to mitigate these risks by embedding innovation support mechanisms directly into the AI Act and its surrounding policy framework. Regulatory sandboxes allow companies and researchers to test AI systems in controlled environments under the supervision of authorities, lowering barriers to experimentation while maintaining oversight. In addition, simplified procedures and obligations for some SMEs and research activities are designed to reduce compliance friction without compromising core safeguards.
Beyond the law itself, the EU is rolling out complementary digital and industrial strategies. Among them are the creation of AI factories that bring together supercomputing resources, high-quality datasets, and expert support, as well as the expansion of EuroHPC infrastructure to provide compute access for European startups and researchers. Multibillion-euro investment packages have been announced to strengthen Europe's AI ecosystem, crowd in private capital, and close part of the investment and computational gap with the US and China.
Balance of Competitiveness and Protection
The EU's main challenge is to balance its protection goals with the need to stay competitive globally in AI. Critics argue that the combined effect of the GDPR, the AI Act, and other digital regulations could create a compliance-heavy environment that hampers iterative development and promotes risk-averse innovation strategies. Researchers warn that if regulation is seen as inflexible or overly prescriptive, it could hinder the high-risk, high-reward experimentation that is crucial to developing world-class AI models.
Supporters of the regulation claim that strict rules can serve as a quality filter, directing capital toward trustworthy AI and reducing the risk of scandals that could trigger public backlash. Consequently, regulatory leadership forms part of a long-term strategy for competitiveness. By setting the global standard for safe and ethical AI, the EU aims to distinguish its ecosystem and attract value-aligned partners, investors, and founders.
Strategic Significance and Future Pathways
Ultimately, the long-term strategic importance of the AI Act will depend on the EU's ability to complement its regulatory leadership with industrial capacity, geopolitical weight, and consistent enforcement across Member States. Depending on whether the existing technology deficit is bridged in the coming decade, the EU may consolidate a leadership role in global rule-making or face a deepening competitive challenge.