
The UK government’s vision for AI: What it means for responsible investors

The UK government’s AI Action Plan aims to drive economic growth, enhance public services, and shape the AI workforce. With a light-touch regulatory approach, investor engagement is key to ensuring ethical AI adoption. How can responsible investors influence AI governance, transparency, and risk mitigation?

Impactive Team
January 21, 2025

The UK government’s announcement of a vision to position artificial intelligence (AI) at the core of its economic and infrastructure strategy marks an important moment for the country in how technology and governance intersect.

The new AI Opportunities Action Plan, published mid-January, aims to position Britain as a global leader in the development and application of AI. This bold vision is rooted in principles of shared prosperity, improved public services, and expanded opportunities for all.

UK Government Key Goals:

  • Drive Economic Growth: Leverage AI to boost productivity and secure the UK's position as a leader in sustained economic growth among G7 nations.
  • Enhance Public Services: Apply AI to improve healthcare, education, and citizen interactions with government services.
  • Shape the AI Workforce: Equip citizens with the skills needed for AI-powered industries through initiatives like Skills England.

Strategic Pillars:

  1. Invest in AI Foundations: Strengthen computing infrastructure, attract global talent, and ensure effective regulation.
  2. Promote AI Adoption Across Industries: Pilot and scale AI solutions in the public and private sectors to deliver better outcomes.
  3. Support Domestic AI Innovation: Establish the UK as a hub for frontier AI development with national champions at critical levels of the AI stack.

So, how should investors engage with companies to ensure responsible and sustainable AI practices?

The Growing Focus on AI Stewardship

Institutional investors globally are emphasising the need for greater transparency and governance regarding AI usage.

Recent examples illustrate this growing scrutiny. In the US, the Securities and Exchange Commission has heightened its focus on AI disclosures, demanding clarity on applications, risks, and potential misrepresentation of AI capabilities. The UK’s Financial Conduct Authority (FCA), while not introducing AI-specific regulations, has underscored the relevance of existing transparency, fairness, and accountability principles to AI applications. The UK government appears to want to preserve a more principles-based light regulatory touch:

“The UK’s current pro-innovation approach to regulation is a source of strength relative to other more regulated jurisdictions and we should be careful to preserve this.”

One can argue this leaves more onus on investors to do their homework. Many in Europe and the US are already spearheading efforts to ensure that companies adopt robust AI governance frameworks.

For example, activist investor group Tulipshare recently submitted a resolution for Berkshire Hathaway’s annual meeting, advocating for a dedicated committee to oversee AI risks across the conglomerate’s portfolio.

This proposal signals a broader push by investors to embed AI oversight into corporate governance structures, reflecting a growing demand for transparency on ethical guidelines, workforce implications, and strategic alignment.

Such proposals often call for disclosures that ensure AI deployment aligns with societal and regulatory expectations. Between January 2023 and June 2024, AI-related shareholder proposals more than doubled in the US, with 23 filed across sectors. For 2025, this momentum is set to continue, reflecting investors' growing focus on AI risks, governance, and accountability.

According to research conducted by FTI Consulting and published by Harvard University, while tech companies remain in the spotlight, AI proposals are targeting a broader range of sectors:

  • 9 of 24 proposals focused on tech.
  • Others spanned media, restaurants, and—for the first time—healthcare.

These proposals have asked for:

  1. Transparency: Investors demand clarity on AI usage and ethical guidelines.
  2. Board oversight: 2024 saw the first proposals for board-level AI governance.
  3. Workforce impact: Increasing emphasis on the social implications of AI.
  4. Human rights: Focus on societal and ethical risks.

Although no proposals have passed yet, support is growing. Netflix saw 43.3% backing, nearing majority approval, while Apple saw 37.5% support from voting shares. Three 2024 proposals surpassed 20% support, indicating strong and increasing momentum. Furthermore, resolutions were withdrawn for Comcast, Disney, and UnitedHealth Group following disclosure commitments, highlighting the power of engagement.

Such a backdrop provides a fertile ground for investors to apply lessons learned from global counterparts while tailoring approaches to the UK market. The government’s emphasis on existing regulatory principles sets a tone for companies and investors to collaborate on responsible AI practices without waiting for new legislation. This approach allows investors to shape corporate strategies from the outset.

That said, in the UK the government has made clear it expects regulators, such as the FCA, to be more proactive. One requirement will be to publish annually how they have “enabled innovation and growth driven by AI in their sector. To ensure accountability, this should include transparent metrics”.

Moving Beyond Disclosure

While transparency is a necessary foundation, engagement must extend beyond disclosure to foster responsible AI development. Investors can influence companies in several key areas:

  1. Board Expertise: Advocating for AI expertise at the board level is critical. Boards with the right knowledge can better oversee AI strategies and risks, ensuring that technology is deployed ethically and effectively.
  2. Ethical Guidelines: Investors should push for the adoption of clear ethical frameworks governing AI use. These guidelines must address issues such as bias, privacy, and accountability to mitigate risks and build public trust.
  3. Workforce Implications: Companies need to articulate how AI will impact their workforce, from job displacement to upskilling initiatives. Transparent communication about these impacts can strengthen stakeholder relationships and foster resilience.
  4. Risk Mitigation: Investors should demand comprehensive risk assessments that include cybersecurity, regulatory compliance, and reputational risks associated with AI applications.

The Role of Activist and Institutional Investors

Activist investors have a unique role to play in driving change. Their ability to propose resolutions and galvanise public support can push companies toward greater accountability.

Similarly, institutional investors wield significant influence through their capital allocations. By prioritising investments in companies with strong AI governance, they can incentivise responsible practices. Investment firms like AllianceBernstein are already leading the way by providing guidance on managing AI-related challenges, emphasising transparency and explainability.

Norway’s sovereign wealth fund has also made AI a governance priority. Norges Bank Investment Management, which oversees the world’s largest sovereign wealth fund, advocates for bolstering AI expertise at the board level to ensure proper oversight. These actions highlight the recognition of AI as both an opportunity and a significant risk that requires proactive engagement.

Seizing the Opportunity

For investors in the UK, the government’s AI-centric vision presents an opportunity to shape a landscape where AI technologies are developed and deployed responsibly. To achieve this, investors should:

  • Collaborate with Regulators: Working alongside the FCA and other bodies can help align corporate practices with evolving regulatory expectations.
  • Engage Early: Early-stage AI companies are poised for rapid growth but often lack robust governance structures. Engaging with these companies at an early stage can set the tone for responsible development.
  • Champion Best Practices: Sharing insights and advocating for industry-wide standards can elevate the overall quality of AI governance.

The UK’s commitment to AI as a central pillar of its economic strategy is both a challenge and an opportunity for investors. By engaging with companies and demanding robust governance, investors can ensure that AI’s transformative potential is realised responsibly.

The experiences of US and European investors provide valuable lessons, but success in the UK will require a tailored approach that balances innovation with accountability. In this new AI-driven era, investors’ stewardship will be a decisive factor in shaping a sustainable and equitable future.

