The AI challenge – unpicking risks and opportunities for investors

Eva Cairns, Head of Responsible Investment, Scottish Widows


Widespread use of AI highlights the need for robust governance

AI is fast becoming an essential part of our daily lives and often in surprising ways. From the road network and the healthcare we receive, to the news we read and food we order, AI is transforming the world we live in. 

AI will shape the next era of corporate strategy, economic growth and market transformation. Indeed, in the UK alone 75% of financial firms are already using AI1 and a further 10% are planning to use it in future.  

But with such rapid and widespread adoption, AI is quickly becoming a core governance and sustainability challenge that presents material risks to companies. This is a core theme in our new report, Governing the Algorithm: Investor Priorities for Responsible AI (PDF, 2MB) in which we analyse the clear opportunities – and responsibilities – for investors to help shape the governance standards needed to manage the emerging risks of AI.  

Ultimately, it’s in everyone’s interests to ensure that AI is developed and deployed in a way that supports not only innovation, but inclusion, stability and shared prosperity.  

This is exactly why we incorporated AI and ethics into our stewardship priorities in 2023. Since then, we have been researching and engaging with asset managers to explore how AI oversight can be more effectively embedded into ESG analysis and investment practice. 
 

AI governance priorities 

While AI offers efficiency and innovation, it also introduces systems that lack transparency, where the logic behind a decision is hard to pinpoint. 

These so-called ‘black box’ AI models – where even the model’s developers cannot determine how it reaches its decisions – raise significant risks related to bias, misinformation, privacy and operational integrity. These risks can create serious challenges for businesses, such as legal exposure, reputational damage and eroded stakeholder trust. 

Despite the scale of adoption, Stanford’s 2024 AI Index2 finds that fewer than 20% of public companies currently disclose details about their AI risk mitigation strategies, and only 10% report on fairness or bias assessments.  

This lack of transparency presents a material blind spot for both investors and regulators. Our report found that this transparency gap makes it increasingly difficult for investors to understand how AI is being governed, especially in high-impact sectors such as healthcare, finance and retail. 

To tackle this, boards must consider AI as a cross-cutting governance concern – much like cyber-security or climate risk – that requires appropriate oversight and clear risk mitigation processes. 
 

Our framework for investor action 

Our report highlights analysis by ISS-Corporate3 revealing that only 15% of S&P 500 companies disclosed some form of board oversight of AI in their proxy statements. Even fewer, just 1.6%, provided explicit disclosure of full board or committee-level responsibility. 

To help address this, we advocate a three-part approach.  

First, we believe AI governance should be integrated into ESG investment analysis, with investors assessing how companies disclose AI use, establish internal safeguards, and assign oversight to executive or board-level leaders.  

Second, stewardship and engagement must focus on how companies govern AI day-to-day. This includes engaging on bias assessments, explainability mechanisms, and ensuring human oversight is embedded in high-impact use cases. Where transparency or risk management is lacking, escalation through proxy voting can be an appropriate tool. 

And lastly, investors have a crucial role in setting clear expectations. This means aligning stewardship practices with global standards such as the OECD AI Principles and the EU AI Act. By setting and following clear standards, we can help shape an investment environment where innovation is matched by accountability. 
 

Supporting responsible AI  

With long-term investment horizons and systemic influence, pension schemes are uniquely placed to drive stronger governance standards across the economy. As long-term stewards of capital, we are accountable not only for today’s performance, but for the sustainability and resilience of people’s futures. 

By encouraging better governance and disclosure, pension funds can carefully guide the widespread adoption of AI and contribute to more transparent, equitable, and future-fit corporate behaviour. This doesn’t mean limiting innovation but ensuring that it is guided in a way that aligns with societal expectations and legal standards, and supports long-term economic stability, inclusion and accountability. 

We’re committed to working alongside companies, policymakers and industry peers to ensure this governance evolves in step with innovation. Because contributing towards a resilient, trustworthy economy in the age of AI is not just good governance, but an essential part of our duty to current and future beneficiaries. 

Read the report: Governing the Algorithm: Investor Priorities for Responsible AI (PDF, 2MB)