Exploring AI risks and opportunities

Eva Cairns
Head of Responsible Investment
Widespread use of AI highlights the need for robust governance, our new report explains.
Artificial Intelligence (AI) is embedding itself in our daily lives and often in the most unexpected ways. From the road network and the healthcare we receive, to the news we read and the food we order, AI is transforming the world we live in.
AI will shape the next era of corporate strategy, economic growth and market transformation. Indeed, in the UK alone, 75% of financial firms are already using AI1, and a further 10% plan to use it in the future.
With such rapid and widespread adoption, AI is fast becoming a core governance and sustainability challenge that presents material risks to companies. This is a central theme of our new report, Governing the Algorithm: Investor Priorities for Responsible AI (PDF, 2MB), in which we analyse the clear opportunities – and responsibilities – for businesses using AI, and the role investors can play in shaping the governance standards needed to manage the emerging risks it presents.
Ultimately, it’s in everyone’s interests to ensure that AI is developed and used in a way that supports not only innovation, but inclusion, stability and shared prosperity.
Because of this, we incorporated AI and ethics into our stewardship priorities in 2023. Since then, we have been working closely with asset managers to explore how AI oversight can be more effectively embedded into environmental, social and governance (ESG) analysis and investment practice.
Creating greater transparency
While AI can deliver greater efficiency and innovation, it also introduces systems that lack transparency, where the logic behind decisions is hard to pinpoint.
‘Black box’ AI models – where even the model’s developers cannot explain how it reaches its decisions – raise risks related to bias, misinformation, privacy and operational integrity. This creates serious challenges for businesses reliant on such systems, including legal exposure, reputational damage and loss of stakeholder trust. Amazon, for example, discontinued its AI-powered recruitment tool after discovering that it was biased against women: the tool had been trained on historical hiring data that favoured male applicants2.
Despite the scale of AI adoption, Stanford’s 2024 AI Index3 finds that fewer than 20% of public companies currently disclose details about their AI risk mitigation strategies, and only 10% report on fairness or bias assessments.
This lack of transparency presents a material blind spot for companies themselves, as well as for investors and regulators. Our report finds that this transparency gap makes it increasingly difficult for investors to understand how AI is being governed, especially in high-impact sectors such as healthcare, finance and retail.
To tackle this, boards must treat AI as a cross-cutting governance concern – much like cybersecurity or climate risk – requiring appropriate oversight and clear risk mitigation processes.
Our framework for investor action
Our report highlights analysis by ISS-Corporate4 that reveals only 15% of S&P 500 companies disclosed some form of board oversight of AI in their proxy statements. Even fewer, just 1.6%, provide explicit disclosure of full board or committee-level responsibility.
To help address this, we advocate a three-part approach.
- We believe AI governance should be integrated into ESG investment analysis, with investors assessing how companies disclose AI use, establish internal safeguards, and assign oversight to executive or board-level leaders.
- Stewardship and engagement must focus on how companies govern AI day-to-day. This includes engaging on bias assessments, explainability mechanisms, and ensuring human oversight is embedded in high-impact use cases. Where transparency or risk management is lacking, escalation through proxy voting can be an appropriate tool.
- Investors have a crucial role in setting clear expectations. This means aligning stewardship practices with global standards such as the OECD AI Principles and the EU AI Act. By setting and following clear standards, we can help shape an investment and business environment where innovation is matched by accountability.
Supporting responsible AI
With their long-term investment horizons and systemic influence, pension schemes are uniquely placed to drive stronger governance standards across the economy. We are accountable not only for today’s performance, but for the sustainability and resilience of people’s futures.
By encouraging better governance and disclosure, pension funds can help guide the widespread adoption of AI and contribute to more transparent, equitable and future-fit corporate behaviour. This doesn’t mean limiting innovation, but ensuring it is guided in a way that aligns with societal expectations and legal standards, and supports long-term economic stability, inclusion and accountability.
References:
1. Artificial intelligence in UK financial services – 2024 | Bank of England
2. Insight – Amazon scraps secret AI recruiting tool that showed bias against women | Reuters
3. AI Index Report 2024 | Stanford HAI: https://hai.stanford.edu/ai-index/2024-ai-index-report
4. https://adviser.scottishwidows.co.uk/assets/literature/docs/61448.pdf (PDF, 2MB)
For employer use only.