What does AI mean for a responsible business?


Advisory Partners

 

The FT Moral Money Forum is supported by its advisory partners, Vontobel and White & Case. They help to fund the reports.

The partners share their business perspectives on the forum advisory board. They discuss topics that the forum should cover but the final decision rests with the editorial director. The reports are written and edited by Financial Times journalists and are editorially independent.

Our partners feature in the following pages. Partners’ views stand alone. They are separate from each other, the FT and the FT Moral Money Forum.

 
 

AI’s transformative role in asset management and ESG

By Christel Rendu de Lint

Artificial intelligence, one of the most hotly discussed and debated topics of recent times, continues to advance in capability and to extend its impact across many facets of business and society. It looks set to affect nearly every industry and has the potential to transform society, yet an open question remains: will its overall effect be a net positive or a net negative?

While well-documented fears surround AI, its many potential benefits often attract less attention. As with much of technology, whether it proves ultimately advantageous or destructive hinges on how we manage its emergence and integrate it into our lives.

Encouraging transparency and reducing bias

Introduced carefully, with an understanding of its potential drawbacks, AI can bring much greater transparency and reduce bias in decision-making. Because AI systems can inadvertently reflect existing human biases and produce harmful results, a phenomenon known as algorithmic bias, developers, owners and users urgently need to prioritise both awareness and mitigation of this risk. Possible measures include establishing responsible development processes, using technical tools, and adopting operational practices such as internal “red teams” or third-party audits. Fortunately, as awareness of inherent bias and its risks increases, the threat of AI unintentionally reversing progress in crucial areas such as workplace diversity lessens.

If AI is to improve transparency and reduce bias, it must be trusted. Demand from regulators and the public for explanations and clarity around data use is growing. Transparent AI relies on clean, understandable inputs, which make it easier for humans to interpret and trust AI decisions and form the basis for testing and explaining those decisions to stakeholders. At the policy level, both US President Joe Biden’s executive order of October 2023 and the EU’s Artificial Intelligence Act, agreed in December 2023, focus on the transparent and trustworthy use of the new technology.

AI’s contribution to ESG

A marked opportunity exists for AI to play a transformative role in the world of ESG. Again, success depends on how we guide the technology; namely, we must steer it towards making a positive difference. One example is the AI-driven development of technology that aims to remove greenhouse gases from the atmosphere and help mitigate climate risk. AI’s ability to integrate diverse data sources into climate change predictions can increase their accuracy, help measure and monitor emissions, and provide essential data for climate action. In addition, smart manufacturing can enable significant reductions in energy consumption, waste and carbon emissions, while AI-based forecasts can improve electrical system efficiency through accurate predictions of supply and demand.

Boston Consulting Group and Google estimate that, with the thoughtful guidance of experts, AI could help mitigate 5-10 per cent of global greenhouse gas emissions by 2030, about 10-20 per cent of the Intergovernmental Panel on Climate Change’s interim target for achieving net zero by 2050. If AI is implemented correctly, we believe its impact on the journey towards greater sustainability will include faster technological experimentation and, we hope, faster-than-expected breakthroughs.

AI’s transformative effect in the investment space

Despite concerns about AI-related workforce reductions in the banking and investment sector, many argue that AI will complement people at work rather than replace them. Daron Acemoglu, a professor of economics at MIT, asserts that when AI and humans work together they can achieve more than either would alone.

As computers get smarter, their potential to automate complex, repetitive tasks, such as trade execution, performance reporting and compliance monitoring, increases. AI can therefore help identify and implement opportunities for cost savings and operational improvements, freeing investment professionals to spend more time on higher-level analysis and strategic decision-making, which lie at the core of solid portfolio management. Its ability to process and analyse vast amounts of data at speeds unattainable by humans means AI can enable more sophisticated market analysis, leading to better-informed investment decisions. In client management, AI can analyse individual client data to provide highly personalised investment advice and recommendations, potentially increasing client satisfaction and retention.

Rather than centring the discussion on the prospect of a smaller workforce, we should focus on how these technological advances will reshape our industry and others. The benefits of AI will probably inspire a shift in skills: as AI becomes more integrated into the investment management industry, demand will grow for professionals skilled in data science, machine learning and AI implementation. Nonetheless, as in medicine, when dealing with sensitive matters of financial planning and investment it is hard to imagine a world in which interaction with a chatbot completely replaces human contact.

So, while AI will enhance efficiency and systematic processes, it is unlikely to eliminate the human touch in our sector or others. The future we are looking at, therefore, appears to be one in which humans and technology complement one another. We believe that the careful implementation of AI, with an understanding of its risk landscape, should produce a net positive result for the world of investments and beyond.

 
 

Legal perspectives on developing and deploying responsible AI

By Janina Moutia-Bloom and Tim Hickman

Despite previous calls for an “ethical pause” on AI development, corporate investment in the technology is expected to reach $200bn by 2025. Board oversight in this area therefore continues to evolve at pace, particularly in light of concerns about AI that include the potential for misuse of AI technologies; bias amplification; a possible lack of transparency and accountability; and the treatment of personal data and intellectual property used in AI systems.

The growing demand for generative AI, along with its increasing availability, its impact and the stakeholder scrutiny it attracts, is pushing it higher up the board agenda across all sectors. Businesses stand to benefit from embracing AI: it could, for example, improve efficiency, assist decision-making and contribute to risk management. Yet, to realise AI’s benefits safely and responsibly, boards must be able to navigate the associated legal and compliance, shareholder activism, ethical and reputational risks.

AI is technically complex and fast-moving, which makes it challenging for governments to develop effective regulation, standards and guidance. As a result, the international landscape of legal frameworks governing AI is fragmented, with even the definition of AI differing across jurisdictions. However, a new, global phase of AI regulation is starting to emerge, as indicated by the publication of the G7 AI principles and Hiroshima AI Process, the Bletchley Declaration on AI safety, the Blueprint for an AI Bill of Rights, the “first-of-a-kind” Framework Convention on AI, Human Rights, Democracy and the Rule of Law, the UN’s B-Tech Project on Generative AI with the UN Guiding Principles on Business and Human Rights, and the UN General Assembly’s recently adopted resolution on “safe, secure and trustworthy” AI that will also benefit sustainable development for all (backed by more than 120 States).

The EU’s AI Act is designed to provide a horizontal legal framework for the regulation of AI systems across the EU. Once in force, the act’s risk-based approach, which has fundamental rights protection at its core, will have global reach and affect actors across the entire AI value chain. However, several of the concepts set out in the AI Act will require clarification by courts and regulators to give businesses greater certainty about their compliance obligations. Alongside the AI Act, companies operating in the EU must still consider obligations under other applicable instruments, such as the General Data Protection Regulation, the Digital Services Act and the forthcoming AI Liability Directive, as well as sector-specific rules. Companies should also be aware of regulatory initiatives at national level in the Member States in which they operate. For example, in February 2024 France’s competition authority announced that it would examine the competitive functioning of the generative AI sector and issue an opinion in the coming months.

The UK has taken a different approach from the EU, declining to issue new legislation at this stage and instead adopting a flexible framework of AI Regulatory Principles that will be enforced by existing regulators. This framework is intended to be both pro-innovation and pro-safety. In February 2024, a committee of the House of Lords published a report cautioning the Government against a regulatory approach too narrowly focused on AI safety. Days later, the Government published: (i) its consultation response to its AI Regulation White Paper, articulating a principles-based (rather than the EU’s risk-based) approach towards regulating AI; and (ii) guidance for regulators on implementing the AI Regulatory Principles.

On the other side of the Atlantic, in late 2023, US President Joe Biden signed Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of AI. In contrast to the EU’s risk-based regulatory approach, the order places almost as much emphasis on the pressing need to develop and harness the potential benefits of AI responsibly as it does on the need to understand and mitigate novel risks. Initiatives are also unfolding at state level: California state senator Scott Wiener has recently proposed sweeping safety measures for AI in SB 1047, while New York City has already introduced Local Law 144 to regulate the use of AI in hiring decisions.

Shareholders have also begun to take action, with AI-focused resolutions making their debut in the US (for example, calling on tech and motion picture companies to publish an “AI transparency report”). Such resolutions are expected to feature more prominently at future AGMs.

Boards should also be alive to the types of AI-related disputes and class actions being filed in national courts, as judgments in these early legal actions will be instructive in evaluating a company’s potential exposure to litigation risk. Disputes have already arisen in relation to issues such as whether training data fed into AI systems infringes copyright, other IP rights and/or personal data protections; alleged bias in the output of AI tools; misrepresentation of AI systems’ capabilities; and whether an AI system can itself be an “inventor” under patent law or an “author” under copyright law.

To mitigate the risks explored above, companies should implement effective AI governance. This may involve: 

  1. developing clear and robust policies which govern — and embed ethical practices into — the use of AI; 

  2. developing strategies for negotiating AI-specific contractual clauses, including in relation to policies, procedures and testing that address the concerns outlined above, and the attribution of liability for AI failures; 

  3. establishing a cross-functional team of specialists from legal, compliance, ethics, data science and marketing (among other functions) to oversee and report to management on AI governance; and 

  4. undertaking regular risk assessments and audits of AI models and data sets to remediate legal and ethical concerns (e.g., bias).