As artificial intelligence rapidly evolves, investors are working together to understand and address its risks to privacy, job security, democracy and society.
The rapid advancement and sophistication of artificial intelligence (AI) is dominating headlines, with generative AI – a new technology that can create content indistinguishable from human work – inspiring fear and wonder in equal measure.
In March, tech figures Elon Musk, Steve Wozniak and more than 1,200 other founders and top researchers signed an open letter calling for a six-month pause on generative AI experiments to better understand the technology’s risks, benefits and impact on the world.
Goldman Sachs said last month that AI could replace the equivalent of 300 million full-time jobs, and Italy banned the popular generative AI tool ChatGPT over privacy concerns. Sir Jeremy Fleming, director of the British intelligence service GCHQ, reportedly warned last week that disinformation is a primary threat posed by advancing AI.
A key problem is that AI is both ubiquitous and conceptually slippery, making it notoriously difficult to regulate, says Matt O’Shaughnessy, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace.
Europe and China are trying to take the lead on AI, he says, with the latter rolling out regulations focused on AI capabilities and algorithms and the EU rushing to approve its sweeping draft Artificial Intelligence Act (EU AI Act).
Human and civil rights considerations
But investors say the EU AI Act must go further to address the human and civil rights risks posed by AI abuse, and have signed a joint letter demanding further action. They call for additional provisions in the law, such as requirements for human rights impact assessments when developing and deploying AI systems, and a publicly accessible database of AI providers available to users.
Aviva Investors is one of 149 investors who signed the letter. Louise Piffaut, Head of ESG Equity Integration at Aviva Investors, explains that while AI is primarily intended to serve a positive purpose, its growing role in society has also brought significant harmful consequences.
“As an investor, we have been concerned with human rights in all sectors and regions for a number of years. AI will ultimately impact all sectors in which we invest. Ultimately, we want to make investment decisions that respect human rights,” she says.
Piffaut says that because AI is currently largely unregulated, the full spectrum of human rights risks that arise through the AI value chain, from product development to the use of an AI system, remains largely underappreciated.
“Safety measures are scarce, and the companies that build these systems are therefore rarely held accountable for the impact of the technology on people,” she says. “Risk-based regulation, as proposed in the EU AI Act, will provide clear rules to ensure the worst outcomes are prohibited. The regulation proposes different rules and levels of transparency that will encourage companies to better manage key risks.”
Involvement in AI
In the absence of robust AI regulations, investors are engaging with companies in AI areas that they believe pose significant human rights risks. In 2021, Brussels-based investor Candriam started an engagement campaign on the human rights risks of facial recognition technology.
Benjamin Chekroun, Stewardship Analyst, Proxy Voting and Engagement at Candriam, explains that the firm began to notice the risks associated with facial recognition technology around 2020, as civil society groups launched campaigns to ban its use by police and companies introduced moratoriums on the sale of products and systems to law enforcement, particularly in the US, following the Black Lives Matter movement.
“Some cities and countries are also introducing bans,” he notes. “At the end of 2021, the European Parliament called for a ban on the use of facial recognition technology by police in public places, and on predictive policing.” Chekroun says there are a billion cameras worldwide that can be linked to facial recognition, in an area that is poorly regulated but where the technology is developing rapidly.
“It’s very tempting to use because it’s cheap,” he adds. “The technology comes online so quickly and is always miles ahead of regulations. That gap will widen – technology will evolve exponentially, while regulation will be relatively linear – and that is a risk for investors.”
Candriam is leading an engagement campaign, in partnership with 20 investors, targeting 30 companies to improve transparency on their use of AI and how they address the ethical and social issues related to the technology.
The engagement campaign has reached 15 of those companies, including Microsoft and Motorola.
Candriam has now entered the second phase of the engagement, in which it is advocating that companies appoint a board director with experience in or responsibility for ethics and AI, and a department that reports to the board on these issues.
Like Candriam, Fidelity International is another investor concerned about the rapid advancement of AI and the lack of regulation.
“Since the public release and exposure of ChatGPT, AI is clearly evolving much faster than many might have expected, and certainly well ahead of any common oversight, governance or regulatory constraints,” says Christine Brueschke, sustainable investing analyst at Fidelity International.
“Recent developments have shown that we have quickly moved from concerns about social risks such as privacy concerns, algorithmic biases and job security, to actual existential concerns about the future of democracy and even humanity.”
Fidelity International is co-leading a workstream for investors to engage with companies to advance ethical AI under the umbrella of the World Benchmarking Alliance (WBA) Digital Collective Impact Coalition. Brueschke says it takes inspiration from the WBA’s Digital Inclusion Benchmark, which, among other things, measures companies’ public commitment to ethical AI.
The workstream sent a letter to 130 digital technology companies last year, asking them to promote a more inclusive and trustworthy digital economy and a sustainable society.
“The response from companies to our collective engagement has been somewhat mixed, but overall positive and encouraging,” says Brueschke. “Many companies are definitely considering the ethical issues of AI, but there is still a long way to go.”
Jamie Jenkins, head of Global ESG Equities at Columbia Threadneedle, says AI offers huge potential benefits to modern life, such as computing power and autonomous technology, but there are also dangers to its adoption that may require an oversight process that is clear, inclusive and transparent.
“Current 21st century geopolitics makes some of that global standardization a little more difficult. But I think creating specific, globally applicable guidelines to ensure that AI activities aim to maximize the public good and minimize abuse would be desirable,” says Jenkins.
“It is not a leap to assume that there could be misuse of AI in terms of the spread of disinformation.”
Aviva Investors’ Piffaut agrees, saying: “Our society and its functioning may be in danger. We agree with recent calls for further guidance on mitigating and reducing risks, as evidenced by numerous open letters in recent months.”
Another major risk of AI, according to Piffaut, is job security.
“One of the big ESG topics we’ve been thinking more about is what a just transition means in the context of a transition to a low-carbon economy. Likewise, investors need to start thinking about the just transition that will have to happen in parallel as a result of advances in technology and AI.”
Ultimately, with the technology developing so quickly and outpacing regulation, company engagement will be crucial, according to Candriam’s Chekroun. “We need companies to embrace ethics and include AI in their human rights principles.
“One thing we noticed when we talked to companies dealing with facial recognition was that those closest to actually writing the algorithm were the ones who realised, ‘we have biases here and it’s very important that we don’t screw this up’, and they were more willing to talk about publishing principles.”