Artificial intelligence (AI) offers the opportunity to process and analyse significantly more information than before. This could be revolutionary in many areas including drug discovery, national security and the development of smart cities. In fact, any area that has large data sets to scrutinise is likely to find the emerging technology a productive new tool capable of processing massive tasks that human employees could not complete at a reasonable speed. Investing is one such area where such productivity gains are likely.
Analysts and investment managers can use AI tools to identify new companies offering exciting investment possibilities. Screening the data freely available on the internet can work in both directions – looking for traits that make a company’s shares attractive, but also for factors that could derail an investment case. It could clearly be useful for those using environmental, social and governance (ESG) factors to guide their investment strategy, helping them process the mountains of data that contain essential information for sustainable investors.
Digital sunlight and shadows
Last year, one of the Financial Times’ star columnists, Gillian Tett, argued that “AI can direct digital sunlight onto company greenwashing”. She thinks AI tools can discern whether a company’s green credentials are genuine, and that such transparency will force companies to improve their environmental records. In this way AI could be an overall positive force, as sceptical analysis of corporate claims would drive positive action by boards in these areas.
However, AI also has a potential darker side. In March this year, more than 1,000 technology leaders, researchers and others signed an open letter urging a moratorium on the development of the most powerful AI systems and tools that now present “profound risks to society and humanity.” They want a pause so the ethics of the technology can be assessed – and the sector regulated properly. Signatories included Apple co-founder Steve Wozniak, entrepreneur Elon Musk, Tristan Harris of the Center for Humane Technology and AI experts working for Meta Platforms and Alphabet’s Google.
However, such a moratorium seems unlikely due to the political importance of the digital revolution. Technology is at the heart of the geopolitical race between Washington and Beijing to be the world's hegemon. From Huawei’s 5G infrastructure to the development of new semiconductors, cutting-edge technology is the battleground for the ideological clash between East and West. If America and its allies take their foot off the AI R&D accelerator, then companies such as Alphabet, Microsoft and Meta Platforms will be at a disadvantage compared with Chinese companies pursuing technological breakthroughs at a rapid pace.
The Chinese government has direct influence over the direction its companies take: in many strategic businesses it has a representative on the board and can intervene directly in research priorities. Given how significant the AI race with China has become, with both nations competing to be the global leader in future technology, no such moratorium is likely in the West – it would be seen as hamstringing the West’s global technology champions and is politically unpalatable, especially in the US, where Republicans and Democrats tend to agree on this issue. State-directed capitalism in China is prompting more intervention by democratic governments, which are becoming more involved in business and its direction at the frontiers of technology.
A good technology in nefarious hands
So, what would happen to a major company that, unfettered by any ethical moratorium on AI development or an accepted global regulatory framework, produced an AI tool that could be put to harmful use? A civilian tool, for example, that could be exploited by bad actors or rogue governments. What if a good product got into the wrong hands? Could such product development have a negative impact on a company’s ESG rating and make it a less attractive proposition for ESG-minded investors?
However, should a company develop a system that could harm the world at large if used by a bad actor, its rating may not suffer substantially. Risks are assessed and the company is rated as a whole. If the provider of ESG analysis rates a company’s management highly and trusts it not to sell the product to undesirable customers – and to do all it can to ensure the product does not fall into ‘bad hands’ – then there may be little impact at all.
A good example of this is the defence group BAE Systems. Arguably it is not an ethical investment because it manufactures arms. It even sells them to Saudi Arabia, which would raise an ethical red flag for many. However, its management is respected, and Saudi Arabia is on a UK government ‘approved list’ for arms sales. This means its ESG score is quite respectable, with an AA rating from MSCI.
The only sensible conclusion is that it’s complicated. The development of ESG scoring and ratings is still in flux. An ESG rating is a simplification: a single score derived from looking at a company through various lenses. Many of the decisions, factors and scoring systems involved are subjective, with no agreed global standard – several competing providers offer ESG scoring tools for analysts and investment managers, and they arrive at different results. AI tools could potentially make these scoring methodologies even more complex as they try to process greater quantities of data.
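To see why competing providers can rate the same company differently, consider a deliberately simplified sketch. The pillar scores, provider names and weightings below are all invented for illustration – real providers use far richer (and often undisclosed) methodologies – but the mechanism is the same: subjective weighting choices over the same underlying data produce divergent ratings.

```python
# Hypothetical, simplified illustration of ESG scoring divergence.
# All scores and weights below are invented for the example.

def esg_score(pillars: dict, weights: dict) -> float:
    """Weighted average of pillar scores (each on a 0-100 scale)."""
    total_weight = sum(weights.values())
    return sum(pillars[p] * w for p, w in weights.items()) / total_weight

# The same company, as assessed on three standard ESG pillars.
company = {"environmental": 80.0, "social": 55.0, "governance": 90.0}

# Two hypothetical providers weight the pillars differently.
provider_a = {"environmental": 0.50, "social": 0.25, "governance": 0.25}
provider_b = {"environmental": 0.20, "social": 0.50, "governance": 0.30}

print(esg_score(company, provider_a))  # 76.25 - provider A emphasises climate
print(esg_score(company, provider_b))  # ~70.5 - provider B emphasises social factors
```

Identical inputs, different headline scores – which is why one provider’s AA rating for a company can sit alongside a noticeably weaker grade from another.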
How AI will shape the future is a subject of debate – but the reality remains that it is unknown. ESG scoring methodologies continue to evolve, and the transparency and understanding of the “black box” behind them needs to improve over time too. A large amount of subjectivity is also involved in interpreting whatever data is produced. This makes it difficult to predict with any certainty what will happen when ESG meets AI, or AI meets ESG. What can be said is that more information is certainly welcome, especially for tackling individual topics. The mountains of data in the digital space can better inform decision-making in all areas – but be wary, as simpler does not always mean better.
Please note: This article was released prior to SDR, so the information may not be in line with the Anti-Greenwashing rule, but it is contextually appropriate for the time it was written.
Nothing on this website should be construed as personal advice based on your circumstances. No news or research item is a personal recommendation to deal.