Sat. Mar 14th, 2026

Ethical AI: Investing in a Responsible Future


Risks to Investors and Regulatory Momentum

Despite its potential, AI carries its own unique and significant risks. It can amplify bias, compromise privacy, and make opaque, unaccountable decisions, which could prove especially detrimental in high-stakes sectors such as finance, law enforcement, and healthcare. Key concerns include inaccuracy, discrimination arising from biased training data, and privacy breaches due to cyber vulnerabilities. The environmental footprint of AI is also expanding swiftly: inference from models like ChatGPT already consumes over 124 GWh annually, and with compute demand doubling every 100 days, the trajectory points toward tens of terawatt-hours annually over the next few years. Water usage is heading in a similar direction, with up to 6.6 billion cubic meters of water projected to be consumed by 2027 – enough to meet Denmark’s yearly water needs.
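As a rough sanity check on those figures, the doubling claim can be plugged into a simple exponential projection. The sketch below is illustrative only: it assumes the 124 GWh base and the 100-day doubling period both hold unchanged, whereas in practice efficiency gains and hardware constraints would bend the curve.

```python
# Naive projection of AI inference energy demand, using the article's
# figures: ~124 GWh/year today, compute demand doubling every 100 days.
# Purely illustrative; not a forecast.

def projected_energy_gwh(base_gwh: float, days: float,
                         doubling_days: float = 100.0) -> float:
    """Energy demand after `days`, if it doubles every `doubling_days`."""
    return base_gwh * 2 ** (days / doubling_days)

for years in (1, 2, 3):
    gwh = projected_energy_gwh(124.0, years * 365)
    print(f"after {years} year(s): ~{gwh / 1000:.1f} TWh")
```

On these assumptions, demand passes 1 TWh within a year and reaches the tens of terawatt-hours within about two, consistent with the trajectory described above.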

“Greenwashing”, which can arise when businesses overstate their “green” credentials (including where they underestimate or fail to fully understand the environmental impact of their AI use), is increasingly coming into focus. This is particularly pertinent to AI, as providers’ claims about their models’ energy and water usage are often opaque. In the UK, under new powers introduced in the Digital Markets, Competition and Consumers Act 2024, the Competition and Markets Authority can impose fines of up to 10% of a company’s global turnover where companies engage in unfair commercial practices, including misleading environmental claims. As ESG becomes more important in supply chains, scrutiny of AI usage and its underlying environmental impact is only likely to increase.

To consider another ethical angle: Getty’s claim against Stability AI for copyright and trademark infringement, in respect of the data Stability AI used to train its AI model, has drawn into sharp focus the ethics of how AI developers acquire their training data. Investors may want reassurance that the AI businesses in which they invest will not face the threat of litigation for “stealing” data to develop their models.

Encouragingly, investor awareness of these issues is growing. The World Benchmarking Alliance’s Collective Impact Coalition for Digital Inclusion brings together 34 institutional investors representing over $6.9 trillion in assets, alongside 12 civil society groups. Their collective engagement has reportedly prompted 19 companies to adopt ethical AI principles since 2022; however, the work is far from over, with a recent report revealing that only 52 of 200 major tech firms disclose their ethical AI principles.

Regulatory momentum is building globally. The EU AI Act is the most comprehensive AI regulatory framework implemented so far and, much as the GDPR set privacy standards globally, looks set to become the “gold standard” in AI regulation. The Act introduces a risk-based framework that bans certain unacceptable-risk uses of AI, imposes strict obligations on high-risk applications, mandates transparency, and requires those developing and deploying AI to be AI literate. As noted above, other countries are also increasingly regulating, although in its recently published Digital and Technologies Sector Plan the UK Government has stated its aim to take a pro-innovation, sector-specific stance with a lighter administrative burden, rather than implement a single piece of overarching regulation.

As AI becomes more accessible, thanks to a 280-fold drop in inference costs between November 2022 and October 2024, deployment is accelerating, making inclusive and ethical AI governance more urgent than ever. Businesses and investors alike would be wise to stay alert to the risks, particularly if ethical applications are key to their business plans or investment strategies. 

To help mitigate risks to investors, the Responsible Investment Association Australasia recommends stewardship and integration strategies, including human rights due diligence aligned with the UN Guiding Principles on Business and Human Rights (UNGPs). It also advocates prioritising engagement based on the severity and likelihood of impacts, and pushing for greater transparency, contestability, and accountability in AI governance.
