ESG Meets AI: Balancing Risk and Opportunity
July 27, 2023
Advancements in technology and engineering have always changed the way we live our lives. However, the pace of change in recent decades is unprecedented—and isn’t slowing down any time soon. The size of the global artificial intelligence (AI) market is expected to grow from $86.9 billion in 2022 to $407 billion by 2027, according to a MarketsandMarkets™ report. That is a compound annual growth rate of more than 36%.
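For readers who want to verify that figure, the cited growth rate can be reproduced with a quick compound-annual-growth-rate (CAGR) calculation. The dollar figures come from the report cited above; the five-year horizon (2022 to 2027) is the only assumption.

```python
# Check the cited AI-market growth rate as a compound annual growth rate (CAGR).
# Figures from the cited report: $86.9B (2022) growing to $407B (2027).

start_value = 86.9   # global AI market size in 2022, in $ billions
end_value = 407.0    # projected market size in 2027, in $ billions
years = 5            # 2022 to 2027

# CAGR = (end / start)^(1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Compound annual growth rate: {cagr:.1%}")  # roughly 36.2%
```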
With this meteoric rise, companies are struggling to find a balance between keeping pace with new technology and managing its risks and unintended consequences. While finding and maintaining that balance is difficult, it is essential for long-term success. As business clients turn to attorneys with questions about Environmental, Social and Governance (ESG) and AI, it is important to understand what these terms mean and how they impact clients.
This rise in AI also comes at a time when more and more individuals are incorporating environmental and social issues into their everyday lives—including their choices as consumers. That trend has only accelerated since the COVID-19 pandemic, prompting renewed attention to organizations’ ESG frameworks.
A Governance & Accountability Institute examination of sustainability reporting trends found that more than 90% of S&P 500 companies participate in some form of ESG reporting. Investments in sustainable funds rose from $5 billion in 2018 to over $50 billion just two years later. While some organizations may see ESG as nothing more than a list of governmental regulations, many are beginning to realize that consumers and other stakeholders are looking for a genuine commitment to these issues—and they are getting better at spotting a fake. The sheer amount of information readily available to the masses, thanks in large part to the internet, has led to increased pressure on organizations to operate ethically, sustainably and transparently. It seems natural, then, that as technology and AI evolve, new ways to use these tools to advance environmental and social goals would evolve as well. However, this rapidly evolving technology brings countless risks—both known and unknown.
Opportunities and Risks
By using AI to sift through and analyze vast amounts of data, regulators and organizations can better monitor and verify the carbon emissions of governments and companies alike. This data will not only help to increase transparency and hold organizations accountable but will also provide greater understanding and insight into specific areas of environmental concern, such as air and water pollution, waste production, deforestation, natural disaster tracking and patterns, and overall climate trends.
These benefits, however, are not without tradeoffs. Indeed, the most immediate tradeoff with respect to the “E”—environmental—considerations is the sheer amount of energy required not only to power AI systems but also to store and process the data they collect. Already, data centers are among the top energy consumers in the world. Additionally, this new data will still be subject to manipulation—particularly where there is a lack of historical data to serve as a basis for new applications. Organizations could also face legal risks if they are not in compliance with the growing body of environmental regulations, particularly as government and regulatory agencies incorporate AI into their own tracking and monitoring systems.
At its best, AI has the potential to reduce bias, enhance equity, and increase transparency and accountability. AI can be used to understand and combat unfair labor practices, corruption and discrimination. Organizations can more effectively expose and address these issues and use AI to advance the social issues important to that organization.
At its worst, of course, AI can be used for malicious and nefarious purposes. In the social context in particular, the data at issue is likely personal in nature. This makes data privacy the top concern for individuals and organizations alike. The potential for such data to be misused makes it imperative for organizations to have effective policies and systems in place from the outset.
In addition to data privacy concerns, there is growing concern about the impact of AI on the job market as we know it. Many organizations are trying to reconcile taking advantage of all AI has to offer while simultaneously prioritizing individuals, particularly those whose roles may be replaced by AI. There are also significant risks that AI may actually exacerbate existing social inequities. These systems are, for now, designed and programmed by humans; the ultimate outcome may therefore have unintended consequences depending upon what data is used and how it is applied. Furthermore, these systems may be manipulated by social media trends or other public sentiment indicators that are easily influenced by third parties. On the legal front, organizations may face claims of discrimination, unfair competition or invasion of privacy if their technology is not properly aligned with their social goals.
AI has the potential to streamline organizations’ governance framework and create more efficient and accurate systems. AI can help to implement and track various ESG initiatives and accurately measure an organization’s progress on those initiatives. This technology will be able to more accurately monitor internal compliance and assist with internal investigations should issues arise. Organizations can also use AI to monitor changes in regulatory compliance and increase transparency and accountability both internally and externally.
However, integration of AI into existing governance frameworks and systems may initially prove to be difficult on both a technical and practical level. There are few professionals who possess a deep understanding of AI, let alone a deep understanding of AI and governance issues.
Additionally, with many of these applications, even small errors can render the entire application useless. Therefore, these systems require ongoing maintenance to ensure both the ethical and technical aspects of AI are being implemented correctly and compliantly. Indeed, Australia, Singapore and the European Commission have all introduced frameworks and guidelines for ethical use of AI. As governments attempt to keep up with changing technology, it will be critical to monitor and update governance policies accordingly. Organizations must ensure they have a robust compliance plan in place to avoid legal ramifications for regulatory violations.
With all of these issues at play, it can seem like everything is everywhere all at once—and, in many ways, it is. What this means is that organizations will need to take a holistic approach to the integration of AI in order to ethically and sustainably maximize its potential. While it may be obvious that having a technology expert, particularly one who understands AI, is critical, there are many other key roles to fill. Organizations should have a point person from their ESG group assigned to work closely with whoever is leading the AI initiatives. The ESG representative should then identify key players to be involved in both the integration and maintenance of AI within the organization. Such key players would include individuals with expertise in human resources, cybersecurity and data privacy, legal, compliance, supply chain and procurement, and any other areas unique to the organization. In order to have a robust AI system that complements and supports an organization’s goals and ESG mission, each of these areas should be involved in the implementation and maintenance of AI systems, as each brings a unique and crucial perspective—on both the opportunities and risks AI may pose.
Martha “Frannie” Reilly is co-chair of McNees Wallace & Nurick’s public finance and government services group. She also practices in the corporate and tax group and leads the firm’s charitable and nonprofit and environmental, social and governance (ESG) groups. Elizabeth Smith is an associate practicing in the firm’s litigation group. Contact them at firstname.lastname@example.org and email@example.com.