Artificial Intelligence: Ethical Concerns and Sustainability Issues
We unpack the reputational, ethical, legal and environmental challenges of using artificial intelligence.

Key Takeaways
Artificial intelligence (AI) offers significant opportunities to create value but poses serious reputational and cybersecurity risks.
AI risks include misusing personal data, creating false representations and abusing intellectual property rights.
AI's high demand for electricity and water has major environmental impacts and puts AI in competition with consumers and businesses that rely on these resources.
Our series on AI opportunities explores the transformational potential of this technology and its integration into the global economy. While AI is exciting at many levels (societal, economic, scientific, and more) and perhaps incalculably valuable, we must be thoughtful in evaluating its potential drawbacks as well as its benefits.
AI’s power and wide-ranging applicability mean its impacts on society, both good and bad, are likely to be far-reaching. The adage, “With great power comes great responsibility,” is especially true for AI.
AI with a Sustainability and Risk Management Lens
At American Century Investments, many of our investment teams incorporate certain sustainability factors into parts of their processes when doing so is consistent with their fiduciary duty and when those factors can potentially influence investment performance. This perspective includes risks associated with AI. By considering this technology's societal, corporate governance and environmental aspects, we can identify ways to support its responsible, sustainable use. Ignoring these aspects could open the door to a range of abuses that could cause actual harm to the economy, individuals and the environment.
In this article, we evaluate AI’s potential reputational, ethical, legal and societal impacts and the new cybersecurity risks and environmental concerns it creates.
AI: A Minefield for Human Capital and a Company’s Reputation
In a 2022 survey, MIT Technology Review Insights asked participants to identify the tangible business benefits of adopting responsible technology practices. The top three responses:
Better customer acquisition and retention.
Improved brand perception.
Preventing negative unintended consequences and associated brand risk.1
AI can support these efforts, but without proper oversight, it could also seriously damage a company’s reputation, harm profits, and threaten its long-term sustainability.
These risks are not just hypothetical. In 2018, Amazon scrapped an AI recruiting tool after determining that the algorithm, trained on data about prior job applicants who were primarily male, had a clear bias against female applicants.2 Any company that uses a biased algorithm in hiring faces similar risks that could tarnish its reputation and hurt its ability to attract and retain talent, which could reduce productivity and innovation.
AI's human capital management risks go beyond biased hiring. Training an AI model often requires human workers who both shape the training process and are affected by it.
For example, OpenAI, the company behind ChatGPT, needed to train its algorithm to recognize and avoid toxic speech. This work was outsourced to a firm that hired workers in Kenya, Uganda and India to view and label graphic descriptions and images of sexual abuse, violence and other deeply disturbing content over nine-hour shifts. Many of them experienced nightmares, depression and other mental health problems.3 Making matters worse, they were paid between roughly $1.32 and $2.00 per hour, below the World Bank's lowest-income poverty line.4,5
Companies that use AI will face increased scrutiny as it becomes more widespread. C-suite executives, managers and investors must be aware of the human capital element of the technology and understand that it could have unintended, harmful effects on employees and a company’s reputation.
AI Raises Significant Ethical and Legal Concerns
Companies face various ethical and legal questions arising from AI’s impact on individuals. For example, AI is becoming more common in health care settings — as of early August 2024, the Food and Drug Administration had approved more than 950 AI or machine learning-enabled devices.6 However, its use in medical diagnoses, treatments and patient care can be largely invisible to patients.
While AI algorithms "learn" from their mistakes and AI-based diagnostic tools have shown a remarkable ability to support human judgment and free up more time to treat patients, doctors should not simply hand over their stethoscopes to these algorithms.
Denied Claims, Privacy Violations and Intellectual Property Rights
The uncertainties and potential liabilities arising from AI-based decision-making will pressure businesses to use AI transparently and ethically. However, AI models are typically black boxes — even the people who build them don’t know how the models reach their conclusions. This could have far-reaching and troubling consequences.
For example, insurance providers' use of AI to review claims more efficiently has left many patients with unexpected medical bills. During a two-month period in 2022, health insurer Cigna denied more than 300,000 requests for payment using an AI tool that spent an average of 1.2 seconds on each case.7 Such practices may invite regulatory scrutiny because insurance regulations in many states require doctors to review claims before health insurers can reject them.
AI could lead financial services firms to violate consumers’ privacy. The U.K.’s Financial Conduct Authority found that credit card data had been mined to detect when the cardholders sought marriage counseling. An AI algorithm reduced the cardholders’ credit limits based on a correlation between marital difficulties and credit card defaults.8
AI algorithms may cause banks to charge different loan rates or deny credit based on personal characteristics, including religious or political affiliations and shopping habits. These biases, which may result from inaccurate information, could lead to greater financial exclusion and distrust of the technology, especially among society's most vulnerable.9
Deploying AI introduces new legal and regulatory issues, resulting in lawsuits over intellectual property rights and questions about the fair use of content in creative industries. In one case, a group of artists filed a federal class action lawsuit against Stability AI, Midjourney and DeviantArt, alleging that the companies' use of AI violated the Digital Millennium Copyright Act, the right of publicity and unfair competition law.10 Companies that use third-party content to train AI algorithms must address the legal and regulatory risks surrounding intellectual property, particularly in using generative AI that creates new images and written content.
Potential AI Threats in Cybersecurity
The proliferation of AI is part of the expansion of digitalization in every industry. While a robust digital infrastructure supports economic growth and provides access to essential services, rapid growth in the digital economy makes cybersecurity a sustainability issue. Cybersecurity firm Barracuda found that ransomware attacks on municipal, health care and educational entities quadrupled from August 2021 to July 2023.11
The risk that AI will be used in cyber threats is real and growing. Generative AI in chatbots and deepfake technologies helps bad actors create more convincing scams. One phishing attack used Facebook Messenger chatbots to impersonate the company's support team and steal the credentials used to manage Facebook pages.12 Deepfakes of Elon Musk's voice have been used to scam consumers out of millions of dollars by convincing them they are buying Musk-endorsed products.
In 2024, businesses across all industries reported losing an average of roughly $450,000 to deepfake scams, with even higher losses in the financial sector.13 Deloitte reports that AI-generated content contributed to more than $12 billion in U.S. fraud losses in 2023 and that the total could reach $40 billion by 2027. With the technology still in its early stages, the damages will likely keep growing.14 While some industry groups and governments are considering requiring AI-generated content to be labeled "generated by AI," enforcing such rules would probably be difficult, and perpetrators with malicious intent would likely ignore them.
AI’s Environmental Footprint
Although AI software has no physical form of its own, it can significantly impact the physical world. It requires servers to run the algorithms that produce analyses and create content. All these servers and the physical infrastructure to support them require electricity, which generates carbon emissions unless it comes from green energy sources.
Estimates show that to satisfy requests over a typical 24-hour period, ChatGPT consumes 260.42 megawatt-hours (MWh) of electricity.15 By comparison, the average three-bedroom house in the U.S. consumes 11.7 MWh per year.16
ChatGPT consumes more than 20 times as much electricity in a single day as a three-bedroom home uses in an entire year.
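As a quick back-of-the-envelope check of that comparison, here is a minimal illustrative calculation in Python (the variable names are ours; the two consumption estimates are the figures cited above in notes 15 and 16):

# Back-of-envelope check of the ChatGPT vs. household energy comparison,
# using the estimates cited in this article (notes 15 and 16).
chatgpt_daily_mwh = 260.42  # estimated ChatGPT consumption over 24 hours (MWh)
home_annual_mwh = 11.7      # average annual usage of a U.S. three-bedroom home (MWh)

ratio = chatgpt_daily_mwh / home_annual_mwh
print(f"One day of ChatGPT is roughly {ratio:.1f} times a home's annual usage")
# Prints roughly 22.3, consistent with "more than 20 times."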
OpenAI found that the computing power needed to train the largest AI models doubled every 3.4 months from 2012 to 2018. Compare that to the period from 1959 to 2012, when the computing power used to train AI models doubled roughly every two years. In other words, the compute needs of today's AI models are doubling at least seven times faster than before.17
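The "seven times faster" figure follows directly from those two doubling periods; here is a minimal illustrative calculation, assuming the earlier period's doubling time was about 24 months, as stated above:

# Rough check of the doubling-rate comparison described above.
pre_2012_doubling_months = 24.0  # roughly every two years (1959-2012)
post_2012_doubling_months = 3.4  # every 3.4 months (2012-2018), per OpenAI

speedup = pre_2012_doubling_months / post_2012_doubling_months
print(f"Compute demand is doubling about {speedup:.1f} times faster than before")
# Prints roughly 7.1, consistent with "at least seven times faster."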
According to University of Massachusetts researchers, training several common large AI models can generate more than 626,000 pounds of carbon dioxide equivalents — nearly five times the lifetime emissions of the average American car, including the emissions from manufacturing the car.18
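That comparison implies a lifetime-emissions figure of roughly 125,000 pounds of carbon dioxide equivalents per car. The per-car number below is an inference from the article's own figures, not a value taken from the cited study:

# Implied per-car figure behind the "nearly five times" comparison.
training_emissions_lbs = 626_000  # CO2e from training several large models (note 18)
cars_equivalent = 5               # "nearly five times" the average car's lifetime emissions

implied_car_lifetime_lbs = training_emissions_lbs / cars_equivalent
print(f"Implied lifetime emissions per car: about {implied_car_lifetime_lbs:,.0f} lbs CO2e")
# Prints about 125,200 lbs, including emissions from manufacturing the car.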
And that’s not all. This massive use of computer servers and electricity generates heat that must be removed to keep those servers functioning. Currently, most data centers use cooling towers that require significant amounts of fresh water. For example, a training cycle for GPT-3 using Microsoft’s state-of-the-art U.S. data centers uses an estimated 700,000 liters of water. It would take three times that amount if the training were done in the company’s data centers in Asia.19
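To put those volumes in the same units as the corporate disclosures discussed below, here is a minimal conversion sketch; the liter figures are the cited estimates, and the liters-per-gallon factor is a standard constant:

# Convert the estimated GPT-3 training water use to gallons for comparison
# with the corporate water disclosures cited below.
LITERS_PER_GALLON = 3.785
us_training_liters = 700_000                   # estimated U.S. data center training cycle (note 19)
asia_training_liters = 3 * us_training_liters  # roughly three times as much in Asian data centers

print(f"U.S. training cycle: about {us_training_liters / LITERS_PER_GALLON:,.0f} gallons")
print(f"Asia training cycle: about {asia_training_liters / LITERS_PER_GALLON:,.0f} gallons")
# Prints roughly 185,000 and 555,000 gallons, respectively.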
Microsoft recently disclosed that its global water consumption increased by 34% from 2021 to 2022, to roughly 5.8 billion gallons.20 Third-party researchers tie the increase to Microsoft's AI development. Meanwhile, tech giant Alphabet consumed 5.2 billion gallons of water at its data centers in 2022.21 This unsustainable trend has led to innovations to reduce AI's water demands. For example, Microsoft announced a new data center design that consumes no water for cooling.
Consider AI's energy consumption in the context of the growing demand for electricity as economies electrify to reduce their use of fossil fuels. AI's water usage also negatively impacts global water security in regions already strained by droughts worsened by climate change. In August 2022, over 37% of the U.S. experienced severe drought conditions or worse, and the summer of 2024 was the hottest on record.22,23
Combined with the pressure on electrical grids to power homes and the growing fleets of electric vehicles, power consumption associated with AI poses a material risk to scarce resources. Businesses and communities seeking to operate sustainably must consider these costs and the trade-offs they involve.
How to Address AI’s Sustainability Risks
Artificial intelligence offers transformative opportunities for the global economy, health care, education and society. However, as the technology becomes more widespread, the potential risks increase. Managing these risks will require effective governance, oversight, standards and best practices to advance AI responsibly and sustainably. In some cases, the AI industry itself is taking steps to address these issues.
Capgemini, a multinational information technology services and consulting company, created an internal Code of Ethics for AI that guides its approach and commitment to developing trustworthy and ethical AI tools.24 Employees are trained to apply these principles in their work, and the company uses the framework as a compliance mechanism in its third-party relationships.
Adobe created an Ethics Advisory Board to oversee the implementation of AI development requirements and respond to ethics concerns and whistleblowers.25
Microsoft offers resources such as its AI Business School to help companies create an effective AI strategy, enable an AI-ready culture and innovate and scale the technology responsibly.26 IBM is developing a cybersecurity solution using AI to improve monitoring capabilities. A third-party study showed these potential benefits for a company utilizing IBM’s tool:
Return on investment of 239%.
90% reduction in analyst time spent investigating incidents.
60% reduction in the risk of a significant security breach.27
Responding to the environmental impact of AI development, Internet Initiative Japan uses outside-air cooling techniques to reduce the amount of energy and water required to operate its data center. It is also developing an AI tool to optimize its air conditioning settings based on fluctuating weather conditions.28 Separately, semiconductor company AMD offers products that reduce customer costs by half and cut power consumption by about 30% to 40%.29
These efforts suggest that the tech industry realizes the importance of managing AI’s risks and potential liabilities. Still, governments aren’t leaving it to the industry to self-regulate. Legislative bodies worldwide are moving to address concerns about AI’s potential societal impacts.
In June 2023, the European Parliament approved its position on the AI Act, which aims to "make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly." The Parliament said AI systems should be "overseen by people, rather than by automation, to prevent harmful outcomes."30
In September 2023, the U.S. Senate convened a forum with tech titans Bill Gates, Mark Zuckerberg, Elon Musk and others to discuss AI risks. Participants recognized the significant benefits AI could provide to the world but also highlighted its potential to create new forms of discrimination, risks to national security and other concerns.
As our series of articles on AI demonstrates, this technology is transforming how people work and businesses operate. But while the potential benefits are immense, we must also address its legal, societal and environmental risks. In our view, this calls for robust governance and oversight measures, a commitment to transparency, and regulations designed to manage these risks and societal impacts.
Who Is Liable for AI’s Mistakes?
A recent study found that clinicians’ answers to medical questions contained inappropriate or incorrect information only 1.4% of the time. In comparison, 18.7% of the responses from Google’s medicine-specific large language model (the type of model used in ChatGPT, Bard and others) were inappropriate, and 16.1% were incorrect.31 Furthermore, the model’s answers displayed evidence of incorrect information retrieval and reasoning, which could have life-threatening consequences in practice.32 Human physicians are not infallible, but there may be a tendency to trust technology too readily.
Given the potential for AI to make mistakes, who would be liable if an algorithm generated an incorrect diagnosis or prescribed a harmful treatment? The company that developed the algorithm or the entity (e.g., hospital, medical professional) that chose to apply it? Similarly, if a driverless car caused an accident, who would bear liability — the AI software developer or the owner of the fleet of vehicles?
As the use of customer service chatbots increases, companies must address the associated risks or face potentially embarrassing and financially costly AI-generated mistakes and incorrect information that customers rely on to make decisions. While AI’s increasing sophistication represents new opportunities, it also creates financially material risks to almost every business.
Generative AI also raises serious concerns about the legitimacy, accuracy and reliability of the information we access online. Deepfake technology uses AI to manipulate videos, images and audio to show people saying or doing things they didn’t say or do. The implications, both financial and societal, are frightening. Suppose you saw a photo or video showing a politician, celebrity or CEO doing or saying something offensive or criminal. Would you stop to think that it could be a fake?
Endnotes
MIT Technology Review Insights, “The state of responsible technology,” January 2023.
Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, October 10, 2018.
Billy Perrigo, “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” Time, January 18, 2023.
Perrigo, “OpenAI Used Kenyan Workers.”
World Bank, “Understanding Poverty,” accessed October 13, 2023.
Elise Reuter and Jasmine Ye Han, "The Number of AI Medical Devices Has Spiked in the Past Decade," MedTech Dive, October 9, 2024.
Patrick Rucker, Maya Miller, and David Armstrong, "How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them," ProPublica, March 25, 2023.
Artificial Intelligence/Machine Learning Risk & Security Working Group, “Artificial Intelligence Risk & Governance,” Wharton AI & Analytics for Business, accessed October 12, 2023.
El Bachir Boukherouaa, Ghiath Shabsigh, and Khaled Alajmi, et al., “Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance,” International Monetary Fund Departmental Paper No. 2021/024, October 22, 2021.
Benj Edwards, “Artists File Class-Action Lawsuit Against AI Image Generator Companies,” Ars Technica, January 16, 2023.
Fleming Shi, “Threat Spotlight: Reported Ransomware Attacks Double as AI Tactics Take Hold,” Barracuda, August 2, 2023.
Bill Toulas, “Malicious Messenger Chatbots Used to Steal Facebook Accounts,” Bleeping Computer, June 28, 2022.
Henry Patishman, “The Impact of Deepfake Fraud: Risks, Solutions, and Global Trends,” Regula Forensics, November 15, 2024.
CBS News Texas, "Deepfakes of Elon Musk Are Contributing to Billions of Dollars in Fraud Losses in the U.S.," November 24, 2024.
Zodhya Tech, “How Much Energy Does ChatGPT Consume?” Medium, May 19, 2023.
Sam Wigness, “What’s the Average Electric Bill for a 3-Bedroom House?” Solar.com, October 3, 2023.
Karen Hao, “The Computing Power Needed to Train AI Is Now Rising Seven Times Faster Than Ever Before,” MIT Technology Review, November 11, 2019.
Karen Hao, “Training a Single AI Model Can Emit as Much Carbon as Five Cars in Their Lifetimes,” MIT Technology Review, June 6, 2019.
Pengfei Li, Jianyi Yang, and Mohammad A. Islam, et al., "Making AI Less 'Thirsty': Uncovering and Addressing the Secret Water Footprint of AI Models," arXiv, October 25, 2023.
Microsoft, “2022 Environmental Sustainability Report,” May 10, 2023.
Google, “2023 Environmental Report,” July 24, 2023.
Pengfei Li, et al., “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models.”
National Oceanic and Atmospheric Administration, “Earth Had Its Hottest August in 175-year Record,” News & Features, September 12, 2024.
Capgemini, Code of Ethics for AI, accessed October 16, 2023.
Adobe, AI Ethics, accessed October 16, 2023.
Microsoft, AI Business School, accessed October 17, 2023.
IBM, IBM Security QRadar SIEM, accessed October 17, 2023.
Internet Initiative Japan, “Data Centers: Societal Role and Challenges,” accessed October 18, 2023.
AMD, “Northern Data Takes HPC to New Affordability Levels with AMD,” 2021.
European Parliament, “EU AI Act: First Regulation on Artificial Intelligence,” June 14, 2023.
Emily Harris, “Large Language Models Answer Medical Questions Accurately, but Can’t Match Clinicians’ Knowledge,” JAMA 330, no. 9 (2023): 792-794.
Harris, “Large Language Models.”
References to specific securities are for illustrative purposes only and are not intended as recommendations to purchase or sell securities. Opinions and estimates offered constitute our judgment and, along with other portfolio data, are subject to change without notice.
The opinions expressed are those of American Century Investments (or the portfolio manager) and are no guarantee of the future performance of any American Century Investments' portfolio. This material has been prepared for educational purposes only. It is not intended to provide, and should not be relied upon for, investment, accounting, legal or tax advice.