AI at SDSU

Copyright statement and reuse

This ethical AI assessment tool was developed from a roundtable presentation, Assessing fair AI use: Rubric development for assessing ethical and social equity concerns in AI, at ACH2024: the 2024 virtual conference of the Association for Computers and the Humanities, November 6–8, 2024.

Creative Commons License:

  • Ethical AI Assessment Tool © 2024 by Sarah Tribelhorn is licensed under CC BY-NC-SA 4.0 

 

Ethical AI Assessment Tool

Abstract

Assessing Fair AI Use: Rubric Development for Assessing Ethical and Social Equity Concerns in AI

 

As a librarian focusing on sustainability, I am concerned about the costs of Artificial Intelligence (AI) and digital storage. The burgeoning field of AI, with its associated supercomputing capabilities and related digital storage, relies heavily on resource extraction, and this progress raises ethical and social concerns. Issues of power, oppression, and justice must be considered alongside the environmental impact of AI; for example, AI use has been shown to consume surprisingly large quantities of water. In regions facing increasing water scarcity, such costs become significant, particularly for educational institutions. Similarly, mineral mining carries devastating environmental and human costs, and the practice intensifies as AI technology advances. The profitability of mining often fails to account for its full societal impact, including environmental damage, worker illness and fatalities, and the displacement of local communities. These hidden costs are often obscured from those who benefit from AI advancements, and this lack of transparency can be perpetuated by those in positions of power or those unwilling to acknowledge or address these issues. The energy production needed to support AI operations draws constantly on our reserves of minerals, water, and fossil fuels. This resource depletion could indirectly contribute to a range of global problems, including violence, conflict, climate displacement, pollution, species extinction, and environmental degradation, with the brunt of these consequences disproportionately borne by disadvantaged populations and ecosystems worldwide. The data centers where our digital data is stored raise similar concerns: they have a large carbon footprint and present social equity issues of their own.
These are complex issues, and this round table aims to discuss questions around these concerns in order to develop a rubric for assessing AI use and digital storage, one that could be used by librarians, academics, students, or anyone wishing to understand the environmental and social equity consequences of their AI use. Some questions address ethical implications, such as transparency, fairness, privacy, accountability, and social impact; others address environmental implications, such as energy consumption, resource utilization, and sustainability. Examples include: Does the AI system avoid bias against any individual or group? Is there a clear policy on data collection, storage, and usage? Who is responsible for the AI system's decisions? How much energy does the AI system require? What materials are needed for the hardware that runs the AI? Does the AI system contribute to sustainable practices? Do you know where your data center is located? Discussion around these and other questions will result in a robust rubric that can help guide the use and development of AI resources and digital storage with a conscientious approach to ethical and environmental concerns.

Keywords:

Artificial intelligence; assessment tool; sustainability; social equity; environmental concerns

Citation:

Tribelhorn, S. (2024, November 7). Assessing fair AI use: Rubric development for assessing ethical and social equity concerns in AI [Virtual presentation]. ACH2024: the 2024 virtual conference of the Association for Computers and the Humanities, November 6–8, 2024.

 

Questions for Tool Development

These are some of the considerations that were used to develop the rubric to assess equitable use of AI:

Ensuring Fairness, Bias, and Accountability

  • Does the AI system avoid bias against any individual or group?
  • Are measures in place to ensure equitable treatment and outcomes?
  • Who is responsible for the AI system’s decisions?
  • Are there protocols for addressing mistakes or harmful outcomes?

Transparency in AI Algorithms

  • Is the AI system's decision-making process understandable to users?
  • Are the data sources and training methods openly available for scrutiny?

Data Privacy Protection & Security

  • How does the AI system protect user data?
  • Is there a clear policy on data collection, storage, and usage?

Energy Consumption & Resource Utilization

  • How much energy does the AI system require?
  • Are there efforts to minimize the carbon footprint of AI operations?
  • What materials are needed for the hardware that runs AI?
  • Is there a plan for recycling or repurposing AI-related hardware?

Environmental Impact & Sustainability

  • Will this result in increased pollution?
  • How is the pollution being mitigated?
  • Is the mining of resources disrupting ecosystems?
  • Where are the data centers located?

Social Impact

  • What is the impact on mining communities?
  • Are communities being displaced owing to the development of data storage facilities?
  • Will this create climate refugees?
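As a worked illustration only, and not part of the published tool, the question groups above could be encoded as a simple scoring structure. In this minimal Python sketch the category keys mirror the headings; the 0–2 answer scale (0 = no/unknown, 1 = partial, 2 = yes), the averaging logic, and all function names are assumptions for illustration:

```python
# Minimal sketch of the assessment rubric as a data structure: each
# category maps to its list of questions. The 0-2 answer scale and the
# averaging logic below are illustrative assumptions only.

RUBRIC = {
    "Fairness, Bias, and Accountability": [
        "Does the AI system avoid bias against any individual or group?",
        "Are measures in place to ensure equitable treatment and outcomes?",
    ],
    "Energy Consumption & Resource Utilization": [
        "How much energy does the AI system require?",
        "Are there efforts to minimize the carbon footprint of AI operations?",
    ],
    # ... remaining categories omitted for brevity ...
}

def score_category(answers):
    """Average the rated answers (0 = no/unknown, 1 = partial, 2 = yes).

    Unanswered questions (None) are excluded; returns None if nothing
    in the category was rated.
    """
    rated = [a for a in answers if a is not None]
    return sum(rated) / len(rated) if rated else None

def assess(responses):
    """Map {category: [answers]} to {category: average score}."""
    return {cat: score_category(vals) for cat, vals in responses.items()}
```

For example, `assess({"Fairness, Bias, and Accountability": [2, 1]})` yields an average of 1.5 for that category; a completed assessment would surface low-scoring categories as the areas needing scrutiny.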

References

Budennyy, S. A., Lazarev, V. D., Zakharenko, N. N., Korovin, A. N., Plosskaya, O. A., Dimitrov, D. V., Akhripkin, V. S., et al. (2022). eco2AI: Carbon emissions tracking of machine learning models as the first step towards sustainable AI. Doklady Mathematics, 106(1), S118–S128. https://doi.org/10.1134/S1064562422060230

 

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.12987/9780300252392

 

De Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7(10), 2191–2194. https://doi.org/10.1016/j.joule.2023.09.004

 

Diakopoulos, N. (2020). Accountability, transparency, and algorithms. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI (pp. 197–214). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.001.0001

 

Filar Williams, E. (2024). Every job must be a climate job. Oregon State University. https://ir.library.oregonstate.edu/concern/defaults/f1881v675

 

George, A. S., Hovan George, A. S., & Gabrio Martin, A. S. (2023). The environmental impact of AI: A case study of water consumption by ChatGPT. Partners Universal International Innovation Journal, 1(2), 97–104. https://doi.org/10.5281/zenodo.7855594

 

Hagerty, A., & Rubinov, I. (2019). Global AI ethics: A review of the social impacts and ethical implications of artificial intelligence. arXiv. https://arxiv.org/pdf/1907.07892

 

Keenan, J., Kemp, D., & Owen, J. (2019). Corporate responsibility and the social risk of new mining technologies. Corporate Social Responsibility and Environmental Management, 26, 752–760. https://doi.org/10.1002/csr.1717

 

Halsband, A. (2022). Sustainable AI and intergenerational justice. Sustainability, 14(7), 3922. https://doi.org/10.3390/su14073922

 

Heikkilä, M. (2023, December 1). Making an image with generative AI uses as much energy as charging your phone. MIT Technology Review. 

 

International Energy Agency. (2024). Electricity 2024: Analysis and forecast to 2026. https://iea.blob.core.windows.net/assets/6b2fd954-2017-408e-bf08-952fdd62118a/Electricity2024-Analysisandforecastto2026.pdf

 

Ligozat, A. L., Lefevre, J., Bugeau, A., & Combaz, J. (2022). Unraveling the hidden environmental impacts of AI solutions for environment life cycle assessment of AI solutions. Sustainability, 14(9), 5172. https://doi.org/10.3390/su14095172

 

Mach, K. J., Kraan, C. M., Adger, W. N., Buhaug, H., Burke, M., Fearon, J. D., Field, C. B., Hendrix, C. S., Maystadt, J.-F., O’Loughlin, J., Roessler, P., Scheffran, J., Schultz, K. A., & von Uexkull, N. (2019). Climate as a risk factor for armed conflict. Nature, 571(7764), 193–197. https://doi.org/10.1038/s41586-019-1300-6

 

Pasek, A. (2023, July). Getting into fights with data centers: Or, a modest proposal for reframing the climate politics of ICT [White paper]. Experimental Methods and Media Lab, Trent University, Peterborough, Ontario. https://emmlab.info/Resources_page/Data%20Center%20Fights_digital.pdf

 

Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L.-M., Rothchild, D., So, D., Texier, M., & Dean, J. (2021). Carbon emissions and large neural network training. arXiv. https://doi.org/10.48550/arXiv.2104.10350

 

Ren, S. (2023). How much water does AI consume? The public deserves to know. OECD AI Policy Observatory. https://oecd.ai/en/wonk/how-much-water-does-ai-consume

 

Saenko, K. (2023, May 23). Is generative AI bad for the environment? A computer scientist explains the carbon footprint of ChatGPT and its cousins. The Conversation. http://theconversation.com/is-generative-ai-bad-for-the-environment-a-computer-scientist-explains-the-carbon-footprint-of-chatgpt-and-its-cousins-204096

 

Szarmes, P., & Élo, G. (2023). Sustainability of large AI models: Balancing environmental and social impact with technology and regulations. Chemical Engineering Transactions, 107, 103–108. https://doi.org/10.3303/CET23107018