Understanding the Ethical Concerns and Issues with Generative AI
AI Literacy
The most basic definition of information literacy is the ability to locate, evaluate, and use information, which includes using information ethically. AI literacy is related to this concept: it is the ability to understand and interpret AI systems and their outputs.
Note: Being AI literate does not mean you need to understand the advanced mechanics of AI. It means that you are actively learning about the technologies involved and that you critically approach any texts you read that concern AI.
Critical AI Information Literacy
Critical information literacy involves critically examining the systems and contexts in which the information is produced and shared, or the sociopolitical factors that influence and shape the production, dissemination, and consumption of information.
Looking at AI through a critical literacy lens therefore means considering how these technologies are being developed and used, and it prompts acknowledgement of, exploration of, and action against the real harms that AI technologies can promote, as well as the opportunities they afford.
Misinformation
AI tools have been used to intentionally produce false images or audiovisual recordings to spread misinformation and mislead. Referred to as "deep fakes," these materials can be utilized to subvert democratic processes and are thus particularly dangerous.
Accuracy
While AI tools often generate output that appears authoritative and confident, these tools often don't show the process they used to create that content or the sources they based the generated content on. In fact, AI tools often fabricate imaginary sources ("hallucinations") that they claim were used to create the generated content.
Safety
Using chatbots for social support has many potential benefits, but it is also territory that is still being explored. Below are a few articles that discuss this topic in more detail.
Academic Integrity
Using generative AI for writing without citing it is considered plagiarism. (See page 22 of the Student Handbook for HFU's Academic Integrity policy.) While a generative AI tool is not a "person," the work it creates cannot be claimed solely as one's own. See the Citing Generative AI tab.
Many generative AI tools are notorious for creating false citations. This may improve over time, but as with any research, you need to be vigilant about checking and evaluating all of the content that will be attributed to your work.
Copyright and Intellectual Property
Artificial intelligence poses several challenges related to copyright and intellectual property. Are LLMs trained on copyrighted material? Can copyrighted material be entered into AI tools (e.g., summarization tools)? Does AI use the original work of others to allegedly create something new? There are many questions and concerns such as these, and much is still to be determined.
Follow updates from the U.S. Copyright Office at Copyright and AI
Environmental Concerns
The training and use of generative AI requires very large amounts of computing power, which has huge implications for greenhouse gas emissions and climate change. There are also environmental costs associated with storing the outputs created.
Read: Measuring the environmental impacts of artificial intelligence compute and applications
Labor Concerns
As noted in Time magazine's article "150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting," some worker communities involved in developing AI tools have been exploited. These employees, often called "invisible workers" or "ghost workers," range from those who train, annotate, or label the data to those who refine and test the algorithms or models, among other tasks. See also the 60 Minutes interview "Labelers training AI say they're overworked, underpaid and exploited by big American tech companies" (transcript and video).
Bias
Generative AI models learn from vast amounts of data, which can be biased or contain existing societal prejudices. If these biases are not adequately addressed during the training process, AI-generated content may perpetuate and reinforce discriminatory or unfair practices. Even if specific biased resources are excluded from the model, the overall training material could underrepresent different groups and perspectives. This can have negative consequences, such as reinforcing stereotypes or excluding marginalized perspectives. Generative AI like ChatGPT is documented to have provided output that is socio-politically biased, occasionally even containing sexist, racist, or otherwise offensive information.
See computer scientist Joy Buolamwini's TED Talk to learn more!
Digital Equity
Generative AI has the potential to significantly amplify existing inequalities in society and contribute to the digital divide. This can arise from the creation or exacerbation of disparities in access to resources, tools, skills and opportunities. Those who can afford access to the premium AI tools and services will have an advantage over those who can’t.
Privacy and Security
Most Generative AI tools collect and store data about users. The ways different AI systems use data or information inputted by users is also not always transparent. When you upload material into an AI tool, it's often unclear whether the developer will retain that information, use it to train its AI tools, or even share this information with other users of the tool.
Avoid sharing any personal or sensitive information via AI-powered tools. Always review the privacy policy of a generative AI tool before using it. Be cautious about policies that permit the data you input to be freely distributed to third-party vendors and/or other users.
Example from ChatGPT:
The privacy policy states that this data can be shared with third-party vendors, law enforcement, affiliates, and other users.
While you can request to have your account deleted, the prompts that you input into ChatGPT cannot be deleted.