The outputs produced by Gen-AI tools can be misleading, inaccurate or biased. Whilst its output may have the appearance of competence and confidence, Gen-AI may generate untrue statements, imaginary authors and made-up references which are presented as fact (University of Sheffield 2024). You should critically review the information provided to ensure it is factually correct.
Exercise caution when asking Gen-AI tools for citations relating to your subject. They can 'hallucinate' references to information sources that do not exist. Instead, cite credible sources that you have consulted yourself.
Lawyer Steven Schwartz used ChatGPT to conduct legal research for an opposition to a motion and cited six cases that were later proven to be non-existent after the court was unable to locate them. Schwartz and a colleague were sanctioned and fined. In a separate case, a New York court ordered lawyer Jae Lee to appear before its attorney grievance panel after she confirmed that she had used ChatGPT to research and cite a medical malpractice case that did not exist.
Another matter to be aware of is bias in AI. Because Gen-AI is trained on human-produced information, it is inevitable that human biases will be present in its systems. Although AI has the potential to reduce human bias, there have been many instances where it has contributed to the problem. One incident involved Amazon, which had trained its automated recruiting system on data from the previous ten years. In those years men formed 60% of its employees, so the recruiting system perpetuated this bias and determined that male candidates were preferable.
A lesser-known concern with the use of AI is sustainability. AI is playing a key role in climate studies and in reducing our environmental footprint, but there are concerns that the energy required to run AI systems could in future have its own detrimental impact. A report in The New Yorker stated that ChatGPT, in responding to two hundred million requests per day, was consuming more than half a million kilowatt-hours of electricity daily (compared with the twenty-nine kilowatt-hours a day consumed by the average US household). In addition, millions of gallons of water are being used to keep the equipment cool.
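To put those reported figures in perspective, a rough back-of-the-envelope calculation (using only the numbers quoted above, which are estimates rather than confirmed measurements) shows the energy cost per request and how many average US households the same daily consumption would power:

```python
# Figures quoted in the New Yorker report cited above (estimates, not measurements).
daily_energy_kwh = 500_000          # reported daily electricity consumption
daily_requests = 200_000_000        # reported requests per day
household_kwh_per_day = 29          # average US household daily consumption

# Energy per request, converted from kWh to watt-hours.
wh_per_request = daily_energy_kwh * 1000 / daily_requests

# Number of average US households the same daily energy would supply.
equivalent_households = daily_energy_kwh / household_kwh_per_day

print(f"Energy per request: {wh_per_request:.1f} Wh")             # 2.5 Wh
print(f"Equivalent US households: {equivalent_households:,.0f}")  # 17,241
```

Each individual request is cheap (about 2.5 watt-hours), but at two hundred million requests a day the total matches the daily electricity use of over seventeen thousand average US households.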
There has been speculation and fear about AI replacing the need for human workers, but there is a more pressing concern: the use of low-paid workers and extensive working hours. Finnish company Metroc used prison labour to fulfil its data-labelling requirements. Many AI companies outsource this work to low-wage labour, often in the global South, which raises ethical concerns.
An article by the BBC on the effects of AI on the wages of low-paid workers highlighted that AI could limit their earning potential as parts of their jobs become automated. It further comments that businesses are using AI to measure employee productivity, which could determine wages.
Although these issues may not directly affect your own use of AI systems, it is important to consider their wider effects on society and the environment.