Ethical considerations of AI in the workplace
Deep diving into the tech of tomorrow
5 minute read
Alarm bells sounded when generative artificial intelligence (AI) programs, such as ChatGPT and DALL-E, sprang into the mainstream – they were unprecedented, unregulated and easy to use.
AI has since continued to be an exciting and ever-evolving area for development in the tech industry, with Meta, Microsoft, Google and Amazon all announcing or developing large-scale models.
But, according to experts, the social concerns surrounding AI have been misplaced.
“When ChatGPT hit public consciousness, the number-one response was cynicism about whether our kids would cheat at school,” said Associate Professor Julia Powles, Director of the Tech & Policy Lab at The University of Western Australia.
“I’d like to see a fraction of that concern directed at the products themselves, and the companies behind them.
“What you’ll find is a web of legal and ethical implications, which should give pause to any individual or organisation committed to acting with integrity.”
According to Associate Professor Powles, the ethical use of AI is grounded in understanding where it comes from, how it is produced, who is producing it and how it is used.
How we are using AI
How AI is being used is an important consideration because Australians interact with the technology every day.
AI can recognise your facial features to unlock your phone, while social media applications constantly use AI to tailor your feeds to your preferences.
Some businesses are embracing AI, using tools to automate tasks, increase efficiency and cut operational costs.
The concerns for businesses using AI have mostly revolved around privacy and security, according to Andreas Cebulla, Associate Professor of The Future of Work at Flinders University’s College of Business, Government and Law.
“Our studies have found that job security – the fear of losing one’s job to a machine – is the greatest concern,” he said.
“It is not surprising – ‘robots will take your job’ has been used as a catchy headline for some time.”
Associate Professor Cebulla said data protection and changing workplace dynamics were also among the fears for Australian businesses.
However, he said these concerns were largely unsupported by the evidence.
“It should be noted that in most instances, businesses use AI rather sparingly, if at all,” Associate Professor Cebulla said.
“Businesses have been using AI for isolated tasks only, such as customer service, fraud detection (in finance), quality control and production processes.”
What should raise concern, according to Associate Professor Cebulla, is who is programming the AI businesses are interacting with.
It is also important to remember that AI is a product and, as with other products businesses use, we must consider the ethics of its production and use.
“People want to sell their product,” Associate Professor Cebulla said.
“This is not to say every seller of AI is ruthless but, as with any other business, there are incentives to cut corners.
“Some care more and others less about their product’s integrity.”
The foundations of OpenAI
The integrity of OpenAI, the company behind ChatGPT, was called into question when its relationship with the US outsourcing firm Sama was revealed.
OpenAI, via Sama, employed people at the firm’s Kenya offices as data labellers on meagre wages.
According to Time, these data labellers were required to sift through hours of abhorrent content, including detailed text descriptions of child sexual abuse, bestiality, murder, suicide, torture, self-harm and incest.
It is hard to deem such practices necessary or ethical, particularly given evidence in a number of active lawsuits that such labelling and moderation work can catalyse significant mental health decline.
“We don’t do nearly enough due diligence on the legitimacy of data sourcing, the contingent labour supply chains, and the environmental toll from training and maintaining large AI models,” Associate Professor Powles said.
This is the case for ChatGPT, the Microsoft-backed OpenAI product that has become the most prominent generative AI program in public discourse while attracting very little moral scrutiny.
“It is worth noting how much work the label ‘open’ is doing in ‘OpenAI’,” Associate Professor Powles said.
“In reality, the business keeps its data sources and business dealings completely confidential.”
Understanding biases
Another concern associated with the programming of AI is how it can perpetuate biased narratives.
“Unconscious bias is a risk,” Associate Professor Cebulla said.
“We look at the predictions and implicit recommendations an AI system produces and we may accept them at face value when, in fact, we are replicating – albeit faster and more efficiently – the inequities we have inherited.”
As an example, Associate Professor Cebulla pointed to the dangers of using AI in recruitment.
“Amazon’s recruitment was overlooking women because its proposed AI recruitment system was fed with data from its past workforce, which was predominantly male,” he said.
“The system wasn’t asked to weed out women, it just did.
“And it may not be the programmer’s bias, but that of the business making the data available and interpreting it.”
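To make that mechanism concrete, here is a minimal, hypothetical Python sketch – not Amazon’s actual system, and built on entirely invented synthetic data – showing how a plain classifier trained on historically male-skewed hiring outcomes learns to reward a gender-correlated proxy feature without ever being told to.

```python
# Illustrative sketch only: synthetic data, not any company's real system.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Invented "past workforce" data: skill is what should matter, but
# historical hires skewed heavily towards men regardless of skill.
gender = rng.integers(0, 2, n)                 # 0 = female, 1 = male (toy encoding)
skill = rng.normal(0.0, 1.0, n)
hired = (skill + 1.5 * gender + rng.normal(0.0, 1.0, n) > 1.0).astype(float)

# A proxy feature correlated with gender (think: wording on a CV), so the
# bias leaks in even though "gender" itself is never given to the model.
proxy = gender + rng.normal(0.0, 0.3, n)

# Train a plain logistic regression on (skill, proxy) -> hired
# using basic gradient descent; no ML library needed.
X = np.column_stack([skill, proxy])
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted hire probability
    w -= 0.5 * (X.T @ (p - hired)) / n         # gradient step on weights
    b -= 0.5 * np.mean(p - hired)              # gradient step on intercept

print("learned weights (skill, gender-proxy):", w.round(2))
# The proxy weight comes out strongly positive: the model replicates the
# historical preference for men, even though it was never asked to.
```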
Even unintentionally, systems that perpetuate such biases without intervention could have damaging consequences for equal employment.
Establishing regulations
Internationally, responses surrounding the likes of ChatGPT have been varied.
In March, Italy’s data protection authority temporarily banned ChatGPT across the country.
In June, the European Parliament voted to advance the AI Act – set to become the world’s first comprehensive AI law.
While no laws have been passed in Western Australia, ChatGPT has been banned for students in the state’s public schools.
For Associate Professor Powles, this reflects Australia’s rigid accept-or-reject stance towards AI.
“The choice frame about whether to either embrace or ban technology is always the wrong frame,” she said.
“Instead, ask what the specific AI product you are dealing with is, how it works, who is behind it, and how it will interact with your staff and your customers.”
This is not to say using AI is inherently immoral.
However, it should be approached with caution – not necessarily because of the headline concern about machines replacing people, but because its production and by-products have the potential to cause harm, whether through unconsciously biased employment systems or through being made safe via traumatic labour practices.