
Some people find GenAI perfectly fine in their own work. Others don’t like it so much

Interestingly, it seems acceptable to use GenAI for ourselves, but less so for others

When people choose to rely on technology like ChatGPT to help them complete work or school tasks, they are often blind to how much Generative AI (GenAI) contributes to their work, new research shows. The study, conducted by Associate Professors Dr. Mirjam Tuk and Dr. Anne Kathrin Klesse together with PhD student Begum Celiktutan at the Rotterdam School of Management, Erasmus University, reveals a significant discrepancy between what people consider to be acceptable levels of AI use in work tasks and the actual impact of the technology on their work.

This, the researchers say, makes it difficult to determine the ethical aspects and limits of using such technologies, as the answer to whether using GenAI is acceptable is not clear-cut. “Interestingly, it seems acceptable to use GenAI for ourselves, but less so for others,” says Dr. Tuk. “This is because people tend to overestimate their own contribution to the production of things like application letters or student papers when they co-produce them using GenAI, believing that they have only used the technology for inspiration rather than to outsource the work.”

The researchers draw these conclusions from experimental studies with more than 5,000 participants. Half of the participants were asked to complete (or to recall completing) tasks ranging from applications and coursework to brainstorming and creative assignments, with the support of ChatGPT if they so wished.

To understand how participants might also view the use of AI by others, the other half of the study participants were asked to consider their reaction to someone else completing such tasks using ChatGPT. All participants were then asked to rate the extent to which they felt ChatGPT had contributed to the outcome. In some studies, participants were also asked to indicate how acceptable they found the use of ChatGPT for the task.

The results showed that, when evaluating their own performance, participants on average credited themselves with 54% of the work and ChatGPT with 46%. When evaluating other people’s work, however, participants were more inclined to believe that GenAI had done most of the heavy lifting, estimating the human contribution at only 38%, compared with 62% for ChatGPT.

In keeping with the theme of their research, Dr. Tuk and her team used a ChatGPT detector to assess the accuracy of participants’ estimates of how much of their own work, and of others’ work, was done by the technology and how much by human effort. The difference between estimated and actual creator and ChatGPT involvement, the researchers said, shows a worrying level of bias and blindness to how much GenAI actually affects our work.

“While people think they are using GenAI to get inspiration, they tend to think others are using it as a means of outsourcing a task,” says Dr. Tuk. “This leads people to think it is perfectly appropriate for them to use GenAI, but not for others.”

To overcome this, it is crucial, when embedding GenAI and setting guidelines for its use, to create awareness of these biases, both towards oneself and towards others.

The full study, “Acceptance is in the eye of the beholder: Self-other biases in GenAI collaboration,” can be read in the International Journal of Research in Marketing.
