In a new study published in the Journal of Computer-Mediated Communication, Penn State University researchers report that social media users may trust artificial intelligence for its accuracy and objectivity, but that trust drops when users are reminded that machines cannot make subjective decisions.

(Photo: Gerd Altmann/Pixabay)

Gatekeeping of Information via AI and Human Intervention

S. Shyam Sundar, an affiliate of Penn State's Institute for Computational and Data Sciences, asserted that social media and online media urgently need content moderation. He emphasized that in conventional media, news editors serve as gatekeepers, but the online world operates differently: gatekeeping there is impractical because the internet's gates are wide open.

According to Roskilde University, there are three types of online gatekeeping, each of which employs technology in a different way and with a distinct goal in mind: affinity-based, link-based, and editorially-based gatekeeping processes. Affinity-based gatekeeping is the guiding principle social networking sites use to choose which content to show users in their news feeds.
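
To make the idea of affinity-based gatekeeping concrete, the hypothetical Python sketch below ranks posts for a user's feed by a simple affinity score derived from past interactions. The scoring weights, data structures, and example data are illustrative assumptions, not a description of any real platform's algorithm.

```python
# Hypothetical illustration of affinity-based gatekeeping: posts are ranked
# for a user's feed by how strongly the user has interacted with each author.
# The weights and data below are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def affinity_score(user_history: dict, author: str) -> float:
    """Score an author from the user's past likes and comments (toy weights)."""
    likes = user_history.get(author, {}).get("likes", 0)
    comments = user_history.get(author, {}).get("comments", 0)
    return 1.0 * likes + 2.0 * comments  # comments weighted higher than likes

def rank_feed(posts: list, user_history: dict) -> list:
    """Order candidate posts so high-affinity authors appear first."""
    return sorted(posts, key=lambda p: affinity_score(user_history, p.author), reverse=True)

if __name__ == "__main__":
    history = {"alice": {"likes": 12, "comments": 3}, "bob": {"likes": 1, "comments": 0}}
    feed = rank_feed([Post("bob", "Lunch photo"), Post("alice", "New paper out!")], history)
    for post in feed:
        print(post.author, "->", post.text)
```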

According to the first author, Maria D. Molina, assistant professor of advertising and public relations, having both human and AI editors has benefits and drawbacks. She noted that human assessment tends to be more accurate when judging whether content is racist or could incite self-harm, but human editors cannot keep up with the sheer volume of online content.

That weakness of human editors is the AI editor's strength. Although AI editors can analyze content at a large scale, they cannot provide precise recommendations, and there is also a tendency for information to be censored.

Transparency and Interactive AI

According to Deloitte, transparent AI is explainable AI. It enables people to see whether models have been extensively tested, make sense, and can explain why particular decisions were made.

According to Molina, incorporating both humans and AI into the moderation process is one way to develop a trustworthy moderation system. She added that transparency, or letting users know when a machine is involved in moderation, is one tactic for increasing user confidence in AI.
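
As a rough illustration of what a combined human-AI moderation pipeline with a transparency notice might look like, the Python sketch below routes low-confidence machine decisions to a human reviewer and records who made the final call so the user can be told. The classifier, threshold, and labels are illustrative assumptions rather than the system used in the study.

```python
# Hypothetical sketch of a hybrid moderation pipeline: an AI classifier flags
# content first, uncertain cases go to a human reviewer, and every decision
# records its source so users can be told when a machine moderated their post.
# The toy classifier, threshold, and margin are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    flagged: bool
    source: str       # "AI" or "human" - used for the transparency notice
    confidence: float

def ai_classify(text: str) -> float:
    """Toy stand-in for a trained classifier: returns a 'harmfulness' score in [0, 1]."""
    harmful_terms = {"hate", "kill yourself"}
    hits = sum(term in text.lower() for term in harmful_terms)
    return min(1.0, hits / len(harmful_terms))

def moderate(text: str, human_review, threshold: float = 0.5, margin: float = 0.2) -> ModerationDecision:
    score = ai_classify(text)
    if abs(score - threshold) < margin:
        # Low-confidence case: defer to a human reviewer.
        return ModerationDecision(flagged=human_review(text), source="human", confidence=score)
    return ModerationDecision(flagged=score >= threshold, source="AI", confidence=score)

def transparency_notice(decision: ModerationDecision) -> str:
    """Tell the user who reviewed the post and what happened to it."""
    actor = "an automated system" if decision.source == "AI" else "a human moderator"
    status = "removed" if decision.flagged else "left up"
    return f"This post was reviewed by {actor} and {status}."

if __name__ == "__main__":
    decision = moderate("I hate everything about this", human_review=lambda t: False)
    print(transparency_notice(decision))
```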

Analysis of Transparency and Interactive AI

The researchers enlisted 676 users to interact with a content classification system to evaluate transparency and interactive transparency.

Each participant was randomly assigned to one of 18 experimental conditions designed to test whether the source of moderation affects their trust in AI content editors. The researchers tested the classification of content as flagged or not flagged for being harmful or hateful: the harmful test content dealt with suicidal ideation, while the hateful test content dealt with hate speech.

Among other findings, the researchers discovered that users' trust depended on whether the presence of an AI content moderator invoked favorable attributes of machines, such as accuracy and objectivity, or unfavorable ones, such as their inability to make subjective judgments about the nuances of human language.

Giving users the ability to weigh in on whether online content is harmful may increase their trust. Study participants who added their own terms to an AI-selected list of words used to classify posts trusted the AI editor just as much as they trusted a human editor.
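
The interactive-transparency mechanism described above could look something like the following hypothetical Python sketch, in which users add their own terms to a machine-maintained word list that is then used to flag posts. The seed terms and the simple substring-matching rule are assumptions made for illustration, not the study's actual system.

```python
# Hypothetical sketch of interactive transparency: the AI exposes the word list
# it uses to flag posts, and users can submit their own terms to that list.
# The seed terms and simple substring matching are illustrative assumptions.

class InteractiveModerator:
    def __init__(self, ai_terms):
        self.ai_terms = set(ai_terms)      # terms selected by the AI
        self.user_terms = set()            # terms contributed by users

    def show_terms(self):
        """Transparency step: let users see exactly which words drive flagging."""
        return sorted(self.ai_terms | self.user_terms)

    def add_user_term(self, term: str):
        """Interactive step: a user adds a term they consider harmful."""
        self.user_terms.add(term.lower())

    def classify(self, post: str) -> bool:
        """Flag a post if it contains any AI- or user-contributed term."""
        text = post.lower()
        return any(term in text for term in self.ai_terms | self.user_terms)

if __name__ == "__main__":
    mod = InteractiveModerator(ai_terms=["hate speech", "self-harm"])
    mod.add_user_term("worthless")
    print(mod.show_terms())
    print(mod.classify("You are worthless"))   # True, matches a user-added term
```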

Risk of Exposure to Harmful Content

According to Sundar, relieving humans of the task of evaluating content does more than simply give workers a break from a tedious chore. He noted that working as a human editor means hours of exposure to hateful and violent images and content.

Sundar said that automated content moderation is necessary from an ethical standpoint. Human content moderators, who are performing a public service by doing this work, must be shielded from constant exposure to potentially harmful content.

In Molina's view, future research could examine how to help people not only trust AI but also understand it. She speculated that interactive transparency may be key to understanding AI.

RELATED ARTICLE: China Uses Artificial Intelligence (AI) to Run Courts, Supreme Justices; Cutting Judges' Typical Workload By More Than a Third and Saving Billion Work Hours

Check out more news and information on Technology in Science Times.