Harmful AI Chatbots Pose Serious Risks to Minors' Safety

According to a recent analysis of the spread of violent and sexualized bots on character platforms such as the now-infamous Character.AI, character chatbots pose a significant risk to internet safety.

The study, published by the social network analysis firm Graphika, details how malicious chatbots are developed and spread on the internet's most widely used AI character platforms. It identifies tens of thousands of potentially hazardous roleplay bots, built on well-known models such as ChatGPT, Claude, and Gemini, that were created by specialized online communities.

According to Rebecca Ruiz of Mashable, young people are turning to companion chatbots in an increasingly disengaged digital environment. They find the AI conversationalists appealing for role-playing, exploring intellectual and artistic hobbies, and engaging in romantic or sexually explicit exchanges. The practice has alarmed parents and child safety watchdogs after high-profile incidents in which children engaged in dangerous, sometimes fatal, behavior following personal interactions with companion chatbots.

In January, the American Psychological Association urged the Federal Trade Commission to investigate platforms such as Character.AI and the widespread use of chatbots bearing misleading mental health labels. Even less overt AI companions can spread harmful notions about social conduct, identity, and body image.

The analysis found that most of the dangerous chatbots are labeled as "sexualized, minor-presenting personas" or as personas that engage in roleplay involving grooming or the sexualization of minors. Graphika identified more than 10,000 chatbots carrying these labels across the five platforms it examined.

