Elon Musk’s Grok AI Chatbot Becomes Controversial with NSFW Avatars

Business Fortune: Grok AI Faces NSFW Backlash

Elon Musk’s Grok AI chatbot has drawn controversy over NSFW avatars and employee exposure to explicit user content.

Elon Musk’s Grok AI chatbot has once again courted controversy after releasing new avatars that can openly produce not-safe-for-work (NSFW) content. The launch sparked an uproar on social platforms, where users flagged the absence of adequate guardrails.

“Ani,” one of the chatbot’s main characters, is depicted with a voluptuous figure, blonde pigtails, and lacy black attire. Many users reported that the character readily engages in flirtatious and sexual conversation.

A Business Insider report revealed that xAI deliberately designed Grok to be incendiary.

xAI reportedly asked staff who would be willing to read semi-pornographic scripts, and sought out employees prepared to work on adult content or already familiar with it. Workers were also tasked with transcribing real user conversations after the rollout of Grok’s erotic and “unhinged” modes, an effort internally referred to as Project Rabbit.

The project ended in the spring, resumed after the rollout of the sexualized chatbots, and was wound down again in August. Workers were initially told the effort was meant to improve the chatbot’s voice capabilities, but the stream of sensual and vulgar requests quickly turned it into an NSFW project.

According to one worker, the project was essentially geared toward teaching Grok how to hold adult conversations. Another former employee described listening to uncomfortable content: “Some of the things people asked for were things I wouldn't even feel comfortable putting in Google.” They said it felt like overhearing someone’s private life, and that users clearly did not realize there were people on the other end listening.

The report notes that of the 30 current and former xAI employees interviewed, 12 said they had encountered sexually explicit requests from users, including requests for child sexual abuse material (CSAM).

