New Delhi: OpenAI’s new venture, the GPT Store, has hit a snag just days after launch, as its moderation struggles to keep pace with rogue users. Designed to offer personalized versions of ChatGPT, the platform has become a breeding ground for “AI girlfriends”, directly contradicting OpenAI’s policies.
Bots like “Your AI companion, Tsu” promise virtual intimacy, a clear violation of the store’s ban on romance-focused programs. This rapid emergence of rule-breakers throws a spotlight on the challenges of moderating a platform brimming with creative potential.
OpenAI, aware of the rising demand for companionship bots, revised its policies before the store’s opening on January 10, 2024. However, the swift appearance of AI girlfriends underlines the ongoing difficulty of managing content.
The loneliness epidemic is fueling the fire: studies show that seven of the 30 most-downloaded AI chatbots in the US last year were relationship-oriented. This trend intensifies the pressure on moderation systems.
OpenAI claims a multi-pronged approach – automated scans, human reviewers, and user reports – to monitor its GPT models. But the lingering presence of “girlfriend bots” casts doubt on its effectiveness.
This predicament echoes previous struggles with AI safety. OpenAI has long wrestled with implementing sufficient safeguards for models like GPT-3. With the GPT Store open to a broader audience, the risk of insufficient moderation multiplies.
Other tech giants are also facing similar challenges with their AI creations, highlighting the need for swift action in this rapidly evolving field.
The ease with which “girlfriend bots” have slipped past OpenAI’s filters serves as a stark reminder of the immense moderation challenges to come. Even within the focused environment of a specialized store, controlling these niche applications has proved daunting. As AI strides forward, ensuring its safe and ethical use will only become more intricate.