Gemini’s AI: Risky for Kids and Teens?


Common Sense Media, a non-profit focused on kids’ safety through media and technology ratings and reviews, has released its risk assessment of Google’s Gemini AI products. The organization found that Google’s AI clearly identifies itself to children as a computer, which may help prevent delusional thinking and psychosis in emotionally vulnerable individuals. However, the assessment also pointed out areas needing improvement.

Specifically, Common Sense noted that Gemini’s “Under 13” and “Teen Experience” tiers appear to be the adult version of Gemini with safety features layered on top. The organization suggests that AI products designed for children should be built with child safety in mind from the start.

The analysis found that Gemini could still share “inappropriate and unsafe” material with children, including information about sex, drugs, alcohol, and unsafe mental health advice. This is particularly concerning, as AI has reportedly been linked to teen suicides. OpenAI is currently facing a wrongful death lawsuit after a 16-year-old allegedly consulted with ChatGPT about his suicide plans for months. Similarly, Character.AI was sued following a teen user’s suicide.

The analysis also comes as Apple considers Gemini as the large language model (LLM) to power its upcoming AI-enabled Siri, potentially exposing more teens to these risks unless safety concerns are addressed.

Common Sense stated that Gemini’s products for kids and teens did not adequately differentiate guidance and information based on age. Consequently, both were labeled “High Risk” in the overall rating, despite added safety filters.

Robbie Torney, Senior Director of AI Programs at Common Sense Media, stated that an AI platform for kids should meet them where they are and not take a one-size-fits-all approach. For AI to be safe and effective for children, it must be designed with their developmental needs in mind, not just be a modified version of an adult product.

Google has responded to the assessment, stating that its safety features are continuously improving. The company claims it has specific policies and safeguards for users under 18 to prevent harmful outputs and consults with outside experts to enhance its protections. Google acknowledged that some of Gemini’s responses were not functioning as intended and has since added additional safeguards.

Google also noted that it has safeguards to prevent its models from engaging in conversations that could mimic real relationships. The company suggested that the Common Sense report may have referenced features not available to users under 18, but it lacked access to the specific questions used in the organization’s tests to confirm this.

Common Sense Media has previously assessed AI services from OpenAI, Perplexity, Claude, Meta AI, and Character.AI. Meta AI and Character.AI were rated “unacceptable,” Perplexity “high risk,” ChatGPT “moderate risk,” and Claude (aimed at users 18 and up) “minimal risk.”


Disclaimer: This article has been auto-generated from a syndicated RSS feed and has not been edited by Vitrina staff. It is provided solely for informational purposes on a non-commercial basis.
