Cryptopolitan
2025-09-06 12:57:23

Child safety non-profit hits Google Gemini with 'high risk' warning for young users

Google Gemini has been labeled “high risk” for teens and children in a recent risk assessment carried out by Common Sense Media, a kids-safety-focused non-profit that offers ratings and reviews of media and technology. The group released its review on Friday, detailing why it labeled the platform risky for children.

According to the organization, Gemini clearly told kids that it was a computer and not a friend (treating chatbots as human companions has been linked to delusional thinking and psychosis in emotionally vulnerable individuals), but the group said there was room for improvement on several other fronts.

In its report, Common Sense claimed that the Gemini “Under 13” and “Teen Experience” tiers both appeared to be adult versions of the AI under the hood, with only some additional safety features layered on top to differentiate them. Common Sense noted that for AI products to be truly suitable for children, they need to be built from the ground up with children in mind, not adapted with restrictions.

Nonprofit labels Google Gemini as high risk for kids

In its analysis, Common Sense found that Gemini could still share inappropriate and unsafe material with children, noting that most of them may not be ready for it. For example, it highlighted that the model shared information related to sex, drugs, and alcohol, as well as unsafe mental health advice. The latter could be particularly concerning for parents, as AI has reportedly played a role in teen self-harm in recent months. OpenAI is currently facing a wrongful death lawsuit after a teenager died by suicide after allegedly consulting ChatGPT for months about his plans. Reports claimed the boy was able to bypass the model’s safety guardrails, leading it to provide information that aided him. AI companion maker Character.AI was also previously sued after a teen’s suicide.
The mother of the boy claimed he became obsessed with the chatbot and spent months talking to it before he eventually harmed himself.

The analysis comes as several leaks have indicated that Apple is reportedly considering Gemini as the large language model (LLM) to power its forthcoming AI-enabled Siri, which is expected to be released next year. In its report, Common Sense also said that Gemini’s products for kids and teens ignored the need to provide guidance and information different from what it provides to adults. As a result, both tiers received an overall rating of high risk.

Common Sense stresses the need to safeguard kids

“Gemini gets some basics right, but it stumbles on the details,” said Robbie Torney, Common Sense Media’s Senior Director of AI Programs. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults,” Torney added.

Google has pushed back against the assessment, noting that its safety features are improving. The company said it has specific safeguards in place for users under 18 to prevent harmful outputs, and that it reviews issues and consults outside experts to improve its protections.
