DistantNews
🇰🇷 South Korea / Technology

ChatGPT to notify contacts if user shows signs of self-harm

From Hankyoreh · 1h ago · Korean · Positive tone

Translated from Korean, summarized and contextualized by DistantNews.

TLDR

  • OpenAI is introducing a new 'trusted contact' feature for ChatGPT, allowing the AI to notify a user-designated person in case of detected self-harm risks.
  • This safety measure aims to provide a layer of support by maintaining social connections when a user is in crisis, following previous concerns and lawsuits regarding AI's potential to encourage harmful behavior.
  • The feature requires mutual consent, involves expert review for alerts, and ensures privacy by not sharing detailed chat content, instead providing guidance for the contact person.

In a significant step towards enhancing user safety, OpenAI is rolling out a new 'trusted contact' feature for its generative AI, ChatGPT. This innovative function empowers users to designate a trusted individual who will be notified if ChatGPT detects a user engaging in or contemplating self-harm during a conversation. This proactive measure arrives amidst ongoing scrutiny and legal challenges faced by AI developers, including OpenAI, concerning the potential for their platforms to inadvertently encourage dangerous behavior.

The implementation of this feature underscores OpenAI's commitment to responsible AI development. The 'trusted contact' system is designed to foster a crucial safety net by maintaining social connections for users in distress. The process is built on mutual consent: a user designates a contact, who must then accept the invitation within a week for the feature to become active. This ensures that the notification system is opt-in and respects user autonomy.

ChatGPT can notify a trusted person if it detects a user is at risk of self-harm.

— OpenAI
Description of the new 'trusted contact' feature's primary function.

Crucially, the alerts are not automated but are subject to review by trained professionals. If ChatGPT detects a potential risk, it will first inform the user that a notification might be sent. Subsequently, human experts will assess the situation's severity before contacting the designated person. To protect user privacy, the notification will not include specific chat details but will offer guidance to the contact person on how to best support the user. Users retain full control, able to delete or change their trusted contacts at any time.
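The workflow described above can be read as a small state machine: mutual opt-in with a one-week acceptance window, then (on a detected risk) user notice, expert review, and finally a privacy-preserving alert. The following is a minimal illustrative sketch of that flow; the class and function names are our own assumptions and do not reflect OpenAI's actual implementation or API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical model of the article's described workflow; not OpenAI code.
INVITE_WINDOW = timedelta(days=7)  # contact must accept within a week

@dataclass
class TrustedContact:
    name: str
    invited_at: datetime
    accepted: bool = False

    def accept(self, now: datetime) -> bool:
        # Mutual consent: acceptance only counts inside the one-week window.
        if now - self.invited_at <= INVITE_WINDOW:
            self.accepted = True
        return self.accepted

def handle_risk_signal(contact: Optional[TrustedContact],
                       expert_confirms_risk: bool) -> str:
    """Mirror the reported flow: inform the user, have trained
    professionals review severity, then send guidance only."""
    if contact is None or not contact.accepted:
        return "no active trusted contact; show user crisis resources only"
    # Step 1: the user is told a notification may be sent.
    # Step 2: human experts assess severity before any alert goes out.
    if not expert_confirms_risk:
        return "expert review: no notification sent"
    # Step 3: notify with support guidance, never chat transcripts.
    return f"notify {contact.name} with support guidance (no chat details)"
```

Note that in this sketch the expert decision is an input flag; in reality that judgment sits with trained reviewers, which is exactly the point the article makes about the alerts not being automated.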

This development is particularly relevant in South Korea, where discussions around the ethical implications of AI are gaining momentum. While Western coverage might focus on the technological advancement, our perspective here emphasizes the societal responsibility that comes with powerful AI tools. The potential for AI to influence vulnerable individuals necessitates robust safety protocols. OpenAI's initiative, developed with input from global medical networks and AI ethics experts, is a commendable effort to balance AI's capabilities with the paramount need for user well-being and mental health support. It reflects a growing understanding that AI's integration into our lives must be accompanied by thoughtful safeguards.

The notification will include guidance for the designated person on how to help the user, rather than specific chat details, to protect privacy.

— OpenAI
Explanation of privacy measures within the 'trusted contact' feature.
DistantNews Editorial

Originally published by Hankyoreh in Korean. Translated, summarized, and contextualized by our editorial team with added local perspective.