Meta temporarily adjusts AI chatbot policies for teenagers
On Friday local time, Meta said that, in response to lawmakers' concerns about safety and inappropriate conversations, it is temporarily adjusting its AI chatbot policies for teenage users.
A Meta spokesperson confirmed that the social media giant is currently training its AI chatbots not to generate responses for teenagers on topics such as self-harm, suicide, or eating disorders, and to avoid potentially inappropriate emotional conversations.
Meta said that, when such topics come up, the chatbot will instead point teenagers to professional help resources.
In a statement, Meta said: "As our user base grows and our technology evolves, we continue to study how teenagers interact with these tools and strengthen our safeguards accordingly."
In addition, teenage users of Meta apps such as Facebook and Instagram will in the future only be able to access a limited set of AI chatbots, primarily ones designed for educational support and skill development.