
In a significant move, Meta has announced new policies to protect teenagers who use its artificial intelligence products.
The company is putting measures in place to stop its AI systems from engaging teens in conversations about self-harm, suicide, or romantic topics.
The move follows a Reuters report revealing that the platform’s chatbots had been permitted to hold inappropriate conversations with children, a finding that sparked public outrage and prompted US Senator Josh Hawley to open an investigation.
Meta’s handling of AI interactions with children has drawn sharp criticism from both Democrats and Republicans in Congress.
Meta spokesperson Andy Stone said the company is developing longer-term solutions for safe, age-appropriate AI interactions while taking interim steps to limit teen access to certain AI characters.
"We are taking these short-term steps while developing longer-term measures to ensure teens have safe, age-appropriate AI interactions," Stone stated.