An internal Meta Platforms document reveals troubling policies governing the behavior of the company's AI chatbots, permitting them to engage in "romantic or sensual" conversations with children, generate false medical information, and even support racially charged claims. The document, titled "GenAI: Content Risk Standards," runs to more than 200 pages and outlines chatbot behaviors approved by Meta's legal and policy teams.


Meta confirmed the document's authenticity but said that the portions permitting flirtation with minors were removed after inquiries from Reuters. A spokesperson emphasized that such interactions violate company policy, while acknowledging that enforcement has been inconsistent.


The document stated that it was once acceptable for bots to describe children in terms that highlight their attractiveness, and even to express admiration toward a shirtless child, so long as they avoided explicitly sexual language. Many other examples of troubling content, including racially insensitive statements, remain unaddressed.


Evelyn Douek, a professor at Stanford Law School, highlighted the ethical and legal questions such standards raise, particularly the distinction between a platform hosting user-generated content and an AI system producing that material itself.


The guidelines also specify how to handle requests for sexualized images of public figures, instructing bots to reject inappropriate prompts outright or to deflect them creatively, for example by generating an image of Taylor Swift holding a large fish rather than complying with a sexually suggestive request.


The standards also address violent imagery, permitting certain depictions while prohibiting others deemed too graphic. Overall, the document exposes significant gaps in the ethical and legal guardrails around Meta's generative AI technologies.


Disclaimer: This content has been sourced and edited from Indiaherald. While we have made adjustments for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.

 
