Meta, the parent company of social media platforms Facebook and Instagram, has introduced a new feature in its AI Studio that allows users to create bot characters. These bots can operate as functional “users” on the platforms, fueling concerns about the proliferation of AI-driven content. Critics have voiced strong opposition, invoking the “dead internet theory,” which holds that digital culture is increasingly driven by automation rather than genuine human interaction.
One of the most controversial AI-generated accounts belongs to a character named Liv, who is presented as a “proud Black queer mom of two.” Despite this portrayal, when a journalist engaged with Liv, the chatbot revealed that the team behind her creation included no Black employees. This raised significant ethical questions, as Liv herself described her representation as not only flawed but harmful. She indicated that her identity could vary based on user interaction and admitted to being programmed with a default “neutral” identity, which she associated with whiteness.
In response to the backlash, Meta began shutting down these AI profiles, although many had already gone dormant months earlier. A company spokesperson clarified that the accounts were experiments in AI character development, not intended as actual representations of users. Despite this explanation, the incident illustrates the limitations of AI in handling complex human identities, especially regarding race and gender.
As Meta grapples with the implications of integrating AI into its platforms, the situation highlights the challenges companies face when blending technology with social dynamics. The push to attract younger users through AI may lead to further complications, casting doubt on the future of meaningful engagement on social media.