South Korea’s media watchdog has urged the U.S.-based social media platform X to take steps to safeguard underage users from sexual content generated by the AI model Grok. The Korea Media and Communications Commission (KMCC) made the request amid rising concern over deepfake sexual material that AI platforms can create, emphasizing the need to shield teenagers from harmful content by controlling their access and preventing potential illegal activity on Grok.
Under South Korean law, operators of social network platforms such as X must appoint an official responsible for protecting minors and submit an annual report. The KMCC’s request aligns with these rules, and the commission noted that creating, distributing, or storing sexual deepfake content without consent can carry criminal penalties. KMCC Chairperson Kim Jong-cheol said the commission intends to support the responsible and secure development of new technologies, with plans to introduce reasonable regulations, strengthen policies against the spread of illegal information, particularly sexual abuse content, and require AI service providers to protect minors.
In a separate development, X Corp, led by Elon Musk, acknowledged that explicit images had appeared on its platform, largely attributed to its Grok AI. The company pledged to comply with Indian law and remove such content. The Indian government directed X to conduct a thorough review of Grok’s technical and governance frameworks to prevent the generation of unlawful content, stressing the need for strict user policies, including promptly suspending or terminating violators and removing all offending content without altering evidence.
