More than 80% of teenagers and adults in South Korea are worried about online abuse involving the misuse of generative artificial intelligence (AI) tools, such as deepfake videos and disinformation, according to a recent poll. The survey, conducted by the Korea Media and Communications Commission (KMCC) between September and November last year, found that 89.4% of teenagers and 87.6% of adults acknowledged the seriousness of AI-driven cyber violence.
The study surveyed 9,296 students, from the fourth grade of elementary school through the third year of high school, along with 7,521 adults aged 19 to 69. Teenagers cited the ease of generating content with AI tools as their primary concern, while adults worried about repeated harm from AI-generated materials.
The poll also found that 42.3% of teenagers encountered some form of cyber abuse in 2025, a decrease of 0.5 percentage points from the previous year. Among adults, the figure stood at 15.8%, an increase of 2.3 percentage points over the same period.
Teenagers reported encountering cyber abuse mostly through text messages and online gaming platforms, while adults experienced it primarily via text messages or social media. Both age groups identified strangers as the main perpetrators, followed by friends, according to the Yonhap news agency.
KMCC Chair Kim Jong-cheol emphasized that cyber abuse is not merely an online ethical issue but a matter that can damage people's dignity and infringe on their constitutionally guaranteed right to happiness. He said the government will work to encourage the healthy use of digital platforms.
Online abuse encompasses harmful behaviors targeting individuals or groups through digital channels like social media, messaging apps, and gaming sites. It includes cyberbullying, doxing, non-consensual sharing of intimate images, hate speech, stalking, and AI-driven deepfake abuse, leading to severe psychological, social, and economic repercussions.
