The National Human Rights Commission (NHRC) has acted to safeguard children’s personal data in an Artificial Intelligence (AI) education project. A complaint by the NAMO Foundation flagged potential privacy risks to children arising from a collaboration between Anthropic, a US AI company, and the NGO Pratham, which involves an AI system called the “Anytime Testing Machine (ATM)” for processing children’s academic data.
The complaint raised concerns about the collection, processing, and storage of children’s data, as well as potential cross-border transfers, under the Digital Personal Data Protection (DPDP) Act, 2023. It alleged that inadequate safeguards in the Pratham-Anthropic AI project could compromise children’s safety and data security. Seeking the Commission’s intervention, the complainant called for an assessment of data protection risks and the implementation of the safeguards needed to protect children’s well-being.
The NHRC found that the allegations, if true, could violate human rights relating to privacy and the protection of minors. It issued notices under the Protection of Human Rights Act, 1993, directing Chief Secretaries and Union Territory Administrators to investigate the claims and ensure compliance with data protection laws. State governments were also instructed to review their agreements with organizations such as Pratham and the Central Square Foundation to prevent data misuse.
The NHRC further sought reports on the use of AI systems in the education sector from the Union Ministry of Electronics and Information Technology, the Department of Higher Education, and the Department of School Education and Literacy. Action Taken Reports (ATRs) have been requested from all relevant authorities within two weeks for the Commission’s review.
