AI is playing an increasing role in children’s lives, with potential benefits but also many risks

In the age of AI, a new school year brings new challenges, as students increasingly use ChatGPT and other AI tools to complete assignments, write essays and emails, and perform other tasks that previously could only be done by humans. Meanwhile, the large language models (LLMs) and other algorithmic engines that power these products rely on vast amounts of training data, including massive biometric datasets, raising questions about how facial recognition and other biometric systems affect children's privacy. One notable case comes from Brazil, where human rights activists are calling on the government to ban companies from scraping the web for biometric data.

AI brings both good and bad outcomes for children: UNICEF

In a recent explanatory piece, UNICEF asks how we can empower and protect children in a world currently obsessed with AI and its promise. The article cites research showing that ChatGPT became the fastest-growing digital service of all time, reaching 100 million users in just two months. (By comparison, it took TikTok nine months to reach the same number.) Children are more likely to use it than adults: 58 percent of children aged 12 to 18 report having used the tool, compared to 30 percent of adults.

“Given mounting pressure for urgent regulation of AI, and schools around the world banning chatbots, governments are wondering how to navigate this dynamic landscape in policy and practice,” the piece reads. “Given the pace of AI development and adoption, there is an urgent need for research, analysis, and foresight to begin to understand the impact of generative AI on children.”

UNICEF takes a balanced view, noting how AI systems can benefit children if used responsibly and ethically for certain applications. These include using generative AI to gain insights from medical data to support advances in healthcare, and using algorithms to optimize learning systems for children across the education spectrum. However, many creative workers and artists will take exception to the suggestion that AI could be useful by “providing tools to support children’s play and creativity in new ways, such as generating stories, artwork, music or software (with little or no programming skills).”

The risks of AI include danger to democracy, child exploitation and possible electrocution

While it begins with the benefits, UNICEF also notes that “there are clear risks that the technology could be misused, or inadvertently cause harm or disruption across society at the expense of children’s well-being and future prospects.” This section is a much longer catalogue of potential harms.

“Persuasive disinformation and harmful and illegal content” could disrupt the democratic process, fuel online influence operations, and lead to more deepfake scams, child sexual abuse content, sextortion and blackmail, eroding trust to the point where nothing is considered trustworthy and the philosophical concept of truth disappears. Interaction with chatbots could also blur children’s ability to distinguish between living beings and inanimate objects. “AI systems already drive much of the digital experience, much of it in the service of corporate or government interests,” the piece reads. “Microtargeting used to influence user behavior could limit and/or significantly impact a child’s worldview, online experience, and level of knowledge.” The future of work will be called into question. Inequality will increase.

“Amazon Alexa,” says UNICEF, “suggested that a child insert a coin into an electrical outlet.”

In Brazil, rights group Human Rights Watch has called for an end to the scraping and use of children’s photos to train AI algorithms. In an update to its March 2024 submission to the UN Committee on the Rights of the Child, the group says that data scraped from the web to train AI systems without user consent is “a gross violation of human rights” that could lead to exploitation and harassment. It calls on the Brazilian government to “strengthen the Data Protection Law by adopting additional, comprehensive safeguards for the privacy of children’s data” and to “adopt and enforce laws to protect the rights of children online, including their privacy.”

Synthetic data is increasingly an area of interest for training algorithms

If we want to protect children from these potential risks, the question remains how to train the AI systems currently in use on data that reflects the existence of children.

A new paper, titled “Child face recognition at scale: synthetic data generation and performance benchmark,” addresses “the need for a large-scale database of children’s faces by leveraging generative adversarial networks (GANs) and face-age progression (FAP) models to synthesize a realistic dataset.” In other words, the authors propose to create a database of synthetic children’s faces by sampling adult subjects and using InterFaceGAN to de-age them — a novel pipeline for “a supervised, unbiased generation of children’s facial images.”
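For illustration, the core of InterFaceGAN-style editing can be summarized as moving a face's latent code along a learned age direction. The Python sketch below shows only that idea under invented inputs: the 512-dimensional latent size, the random stand-in for a fitted age boundary, and the commented-out generator call are hypothetical, not the authors' actual pipeline or artifacts.

```python
# Hedged sketch of latent-space "de-aging" in the InterFaceGAN style.
# Everything below is illustrative; the paper's real pipeline combines
# GANs with face-age progression (FAP) models.
import numpy as np

def deage_latent(w: np.ndarray, age_direction: np.ndarray, alpha: float) -> np.ndarray:
    """Return w shifted along the unit-norm age direction by alpha.

    InterFaceGAN models a semantic attribute as a linear direction n in the
    generator's latent space, so w' = w + alpha * n moves the rendered face
    toward younger or older appearance depending on the sign of alpha.
    """
    n = age_direction / np.linalg.norm(age_direction)
    return w + alpha * n

rng = np.random.default_rng(0)
w_adult = rng.standard_normal(512)   # latent code of a sampled synthetic adult
n_age = rng.standard_normal(512)     # stand-in for a fitted age boundary
for alpha in (-1.0, -2.0, -3.0):     # progressively stronger "younger" edits
    w_child = deage_latent(w_adult, n_age, alpha)
    # image = generator.synthesize(w_child)  # a real pipeline would render here
```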

Their resulting “HDA-SynChildFaces” database “consists of 1,652 subjects and 188,328 images, with each subject present at a variety of ages and with many different intra-subject variations.” Evaluating recognition systems on subjects of different ages, the data show that “children consistently perform worse than adults on all systems tested and that the decline in performance is proportional to age.” Additionally, “the study revealed some biases in the recognition systems, with Asian and Black subjects and females performing worse than White and Hispanic subjects and males.”
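As a rough illustration of how such an age-stratified benchmark can be tabulated, the sketch below computes a false non-match rate (FNMR) per group from genuine-pair similarity scores at a fixed decision threshold. The group labels, scores and threshold are made up for the example and are not the paper's protocol or results.

```python
# Invented example: per-group FNMR from mated-pair similarity scores.
from collections import defaultdict

def fnmr_by_group(genuine_scores, threshold=0.5):
    """genuine_scores: iterable of (group_label, similarity) for mated pairs."""
    totals = defaultdict(int)
    misses = defaultdict(int)
    for group, score in genuine_scores:
        totals[group] += 1
        if score < threshold:  # a mated pair the system failed to match
            misses[group] += 1
    return {g: misses[g] / totals[g] for g in totals}

scores = [("adult", 0.82), ("adult", 0.74), ("age_10", 0.61),
          ("age_10", 0.42), ("age_5", 0.38), ("age_5", 0.47)]
print(fnmr_by_group(scores))  # rising FNMR for younger groups mirrors the finding
```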

Fake face databases could use more funding as interest grows

Synthetic data has sparked interest from researchers across the spectrum of AI, biometrics and digital identity. A paper from the da/sec Biometrics and Security Research Group at Germany’s Hochschule Darmstadt similarly explores how to achieve large-scale facial recognition of children using synthetic facial biometrics. And new research from CB Insights says “we are running out of high-quality data to train LLMs. That scarcity is driving demand for synthetic data — artificially generated datasets like text and images — to supplement model training.” A newsletter summarizing the findings says that funding for synthetic data research is uneven, but that international demand is creating opportunities, particularly in data-sensitive industries.

