
Why kids need special protection from AI’s influence

September 17, 2020

Vosloo led the drafting of a new set of guidelines from Unicef designed to help governments and companies develop AI policies that consider children’s needs. Released on September 16, the nine new guidelines are the culmination of several consultations held with policymakers, child development researchers, AI practitioners, and kids around the world. They also take into consideration the UN Convention on the Rights of the Child, a human rights treaty adopted in 1989.

The guidelines aren’t meant to be yet another set of AI principles, many of which already say the same things. In January 2020, a Harvard Berkman Klein Center review of 36 of the most prominent documents guiding national and company AI strategies found eight common themes, among them privacy, safety, fairness, and explainability.

Rather, the Unicef guidelines are meant to complement these existing themes and tailor them to children. For example, AI systems shouldn’t just be explainable–they should be explainable to kids. They should also consider children’s unique developmental needs. “Children have additional rights to adults,” Vosloo says. They’re also estimated to account for at least one-third of online users. “We’re not talking about a minority group here,” he points out.

In addition to mitigating AI harms, the goal of the principles is to encourage the development of AI systems that could improve children’s growth and well-being. If they’re designed well, for example, AI-based learning tools have been shown to improve children’s critical-thinking and problem-solving skills, and they can be useful for kids with learning disabilities. Emotional AI assistants, though relatively nascent, could provide mental-health support and have been demonstrated to improve the social skills of autistic children. Face recognition, used with careful limitations, could help identify children who’ve been kidnapped or trafficked.

Children should also be educated about AI and encouraged to participate in its development. It isn’t just about protecting them, Vosloo says. It’s about empowering them and giving them the agency to shape their future.


Unicef isn’t the only one thinking about the issue. The day before those draft guidelines came out, the Beijing Academy of Artificial Intelligence (BAAI), an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government, released a set of AI principles for children too.

The announcement came a year after BAAI released the Beijing AI principles, understood to be the guiding values for China’s national AI development. The new principles outlined specifically for children are meant to be “a concrete implementation” of the more general ones, says Yi Zeng, the director of the AI Ethics and Sustainable Development Research Center at BAAI, who led their drafting. They closely align with Unicef’s guidelines, also touching on privacy, fairness, explainability, and child well-being, though some of the details are more specific to China’s concerns. A guideline to improve children’s physical health, for example, includes using AI to help tackle environmental pollution.

While the two efforts are not formally related, the timing is also not coincidental. After a flood of AI principles in the last few years, both lead drafters say creating more tailored guidelines for children was a logical next step. “Talking about disadvantaged groups, of course children are the most disadvantaged ones,” Zeng says. “This is why we really need [to give] special care to this group of people.” The teams conferred with one another as they drafted their respective documents. When Unicef held a consultation workshop in East Asia, Zeng attended as a speaker.

Unicef now plans to run a series of pilot programs with various partner countries to observe how practical and effective their guidelines are in different contexts. BAAI has formed a working group with representatives from some of the largest companies driving the country’s national AI strategy, including education technology company TAL, consumer electronics company Xiaomi, computer vision company Megvii, and internet giant Baidu. The hope is to get them to start heeding the principles in their products and influence other companies and organizations to do the same.

Both Vosloo and Zeng hope that by articulating the unique concerns AI poses for children, the guidelines will raise awareness of these issues. “We come into this with eyes wide open,” Vosloo says. “We understand this is kind of new territory for many governments and companies. So if over time we see more examples of children being included in the AI or policy development cycle, more care around how their data is collected and analyzed–if we see AI made more explainable to children or to their caregivers–that would be a win for us.”
