Why Google Is Reversing Course And Allowing Teens To Access Its Bard AI Chatbot
Teenagers who live in countries where they can manage their own Google Account will finally be able to access Google's Bard AI chatbot, according to Tulsee Doshi, Head of Product on Google's Responsible AI team. Initially, Bard will be available to teens only in English, with more languages to follow down the line.
One of the biggest reasons Google wants to give this group access is that it believes Bard can be transformative in their education. Google states that "Bard can also be a helpful learning tool for teens, allowing them to dig deeper into topics, better understand complex concepts and practice new skills in ways that work best for them. They could, for instance, ask Bard to help them brainstorm science fair project ideas or learn more about a specific time period in history."
Google is also aware that it needs to grant this access responsibly. To that end, the company consulted child safety and development experts on its content policies to create an experience that prioritizes the safety of these younger users. One of the organizations that provided input was the Family Online Safety Institute (FOSI).
Teens looking to use Bard will also go through an onboarding experience, one carefully tailored to this age group based on feedback Google gathered from them. It includes an AI literacy guide with useful information on how to get the most out of generative AI responsibly. Teens will also be able to see how Google uses their Bard activity, and they can turn Bard off if they choose.
Moreover, Google took the time to train Bard to recognize content that is inappropriate for younger users, and it will implement safety features to prevent unsafe content from appearing in responses to teens' queries.
Google appears to have checked the right boxes with regard to user safety before officially opening Bard up to teenagers across the globe. However, the company will need to keep working to ensure that Bard provides accurate information and that unwanted responses, such as hallucinations, are kept to a minimum.