Understanding risks and staying safe

Safer Internet Day research report logo with the text "Smart tech, safe choices - Exploring the safe and responsible use of AI".

Learning to use AI safely

Home and school are the two settings where young people are most likely to learn how to use AI safely, though a significant number learn from friends or teach themselves. Over a third of young people (37%) say they learn how to use AI safely from their parents or carers. Younger children are significantly more likely to say this, with almost half (48%) of 8 to 10-year-olds citing parents and carers, decreasing to 38% of 11 to 14-year-olds and 24% of 15 to 17-year-olds. While comparatively low, this is still almost a quarter of older teens (aged 15 to 17) and reinforces yet again how important it is to equip parents and carers of children of all ages with the knowledge and resources they need to support their children in using AI safely and responsibly. Young people are just as likely to say they learned about using AI safely at school (36%) as from their parents or carers. Interestingly, almost as many young people (35%) feel they know more about AI than their teachers, highlighting the value of two-way conversations and knowledge exchange between teachers and their students about the fast-changing topic of AI, grounded in young people's own experiences.

Over one in five young people (22%) say they looked up how to use AI safely and taught themselves, though again, older children, especially older teens, are more likely to do this, with just 14% of 8 to 10-year-olds saying they have done this compared to 20% of 11 to 14-year-olds and almost a third (31%) of 15 to 17-year-olds. One in five young people (20%) say they learn about using AI safely from friends, with relatively minor variation across age groups, reminding us of the need to ensure that young people have access to good information and guidance to inform their conversations with each other about AI. Of concern is the fact that almost one in five young people (19%) say they have never learned about using AI safely, highlighting a need to make every effort to reach all young people and their families with information, education, and support to use AI safely and responsibly – something we hope Safer Internet Day can help with.

Our research also reveals that many young people are keen to learn more about using AI safely. Specifically, when asked what help or information they would most like to have about using AI safely and responsibly, over half (51%) said they want more lessons at school on how to use AI safely and responsibly, and more than two in five (42%) said they would like more information about how AI can be risky or cause problems. This illustrates that there is scope to reach more young people with the information and education they need.

How safe do young people feel using AI?

While most young people believe AI can have a positive impact and prove useful in their everyday lives, many do have concerns about using AI, including AI companions and chatbots, safely. Just under half (46%) of young people agree with the statement that AI is safe for children and young people to use, compared to around 1 in 6 (18%) who disagree. Interestingly, over a quarter (29%) have a mixed view, neither agreeing nor disagreeing that AI is safe for children and young people to use. A significant number of young people are worried about AI companions and chatbots in particular. Over a third (38%) of young people think people under the age of 16 should not be allowed to use AI companions or chatbots, but almost as many (31%) disagree and think under-16s should be allowed to use them. Almost a quarter (24%) are unsure, neither agreeing nor disagreeing. Among 11 to 14-year-olds, opinion is split down the middle, with 34% agreeing and 34% disagreeing. This is significant, as this age group is among those who would be most affected by such an age restriction.

A significant number of young people and parents and carers have concerns about what questions young people are asking of AI. Almost half (49%) of young people are worried about people their age asking AI “inappropriate” questions. This said, almost a quarter (24%) neither agree nor disagree that they are worried about this. Parents and carers show a similar level of concern to their children, with just over half (51%) saying they are worried about their child asking AI inappropriate questions, including over one in five (21%) who are “very worried” about this. This data, presenting mixed views among young people and their parents and carers, suggests scope for further discussion to better understand exactly which aspects of using AI, including companions or chatbots, they have the most pressing safety concerns about.

Other concerns around safe and responsible use of AI raised by young people and their parents and carers include bullying or mean behaviour, as well as the creation of misleading posts. Just over one in ten young people (11%) have seen someone their age using AI to bully or be mean to other young people. While this is not a high number in absolute terms, it is still cause for concern. On a separate note, over half (51%) of parents and carers are worried about the possibility of their child using AI to create misleading posts, including over one in five (22%) who are “very worried” about this. These topics represent further important areas of conversation when talking with young people about the safe and responsible use of AI.

In the context of these various concerns about safety and AI, many young people are unsure that they know what to do if they see something online made by AI that worries them. While 57% of young people say they do know what to do in this scenario, almost one in five (18%) say they don't know, and a further 19% are uncertain, neither agreeing nor disagreeing that they know what to do. There is little variation in this data across age groups, with similar percentages seen for both younger and older children. While it is reassuring to some extent that over half of young people know what to do, these figures highlight a need for stronger guidance and signposting for children and young people of all ages about what to do if they see something online made by AI that worries them.

Parents and carers are the main source of support for young people if they are worried about the way they use AI or about something made by AI. Almost three quarters (74%) say they would talk to a parent or carer, and they are far more likely to do this than talk to anyone else. While younger children are more likely to talk to a parent or carer, with 80% of 8 to 10-year-olds saying this is the case, the proportions are still very high for older children, at 74% of 11 to 14-year-olds and 68% of 15 to 17-year-olds. This data highlights the absolutely vital role that parents and carers play in supporting their children to use AI safely and responsibly, and how important it is to ensure they have the information, tools and resources they need. A much lower but still significant number of young people – more than two in five (41%) – say they would talk to a teacher or another adult they trust, and over a third (35%) would talk to their friends. The likelihood of young people talking with friends increases with age, from 25% of 8 to 10-year-olds, rising to 39% of 11 to 14-year-olds and rising again to over two in five (41%) of 15 to 17-year-olds.

Parents and carers: supporting their children with AI use

Overall, our research indicates that while most parents and carers feel ready to have conversations with their children about AI, they are also less confident than their children about the topic of AI, are often not setting rules or guidance for their children around its use, and want better support and resources.

The majority of parents and carers feel ready to talk with their children about AI. In fact, almost three quarters (72%) feel confident talking to their child about the safe and responsible use of AI, including just over one third (34%) who feel “very confident”. This said, a much lower 37% of young people say they learn how to use AI safely from their parents or carers, potentially suggesting that, while parents and carers are the first port of call for young people if they are worried about AI, many may not feel that parents and carers can offer more general knowledge or guidance on this rapidly developing technology.

Most parents and carers are entering into conversations with their children at least familiar with AI tools and content made using AI, though many lack confidence that they will recognise it. Almost two thirds (64%) of parents and carers say they have used AI tools (such as ChatGPT, Siri, or Alexa) and a similar proportion (61%) say they have seen AI-generated content online. However, many are not confident about always recognising such content: over two thirds (67%) say they can sometimes recognise AI content, but not always, while only around one in five (21%) believe they can easily spot content created by AI. Over two thirds (67%) also say they feel AI-generated content is becoming harder to recognise online compared to a year ago, and a quarter (25%) think it is getting "much harder". In the context of these moderate levels of knowledge and confidence about AI, parents and carers are getting help and information about making sure their child is using AI safely and responsibly from a range of sources. Most say they are likely either to use a search engine (e.g. Google) (43%) or to ask their child directly (34%), highlighting again the importance of conversations at home through which young people and their parents and carers inform and educate each other.

Parents and carers feel less confident than their children in their knowledge about and recognition of content made using AI, and very few set guidance for their children around AI. Only a quarter (25%) of parents and carers believe they can spot AI-generated content more easily than their child and less than a quarter (22%) think they know more about AI than their child. By comparison, over half (52%) of young people think they know more about AI than their parents and carers. Older children are more likely to think this, with 40% of 8 to 12-year-olds thinking they know more than their parents, compared to 62% of 13 to 17-year-olds. This relatively low confidence in knowledge about AI among parents and carers, compared to their children, may go some way to explaining why so few parents and carers set guidance around its use; in fact, less than one in five parents and carers (19%) have set rules or guidelines for how their child can use AI at home.

Also concerning is the fact that the majority of parents and carers do not know where to go for help or support if they are worried about their child's use of AI. In fact, only 13% say they know where to go for advice or support in this scenario. The reasons behind this data need further exploration. It may be that, because AI is part of so many different facets of young people's lives, it is challenging for parents and carers to set rules or guidelines about its use. The lack of rules or guidance from parents and carers, and their lack of knowledge about where to go for help, is a wake-up call for all stakeholders. We must ask what more we can do to help parents and carers both be more proactive in ensuring the safe and responsible use of AI by their children, and to access support if they are worried about their children and AI. We must provide information, tools, and safety features that are relevant, practical, and accessible to help them support and protect their children.

Young people, AI and inappropriate sexual content

Many young people, including children as young as 8, are worried that AI may be used to create inappropriate or sexual content of themselves or their peers. 60% of young people are worried about someone using AI to make inappropriate pictures of them, and an even higher 65% of parents and carers echo this concern, including over a third (35%) who are "very worried". For some parents and carers, this worry affects their online behaviour: over a quarter (29%) are more cautious about sharing images of their child online because of concerns about AI manipulation. As well as being alert to the possibility of harm from adults using AI in this way, many young people have concerns about this inappropriate use of AI among their peers. 61% of 13 to 17-year-olds are worried about people their age using AI to make sexual pictures of other young people, and even 63% of younger children, aged 8 to 12, are worried about people their age using AI to create inappropriate pictures of other young people. Similarly, over two thirds of parents and carers (67%) are worried about young people using AI to make inappropriate images of other young people, and almost a third (31%) are "very worried".

These significant levels of worry could be connected to the fact that a smaller, but still concerning, number of young people of all ages have actually seen people their age using AI image manipulation to make rude, inappropriate or sexual images of other people. Around 1 in 8 (12%) of 13 to 17-year-olds have seen people their age using AI to make sexual pictures or videos of other people, and an even higher 1 in 7 (14%) of 8 to 12-year-olds have seen people their age using AI to make rude or inappropriate pictures or videos of other people. The reasons why this proportion is slightly higher among younger children need further exploration, but may be partly due to the age-appropriate phrasing used in the survey: 8 to 12-year-olds were asked about "rude or inappropriate" images, whereas 13 to 17-year-olds were asked about "sexual" images.

Most young people understand the potentially serious implications of inappropriate content made using AI. Over two thirds (67%) of teens (13 to 17-year-olds) agree with the statement that pictures, images or writing created by AI can still break the law. While almost 1 in 6 (17%) were unsure, saying they neither agreed nor disagreed, only 7% of young people disagreed that pictures, images or writing created by AI can break the law. Almost two thirds (64%) of young people of all ages also agree with the statement that pictures made by AI can hurt or have a negative impact on people in real life. Teens are more likely to agree with this statement, at 69% of 13 to 17-year-olds compared to 58% of 8 to 12-year-olds. This said, in both cases, almost one in five young people (18%) are unsure, neither agreeing nor disagreeing. This level of recognition among roughly two thirds of young people that pictures or other content created by AI can still break the law or cause harm in real life reflects the similarly high proportions of young people mentioned earlier who are worried, either about someone using AI to make inappropriate pictures of them, or about people their age using AI to make inappropriate images of other young people. The level of concern among young people, and their parents and carers, about how their images, or images of other young people, may be appropriated and manipulated as they go about their online lives is very real. We must do all we can to help young people understand, prevent, and manage these risks; and seek advice and help when they need it.