Report Section:
Awareness, transparency, and trust: Navigating content made by AI

Recognising and assessing content made by AI
Young people often recognise content online that has been made by AI, and they use a range of cues to do this. Almost half (48%) of young people say they have seen pictures or videos on social media made by AI, and around two in five (41%) say they can easily tell when pictures, videos, or writing were made by AI. That said, over half of young people either disagree that they can easily tell (30%) or are not sure, neither agreeing nor disagreeing that they can easily tell (25%). This illustrates how it may sometimes be challenging for young people to navigate online content with a full understanding of what they are seeing and of the extent to which AI has been used to create it. We need to explore further how this affects their online lives and how they feel about it. Even so, young people do look for a range of signs to help them tell if something is made by AI; the most cited are unreal or unbelievable images or content, and images that look overly perfect.
Feelings about content made with AI
While young people use many signs or cues to judge when something online may have been made using AI, most have some concerns about their capacity to recognise AI content and about the volume of this kind of content online. More than two in five young people (44%) say they have seen something they thought was real but later found out was made by AI, and over a third (36%) say they have seen something where they were unsure whether it was made by AI or not. Many also think recognising AI content is getting harder: 60% are worried about not being able to tell if something is real or made by AI, and three quarters (75%) of young people feel it is getting harder to tell if something online was made by AI. As well as being concerned that content made with AI is getting harder to recognise, many young people and their parents and carers are worried about the volume of this kind of content online. Over half (56%) of young people are worried that too many things online are being made by AI, and an even higher proportion of parents and carers (66%) are worried about this. Given this degree of concern, it is perhaps not surprising that more than two in five young people (42%) would like more labels on pictures or videos to say they were made by AI.
As well as some concerns around how readily they can recognise content made with AI, young people have mixed views about how much trust to place in information provided by AI. While almost one third (31%) of young people agree that they believe everything AI tells them, a notably higher 43% disagree and almost a quarter (23%) neither agree nor disagree. Added to this, many young people take steps to verify the information AI gives them, with over half (54%) saying they often check whether the things AI tells them are correct. That said, only around one in six young people (17%) say they have seen AI giving information that is not true, or making things up (often referred to as AI hallucination).
Although young people do not place a very high level of trust in the information provided by AI, it remains an important source of information for most of them. Almost three quarters (72%) of 13- to 17-year-olds feel that people their age rely heavily on AI as a source of information. Over two thirds (68%) of younger children, aged 8 to 12, also agree that people their age use AI a lot to find information. This heavy reliance on AI for information is giving rise to concern among parents and carers: over half (58%) are worried about their child relying heavily on AI as a source of information, including over one in five (21%) who are “very worried” about this. This data, together with the high levels of concern among young people about their capacity to recognise online content made using AI, should alert us to the importance of helping young people develop their knowledge, skills, and critical thinking around AI. As AI continues to be part of every area of their online lives, we need to equip them to assess what they see, be curious about the content that AI provides, and apply their own knowledge and critical thinking to validate the answers that AI gives them.