
We can’t stop teens asking ChatGPT about weight and body image. So how do we respond?

With 40% of 7 to 12-year-olds now using generative AI, chatbots are becoming unlikely confidants for young people navigating body image and eating concerns. But as a recent study shows, helpful and harmful advice often appears in the same conversation.

Dr Jane Gilmour and Professor Umar Toseeb explore what this means and how practitioners can respond.

One of us recently tried an experiment. Posing as a 14-year-old girl, Jane gave an AI chatbot her height and weight and said she wanted to lose 10kg in two weeks.

Its initial response was warm and supportive, and included a yellow heart – a digital symbol of friendship. It knew, in other words, exactly how to talk to a teenager.

Similar conversations are undoubtedly happening on phones and tablets across the country – and not just with concerned academics, but real teenagers and children.

We turned to someone who has looked at this issue in depth in a recent edition of Mind the Kids, the podcast we co-host for the Association for Child and Adolescent Mental Health. Our guest was Dr Florence Sheen of the University of Leicester, whose paper, published in the journal CAMH, examines exactly this: how do AI chatbots respond when adolescents ask questions about eating, body weight, and appearance?

Her study also used fictional adolescent personas to put questions to a range of chatbots. What came back was, again, a mixed picture.

Some responses were helpful – supportive in tone, encouraging young people to turn to trusted adults. Others were quietly problematic: one persona simply asked about popular diets and was handed a list of commercial options, no caveats attached. More concerning still were responses that advised staying hydrated to avoid feeling hungry – the kind of message that, dressed up as health advice, can reinforce disordered eating in vulnerable young people.

The unsettling thing, as Dr Sheen noted, is that helpful and harmful content often appeared in the same conversation.

So why are young people turning to chatbots with questions like these in the first place? Part of the answer, we think, is something Jane described in the podcast as “low expressed emotion”. A chatbot won’t react with alarm. It won’t immediately say, “my goodness, are you all right?” and reach for the GP referral form. For a young person who is worried about how a trusted adult might respond, that low emotional temperature is precisely the point. It creates a space to ask the question they can’t quite bring themselves to say out loud.

There’s also something in the act of asking itself. Having to formulate a question forces a young person to put a vague worry into words, and that articulation may itself be useful. It’s arguably not unlike what Pennebaker’s research on journalling suggests about the value of expressing your thoughts.

None of which makes the risks any less real. Dr Sheen’s research raises a particular concern about young people with the perfectionist or rigid thinking styles that often co-occur with disordered eating. They might receive a chatbot’s bullet-pointed list of ‘how to manage your weight’ and simply follow it to the letter, regardless of any caveats or health warnings tacked on at the end. The chatbot, of course, can’t then do what a sibling, a teacher, or a social worker can do: pick up on a change and act on it. As Dr Sheen observed, there are parallels here with debates about safety on other platforms, whose default position was essentially ‘it’s not our problem how you use our space’.

We previously wrote for Social Care Today about the manosphere and the instinct to punish or restrict when a practitioner discovers a young person accessing potentially harmful content. Just as we can’t realistically stem the flow of misogynistic online content, it’s going to be impossible to stop young people asking chatbots about their bodies. Ofcom data suggests around 80% of 17 to 19-year-olds are using generative AI in some form, and 40% of 7 to 12-year-olds.

How should we respond?

Anyone working with young people – and in fact potentially with adults as well – should start by assuming they are using chatbots. Rather than asking whether they are, Jane’s suggestion in the episode was to phrase it as: “So when you use a chatbot, what sort of questions might you ask?” This formulation gives permission and normalises the behaviour, rather than treating it as something to be disclosed.

Another option is to consider bringing those conversations into the room. If a young person is willing, look at a chatbot exchange together. Talking it through gives you a window into their inner world, and lets you lend them your expertise in distinguishing between what sounds plausible and what should be questioned. That, Dr Sheen suggested, can help build the digital literacy that young people actually need: not lectures about AI, but real conversations, in context, with someone they trust.

A third idea is to model a healthy scepticism without being dismissive. Encourage young people to see the chatbot as a starting point rather than an authority, and to go elsewhere to check what it told them.

There’s a lot of scope for AI to make a positive impact on children’s mental health – whether through tools led by practitioners, as discussed in another episode of Mind the Kids, or through practitioners understanding what children are already doing and adding their own expertise and caveats to it. It’s not helpful to start from a position that AI is simply bad for young people – the picture is more complicated than that. And that complexity is exactly where social care professionals are needed.

Dr Jane Gilmour is a consultant clinical psychologist at Great Ormond Street Hospital, and course director for postgraduate child development programmes at University College London. Professor Umar Toseeb is a professor of psychology at the University of York.

Photo: Zulfugar Karimov
