Social media and children are a bad mix, yet parents still post their photos online

A deeper understanding of children’s safety and well-being could shape adult social media patterns for the better.

Barely acknowledged in the debate on Australia’s social media ban for under-16s is what happens when a child doesn’t have an account, yet their entire childhood is still documented online.

PHOTO: REUTERS

Joanne Orlando

As Australia’s ban on under-16-year-olds having certain social media accounts kicks in this week, debate on whether it’s a good idea or even legal rages on – both at home and overseas.

Yet, barely acknowledged in this debate is what happens when a child doesn’t have an account, but their entire childhood is still documented online. Should this be permitted?

“Sharenting” – when parents share their children’s lives online – entered the dictionary a few years ago. Awareness of potential risks has been increasing, but many parents still routinely share pictures and videos of their children online.

Sharenting is widespread and persistent. A review of practices over the past 10 years found that parents commonly share details such as children’s names, dates of birth, birthday parties, milestones (birthdays, school achievements), health information and photos. This produces a “digital identity” of the child long before they can consent.

And it’s not just parents. Dance schools, soccer clubs and various other community groups, as well as family members and friends, commonly post about children online. All contribute to what’s essentially a collective digital album about the child. Even for children not yet old enough to have their own account, their lives could be heavily documented online until they do.

This challenge moves us well beyond traditional safety messages such as “don’t share your personal details online” or “don’t talk to strangers”. It requires a deeper understanding of what exactly safety and well-being for children on online platforms look like.

A passive data subject

Here’s a typical sharenting scenario. A family member uploads a photo captioned “Mia’s 8th birthday at Bondi beach!” to social media, where it gets tagged and flooded with comments from relatives and friends.

Young Mia isn’t scrolling. She isn’t being bullied. She doesn’t have her own account. But in the act of having a photo and multiple comments about her uploaded, she has just become a passive data subject.

Disclosed voluntarily by others, Mia’s sensitive information – data on her face and age – exposes her to risks without her consent or participation.

The algorithm doesn’t care Mia is eight years old. It cares that her photo keeps adults on the app for longer. Her digital persona is being used to sustain the platform’s real product: adult attention.

Children’s images posted by family and friends function as engagement tools, with parents reporting that “likes” and comments encourage them to continue sharing more about their child.

We share such posts to connect with family and to feel part of a community. Yet, a recent Italian study of 228 parents found 93 per cent don’t fully realise the data harvesting practices that take place, or the risk these pose to the child’s privacy, security and image.

A public narrative of one’s life

Every upload of a child’s face, especially across years and from multiple sources, helps create a digital identity they don’t have control over. Legally and ethically, many frameworks attempt to restrict commercial data profiling of minors, but recent studies show profiling is still happening at scale.

By the time a child is 16 – old enough to create their own account – a platform may already have accumulated a sizeable and lucrative profile of them to sell to advertisers.

The fallout isn’t just about data; it’s personal. That cute birthday photo can resurface in a background check for future employment or become ammunition for teenage bullying.

More subtly, a young person forging their identity must now contend with a pre-written, public narrative of their life, one they didn’t choose or control.

New laws aiming to ban children from social media address real harms such as exposure to misogynistic or hateful material, dangerous online challenges, violent videos, and content promoting disordered eating and suicide – but they focus on the child as a user. In today’s data economy, you don’t need an account to be tracked and profiled. You just need to be relevant to someone else who has an account.

What can we do?

The essential next step is social media literacy for all of us. This is a new form of literacy for the digital world we live in now. It means understanding how algorithms shape our feeds, how dark design patterns keep us scrolling, and that any “like” or photo is a data point in a vast commercial machine.

Social media literacy is not just for children in classrooms, but also for parents, coaches, carers and anyone else engaging with children in our online world. We all need to understand this.

Sharenting-awareness campaigns exist, from eSafety’s parental privacy resources to the EU-funded children’s digital rights initiative, but they are not yet shifting the culture. That’s because we’re conditioned to think about our children’s physical safety, not so much their data safety. Because the risks of posting aren’t immediate or visible, it’s easy to underestimate them.

Shifting adult behaviour closes the gap between what we say we worry about, what we actually do, and the reality of children’s exposure on social media.

Keeping children safe online means looking beyond kids as users and recognising the role adults play in creating a child’s digital footprint.

  • Joanne Orlando is a researcher of digital well-being at Western Sydney University. This first appeared in The Conversation.