The limits of digital freedom. When children's safety becomes a matter of state
Australia has become the first country in the world to decide to put a limit on social media in the name of protecting children. The decision has sparked a fierce debate about freedom of speech, censorship and the role of the state in the digital world. In fact, however, the dispute is not just about technology or the age of users, but a much deeper question: who bears responsibility for the safety of the youngest in an environment that is increasingly becoming a tool for pressure, misinformation and undermining social resilience.
Is Australia's decision on children and social media justified?
The tumultuous reaction to Australian regulations restricting children's access to social media today says more about our relationship with technology than about the law itself. In the public debate, the Australian government is sometimes portrayed as an oppressive regulator, while technology platforms position themselves as defenders of free speech and freedom of communication. Meanwhile, the crux of the problem lies elsewhere: in the systemic harm to the youngest internet users and the long-standing shift of responsibility from global technology corporations to children and their caregivers.
Digital space is not a vacuum
The Internet is sometimes treated as an autonomous reality, governed by its own laws. This is a simplification that serves the interests of platforms well but describes social reality poorly. In practice, digital space is an integral part of everyday life: a place of relationships, conflicts, social pressures and emotions. For children, the boundary between the "online" and "offline" worlds is essentially nonexistent; they inhabit a single, continuous field of experience.
That being the case, digital space, like the environment or public infrastructure, requires rules that reduce the risk of harm. No one disputes air quality standards or workplace health and safety regulations, so it is hard to see why rejecting regulation outright should be any more rational where children's mental health and well-being are at stake.
First in the world: content and scope of the regulation
Australia has become the first country in the world to impose a nationwide ban on social media use by people under the age of 16. As of December 10, 2025, the law is in effect, requiring platforms such as Facebook, Instagram, TikTok, YouTube and Snapchat to block the accounts of underage users. Non-compliance carries financial penalties of up to around 49.5 million Australian dollars.
The legal basis for the changes is the Online Safety Amendment (Social Media Minimum Age) Act 2024, an amendment to Australia's online safety legislation. The regulation sets a minimum age of 16 for social media users and requires platforms to take "reasonable steps" to prevent younger people from creating and maintaining accounts. Implementation is overseen by the country's online safety regulator, the eSafety Commissioner.
The first effects of the law coming into force are already visible. The media reports that some users under the age of 16 have begun to lose access to services, and platforms have started deactivating accounts belonging to minors. The regulations have thus ceased to be a mere statutory provision and have begun to function in practice.
A paradigm shift in accountability
The Australian law is not revolutionary in that it introduces an age limit per se. Its primary significance lies in shifting responsibility from the user to the service provider.
For years, the prevailing belief was that children's negative experiences on social media were the result of a lack of parental supervision or insufficient digital education. Platforms remained insulated from the real-world consequences, despite being the ones who design the recommendation systems, engagement mechanisms and service architecture. Australian regulations formally challenge this model, placing responsibilities directly on technology companies.
Legal disputes and protests
The ban has sparked opposition from parts of the technology community and digital rights organizations. The Reddit platform has filed a lawsuit in the Australian High Court, arguing that the new regulations violate the constitutional freedom of political communication and the rights of young people to express themselves in online spaces.
This is not the only instance of the law being challenged. Previously, similar allegations against the Australian government were also raised by two 15-year-olds backed by the Digital Freedom Project, who claimed that the authorities had gone too far in restricting access to social media. The disputes show that the regulations have become part of a broader debate about the limits of free speech in the digital environment.
Government responses and public narrative
The Australian government is conducting a parallel communications effort to make the case for the new regulations. Prime Minister Anthony Albanese is publicly encouraging young people to spend time offline, pointing to studying, sports or reading books as alternatives to social media activity.
In official communications, the administration stresses that the regulations are not a crackdown on free speech, but part of a broader online safety strategy focused on protecting the mental health and well-being of children.
Practical implications and enforcement challenges
The new rules extend beyond classic social networks. According to media reports, platforms such as Twitch have also begun blocking users under 16, and existing accounts of minors are to be deactivated.
At the same time, there are reports that some teenagers are trying to circumvent the restrictions, including by using VPNs. This shows that enforcement involves real technical and organizational difficulties, although the mere fact of trying to circumvent the law does not undermine its point.
The law's imperfections do not invalidate its purpose
Australian solutions are not free of flaws and do not solve the problem completely. However, this is not their main goal. The key is to set a direction: to recognize that children's safety in the digital environment is a common good, and that responsibility for it must also lie with those who profit from this space.
If certain behaviors and content are unacceptable in the physical world, they should not be tolerated in the digital world simply because they are harder to control. Australia has taken the first step. The international interest in this case shows that this is not a local experiment, but a benchmark for a global debate about the future of social media regulation.
Social media as an information battlefield
In this context, it is impossible to overlook another dimension of social media's functioning: its role as an information battlefield in the global rivalry of powers. For years, digital platforms have been used to systematically disseminate disinformation, amplify extreme emotions and promote destructive social behavior, the aim of which is not to convince people of one particular narrative, but to undermine social resilience in the long term. Such actions do not necessarily lead to immediate crises; rather, their effectiveness lies in eroding trust, normalizing aggression, deepening divisions and destabilizing public debate. In this sense, children and adolescents become a particularly vulnerable group: not only as recipients of content, but as future citizens whose ability to think critically, empathize and participate in society is shaped in an environment of algorithmically reinforced polarization. Protecting the youngest in the digital space is therefore not just a matter of mental health or upbringing, but an element of broader state security and social resilience to information pressure.
In the case of children and adolescents, this mechanism operates with multiplied force. Young users of social media function in a space where the boundaries between information, entertainment and manipulation are blurred, and where cognitive and emotional competencies are still being formed. Disinformation, messages based on fear, aggression or extreme emotions, and patterns of destructive behavior are not perceived by them as part of a political game or narrative warfare, but as a natural part of the everyday world. Algorithmic reinforcement of such content makes children not only passive recipients but also carriers of further distribution, through sharing, imitation and internalization of attitudes. In the long run, this weakens critical thinking skills, normalizes symbolic violence and lowers social resilience across entire generations. From this perspective, protecting children in the digital environment ceases to be solely a matter of individual security and becomes an investment in the future social cohesion and democratic resilience of states.
Sources:
https://www.theguardian.com
https://timesofindia.indiatimes.com
https://en.wikipedia.org/wiki/Online_Safety_Amendment_(Social_Media_Minimum_Age)_Act_2024