Under-16s Social Media Ban: OSINT and Its Implications
- Lauren Nicholls
Certain social media platforms are now required to take ‘reasonable steps’ to prevent Australians under 16 from having and using accounts.
The new legislation came into effect on 10 December 2025. The intent is to protect younger Australians from the risks of social media use, such as social pressure, addictive design features and exposure to harmful content.
Australia was the first country in the world to implement this approach; however, several others have since enacted or begun to consider similar policies, including Spain, Norway, Malaysia, France, Denmark, and the UK.
The potential impact for OSINT practitioners is multilayered, ranging from challenges in identifying young people on social media, to impacts on adult platform users, to additional considerations for the maintenance of online personas, aka sock puppets.
For OSINT practitioners working in safeguarding, child protection, online harms and risk assessment, this legislation has practical consequences for how vulnerable young people are visible, and protected, online. As platforms alter age verification and enforcement mechanisms, investigators must adapt how they identify risk, establish confidence in age and identity, and understand shifts in online behaviour. One early effect is that less content is public-by-default, with more social interaction moving into private or semi-closed spaces, making responsible detection, corroboration and intervention more complex, not less.
This analysis examines what has actually happened since the ban took effect and how these changes impact OSINT practitioners.

Has the new legislation kept teens off social media?
Survey says: Sort of.
Have accounts been deactivated? Yes, about 4.7m in the first half of December, according to the eSafety Commissioner. Does this appear to have kept under 16s off these social media platforms? Not completely.
When it comes to Australia’s under 16s, the critical thing to remember is that they’re digital natives. This means that they’re tech-savvy and their ways of interacting with people and the world are tech-centred. Generally speaking, they have both the means and the motivation to continue using social media, banned or not.
That said, measuring success solely by youth access may miss the policy's primary aim: shifting responsibility for age verification from families to platforms. By this metric, the legislation has driven significant change. Platforms are implementing verification systems, improving detection technology, and accepting enforcement obligations. The question for OSINT practitioners isn't whether the policy is 'working' in absolute terms, but how the changing landscape affects investigative capabilities.

What were the habits of under 16s before the ban?
High proportions of Australian kids access social media. An eSafety Commissioner survey showed that 74% of 8-year-olds had used social media in 2024, with the figure jumping to 92% for 12-year-olds.
YouTube was by far the most accessed platform, with 73% of 13–15-year-olds using it in 2024. YouTube, however, is also accessed in a wide range of settings, including family and educational ones. Fewer than half of these young people accessed YouTube via their own account.
Snapchat (63%), TikTok (62%) and Instagram (56%) were the platforms of choice for 13–15-year-olds in 2024. For each of these platforms, almost all these children were accessing via their own accounts. Facebook was not as widely used among this cohort – it was the next most popular but only 41% reported having accessed this platform over the same period.
So, where (on the internet) are they now?
The top 5 platforms for young people in Australia were all included in the social media ban. So where do these digital natives gather on the internet now?
While the account deactivation numbers are significant, it looks like plenty of young Australians are still using the banned platforms, as well as seeking alternatives. Some media articles have highlighted young people who have de-prioritised social media in their lives; however, it’s difficult to tell how common this is among the cohort.
It’s likely that users are experiencing a mix of these situations across their different accounts, due to differences in how the accounts were set up, as well as in how platforms are detecting and enforcing the ban.
For those under 16s that are continuing to engage with social media, it looks like this is being done via three main methods:
Despite age verification processes, some users have maintained access to accounts on banned platforms.
Existing accounts are flagged and users identify workarounds to either reinstate the original account or create a new one.
Users have opened accounts on alternative platforms.
The current situation will change if platforms move to more sophisticated identification processes or the federal government adds more platforms to the banned list (which it has flagged as a possibility). TikTok is preparing to roll out improved age-detection technology in Europe in January (although it focuses on under-13 detection), so it’s possible that the detection methods for Australian users will change at some point in the future.
The user has maintained access to their account on the banned platform and is continuing to use it.
Some under 16s have maintained access to their accounts on restricted platforms. All the banned platforms, with the exception of Reddit, require a birth date for account registration. But young users commonly provide an older birth date when signing up (lying on the internet? Who would have thought!), which in turn allows them to evade age-detection mechanisms.
We can get an idea of the extent of this from the eSafety Commissioner’s 2024 survey of children and social media. It found that 68% of Australian 12-year-olds had their own social media accounts, but in the same year only 13% of children (12 and under) had an account deactivated due to being underage.
Operating several accounts simultaneously is also a common practice for Gen Z and younger, particularly on Instagram and TikTok. When combined with the above, there’s an increased chance that an under 16 user could have maintained access to at least one of their accounts, for the time being.

The user’s account was flagged, and they’ve identified a workaround, so they’re still using the banned platform.
The internet is littered with examples of how under 16s have skirted the ban for different platforms, either to reinstate their existing account or create a new one.
Anecdotes of parents, older siblings and friends being used as stand-ins for the age verification face scans are common, as are users successfully tricking the AI by applying makeup and false eyelashes.
Other methods include creating a new account while travelling overseas, using a VPN, or simply re-registering with an older date of birth. The identification details of parents or friends, such as email accounts and phone numbers, may also be used.
It is possible that there has also been a rise in unauthenticated viewing of content, particularly on the more open platforms like YouTube and TikTok. This can be an option when users want to view content without engaging or posting.

The user has partly or wholly moved to a different social media platform
One likely displacement effect is a shift away from public feeds and into private/closed channels. If teens can’t reliably hold accounts on the major platforms, it’s rational to keep social contact in group chats and semi-closed communities. Think WhatsApp, Signal and Messenger group chats, plus in-game chat ecosystems such as Roblox and Steam Chat.
If teens shift from public feeds into private group chats, that likely reduces some of the harms most associated with mainstream social media, especially algorithm-driven exposure. However, it also displaces risk into harder-to-observe spaces, where harmful content and coercive group dynamics can persist with less visibility. Less content is public-by-default, and more of the ‘social graph’ moves into spaces that are harder to observe, search and corroborate.
As it stands in February 2026, alternative social media platforms that are not currently banned but have gained in popularity include:
Lemon8 – A photo- and video-sharing app owned by TikTok’s parent company ByteDance. Its new-found popularity has seen it sit in the top 100 of the Australian free app rankings since October 2025.
Coverstar – A US-based video-sharing platform which saw a significant jump in download numbers around the time of the ban. It’s marketed as a ‘safe’ and ‘positive’ app designed for 9–16-year-old users.
RedNote – Also known as Xiaohongshu, this is a Chinese video-sharing app similar to Instagram and TikTok. It initially gained exposure when Americans flocked to it during the TikTok ban in January 2025. Commentators have noted that, in this case, RedNote is more likely to be seen as an option by those with Chinese-speaking backgrounds.
WhatsApp – The well-known private messaging app is frequently mentioned in discourse as a popular alternative to Snapchat.
However, it doesn’t appear that Australia’s under 16s are relocating to a specific alternative platform en masse. While daily downloads of these apps showed a noticeable spike around December 10, 2025, when the ban came into effect, the trend didn’t continue, with the download numbers dropping again shortly after.
It’s likely that each individual’s decision is strongly influenced by where their peers are going – you can’t do social networking without the social component. Additionally, it could be that enough people within a social group have managed to retain access to the original platform, meaning that large relocations are not currently necessary.

Has this changed the way young people operate on the banned platforms?
It’s hard to tell whether under 16s have changed their patterns of interaction on the banned platforms, and it’s likely that this varies greatly from person to person.
A substantial number of accounts have been deactivated, and content creators who target younger audiences have said their follower counts and engagement metrics dropped significantly in the immediate wake of the ban.
That said, users who have maintained or regained access to platforms will probably continue to operate as normal, at least for the time being.
It’s possible that the platforms’ current age-identification measures are not yet being applied to an extent that would influence young people to alter their interactions and language to appear older.
Looking forward, if platforms lean harder on behavioural and language-based age signals, watch for linguistic changes. It won’t be one “new slang”, but fewer school-year cues, more generic bios and faster churn in in-group terms. This matters because platforms already discuss language analysis for underage detection, and users often adapt language to dodge automated filters.
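To make ‘churn in in-group terms’ concrete, here is a minimal illustrative sketch of one way it could be measured: comparing vocabulary overlap between two time windows of collected posts. The sample posts, the tokeniser and the Jaccard-overlap metric are all assumptions for illustration, not a method any platform has confirmed using.

```python
# Illustrative sketch (assumptions throughout): quantify term churn by
# comparing vocabulary overlap between two time windows of posts.
# The posts below are invented; real input would come from your own
# lawfully collected corpus.
import re

def vocab(posts: list[str]) -> set[str]:
    """Lower-cased word tokens across a window of posts."""
    return {t for p in posts for t in re.findall(r"[a-z']+", p.lower())}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two vocabularies: 1.0 = identical, 0.0 = disjoint."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

window_1 = ["year 9 camp was so fun", "who has maths with mr b"]  # hypothetical
window_2 = ["that edit is so cooked", "lowkey crashing out rn"]   # hypothetical

# Low overlap between adjacent windows suggests faster term churn.
print(f"vocabulary overlap: {jaccard(vocab(window_1), vocab(window_2)):.2f}")
```

A falling overlap score across successive windows would be one crude proxy for the kind of churn described above – a lead for a human analyst, not evidence on its own.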
Meta, TikTok and Snapchat all now employ data analytics to estimate the approximate age of a user. Snapchat has reported that its tool can currently identify an under-13 user with 70–75% accuracy, according to the eSafety Commissioner’s 2025 Transparency Report.
With other countries considering similar bans, there will be increased pressure on these platforms to improve their age-identification processes. It’s likely that over the next few years there will be a noticeable uptick in the use and efficacy of behavioural, language and content-creation indicators to flag the accounts of under 16s. This is the point at which we’re more likely to see changes in the behaviour of under-16 users, and probably increased migration to other platforms or other ways of communicating.

What about the impact on those over 16?
Age and identity verification requirements are impacting some adult social media users. While many will simply follow the process to verify their age on the platform, there are others who are unable or unwilling to do so.
Platforms not included in the ban, such as Discord and Bluesky, now pre-emptively require Australians to verify their age. Some privacy-conscious users have expressed concerns about the identification methods, while others have shared anecdotes about elderly family members who have had their accounts suspended.
Discord has also announced its intent to roll out age checks worldwide, adopting a ‘teen by default’ settings policy from early March. All users will be subject to Discord’s age-based protections unless they verify their age via ID upload or video selfie. Those who do not verify their age will not have access to NSFW communities, nor be able to see direct messages from someone they are not connected with.

The under-16s social media ban also brings additional considerations for OSINT practitioners.
What are the challenges in identifying young persons on social media?
OSINT practitioners, particularly those working in spaces such as child protection or counter-child exploitation, may have to contend with new online environments, changing user behaviour, and limited publicly available data.
Changing social media accounts – it’s likely that under 16s in Australia have seen changes to one or more of their accounts on banned platforms. Users may have created new accounts (under new usernames) or be using a lesser-known existing account.
Movement to new platforms – it appears that a significant number of under-16 social media users are downloading, if not moving over to, alternative social media apps. This may result in new usernames, profile images and connection lists.
Movement from open to closed social media – online gaming and standalone messaging apps are excluded from the ban’s legislation. With a reduced risk of being added to the ban list in the future, these sorts of apps, where public content is not posted, may see higher utilisation by young people.
If dealing with these challenges, it’s important to remember that the social component is key to social networking. Under-16 social media users will still want to be identifiable by their peers, even if accounts, platforms or profile names change. Some tips include:
Collect social media ID numbers, not just profile names, as user IDs will not change if the user alters the profile’s details. Practitioners can sometimes search using the number, or compare the IDs of a new-looking account and a previous one to check whether they are the same (a minimal sketch of this appears after this list).
Look for social group signposts in profile information or other interactions, such as references to schools, local areas, sporting teams, etc.
Keep an eye out for profiles with similar or the same usernames as previously known accounts.
References to their old social media usernames in comments or other profile information like bios can indicate links to old profiles.
Comments about beating the ban or continuing to use social media despite being under 16 may also serve as a flag.
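To illustrate the ID-comparison and username-similarity tips above, here is a minimal sketch, assuming the IDs and handles have already been gathered through your usual platform-appropriate tooling. All records and IDs below are hypothetical and not tied to any specific platform’s API; the similarity scoring uses Python’s standard-library difflib.

```python
# Minimal sketch: compare candidate profiles against a previously known
# account using stable user IDs and fuzzy username matching.
# All IDs, handles and records here are hypothetical.
from difflib import SequenceMatcher

def same_account(known_id: str, candidate_id: str) -> bool:
    """Platform user IDs persist across display-name changes, so an exact
    ID match is strong evidence that two profiles are the same account."""
    return known_id == candidate_id

def username_similarity(old: str, new: str) -> float:
    """Rough 0-1 similarity between two handles, ignoring case and
    common separator characters."""
    def norm(s: str) -> str:
        return s.lower().replace("_", "").replace(".", "")
    return SequenceMatcher(None, norm(old), norm(new)).ratio()

# Hypothetical records collected during an investigation.
known = {"id": "17841400000000000", "username": "sam.j_2010"}
candidates = [
    {"id": "17841400000000000", "username": "sammyj.new"},   # same ID, new handle
    {"id": "9998887776665554", "username": "sam_j2010.v2"},  # new ID, similar handle
]

for c in candidates:
    if same_account(known["id"], c["id"]):
        print(c["username"], "-> exact ID match with known account")
    else:
        score = username_similarity(known["username"], c["username"])
        print(c["username"], f"-> username similarity {score:.2f}")
```

A high similarity score is only a lead to corroborate against the other signposts above (schools, teams and old-handle references), never an identification on its own.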
Additional considerations for online persona maintenance
Currently, the identity and age verification requirements of the large social media platforms are not prohibitive to maintaining an adult, Australian-based social media persona on most platforms. The move towards age identification has drawn some complaints from adult users of Bluesky and Discord, and OSINT practitioners may see some challenges for their personas in these contexts.
For those conducting activities in a more youth-focussed area, future developments in activity- and language-related account flagging may also bring challenges for practitioners on the popular platforms. Practitioners should keep an eye on developments in this space and consider balancing their persona’s activity accordingly.
If you'd like to take your OSINT collection and tradecraft to the next level, get in touch with us at training@osintcombine.com or explore our training courses to find the perfect fit for your organisation.



