fitscapades

Australia’s Under-16 Social Media Ban, Explained and Debated

Michelle | Season 1 Episode 22


Social media can feel like a gladiator arena for adults, let alone kids. We take a clear-eyed look at Australia’s Social Media Minimum Age Act and why it puts the responsibility on platforms to prevent under-16s from creating or keeping accounts. No criminal penalties for parents, no blanket government ID checks—just “reasonable steps” that force companies to build safer systems, document their methods, and accept penalties when they fail.

We walk through how age assurance actually works in practice. Think layers: self-declared age and behavioral cues first, AI age estimation only when needed, and verified ID or parental consent as a last resort. That approach aims to cut risk while minimizing data collection, bias, and frustration for adults. We also break down which platforms are affected, which are excluded, and why the risk profile matters. Along the way, we tackle the tough questions: enforceability, potential migration to less regulated services, and what happens when algorithms misclassify near the age boundary.

None of this sits in a vacuum. Public trust has eroded after years of economic pressure, scandals, and pandemic hangovers, and that distrust colors how people hear any new policy. We address the fear that this is a slippery slope to surveillance, then point to audit-based oversight, aggregated metrics, and privacy law guardrails designed to prevent it. For families, we offer practical moves—talking with kids, backing up content, preparing for possible deactivations, and considering healthier paths to connection. For platforms, the mandate is to document layered defenses, build appeals for mistakes, and show their systems actually work.

If you’re a parent, educator, or just someone who cares about kids’ mental health online, this conversation is for you. Listen, share your perspective, and help us pressure the platforms to do better. If you find value here, follow the show, share it with a friend, and leave a review so more people can join the conversation.

Support the show

Thanks for listening! Follow the links below for more!

https://www.instagram.com/fitscapades

https://patreon.com/fitscapades

https://youtube.com/@fitscapades?si=hzhOJ8vjmjz5dAJy

TikTok @fitscapades1

Twitter/X @fitscapades

SPEAKER_00:

So there's been quite a bit of discussion and kickback against the new under-age social media ban in Australia. The eSafety Commissioner explains that from the 10th of December 2025, the Online Safety Amendment (Social Media Minimum Age) Act 2024 will require certain social media platforms to take reasonable steps to prevent Australians under the age of 16 from having accounts. Now, as a mother myself, I heard this and was really relieved. My son is nine years old, he hounds me continually about getting an Instagram account, and I just don't think there's a need for it. I think there's only potential for him to come to harm, and there's no good that's going to come from it, you know. And so I really didn't understand why people were objecting so much.

So I was a little bit vocal on X, as I tend to be, and I got all types of responses, from people saying it's all about control, the government's trying to control us by seeing what we're doing online. And I just can't see any evidence of that. I think people are just being really paranoid, and it's frustrating. I got every type of response, from that through to sanctimonious parents claiming to be perfect: they don't need the government to control their children, they can do it just fine, thank you very much. The thing is, I find that quite offensive, actually, because what they're actually doing is using the subject of parenting as an excuse to be oppositional to the government for no really good reason at all. And to be told that I'm not a good parent because I somehow need to rely on the government to restrict my son's access to social media is really not fair. Of course I'm capable of restricting my son's access to all these sorts of things. But if the government steps in and says that my son can't have an account, then that's just an added level of protection that makes me feel safe, really. I have no problem with it at all. But I got such a vehement and universal kickback, people saying, what are you talking about, this is a Nazi state, everyone's trying to control you, all these conspiracy theories, that I just wanted to do a deep dive into it, to see if I'm flawed in my thinking.

So why is this law being introduced? The government sees it as a way to protect young children's mental health and wellbeing from risks associated with social media usage, such as cyberbullying, harmful or inappropriate content, design features that encourage excessive screen time (abso-fucking-lutely), and pressure from online comparison. Social media is a brutal place; some days I get beaten down by it, and I don't want my son to have to deal with that. I don't think there's a role or a place for social media in a young person's life before they've developed self-consciousness and a firm sense of self. They argue that social media features are persuasive and/or addictive: endless feeds, absolutely, algorithmic recommendations. Look, I don't want my son sitting scrolling Instagram for hours. Already he spends enough time playing Minecraft. I just think how destructive it would have been in my own childhood.
You know, instead of doing homework, I would have been looking at rubbish and making myself more stupid, not more smart. Young people are more vulnerable to the harms from these features. Absolutely. The law places responsibility on the platforms, rather than just parents and children, to ensure young users are prevented from accessing certain services, thus aiming to shift the burden of safety onto the providers, and that makes perfect sense.

So, how does the law work? Platforms defined as age-restricted social media platforms must take reasonable steps to prevent people under the age of 16 from creating or keeping accounts. The reasonable steps obligation means that platforms must implement systems such as age assurance, age estimation or verification, though the law doesn't mandate exactly which technology must be used. So the government isn't collecting information about us, right? The platforms are the gatekeepers; the government is pretty much saying, look, we need you to make sure that no one under the age of 16 has an account with you, you are going to police it, and we're just going to make sure that you comply with that. I just don't know how they're going to police that side of things, which is something I need to look at. Platforms face civil penalties if they fail: fines of up to $49 million for corporations that systematically allow under-16s to have accounts. The law is not a criminal ban on children themselves and does not penalise parents or children for having an account; the legal requirement is placed on the platform. So again, there's no interface between government and person; the government is just regulating the platforms. Some services are excluded, for example messaging apps or online gaming platforms, depending on the definition.

Some of the main questions and critiques: some experts argue that the 16-year-old cutoff is somewhat arbitrary, and that while risks exist, the benefits of social connection and access to information for young teens are also real. Well, that's true too, but that's why they're not banning things like Discord, which is where a lot of teenagers do converse. There are concerns about how enforceable this will be, how platforms will reliably verify age, and whether children will instead migrate to less regulated or emerging platforms. Well, that's always going to be a risk, I guess, but so be it. There is debate about whether a blanket restriction, rather than more nuanced regulation, is the best approach to online safety.

So if you're a parent, carer or educator, or someone under the age of 16, now's a good time to start the conversation about what this change means: how children use social media, what platforms they use, what the alternatives might be, and how to remain safe. Children under 16 already using these platforms may need to back up their content and prepare for account deactivation or transition, depending on how the individual platforms respond. And businesses and platforms operating in Australia need to be ready to comply with the regulatory guidance, establish age verification or age estimation mechanisms, and ensure account removal or deactivation where required.
Alright, so the platforms that are affected: Facebook, absolutely; Instagram; Snapchat, absolutely; Threads; TikTok; X; YouTube; Reddit (what are kids doing on Reddit anyway?); and Kick. Isn't that where all the pedos hang out anyway? The platforms that are not included are Discord, WhatsApp, Roblox, Steam, Steam Chat, GitHub, and YouTube Kids. These are excluded because they are either primarily gaming or social gaming platforms, are focused on messaging rather than broad social interaction with posting on feeds, or are platforms designed specifically for kids.

So, how does age verification compliance work? Here are the main mechanisms. The platforms must take reasonable steps to prevent under-16s from creating or keeping accounts. The law doesn't mandate one specific technology; platforms can use age estimation, age verification, or other risk mitigation approaches. The law starts from the 10th of December this year, and platforms face civil penalties if they don't comply. Importantly, the law doesn't criminalise children or parents; the obligations are on the platforms, alongside an information campaign.

So basically, it's not like the government is collecting individual IDs. And one other thing: the way that they verify the age is not stipulated. So for those people going on about the digital ID, how it's all coming down to Big Brother and they're collecting all our information, blah blah blah: no. Not everyone is going to have to verify their age. If you've had an account for a while and you've got a birth date there that's clearly over the age of 16, you will not have to verify your identity. So that's myth number one scrubbed, the one where everyone says this is not about protecting the kids, this is about regulating us, they want all of our IDs, they want information about what we're doing on social media. Dare I say it, a lot of this stuff is probably monitored anyway, especially if you're doing dodgy shit.

Anyway, some caveats and practical issues. The list of platforms is not static, so new services could be added. Because age verification and estimation are inherently imperfect, there's recognition that the system won't catch every under-16 account, but the goal is systemic compliance, not perfection. There are concerns that children might migrate to excluded platforms or gaming services if strictly blocked from social media, which means broader online safety work still matters.

In terms of key compliance areas: platforms must detect, deactivate or otherwise manage existing under-16 accounts and prevent re-registration or bypassing of age controls. Platforms must implement age assurance and estimation mechanisms, layering measures rather than relying solely on self-declared age. Platforms must provide clear mechanisms for appeal or review if users think they've been incorrectly flagged as under age. And the regulator may impose civil penalties on corporations for systemic failures. The definition of what counts as an age-restricted social media platform is broad, capturing services where a significant purpose is social interaction, linking users to one another, and user posting. So yeah, basically it's like, you know, your kid out in the big bad world, able to look at everything.
So here's how it's going to work. If we look at Facebook: Meta has said it will comply with the law despite opposition. They reported identifying 450,000 under-16 Australian accounts on Facebook and Instagram. They must identify existing under-16 accounts, notify the users, allow deletion or data hibernation until age 16, and implement behavioural or AI age estimation to prevent new under-16 accounts. TikTok will comply, and must do the same as above. Snapchat has said that it thinks the law is problematic, but it will comply. Twitter/X is listed by eSafety as one of the platforms on notice to comply, and needs to establish age assurance to prevent under-16 accounts. YouTube was initially exempt but was later included amongst the platforms that need to comply. Reddit and Kick were both recently added to the list of age-restricted platforms, so they now must implement the same steps. But Kick is a dodgy one where all the pedos hang out, I think.

So: platforms should not simply ask everyone to verify their age with formal ID. The guidance emphasises layered, risk-based approaches and avoiding overbroad verification for every user. For users under the age of 16 with an existing account, platforms must handle deactivation with care, communicate clearly, and may need to offer options like data deletion or account hibernation. There must be appeals or dispute mechanisms: if a user is incorrectly flagged as under 16, they should be able to challenge the decision or provide additional evidence, and lack of this may be a compliance risk. Platforms should monitor for and prevent circumvention, so under-16s re-registering with a false age; the guidance mentions layered controls with ongoing monitoring, by the platform, by the way, not the government. On privacy implications: using age estimation or biometric tools carries privacy and bias risks, and the guidance emphasises proportionality, transparency, and avoiding undue collection of sensitive data. There you go, so they don't want to be spying on us.
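To make that deactivate-notify-appeal flow a bit more concrete, here's a minimal Python sketch of how a platform might sequence it. To be clear, this is my own illustration: the states and function names are hypothetical, not anything the Act or the eSafety guidance actually prescribes.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AccountState(Enum):
    ACTIVE = auto()
    NOTIFIED = auto()    # user has been warned and offered options
    HIBERNATED = auto()  # content sealed until the user turns 16
    DELETED = auto()     # user chose to delete their data

@dataclass
class Account:
    user_id: str
    flagged_under_16: bool
    state: AccountState = AccountState.ACTIVE

def handle_flagged_account(account: Account, user_choice: str) -> Account:
    """Walk a flagged under-16 account through notification and choice.

    `user_choice` is "delete", "hibernate", or "appeal". An appeal path
    must exist, because being flagged incorrectly is a real possibility.
    """
    if not account.flagged_under_16:
        return account  # not in scope, leave untouched
    account.state = AccountState.NOTIFIED
    if user_choice == "delete":
        account.state = AccountState.DELETED
    elif user_choice == "appeal":
        # Route to evidence review rather than auto-deactivating.
        account.state = AccountState.ACTIVE
    else:
        account.state = AccountState.HIBERNATED  # default: preserve data
    return account
```

The point of the shape is that deactivation is never the first step: notification and a user choice, including a dispute path, sit in front of it.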
Okay, so how is the government going to monitor platforms' compliance without compromising privacy? How the co-regulators, eSafety and the Office of the Australian Information Commissioner (OAIC), plan to monitor and enforce platform compliance with the new minimum age law, and how they aim to do so without compromising privacy, is a high priority. Based on the regulatory guidelines and fact sheets, the government's approach comprises the following. First, the reasonable steps obligation: platforms defined as age-restricted must take reasonable steps to prevent children under the age of 16 from having accounts. The law does not prescribe a single technology or method; the guidance is principles-based rather than prescriptive. Platforms must record information about what steps they've taken, and the guidance signals that platforms should be able to provide compliance information to eSafety. So there should be certain KPIs they can produce that demonstrate their compliance. Second, age assurance and age estimation systems: platforms will need age assurance methods to identify likely under-16 accounts, and self-declaration alone is deemed insufficient. They may use frictionless methods, so behavioural and metadata inference, or more active checks, like ID upload or facial age estimation, but the law emphasises minimal intrusion: limiting what data can be collected or stored.

The fact sheet explicitly states no Australian will be compelled to use government ID to verify their age; platforms must offer reasonable alternatives to ID upload. There are privacy obligations under Part 4A and the Privacy Act: platforms and any age assurance third parties must comply with the Australian Privacy Principles, and the guidance emphasises minimising personal information collection and destroying information once it's been used for age assurance. So pretty much all the government wants to oversee is what the process is, not what the individual data collection shows, and it's very specific about being respectful of privacy. Then there's regulator oversight and enforcement powers: eSafety has powers to request information from platforms about how they are meeting the reasonable steps, the OAIC oversees compliance with the privacy provisions of the scheme, and they can investigate and issue notices. The law also mandates that the Minister initiate an independent review of the social media minimum age obligations within two years of commencement.

How does this attempt to protect privacy? There are privacy-protective features built into the design. No blanket government ID requirement: the scheme explicitly says Australians will not be forced to hand over their government ID or register with digital ID for age assurance. A layered, risk-based approach: rather than verifying everyone, many users, especially those obviously over 18, may face only frictionless checks, metadata inference and behavioural signals rather than invasive verification. Data minimisation and destruction: platforms and providers must minimise what personal information they collect for age assurance and destroy it once it's been used. Co-regulation oversight: the OAIC's role ensures that age assurance systems must also be consistent with broader privacy law, so age-tech systems cannot operate outside of standard protections. And platforms will be required to record their reasonable steps and be subject to review; the independent review two years after commencement adds a transparency layer.

Key risks and unresolved issues: despite these measures, there are some significant challenges and privacy trade-offs. Effectiveness of inference methods: using behavioural and metadata signals to estimate age may reduce privacy intrusion, but there's a risk of error, so misclassifying over-16s as under-16s, and bias; the guidance notes this. When platforms outsource age checks, this creates additional entities handling personal data, increasing the risk of data breaches, misuse and unclear liability; the guidelines caution about this, but it remains a practical risk. Data retention and destruction compliance: while the legal framework mandates destruction of age assurance data, ensuring platforms actually comply may be complex. Circumvention and anonymisation tools: under-16s may use VPNs, false age registrations, or shared accounts, and monitoring platforms for compliance may require collecting data on re-registration attempts and suspicious account behaviour, raising privacy concerns of its own. The balance between safety and engagement rights: some argue that a strict ban or strict age verification may push kids to less safe platforms that are not regulated, or reduce access to valuable social peer connections. And global regulation and extraterritorial platforms: many major platforms are foreign-domiciled, and ensuring they comply while preserving local privacy protections is a challenge.
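Going back to that minimise-and-destroy principle for a second, here's a tiny Python sketch of what I understand it to mean: the raw evidence only exists inside the check, and all that's kept is the verdict. The interface is my own assumption, not the actual wording of the Privacy Act or the guidance.

```python
from typing import Callable

def assure_age(raw_evidence: bytes, estimator: Callable[[bytes], int]) -> dict:
    """Run an age check and retain only the outcome, never the evidence.

    `raw_evidence` stands in for a selfie or ID scan; `estimator` is
    whichever age-estimation backend the platform chose (hypothetical).
    """
    estimated_age = estimator(raw_evidence)
    verdict = {
        "likely_under_16": estimated_age < 16,
        "method": "facial_age_estimation",  # record *how*, never *what*
    }
    # Drop the reference so the evidence is not retained past this point;
    # a real system would also wipe any copies from storage.
    del raw_evidence
    return verdict

# Example with a dummy estimator that "reads" an age from the bytes.
print(assure_age(b"fake-selfie", estimator=lambda _: 14))
```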
So, in short, the Australian government's approach attempts to balance safety and privacy by placing the onus on platforms to self-monitor and self-report, rather than requiring increasingly intrusive, broad checks by the government. The platforms must implement age assurance systems, provide evidence to the regulator of the reasonable steps they've taken, and abide by privacy standards. The scheme emphasises minimal data collection, destruction of data, alternatives to formal ID, and layered checks rather than blanket verification. So the monitoring is platform-based rather than surveillance of individuals, with regulatory oversight and audit powers, and the privacy safeguards are built in via the Privacy Act framework and the specific social media minimum age provisions.

So I think it's pretty clear that this is not an attempt to control the Australian community, and there's just a lot of fear-mongering going around, built on distrust of the government. I can argue this till the cows come home to people who are distrustful of the government, and they're never going to believe it, never going to be trusting, I guess, because there's just so much distrust, and words like communism and overreach being thrown around, in America and in Australia. I don't think there's any way to assure these people that this is not the priority of the government, and I don't believe it to be. I really don't. From the language of it all, it looks very much like it is all about the privacy of children. They don't want to check everyone's ID; they're trying not to check IDs; they're trying to use things like behavioural patterns to screen for someone who might be likely to be under 16, rather than blanketly asking for everyone's identification. But if you talk to the conspiracy theorists, they will say, yeah, that's how it all starts. Look, my rationale for not worrying about that is this: movements like MAGA, who do want to control the population, leverage off exactly this, people being mistrustful of the establishment. They come in as the trustworthy alternative when really, all along, they're the regime that wants to control people. And that's the unfortunate thing: most of these people who are suspicious are going to run towards the more dangerous political option, in my view. I can just see it; it's like watching a car crash happening and you can't turn away.

So, there are several real-world case studies and examples of how different platforms and systems have deployed age assurance and age verification technology, showing both successes and privacy and accuracy challenges. The first case study is Australia's national age assurance trial, a major government-commissioned study, which found that age verification for social media is technically possible, but that there is no one-size-fits-all solution.
The report flagged significant risks: accuracy issues, especially for minors near the threshold; biases, so for race and ethnicity, in some age estimation technologies; and privacy trade-offs. The study found that several methods were viable, but no one method was foolproof, with concerns over accuracy and privacy. This suggests that under the Australian regime of the Online Safety Amendment (Social Media Minimum Age) Act 2024, platforms will need to choose a layered, flexible approach rather than rely on a single verification method.

What do they mean when they talk about layered, flexible approaches? A layered approach is the preferred method of handling age assurance, rather than asking everyone to hand over ID. It means platforms use several layers of age assurance checks, starting with the least intrusive methods and only escalating if stronger verification is needed. It's kind of like airport security: everyone passes through a basic scanner, so that's low friction, and only suspicious cases go to manual screening. Layer one is self-declaration and behavioural inference: the user inputs a birth date, and the platform analyses usage patterns, so keywords, content, time online, and contacts. This has a very low privacy impact, and it's the default for all users. So if you put in your birth date, right, and then the platform sees that you've been looking up Teletubbies and Barbie or whatever, then it's going to think you could be a kid. Layer two is metadata and AI age estimation: AI scans the face in an uploaded selfie or video to estimate an age range, or cross-checks device and browser history. So you have to upload a selfie; I guess that's a moderate privacy impact, and it's triggered when a user's declared age or behaviour looks under 16. Layer three is verified ID or parental consent: upload ID, a driver's licence, or parent-approved consent via a one-time link. The privacy impact is high, and it's only for unresolved or disputed cases. And layer four is account monitoring and anomaly flags: the system flags if an account later posts like a teen, using childlike slang or linking to underage groups. It's a moderate privacy risk, and it's a continuous audit.

The goal is to minimise unnecessary data collection; the system just looks for patterns and then flags if something unusual comes up. A flexible approach recognises that different services have different risk levels, that not all users can or should provide government ID, and that technology norms evolve, so each platform can choose its mix of assurance layers, provided it can demonstrate to the eSafety Commissioner that those steps are reasonable for its context.
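If you want to see that staircase as code, here's a minimal Python sketch of the escalation logic, under my own assumptions: every helper below is a trivial placeholder for whatever technology a platform actually picks, so treat this as a shape, not an implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    declared_age: int  # self-declared birth-date age (layer 1 input)

# --- Hypothetical backends: stubs standing in for real systems ---
def behaviour_suggests_minor(user: User) -> bool:
    return False  # e.g. keyword, content, and time-online signals

def estimate_age_from_selfie(user: User) -> Optional[int]:
    return None   # AI facial age estimation, only run when triggered

def verified_id_or_parental_consent(user: User) -> bool:
    return False  # high-friction layer 3, for disputed cases only

def assure_age_layered(user: User) -> str:
    """Escalate through the layers, least intrusive first."""
    # Layer 1: self-declaration plus behavioural inference (low friction).
    if user.declared_age >= 16 and not behaviour_suggests_minor(user):
        return "allow"
    # Layer 2: AI age estimation, triggered only when layer 1 is doubtful.
    estimate = estimate_age_from_selfie(user)
    if estimate is not None and estimate >= 16:
        return "allow"
    # Layer 3: verified ID or parental consent, the last resort.
    if verified_id_or_parental_consent(user):
        return "allow"
    # Nothing could establish 16+; the account can't be created or kept.
    # (Layer 4, ongoing anomaly monitoring, would keep running after
    # "allow" too, since accounts can be flagged later.)
    return "block"

print(assure_age_layered(User(declared_age=30)))  # -> "allow"
```

Notice that most users exit at layer 1 with nothing more than a birth date; the expensive, intrusive stuff only ever runs on the doubtful cases.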
Now, a system with AI that's continually monitoring browser history, on other platforms or in your browser, I guess that's kind of invasive. In some experimental or proposed frictionless age assurance systems, an algorithm analyses browsing history or app usage patterns: sites visited, time spent on entertainment versus finance sites, search keywords, or app categories. So the system just sniffs for these things, and if the profile or pattern of usage seems to suggest a child is using it, the system automatically flags it for a more complete check to verify the age. I don't think the actual content of what has been found is stored; it's just a yes or no, like, yes, this could be a child, or no, it's not. And the reasons for flagging it are not necessarily recorded, I think, and that's how the privacy is actually preserved. As for device metadata, all of our phones collect metadata already, so this is already ongoing. The AI builds a probabilistic age profile, a guess as to whether a user is likely under or over 16.

Why is it seen as intrusive? Deep behavioural data: browser history can reveal religion, sexuality, medical concerns, political views, and other sensitive information well beyond age. But they're not really looking at that; they're just looking for a yes or no answer. Opaque inference: users often don't know what is being analysed or how decisions are being made, black-box profiling. But in a way that's good, because it protects sensitive data from being seen by any real humans. Data retention and sharing risk: if data or model outputs are stored, they could be used for marketing or surveillance. Consent ambiguity: it's hard to claim meaningful consent when most people just click to agree in order to access a service. Regulators like Australia's treat profiling as high-risk processing requiring explicit justification. Age estimation should rely on minimal, context-specific data and should not involve general monitoring of browser and search histories. They prefer on-device or zero-knowledge estimation methods that check limited patterns locally and never upload the full histories, which is what I'm saying. So even if there is some sort of deep monitoring of what you're doing on other platforms or in your browser, again, it's a yes or no answer. The specifics as to why your account might be flagged as an under-16 account are not retained and not known to humans, I guess. It's just a yes or no answer.
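Here's roughly what that on-device, yes-or-no-only idea could look like, again as a sketch under my own assumptions: the signal extraction happens locally, nothing but a single boolean ever leaves the device, and no reasons are attached to it.

```python
def on_device_minor_flag(local_activity: list[str]) -> bool:
    """Run entirely on the user's device and emit a single boolean.

    `local_activity` stands in for app categories or search keywords
    that never leave the device. Only the yes/no verdict is transmitted;
    the reasons behind it are neither recorded nor uploaded.
    """
    # Hypothetical signal: share of child-oriented categories in use.
    child_markers = {"cartoons", "toys", "kids_games"}
    hits = sum(1 for item in local_activity if item in child_markers)
    return hits / max(len(local_activity), 1) > 0.5

# The platform's server would receive only True or False, nothing else.
print(on_device_minor_flag(["cartoons", "toys", "homework_help"]))  # True
```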
There's flexibility in terms of what each site is allowed to do; the government doesn't dictate that. Examples of flexibility: TikTok might combine AI age estimation with manual parental confirmation, Reddit might use third-party verification, layer three, but store minimal data, and YouTube may rely mostly on Google account data plus device metadata. This flexibility ensures compliance without mandating a single intrusive system. And why do regulators like layered, flexible systems? They're privacy-preserving, since only a minority of users undergo high-intrusion checks; scalable, since they can adapt to different services, new tech and risk levels; legally defensible, since platforms can show they took reasonable steps proportional to risk; user-friendly, since they reduce frustration and access barriers for adults; and future-proof, allowing adoption of new privacy-enhancing tools. There are challenges still present, though. Accuracy gaps: AI estimation can misclassify near-boundary ages. Bias: facial estimation accuracy varies by ethnicity and gender. Circumvention: teens might still bypass controls with VPNs and fake birth dates. And transparency: users need to know when and how their data is being analysed. In one line, a layered, flexible approach means platforms build a staircase of privacy-sensitive checks, starting light and escalating only when necessary, instead of a single blanket age verification gate.

But still, how is the government going to monitor whether there is compliance with this law without invading privacy? The government, firstly, will not directly monitor or track individual accounts, but it will audit, investigate, and penalise platforms if they systematically allow under-16 accounts to exist despite the legal duty to prevent them. So let's unpack how this works. Who does the monitoring? The eSafety Commissioner oversees compliance with the reasonable steps obligation, that is, ensuring platforms implement proper age assurance systems and actively remove under-16 accounts where possible. So basically, they're not going to be looking at individual accounts; they just want to see what mechanisms are in place, how the system works, and whether it's all in place. Then there's the Office of the Australian Information Commissioner, which monitors privacy compliance, ensuring that whatever age-checking methods platforms use respect the Privacy Act and data protection principles. Both agencies have information request, audit, and enforcement powers, but they do not view individual user data or spy on people's accounts.

How will compliance be assessed? Rather than constant surveillance, the system is audit-based. Platforms must document their reasonable steps, such as algorithms, checks, parental consent systems, and review processes. eSafety can issue information notices asking a platform to demonstrate how it detects under-16s and how effective that system is. The regulator can conduct spot audits or investigate after complaints, so if parents, schools, or media reports indicate many under-16 accounts are still active. If eSafety finds systemic non-compliance, it can issue enforceable directions or impose civil penalties. So enforcement focuses on whether the company's system works, not on policing each child's phone.

What does monitoring look like in practice? Regulators will use data-driven oversight, not account snooping. Sampling and statistical checks: platforms may need to share anonymised metrics, like the estimated percentage of under-16 accounts detected and removed, and false positive rates. Exactly: just the raw numbers of accounts they've flagged as under 16, and the number of those that then turned out to be truly under 16. Raw aggregate numbers, not actual account information, will be shared. Complaint mechanisms: eSafety's reporting portal will let users flag if under-16 accounts are still active or if an age verification system fails. So if a parent finds their child is on Insta scrolling tit shots or something, they can complain, and that's the way it will be fed through. Independent review: the law requires a formal review of how well these systems work and whether additional enforcement is needed. Industry transparency reports: large platforms may need to publish summaries of their compliance. It's all just statistics, broad numbers, not individual information. On privacy protection during oversight: there's going to be no mass government data collection, and eSafety explicitly states that it will not demand individual user data or IDs.
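To see why aggregate numbers can satisfy an audit without exposing anyone, here's a small Python sketch of the sort of report a platform might hand over. The field names are mine; there's no official regulatory template I'm drawing on here.

```python
def compliance_report(flagged: int, confirmed_minors: int,
                      total_accounts: int) -> dict:
    """Turn raw counts into the statistics a regulator might audit.

    Only counts go in and only rates come out; no user identifiers,
    handles, or content ever appear in the report.
    """
    false_positives = flagged - confirmed_minors
    return {
        "accounts_flagged": flagged,
        "confirmed_under_16": confirmed_minors,
        "false_positive_rate": round(false_positives / max(flagged, 1), 3),
        "share_of_all_accounts": round(flagged / max(total_accounts, 1), 6),
    }

# Example: 12,000 flags, 10,800 confirmed, out of 30 million accounts.
print(compliance_report(12_000, 10_800, 30_000_000))
```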
Compliance evidence can be aggregated or anonymised. There's no linkage to digital ID: Australians will not need to register with digital ID or give the government their credentials for age checks. And under OAIC oversight, any data shared with regulators must still meet the Australian Privacy Principles: limited purpose, minimal retention, and deletion after use. What could trigger penalties? Platforms could face sanctions if they lack a genuine age assurance system, ignore high volumes of under-16 accounts, fail to act after regulators' warnings, or use overly intrusive methods that breach privacy obligations. The regulator will judge reasonable steps by risk and platform size; for example, TikTok's algorithmic system will face stricter scrutiny than a small niche forum. The Australian government won't monitor or track kids' accounts directly. Instead, it will audit platforms for the strength and effectiveness of their age assurance systems. Enforcement happens through data audits, transparency reports, and complaint-based investigations, not real-time surveillance.

So I think, after reading into all of this, it's very clear that privacy is of paramount importance to the Australian government, and that this is not an attempt to control or spy on people; it's not an overreach. It's very clear that this is all about the safety of children. And I'm sorry, but I don't think you can defend the stance that your children are missing out by not having social media if they're under 16.

We live in uncertain times at the moment. Public trust in the Australian government has declined sharply over the past few years, and the reasons are layered: political, economic, cultural, and institutional. There have been cost-of-living pressures and a sense of government failure. High inflation, housing stress, and stagnant wages have created a perception that government is not protecting ordinary Australians. Rent, groceries, and energy prices have all risen faster than incomes while large companies post strong profits. Both federal and state governments are seen as slow to act on housing supply, infrastructure bottlenecks and energy reliability, feeding the feeling that Canberra is out of touch, and this erodes trust because people link their day-to-day hardship to policy inaction or poor management. But a lot of these problems are not just affecting Australia; they're global, and it wouldn't matter what government you had in place, we'd still be having these issues. There have also been transparency and accountability scandals: recent controversies, including the PwC tax leaks, the Robodebt Royal Commission findings, and ongoing political donation concerns, have damaged faith in bureaucratic integrity. The National Anti-Corruption Commission was supposed to rebuild confidence, but many Australians feel it has been slow or selective. Ordinary citizens see different rules for elites versus the public, a major driver of cynicism. And we're now in post-pandemic fatigue, after a period when people perceived institutional overreach. COVID-era restrictions, vaccine mandates, and border closures left lasting resentment in some communities, even though we didn't have that many people dying here, arguably because of those very restrictions.
Even though Australia managed the pandemic relatively well, the emergency powers used during that period left a hangover of suspicion about government control, censorship, and surveillance. The rise of trust-gap rhetoric, where people feel health or social policy decisions are made for them and not with them, continues to shape mistrust. There's a perceived disconnect between Canberra and everyday life: many Australians believe the major parties are now too similar, both influenced by big business, unions and lobbyists rather than voters. Young Australians, locked out of housing and weighed down by debt, often feel the system serves older homeowners and corporations first. This fuels a none-of-them-represent-me sentiment, reflected in low voter enthusiasm and the growing attraction of minor parties, like One Nation (stupid), the Teals, the Greens and the independents. Information chaos and social media distrust loops only make this worse: the social media landscape amplifies mistrust. People see conflicting truths about the economy, immigration, or social issues, and governments have started to regulate misinformation, but many interpret this as censorship, deepening scepticism. And when the government says we're protecting you, some Australians now hear we're controlling what you can say. Then there's cultural identity and polarisation: debates on the Voice referendum, immigration, climate, and gender policy have been highly divisive. Both sides accuse the government and media of bias, and each polarised camp views Canberra as captured by the other side's ideology. What polarisation fuels is a sense that national institutions no longer stand above politics.

Evidence of the trust decline: the ANU trust-in-government survey showed federal trust fell from 60% in early 2020 to around 36% in 2024. The Edelman Trust Barometer places Australia near the lower-middle globally for government trust, behind the Nordics and Canada but above the US and the UK. Thank God for that. Confidence is lowest amongst the youngest Australians and renters, the groups most affected by cost-of-living pressures.

So that's the times we live in. And unfortunately, no matter how good the policy is, and no matter how much you explain it to people who are mistrustful of this move by the government, they're just not going to buy it; they're not going to believe it; they are simply not going to trust that the government is doing the right thing. And in terms of my conversations on X today, for instance, I've had enough; I'm pretty much burnt out. I'm sick of trying to make these people see reason, well, my side of the coin, what I believe to be true, and I do believe that this is not government overreach, this is about safety. I guess at the end of the day I don't need these people to believe me, but it's frustrating, because here's what I'm fearful of: I can see the government here is actually not trying to control us, and that they've actually got our children's best interests at heart, but the level of distrust in Australia in general is only going to put us at risk of a rising movement like One Nation coming into power, and they will be all about controlling Australians. And so we sort of run away into the arms of the murderer, if you will, I guess.
So yeah, these are challenging, frustrating, and frightening times. That's all I have to say on the matter; I'm done with it. Please join the conversation. Let me know if you disagree with anything I have to say, or if you've got a reasonable argument that raises a solid rebuttal to all of this. I am more than willing to listen and learn, and indeed, if your argument is good, even to change my views. Thanks for listening.