I’m asking for public policy ideas here. A lot of countries are enacting age verification now. But of course this is a privacy nightmare and is ripe for abuse. At the same time though, I also understand why people are concerned with how kids are using social media. These products are designed to be addictive and are known to cause body image issues and so forth. So what’s the middle ground? How can we protect kids from the harms of social media in a way that respects everyone’s privacy?


You’d be surprised at what parents let their kids do. My anecdotal sample consists mostly of highly educated people, but most of them place no restrictions on their kids’ screen time. They claim they’ve talked to their kids, and the kids have assured them they don’t look at anything they’re not supposed to, but that’s just not what happens in reality.
What really happens is that the kids with no restrictions engage with all the predatory bullshit on these platforms, nonstop. I can see this with my own eyes when my kid brings friends over.
Communication is key, but unfortunately the business model of these platforms is based on addiction. Children are not equipped to deal with it, and parental controls are an essential component.
I believe the parent post nicely sketches out what a “best” move looks like. I have seen no better approach myself. At the same time, I see what you see: the best approach isn’t all that great. If you’re lucky and find the right people it could work, but there’s a lot of luck involved.
That’s why I do think there should be some regulation indicating what is tolerated. It seems to me the parent poster may agree (and thus also with your take).
Since GDPR you can tell the school you don’t want pictures on platforms you disagree with. You may miss out on seeing the photos, and you might come across as crazy, but you can (and you should). We were given a choice at the cost of extra paperwork and some limitations.
Even without the addiction problem of these platforms, we should nurture and build a good community around us. It’s a valid take to try to find like-minded people.
I don’t think that’s the end of it, though. Given the state we’re in, the network effect, and the fragile egos of developing kids, I suspect we need a stronger push.
AI-enforced age verification, or logins that allow you to be followed everywhere, is not the solution in my current opinion; it addresses a different problem. The real problems are the addictive, steering nature of these platforms, which seems hard to describe clearly in legal terms.
I wonder how “these platforms” should be defined and what minimum set of limitations would give us and the children the necessary breathing space.
Wholeheartedly agree that the problem is the addictive and predatory nature of these platforms. I don’t see how that would change under the perpetual-growth economy we all live under.
The minimum would be transparency for the algorithm. If users could see exactly what a social media algorithm is doing with their content feed, they would always have a way to identify and escape dark patterns of addiction.
But even this minimum would require the power to compel tech companies to give up what they would describe as intellectual property. Which would probably require a digital bill of rights?
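To make the transparency idea concrete, here is a rough sketch of what a mandated, machine-readable per-item disclosure might look like, and how a user (or watchdog tool) could use it to spot engagement-driven recommendations. Everything here is hypothetical: the field names, the signal names, and the threshold are invented for illustration, not taken from any real platform or regulation.

```python
# Hypothetical sketch of a feed-transparency disclosure.
# All names (FeedItemExplanation, ranking_signals, etc.) are invented.
from dataclasses import dataclass, field


@dataclass
class FeedItemExplanation:
    item_id: str
    # Signals the ranking model used for this item, with relative weights.
    ranking_signals: dict[str, float] = field(default_factory=dict)
    # Why the item was eligible at all (followed account, promotion, ...).
    source: str = "unknown"


def flag_engagement_driven(exp: FeedItemExplanation,
                           threshold: float = 0.5) -> bool:
    """Crude check: was this item shown mostly to maximise engagement?"""
    engagement = exp.ranking_signals.get("predicted_engagement", 0.0)
    total = sum(exp.ranking_signals.values()) or 1.0
    return engagement / total > threshold


item = FeedItemExplanation(
    item_id="abc123",
    ranking_signals={"predicted_engagement": 0.8, "followed_account": 0.1},
    source="recommendation",
)
print(flag_engagement_driven(item))  # True: mostly engagement-driven
```

The point isn’t this particular schema; it’s that once disclosures are structured rather than buried in a settings page, third parties can build simple tools on top of them, which is where the “identify and escape dark patterns” part would actually happen.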
The most practical option would be to just ask your kids directly about the kinds of content they’ve been consuming and why. Dinner-table conversations can probably reveal those dark patterns just as well.