The State of Kids Online Safety Legislation at the End of the 2022–2023 State Legislative Sessions
In previous years, debates about online speech at a state level had largely focused on issues such as concerns about anti‐conservative bias or online radicalization. More recently, however, many states have instead focused on the impact of social media platforms and the internet on kids and teens.
While many of the proponents of these bills may have good intentions, these proposals have significant consequences for parents, children, and all internet users when it comes to privacy and speech. States that have enacted such legislation have faced legal challenges on First Amendment grounds and cases are currently pending in the courts.
In general, there have been four categories of legislation at a state level: age‐appropriate design codes, age‐verification and internet access restrictions, content‐specific age‐verification laws, and digital literacy proposals. With many state legislatures recessing this summer, there is an opportunity to analyze what the emerging patchwork of such laws looks like, the potential consequences of these actions, and what — if any — positive policies have happened.
Age‐Appropriate Design Codes and Age Verification for Online Activity in the US
Signed into law in 2022, the California Age‐Appropriate Design Code Act is the first of its kind in the United States. The law obliges businesses to conduct risk assessments of their data management practices, to estimate the age of child users with a higher degree of certainty than existing laws require, and to control children's access to certain content. While the law is well intended, it has raised serious concerns about privacy and free speech and is currently being challenged in court. Other states are considering bills that would require age verification for using social media.
Such proposals have their origins in Europe. The UK, for example, is considering its own Online Safety Bill, which aims to protect young people from harmful content but also raises serious concerns around the censorship of lawful speech, privacy, and encryption. On speech, such initiatives threaten the right to anonymous speech. On privacy, kids and adults alike are likely to be harmed by invasive and currently insecure age‐verification technologies in an online ecosystem where at least 80% of businesses claim to have been hacked at least once. On encryption, some have advocated introducing backdoors into end‐to‐end encryption to catch malicious actors who harm kids, while overlooking the importance of encrypted channels for kids to safely report abusers.
This legislative session, several U.S. states have contemplated bills that would require additional steps to verify who may have a user account on social media or other websites, each with its own unique approach. But many share common concerns. For example, a cluster of states, including Pennsylvania, Ohio, Connecticut, and Louisiana, have sought to mandate explicit parental consent for minors creating or operating a social media account. Pennsylvania has proposed legislation stating that a minor cannot have a social media account unless explicit written consent is granted by a parent or guardian. Ohio and Connecticut have followed a similar path, requiring parental consent for children under 16 to use social media. Wisconsin recently considered a bill that would require social media companies to verify the age of users and obtain parental consent before children create accounts. More than 60 bills were introduced in 2023, and at least nine states considered age verification, age‐appropriate design codes, or other restrictions on young people's internet usage. Most of these proposals failed; however, a few significant age verification bills were still pending or had been enacted as of July.
The Governor of Louisiana signed the Secure Online Child Interaction and Age Limitation Act (SB162) into law on June 28. This law not only enforces parental consent for minors, but expressly requires companies to verify the age of all Louisiana account holders. As will be discussed below, this is often the case with age‐verification laws more generally. Similarly, Arkansas passed the Social Media Safety Act, requiring children under 18 to obtain parental consent for creating a social media account. Utah went a step further by banning access to social media after 10:30 pm for all children under 18 unless parents modify the settings.
Consequences of Age‐Appropriate Design Codes
The implementation of overly broad policies raises significant privacy concerns, not only for young users but for everyone. The process of accurately determining the age of an underage social media user inherently necessitates determining the age of all users. In a context where social media companies may be held accountable for errors in age determination, the request for sensitive information such as proof of ID becomes a requirement for all users. This poses immediate questions regarding the type of identification data to be collected and how companies might utilize this information before the age verification process is complete.
On a practical level, social media platforms cannot solely depend on their internal capabilities for age verification, thus necessitating reliance on third‐party vendors. This reliance presents a further question: who possesses the necessary infrastructure to manage such data collection? Currently, MindGeek, the parent company of PornHub, stands as one of the dominant international market players in age verification. Many conservatives may be uneasy about such a company, or about the very social media platforms they are concerned about, holding the IDs or biometrics of young users. For example, the Arkansas Social Media Safety Act relies on third‐party companies to verify users' personal information.
Options that do not require the collection of sensitive documents — like government IDs or birth certificates — are likely to rely on biometrics. In such cases, not only are there concerns about the potential risk of this information falling into the hands of malevolent actors, but also questions about the accuracy of such technology in edge cases, such as distinguishing a 17½‐year‐old from an 18‐year‐old. These are critical considerations for legislators as they advance bills aiming to replace parental oversight with governmental control, a shift that may also generate unforeseen consequences and risks.
We must also consider the potential repercussions on youth when their freedom of speech, expression, and peer association are curtailed due to the absence of social media. How can we balance the disparities between parents who restrict their children's access to social media and those who permit it? In today's digital age, children often forgo playing in neighborhood streets and opt instead for virtual interaction.
Social media platforms have empowered young people to voice their opinions on political matters and vital issues such as climate change. Without the communication channels provided by social media, the reach and organization of initiatives like Greta Thunberg's "Fridays for Future" would have been significantly reduced. It's crucial to consider the potential loss of such influential platforms, which serve not only as a stage for youthful expression but also as a catalyst for activism. Introducing bills that impose broad restrictions on access to social media is likely to also obstruct these beneficial aspects of social media usage.
Additionally, these restrictions would make it difficult — if not impossible — for users of all ages to engage in anonymous speech as well as access communication and lawful speech. The only way to verify users under a certain age — such as 16 or 18 — is to also verify users over that age. This means all users would be forced to provide sensitive information like passports, driver’s licenses, or biometrics in order to participate in online discussions. This information would have to be tied to a user’s account, meaning it would be impossible for users to retain true anonymity. This sets up a honeypot of sensitive personal information for malicious hackers.
Topic‐Based Age‐Appropriate Design Codes or Age‐Verification
Some states have introduced age‐verification legislation that targets specific content. Currently, these proposals have been limited to pornographic material and websites. For websites exclusively dealing with pornography, the task of flagging them is relatively straightforward. However, challenges arise when attempting to regulate more malleable platforms that do not primarily host adult content.
Louisiana was the first state to take such an approach with a law that requires age verification for access to platforms if pornographic content comprises more than one‐third of the overall content. However, such thresholds can often be arbitrary and could impact more general‐use websites that may be attempting to remove such content. For example, platforms like Twitter and Bluesky permit adult nudity and other "sensitive media content" subject to certain restrictions. These platforms likely engage in significant content moderation and flagging of such content; however, the exact percentage of such content on a website may vary.
Lawmakers must also take into account how such laws could impact smaller platforms. A new platform with fewer users could have only a small amount of adult content but cross an arbitrarily set threshold based on the percentage of content. Small websites that see a sudden increase in users might also struggle to keep up with moderation for a time and end up over thresholds — even if such content violates their official terms.
Pragmatically, these laws may not be as effective at achieving their goal as policymakers may hope. As of July 1st, Virginia is the most recent state to enact a law that requires age verification for websites showcasing adult content. However, given that consumers have privacy concerns about sharing their sensitive personal data, they tend to bypass these protective measures, which undermines the laws' effectiveness. For instance, since the enactment of the law, Google Trends data indicates that Virginia leads the US in searches for virtual private networks (VPNs), a tool that allows individuals to access such sites without disclosing sensitive information to these adult‐content websites. Utah also saw an uptick in VPN searches when it introduced its age verification law (SB287). It's worth noting that bypass methods aren't exclusive to adults. A study on the enforcement of similar laws in the United Kingdom revealed that 23% of minors say they can bypass blocking measures. In addition to relying on VPNs to bypass age verification, users may also visit more obscure adult content sites that are less likely to follow safety protocols.
The ease with which these measures can be circumvented suggests that these government laws may put people’s sensitive data at risk and infringe upon young people’s rights to access various speech forums, all without providing effective ways to reap their intended benefits. Rather than enacting laws that may not achieve their intended effects, focus should be shifted toward actionable measures like public awareness and education. The state‐level patchwork approach to handling people’s sensitive data underscores the urgent need for a comprehensive federal privacy bill.
A Better Alternative: State Bills Promoting Digital Literacy
The concerns about young people online are quite varied, which is an important reason why the best solutions are likely left to parents and trusted adults in a child's life rather than a government one‐size‐fits‐all approach. One positive set of legislative proposals that emerged over this last session is those that focus on educating young people through improved digital literacy curricula. This approach will empower young people to use technology in beneficial ways while also advising them what to do should they encounter harmful or concerning content.
As discussed in more detail in a recent policy brief, many states already have an element of digital literacy in their K‑12 curriculum; however, such standards typically pre‐date the rise of the internet and social media. This year, Florida passed a law that would include social media digital literacy in the curriculum. States including Alabama, Virginia, and Missouri also considered such laws.
An education‐focused approach will empower young people to make good decisions around their own technology use. Ideally, such a curriculum should be balanced or neutral in its approach to explaining the risks and benefits of social media and other online activities. States should not be too prescriptive in their approach; they should allow individual schools to make decisions that reflect the specific values or issues encountered by their students, and they should provide for parental notification and responsiveness when it comes to discussions around such issues. Civil society and industry have provided a great number of tools to support parental choice and controls. If policymakers are to be involved, the focus should be on education and empowerment rather than restriction and regulation.
Conclusion
2023 has seen an increase in policy proposals seeking to regulate young people's internet access, but such regulation carries consequences for all internet users. These actions will likely face challenges in court on First Amendment grounds, as seen with the Arkansas and California laws. As with users of any age, children and teens' use of and experience with technology can be both positive and negative. A wide array of tools exists to empower parents and young people to deal with concerns, including exposure to certain content or time spent on social media. If policymakers seek to do anything in this area, the focus should be on empowering and educating children and parents on how to use the internet in positive ways and what to do if they have concerns, not on heavy‐handed regulation that both fails to improve online safety and takes away the internet's beneficial uses.