
Are US lawmakers focusing too much on child safety in online spaces?

It’s no secret that social media is a place where our sense of self-worth comes to die. So as policymakers in the United States strive to enforce safety measures for children in these spaces, are adults being wrongly left out of the equation?

Public awareness of the dangers posed by social media has grown exponentially over the last decade, with people of all ages and genders reporting negative mental health symptoms after prolonged use.

Despite this, legal online protection measures have been slow to develop, especially in the US, where the impact of social media on American children is often at the forefront of the conversation.

Facing doubts about implementation and enforcement at every turn, the government’s most powerful proposal for digital protection to date has been the Kids Online Safety Act (KOSA), which is expected to come to a vote on Capitol Hill soon.

In short, this bill would grant the parents of anyone under the age of 17 additional tools to manage their child’s experience online, while offering minors stronger digital privacy protections. However, the bill does nothing for users aged 17 and over.

Although most people will concede that platforms like Instagram and TikTok help facilitate social connections, psychological research suggests that the superficial nature of social media engagement can increase feelings of loneliness, since it lacks the depth and intimacy of face-to-face interactions.

It’s also known that a simple scroll through these platforms – rife with photoshopped images and the highlight reels of everyone’s lives – is a breeding ground for negative self-image, anxiety, and depression.

When 89 percent of Americans believe social media is damaging their mental health, it’s no wonder that adults in the US are concerned that protective measures aimed only at young people won’t be enough to stem the mental health crisis unfolding across the nation.


WARNING: SOCIAL MEDIA IS A DANGER!

Since virtually no one can guarantee they will end a session of mindless scrolling completely unscathed, US Surgeon General Vivek Murthy has suggested that ‘warning labels’ be added to social media platforms.

However, officials admit that it is difficult to predict how this would be received by people of all ages, or whether it would have the opposite of the desired effect.

To draw comparisons, nutrition labels rarely prevent us from indulging in junk food if we’re really craving it. Rendering certain drugs illegal hasn’t stopped people from becoming addicted. Even minimum drinking ages don’t completely prevent underage kids from experimenting with alcohol.

Not to mention, algorithms and features incorporated into social media platforms have been designed to keep us coming back for more. Consider the slot-machine design of refreshing a page for newly posted content, explore pages that offer an endless selection of posts we’ve never seen before, and notifications about ‘suggested posts’ delivered at random intervals throughout the day.

To address these ‘addictive algorithms’, the state of New York is expected to enact a ban preventing social media platforms from deploying such features on accounts belonging to young people.

Again, these protections will only apply to users under the age of 18. What about everyone else?


Should the onus of protection be placed on social media platforms?

Although many would argue that we have a responsibility to regulate our own social media use, others believe internet platforms like Instagram and TikTok should be doing more to prevent harmful content and addictive features from reaching users in the first place.

Given that social media has essentially become the Wild West – designed to be highly addictive and ridiculously accessible – it’s encouraging to see certain platforms already taking stabs at the problem.

Instagram recently introduced a ‘Made with AI’ label, which users can manually add to their posts before publishing. If they fail to, Meta will apply the tag automatically when it detects that content was created using AI.

Twitter, on the other hand, has added its ‘Community Notes’ feature, which encourages users to fact-check and clarify claims made online. In some instances, this helps flag fake news while surfacing factual resources and verified context for specific posts.

While these features have, to some degree, enhanced our ability to distinguish what’s real from what’s not, they still don’t address the damage that exposure to social media does to our mental health.

Most officials agree that placing restrictions on social media use across the board would cause widespread outrage, raising arguments about media censorship, individual free will, and freedom of speech.

For now, it seems like we’re all going to have to be responsible for protecting ourselves by limiting our time online and remembering that social media is not real life. It may seem like I’m stating the obvious, but it’s definitely something we all need to remind ourselves of from time to time.
