ChatGPT Wishes You a Happy Safer Internet Day!

February 7, 2023

Safer Internet Day is upon us once again, as we reflect on how to work “Together for a Better Internet.” What’s different this year is that AI in general, and ChatGPT in particular, has exploded into the public consciousness.

Some questions have naturally risen to the surface. What does this mean for the high school (or college) essay if all a student has to do is feed in the assignment and hit enter? How will this impact professionals such as lawyers, journalists and creative writers if an AI can do a better (and faster) job at a fraction of the cost? What happens when it becomes more intelligent than us?

Given these and other pressing questions, I decided to ask ChatGPT for some help in writing this blog. I asked it three questions regarding online safety, and here are its (fairly reasonable) answers.

What can parents do to better protect their children online?

“The most important thing parents can do to keep their kids safe online is to have open and ongoing conversations with them about internet use, online behavior, and digital citizenship. This involves setting clear guidelines, monitoring their activity, and educating them about potential online risks such as cyberbullying, exposure to inappropriate content, and online predators. By building trust and fostering a positive online culture at home, parents can help ensure their children have a safe and positive experience online.”

I asked a similar question about policymakers.

“The single most important thing policymakers can do to keep kids safe online is to enforce and update existing laws and regulations related to online safety, privacy, and data protection, and to create new laws where needed. This involves working with tech companies and other stakeholders to develop and implement policies that ensure online services are designed with the safety of children and young people in mind, and that hold tech companies accountable for protecting their users' data and privacy. 

Policymakers can also provide resources and support to educate parents, educators, and children on online safety and digital citizenship, and work with law enforcement to investigate and prosecute online crimes against children. By taking a comprehensive approach to online safety, policymakers can help ensure a safe and secure online environment for kids.” 

And, finally, what can the tech industry do to help keep kids safe online?

“The best thing technology companies can do to keep kids safe online is to prioritize privacy and security in the design and development of their products and services. This involves incorporating robust security measures to protect children's personal information and data, implementing age-appropriate defaults and controls to limit exposure to inappropriate content, and promoting digital literacy and citizenship through education and resources for their users. 

Technology companies can also work with policymakers and other stakeholders to develop industry-wide standards and best practices for online safety, and provide tools and resources for parents and educators to help them understand and manage their children's online activity. By putting the safety and privacy of their users first, technology companies can help create a safer and more positive online experience for kids.”

So there you have it. Not a bad distillation of the many millions of articles, reports, briefings and blogs that have been fed into ChatGPT on the topic of online safety. While there is nothing groundbreaking or novel, it is an eerily good summary of the kinds of best practices that FOSI and our partners have espoused over many years.

What will be fascinating to watch is how this and other generative technologies mature over time and what skills and tasks they will eventually replace. We, as humans, will have to think carefully about how we plan to coexist with this new technology. While plenty of dystopian scenarios involving AI have been described in books and in films, it could be that we and our children will simply have to adapt to a world where we work with our AI tools, much as we learned to integrate the internet in the 1990s and then social media and smartphones in the 2000s.

Wherever we are heading, we will certainly have to work (as the theme of Safer Internet Day suggests) together for a better Internet.

Header image credit: "a laptop with a sunrise on the screen" / DALL-E

Written by

Stephen Balkam

For the past 30 years, Stephen Balkam has had a wide range of leadership roles in the nonprofit sector in both the US and UK. He is currently the Founder and CEO of the Family Online Safety Institute (FOSI), an international, nonprofit organization headquartered in Washington, DC. FOSI’s mission is to make the online world safer for kids and their families. FOSI convenes the top thinkers and practitioners in government, industry and the nonprofit sectors to collaborate and innovate and to create a “culture of responsibility” in the online world.

Prior to FOSI, Stephen was the Founder and CEO of the Internet Content Rating Association (ICRA) and led a team which developed the world’s leading content labeling system on the web. While with ICRA, Stephen served on the US Child Online Protection Commission (COPA) in 2000 and was named one of the Top 50 UK Movers and Shakers by Internet Magazine in 2001.

In 1994, Stephen was named the first Executive Director of the Recreational Software Advisory Council (RSAC), which created a unique self-labeling system for computer games. In 1996, he launched RSACi, a forerunner to the ICRA website labeling system. For his efforts in online safety, Stephen was awarded the 1998 Carl Bertelsmann Prize in Gütersloh, Germany, for innovation and responsibility in the Information Society, and he was invited to the first and subsequent White House Internet Summits during the Clinton Administration.

Stephen’s other positions include Executive Director of the National Stepfamily Association (UK); General Secretary of the Islington Voluntary Action Council; and Executive Director of Camden Community Transport, as well as management positions at the Institute of Contemporary Arts (London) and Inter-Action. Stephen’s first job was with Burroughs Machines (now Unisys), and he had a spell working for West Nally Ltd, a sports sponsorship PR company.

Stephen received a BA, magna cum laude, in Psychology from University College, Cardiff, Wales in 1977. A native of Washington, DC, Stephen spent many years in the UK and now holds dual citizenship. He writes regularly for the Huffington Post, has appeared on nationally syndicated TV and radio outlets such as MSNBC, CNN, NPR and the BBC, and has been interviewed by leading newspapers including the Washington Post, the New York Times and The Wall Street Journal. He has given presentations and spoken in 15 countries on 4 continents.