Empowering Young Adults While Managing Online Risk

January 8, 2015

I recall being a young boy living in orchard country in the beautiful Okanagan Valley. By the age of 8, I had the run of my 37-acre orchard and its surrounding gullies and fields. I'd run, bike, hike, and explore with a German Shepherd as my co-conspirator and a backpack filled with trail mix. Occasionally, I'd wipe out and return home with tears in my eyes and a wound on my leg, but it always healed, and I was all the more diligent the next time.

Surrounding the orchard were the homes of people my family knew, and I knew I could visit them if I ever needed help. I was aware that talking to strangers could be dangerous, and I knew well enough to stay away from the dangerous bits of landscape; not that there were any cliffs or raging rivers, but had there been, my radius of freedom might have been a little smaller.

Was there some risk? Yes! Was the risk of death or serious harm significant? No. Had it been, I wouldn't have been allowed to travel so far and wide. My parents had also guided me to ensure I knew how to make good decisions (which, for the most part, I did). This ability to assume an appropriate amount of risk helped shape the person I am now. In truth, I'm a bit of an experience junkie, but I'm also a little risk averse; even so, when thrust into difficult situations, I don't shy away from them.

My company provides filtering and moderation tools for online communities. We do it very well! In years past, filters for online communities (that is to say, the bit of technology that blocks certain words and phrases) had to be either blacklist filters or whitelist filters. A blacklist filter blocks anything on its list from being said. The problem with blacklist filtering is that you're constantly chasing new ways of saying bad things. Whitelist filters are the opposite: they only allow users to say things that are on the whitelist, which proves to be a very restrictive way to communicate.
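To make the contrast concrete, here's a minimal sketch of the two approaches in Python. The word lists and example messages are made up purely for illustration:

```python
# A minimal sketch of the two classic filter styles.
# The word lists here are hypothetical, for illustration only.

BLACKLIST = {"darn", "heck"}                     # words we refuse to let through
WHITELIST = {"hello", "friend", "play", "fun"}   # the only words allowed

def blacklist_filter(message: str) -> bool:
    """Allow the message unless it contains a blacklisted word."""
    words = message.lower().split()
    return not any(word in BLACKLIST for word in words)

def whitelist_filter(message: str) -> bool:
    """Allow the message only if every word is on the whitelist."""
    words = message.lower().split()
    return all(word in WHITELIST for word in words)

print(blacklist_filter("hello friend"))    # True: nothing blacklisted
print(whitelist_filter("hello stranger"))  # False: "stranger" isn't whitelisted
```

Notice the trade-off: the blacklist lets everything through until you've enumerated every bad spelling, while the whitelist blocks everything you didn't anticipate.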

We decided to do it differently: we look at words and phrases and assign each a risk level. We can then gauge how a word is used and look at the context in which it's being used (is the user trusted, has the user demonstrated negative behaviour in the past, is the environment for older users or younger users, etc.). We can then filter uniquely by user and context, eliminating the overbearing approach of declaring all words either good or bad (yes, some words are just bad and others are just good).
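A rough sketch of that idea follows, with hypothetical risk scores and a simple invented trust model; this is illustrative, not our actual implementation:

```python
from dataclasses import dataclass

# Hypothetical risk scores per word; a real system would cover phrases too.
WORD_RISK = {"noob": 5, "stupid": 6, "hate": 8}

@dataclass
class Context:
    user_trust: float        # 0.0 (new/untrusted) .. 1.0 (long-standing, well-behaved)
    prior_offenses: int      # demonstrated negative behaviour in the past
    audience_min_age: int    # younger environments get stricter thresholds

def risk_threshold(ctx: Context) -> float:
    """Stricter for young audiences and for users with a bad track record."""
    base = 4.0 if ctx.audience_min_age < 13 else 7.0
    return base + 2.0 * ctx.user_trust - 1.0 * min(ctx.prior_offenses, 3)

def allow_message(message: str, ctx: Context) -> bool:
    """Allow the message if its riskiest word falls under the contextual threshold."""
    risk = max((WORD_RISK.get(w, 0) for w in message.lower().split()), default=0)
    return risk < risk_threshold(ctx)

kid_ctx = Context(user_trust=0.2, prior_offenses=0, audience_min_age=8)
teen_ctx = Context(user_trust=0.9, prior_offenses=0, audience_min_age=16)
print(allow_message("you noob", kid_ctx))   # False: blocked in a young, low-trust context
print(allow_message("you noob", teen_ctx))  # True: allowed for a trusted user in an older community
```

The point of the sketch is that the same words can produce different decisions once user history and audience are part of the equation.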

An area we're keenly interested in is how to replicate a healthy amount of risk in an online community without putting users in danger. Most parents accept that a child might fall and scrape their knee while playing on a playground (although I've recently seen children playing on monkey bars with helmets on). We also accept the risk that when a child plays with other children, they might be on the receiving end of some not-nice behaviour. We hope this won't happen, but when it does, we comfort them and teach them about character and how they should react to such people; they will meet bullies throughout their entire lives. In the online arena, though, we've become quite scared of anything that might pose a risk to a child, possibly with good reason. When we think about the effects of this, we are concerned that children are no longer learning important life lessons.

I love how Tanya Byron said that we must "use our understanding of how [children] develop to empower them to manage risks and make the digital world safer."

Recently we've been asking ourselves: how can we allow for a safe amount of risk while providing tools that mimic real life? For example, in real life, a bully has to look into the eyes of his or her victim. Although we can't mimic that, we can deliver specific and timely responses to a bully that encourage them, at the moment of their bullying, to picture how others might receive what they're saying. Another example is the way an adult can step into a situation that is beginning to get more serious. Even as we filter language that is becoming more abusive, how can we get this information to an adult or moderator as quickly and efficiently as possible so they can intervene? This is the subject of our current development, because we believe deeply that for kids to be truly safe online, they need to grow and develop the skills that lead them to make smarter decisions and show greater empathy. That includes looking at what's an appropriate amount of risk for children at every age.
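One way to picture that escalation path is a simple sliding-window alert: score each message, and when a user's recent risk adds up, put it in front of a human. The window, threshold, and scores below are invented for the sketch and aren't our production values:

```python
import time
from collections import defaultdict, deque

# Hypothetical escalation sketch: track each user's recent message risk and
# alert a human moderator when a conversation is heating up.

RISK_WINDOW_SECONDS = 120
ESCALATION_THRESHOLD = 12   # cumulative recent risk that warrants human eyes

recent_risk = defaultdict(deque)  # user_id -> deque of (timestamp, risk)

def record_message(user_id: str, risk: int, notify_moderator) -> None:
    """Record a scored message and escalate if recent risk is trending up."""
    now = time.time()
    history = recent_risk[user_id]
    history.append((now, risk))
    # Drop scores that fall outside the sliding window.
    while history and now - history[0][0] > RISK_WINDOW_SECONDS:
        history.popleft()
    if sum(r for _, r in history) >= ESCALATION_THRESHOLD:
        notify_moderator(user_id, list(history))

def print_alert(user_id, history):
    print(f"Moderator alert: {user_id} posted {len(history)} risky messages recently")

record_message("player42", 5, print_alert)
record_message("player42", 4, print_alert)
record_message("player42", 6, print_alert)  # cumulative 15 >= 12 -> alert fires
```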

The Internet is providing an unprecedented amount of access to people of all ages and backgrounds. Perhaps, as we progress in our understanding of its impact, more and more companies will start to realize the role they must take in helping it develop well. We must be willing to challenge assumptions and work through our own discomforts so that we can engage in a healthy discussion. As parents, we must challenge ourselves to see how technology has changed the way our children interact, and how the risks we're well aware of from when we were children are experienced in the digital world. How can we learn to help our kids "fall" gracefully and stand up again more confidently?


Cover image courtesy of Wikipedia.

Written by

Nate Sawatzky

As Chief Evangelist for Kelowna’s Two Hat Security, Nate draws on his long experience of working with companies focused on building amazing products that improve the quality of children’s and families’ lives in a digital age.


In 2005, Nate was part of the team that launched Club Penguin, a children’s virtual world that was purchased by The Walt Disney Company in 2007. For 6 years, Nate built and led a support and moderation team that grew to 200+ people, spanning 5 countries and working in 6 languages to support several of Disney’s online properties.


Since leaving Disney in 2012, Nate has contracted and consulted on various products, typically in the early stages of development. His focus and obsession is how culture and community impact the way our businesses grow and, ultimately, can change the world.


Nate now resides with his wife and 6 kids in the lovely Okanagan region of Western Canada. Two Hat Security builds tools designed to help companies build stronger and healthier online communities. Learn more about Two Hat at www.CommunitySift.com.