The Medium or the Message: Children’s Content on YouTube
In the spirit of digital exploration, we here at Ghostery wanted to take a closer look at the world of YouTube videos, particularly at content generated for and targeted at children. What we found was startling. If you have not yet seen these videos for yourselves, you may be equally surprised to learn what exactly is out there that kids seem to love watching.
Amongst the wide variety of offerings, there are thousands and thousands of unboxing videos, where the “unboxer” unboxes a package with great precision and attention to detail, creating an almost-hypnotizing effect. There are also nursery rhyme videos that repeat the same songs over and over again, ad nauseam, and, perhaps most fascinating, compilations and mashups of many types of popular kids’ videos designed to keep kids engaged for long stretches of time. There are also videos that take the categories mentioned above and subvert them in strange and often unseemly ways.
YouTube made headlines in November 2017, when it announced that it was going to crack down on videos for children that present inappropriate content: not only videos that seem to exploit children or place them in dangerous and precarious situations, but also knock-off videos that take well-known children’s content and infuse it with gratuitous violence.
An example of the latter type of video is a series of knock-offs of Peppa Pig (the well-loved children’s character). In the still below – a clear montage of the iconic Clorox jug layered on top of the original video – she can be seen drinking bleach. Another noteworthy example of content appropriation is the Elsagate phenomenon, where content producers take unlicensed popular characters from family-oriented media, place them in live action or animated videos, and have them do or say lewd, violent, and lascivious things.
Image source: Variety – November 10, 2017, via YouTube
Writer and artist James Bridle has explored this topic in great detail in his article, “Something is wrong on the internet”. As he’s written,
“Disturbing Peppa Pig videos, which tend towards extreme violence and fear, with Peppa eating her father or drinking bleach, are, it turns out, very widespread. They make up an entire YouTube subculture. Many are obviously parodies, or even satires of themselves, in the pretty common style of the internet’s outrageous, deliberately offensive kind. All the 4chan tropes are there, the trolls are out, we know this.”
Before you start trying to find these now-infamous Peppa Pig videos, you should know that some things have changed since November 2017. The videos Bridle referenced, as well as many others, have been removed from YouTube as part of the company’s crackdown on inappropriate videos aimed at children. So… problem solved? Unfortunately, not at all.
YouTube Kids: Slipping Past the Algorithms
YouTube launched its eponymous Kids app in February 2015, and it quickly grew a massive following. Conceived as a platform for family-focused content, the first iteration included parental controls. This app, which YouTube touts as making it “safer and simpler for kids to explore the world through online video”, was the subject of controversy in mid-2017, when it became apparent that content not suitable for children was making its way into the app.
In June 2017, YouTube decided to remove ads from inappropriate videos that target families, thereby making it impossible for content producers flagged as doing this to monetize their videos.
Despite YouTube’s intent to create a “supervised” environment for the dissemination of safe content, content producers focused on creating knock-offs and the like have continued to infiltrate this environment simply by uploading their output as usual. Machine learning and algorithms then pulled this content from the main YouTube platform into the Kids app. Content producers facilitated this further by masking adult imagery in the videos, making them look very similar to the originals, and by using titles optimized for search intent.
A few months later, in November 2017, a series of high-profile, scathing pieces on YouTube’s content problem came out, including The New York Times article “On YouTube Kids, Startling Videos Slip Past Filters”, further highlighting the “algorithm” problem and questioning what deleterious effects these videos may have on young viewers. Shortly thereafter, YouTube adopted a new policy to better flag content on the main YouTube platform as age-restricted, and thereby not eligible for YouTube Kids.
To access YouTube Kids, parents download the app for their children on a mobile device. They must provide their own birth year as well as sign in with their Google account. Once inside YouTube Kids, they can then customize the viewing preferences for their children. If they elect to pay for a premium YouTube account (YouTube Red), YouTube Kids is free of paid ads.
Whereas the Kids app is gated by parental control, to sign into the conventional YouTube platform, users need to create a Google account, and to manage their own account they must be 13 or older. Despite the account creation terms, YouTube is essentially accessible by anyone who can open the YouTube app or reach the desktop site through a browser. While YouTube has made attempts to better police the content that appears on YouTube Kids, the original platform to this day still contains many disturbing videos for kids.
What most articles on this topic focus on is precisely the nature of the content, and to a lesser extent the algorithmic logic that makes it profitable for content producers to create verisimilar videos. These videos so closely mimic the authentic, well-loved originals that they are bound to get lots of views and clicks, and thus become profitable through YouTube’s monetization model.
Indeed, upon first inspection, one might conclude that the issue here is the distribution of freakish content to kids. And yes, this is certainly a bad thing, but in focusing on the content exclusively, a potentially more dangerous issue is ignored.
What About Data?
Curiously absent from these conversations is the data collection that YouTube carries out on YouTube Kids and while kids are on YouTube’s main platform. Even if YouTube perfects a method of video evaluation that utilizes the best strengths of machine learning and human oversight — it did announce in December 2017 that it would hire 10,000 more employees to review any content that could be in violation of its policies — the issue of user privacy still exists. From a performance perspective, YouTube’s algorithms rely on user data to become smarter and more useful for users. Additionally, a source of revenue for the company is ad dollars spent by businesses looking to advertise on the platform; the metrics YouTube provides advertisers to evaluate the performance of their ads also require user data.
What information does YouTube Kids collect?
– Device information, such as the hardware model, operating system, and unique device identifiers
– Internet protocol (IP) address
– Unique identifiers
– Mobile identifiers that collect and store information about an app or device
Note: it does not collect children’s names, addresses, or contact information.
Data collected is used for internal operational purposes that include spam and abuse prevention, enforcing content license restrictions, determining a preferred language, and personalizing content for video recommendations.
While YouTube Kids includes advertising (if you are not a YouTube Red member), it does not allow for interest-based advertising or retargeting of children. It also does not “allow your children to share personal information with third parties or make it publicly available.” The full Privacy Notice can be read here.
Although children under the age of 13 cannot sign into YouTube, they can sign into their own account in YouTube Kids with the Family Link feature, which has been around since mid-2017. Parents can use Family Link to do such things as restrict how their children browse the web and review and manage app permissions and settings. Google recently rolled out Family Link to EU users as part of its preparations for Europe’s latest data protection regulation (GDPR), which took effect on May 25, 2018.
The data collected for Google Accounts managed by Family Link seems much vaster and more intrusive than what is collected by YouTube Kids. It includes server log details, including but not limited to hardware settings, browser type, referral URL, device phone number, call history data, and voice and audio information. Google also analyzes each child’s content, including their emails. Finally, if a child shares information publicly when signed into their Google Account, search engines may index it. This means that if a child shares personal information or content, anyone using a search engine could find what they have shared.
This raises an essential issue: even though Google collects much of this information to authenticate users and improve products, there is a cost to the user that seems to hide in plain sight.
Parenting Tool or Online Identity Shaper?
YouTube does offer a platform for kids that can be pretty enriching, when used correctly. The information it collects, to a large extent, is used to optimize the viewing experience of a user; after watching several videos, YouTube’s algorithms can start building a “profile” of a user and can then provide content, app, and other recommendations tailored to the user.
On the one hand, this data enhances the viewing experience, and arguably increases engagement. Yet, on the other hand, there is a big tradeoff implicit in this contract. Even if a parent closely monitors a child’s activity on YouTube’s main platform or on YouTube Kids, that child’s online identity is being shaped by a system of data collection practices and algorithmic recommendations that work as a feedback loop.
The minute a child begins consuming content on YouTube, they also enter into an agreement with YouTube (most companies do something very similar), and it is through this digital handshake that the official process of creating a unique cookie persona commences. What are the long-term ramifications of this feedback loop?
Might it be plausible to say that who a child is as a digital citizen, who a child is as a consumer of content, and who they are as an online individual is as much, if not more, a product of algorithmic logic, as it is personal—albeit juvenile—choice?
Anyone who uses these products, or any other online service that collects its users’ personally identifiable information (PII) as part of its business model, is directly, but often unknowingly, involved in creating a cookie and ad profile for themselves, further propelling this feedback loop.
Think of what one watches on YouTube as energy: this energy gets deposited within the platform and from it the platform harvests data points. These data points may be harmless at first, but as the user continues to feed this system with each action they take, the profile the system is building becomes more and more specific and unique. To put it another way, online platforms and services have the power to shape the identities of their youngest users, and while this may not necessarily have an unwanted outcome, it’s certainly significant and worthy of discussion.
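To make that feedback loop concrete, here is a minimal sketch in Python. The event structure, topic labels, and scoring are invented for illustration; this is not YouTube’s actual system, just a toy model of how each watch event sharpens a profile, which in turn steers the next recommendation:

```python
from collections import Counter

class ViewerProfile:
    """Toy model of a profile that gets more specific with every watch event."""

    def __init__(self):
        self.topic_counts = Counter()  # how often each topic has been watched
        self.total_watches = 0

    def record_watch(self, topics):
        """Each video watched feeds its topics back into the profile."""
        self.topic_counts.update(topics)
        self.total_watches += 1

    def recommend(self, catalog):
        """Rank catalog videos by overlap with the accumulated profile."""
        def score(video_topics):
            return sum(self.topic_counts[t] for t in video_topics)
        return max(catalog, key=lambda title: score(catalog[title]))

# A child watches a few videos...
profile = ViewerProfile()
profile.record_watch(["nursery-rhymes", "animation"])
profile.record_watch(["unboxing", "toys"])
profile.record_watch(["nursery-rhymes", "songs"])

# ...and the system now recommends more of the same: the loop closes.
catalog = {
    "Nursery Rhyme Compilation": ["nursery-rhymes", "songs"],
    "Science for Kids": ["education", "science"],
}
print(profile.recommend(catalog))
```

Even in this stripped-down form, the dynamic is visible: the profile rewards whatever was watched before, so the recommendations drift toward, and reinforce, the viewer’s existing history.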
Ok, So What Now?
For most parents, and really for most internet users, choosing to never access beloved websites and online platforms and services again because of their data collection policies is not realistic. As noted above, even though YouTube Kids faced harsh criticism for the content it permitted and promulgated to children, it still offers a lot of value to millions of users.
But fear not! Fortunately, it is very possible to strike a balance between receiving these services you value and protecting your privacy. A plethora of digital tools allow you to take control over what gets tracked and revealed about you as you spend time online.
Using privacy tools like Virtual Private Networks (VPNs) and anti-tracking technologies on your computers and devices is one way to stop the cookie persona creation process from starting, or at the very least, prevent others from seeing the information cookies and third-party tracking scripts send over unsecured networks. A recent blog post by Cliqz explores the most common ways that tracker operators track you across the web and what you can do to combat these practices.
Ghostery and Cliqz offer a suite of AI-powered, private-by-design products that reliably protect users from being profiled by cookies and trackers. Ghostery’s desktop browser extension, available on most major browsers, and the Ghostery Privacy Browser for iOS and Android allow users to block trackers on individual pages or across all websites, thereby stopping trackers from accessing any personal data. Additionally, Cliqz’s anti-tracking technology, part of the Cliqz Browser and the Ghostery desktop extension, detects third-party trackers that attempt to retrieve PII and replaces this information with random values.
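The general technique, detecting identifier-like values in outgoing requests and swapping them for random ones so the request still works but no longer carries a stable user ID, can be sketched roughly like this. This is a simplified illustration with an invented heuristic, not Cliqz’s actual implementation:

```python
import re
import secrets
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Heuristic: long opaque tokens in query strings are likely unique identifiers.
IDENTIFIER_PATTERN = re.compile(r"^[A-Za-z0-9_-]{16,}$")

def strip_identifiers(url):
    """Replace identifier-like query values with random tokens of the same
    length, so the request succeeds but no stable user ID is transmitted."""
    parts = urlsplit(url)
    cleaned = []
    for key, value in parse_qsl(parts.query):
        if IDENTIFIER_PATTERN.match(value):
            value = secrets.token_urlsafe(len(value))[:len(value)]
        cleaned.append((key, value))
    return urlunsplit(parts._replace(query=urlencode(cleaned)))

# The long "uid" token is randomized; the short "page" value passes through.
tracked = "https://tracker.example/pixel?uid=a1B2c3D4e5F6g7H8i9J0&page=home"
print(strip_identifiers(tracked))
```

Real anti-tracking systems are far more sophisticated (they also inspect cookies, headers, and fingerprinting signals, and use aggregate data to decide what counts as an identifier), but the core idea of substituting random values for unique ones is the same.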
The key takeaway is that most apps and services that seem reputable and offer indispensable value are active participants in the data and tracking game. It is not what you watch or read online that will determine how you are tracked, it is the fact that you do consume online content. Ghostery and Cliqz give users the tools to take back control of their online identities. And for parents, these tools offer one way to protect the most vulnerable internet users from companies looking to take their personal information as “payment” for using these companies’ services and products.