YouTube has been publishing quarterly reports detailing how many videos it removes for policy violations, and its most recent report also includes new data on channel and comment removals. Between July and September, the company took down 7.8 million videos, nearly 1.7 million channels and over 224 million comments, and YouTube noted that machine learning continues to play a major role in that effort.
“We’ve always used a mix of human reviewers and technology to address violative content on our platform, and in 2017 we started applying more advanced machine learning technology to flag content for review by our teams,” the company said. “This combination of smart detection technology and highly-trained human reviewers has enabled us to consistently enforce our policies with increasing speed.”
Of the more than 7.8 million videos that were taken down for violating YouTube’s community guidelines, 81 percent were detected by the company’s automated systems. And the majority of those videos — 74.5 percent — didn’t receive a single view before being detected. Nearly three-quarters of the removed videos were spam, while videos violating child safety and adult content rules each accounted for 10 percent of what was taken down. Only 0.4 percent of removed videos included content that promoted violence or violent extremism.
As for entire channels, they're removed after they've accrued three strikes for violating community guidelines, if they feature severe abuse, or if they're found to be "wholly dedicated" to violating YouTube's guidelines. Nearly 80 percent of the 1.7 million removed channels were taken down for promoting spam, over 12 percent were removed for hosting adult content and 4.5 percent were taken down for violating child safety rules. And because all of a channel's videos are removed when it's terminated, channel terminations accounted for 50.2 million additional video removals last quarter.
The more than 224 million comments taken down by YouTube included those that violated the platform's community guidelines as well as comments that YouTube labeled as "likely spam" and that weren't approved by the creators on whose channels they appeared. YouTube's automated systems caught 99.5 percent of the comments that were removed.
YouTube has had issues in the past with inappropriate and disturbing kids' videos as well as extremist content. To tackle those problems, Google committed last year to boosting its machine learning efforts and adding more people to YouTube's Trusted Flagger program. Meanwhile, the European Union has continued to push companies like Google, Facebook and Twitter to remove extremist content within an hour of it being flagged.