YouTube has announced tougher guidelines on hateful content in videos, strengthening its rules across three categories.
It first promised to take action in March, ensuring that hateful content cannot be monetized. The move was in response to an ad boycott after major brands found their ads embedded in – or appearing alongside – offensive videos …
The strengthened guidelines appear in the Creator Academy. YouTube VP of product management Ariel Bardin told The Verge that the company is now taking a tougher stance on:
Hateful content: Content that promotes discrimination or disparages or humiliates an individual or group of people on the basis of the individual’s or group’s race or ethnic origin, nationality, religion, disability, age, veteran status, sexual orientation, gender identity, or other characteristic associated with systematic discrimination or marginalization.
Inappropriate use of family entertainment characters: Content that depicts family entertainment characters engaged in violent, sexual, vile, or otherwise inappropriate behavior, even if done for comedic or satirical purposes.
Incendiary and demeaning content: Content that is gratuitously incendiary, inflammatory, or demeaning. For example, video content that uses gratuitously disrespectful language that shames or insults an individual or group.
The company is attempting to walk a fine line between freedom of expression and avoidance of offense. Its strategy is to allow most content, but to prevent ads from appearing within or alongside videos likely to be considered offensive by a mainstream audience. Given the subjective nature of the judgements required, YouTube says it is aiming to speed up the appeal process for videos that fall foul of the new rules.
Image: Reuters