YouTube has announced tougher guidelines on hateful content in videos, strengthening its response across three categories.
The company first promised to take action in March, pledging to ensure that hateful content cannot be monetized. The move came in response to an ad boycott after major brands found their ads embedded in – or appearing alongside – offensive videos …
Hateful content: Content that promotes discrimination or disparages or humiliates an individual or group of people on the basis of the individual’s or group’s race, ethnicity, or ethnic origin, nationality, religion, disability, age, veteran status, sexual orientation, gender identity, or other characteristic associated with systematic discrimination or marginalization.
Inappropriate use of family entertainment characters: Content that depicts family entertainment characters engaged in violent, sexual, vile, or otherwise inappropriate behavior, even if done for comedic or satirical purposes.
Incendiary and demeaning content: Content that is gratuitously incendiary, inflammatory, or demeaning. For example, video content that uses gratuitously disrespectful language that shames or insults an individual or group.
The company is attempting to walk a fine line between freedom of expression and avoidance of offense. Its strategy is to allow most content but to prevent ads from appearing within or alongside videos likely to be considered offensive by a mainstream audience. Given the subjective nature of the judgements required, YouTube says it is aiming to speed up the appeals process for videos that fall foul of the new rules.