In a new set of guidelines posted Wednesday, livestreaming company Twitch said that it will start enforcing its policies against “hateful conduct and harassment [for actions] that occur off Twitch services.” That means such conduct “directed at members of the Twitch community [on]… social media, other online services, or even offline” can result in Twitch bans or other consequences on the platform.
The focus of the new off-service conduct policy is on off-Twitch offenses that “pose a substantial safety risk to the Twitch community [and have] the greatest potential to harm our community,” Twitch said. Those include terrorism, threats of violence, membership in hate groups, and sexual offenses, as listed by the policy.
Twitch says it will engage “a highly-regarded third-party investigative partner” to investigate claims of off-Twitch conduct and to act in cases where evidence is available and can be verified, such as links, screenshots, or video that has been “confirmed by our third party investigator as authentic.” If that investigative partner finds a “preponderance of evidence,” the case will be forwarded to a law enforcement response team that will “manage sensitive, confidential investigations and partner with law enforcement.”
The parties involved in any investigation will be notified “where appropriate,” while accusers will be kept confidential, Twitch said. Details of investigations will not be shared publicly.
Punishing users for conduct outside of Twitch isn’t a completely new concept for the platform; in 2018, the company said it would “consider verifiable hateful or harassing conduct that takes place off-Twitch when making moderation decisions for actions that occur on Twitch.” But Wednesday’s announcement seems to represent a refocusing and expansion of that policy—and a commitment to investigate off-Twitch claims of harassment more thoroughly.
“While this policy is new, we have taken action historically against serious, clear misconduct that took place off service, but until now, we didn’t have an approach that scaled,” Twitch writes in its announcement. “These investigations are vastly more complex and can take significant time and resources to resolve… For behaviors that take place off Twitch, we must rely more heavily on law enforcement and other services to share relevant evidence before we can move forward.”
Last summer, a number of high-profile Twitch streamers faced accusations of sexual abuse and misconduct, while other streamers accused Twitch of not taking such reports seriously enough in the past. Those reports led to calls for a one-day “#TwitchBlackout” boycott aimed at getting Twitch executives to “take note of abuse, racism, sexual harassment, assault, and rape,” as one participant put it.
In December, Twitch rolled out a new, stricter set of guidelines on hateful conduct and harassment on the service, which went into effect on January 22. Those policies set what Twitch says is “a much lower tolerance for objectifying or harassing behavior” and added comments on caste, color, and immigration status to a list of previously protected identity-based attributes such as race, ethnicity, and sexual orientation.
In January, Twitch also removed its popular PogChamp emote, saying that the face of the emote, Ryan “Gootecks” Gutierrez, had encouraged “further violence” after the January 6 Capitol riot.
Twitch has an inconsistent record of responding to reports of problematic behavior among some of its partners. In 2019, the company cut ties with Thomas “Elvine” Cheung after he was arrested in a child sex-trafficking sting. But Australian streamer Luke “MrDeadMoth” Munday initially received only a temporary ban from the platform after he was arrested for assault over an attack captured on stream; Twitch made that ban permanent only after community outcry.
Also in 2019, popular streamer Guy “DrDisrespect” Beahm received a two-week suspension from Twitch after being kicked out of the E3 gaming convention for filming a Twitch stream in a public bathroom.