Editor's Note: This story was published before the rally in Charlottesville erupted into violence on Saturday, leaving three dead and dozens injured.
This weekend, alt-righters and white supremacists will descend upon Charlottesville, Virginia, as they have throughout 2017. But they may find themselves without a place to party.
That's thanks to Airbnb, which this week removed users who were using the service to book venues as part of their Unite the Right rally, as Gizmodo first reported.
The company learned from some of its users that Unite the Right attendees were organizing logistics on the neo-Nazi website the Daily Stormer, which brands itself as "The World's Most Genocidal Republican Website" and has a poster for the event on its front page that urges visitors to join the rally "to end Jewish influence in America." Once Airbnb confirmed that some rally-goers had used the platform to book listings for events associated with the anti-Semitic rally, the home-share site decided to boot those users' accounts.
This wasn't just an easy and correct call for Airbnb. It was also an example of how a platform company can actually make judgments about what is and is not acceptable behavior, rather than simply waving away controversies by claiming it offers a mere tool for its users. That's something that many deep-pocketed Silicon Valley firms can't seem to figure out—and an area in which, until recently, Airbnb struggled, too.
In this case, the problem that Airbnb had on its hands was clear-cut.
"We've taken over all of the large AirBnbs in a particular area," wrote a user named SCnazi on a Daily Stormer message board. "So far, we're close to filling our 7th house. We have 80-90 people, and are a mix of various AltRight groups."
SCnazi continued: "We've set up 'Nazi Uber' and the 'Hate Van' to help in moving our people around as needed, esp. between our off-site locations and Charlottesville."
Airbnb said it booted a number of rally attendees because they "would be pursuing behavior on the platform that would be antithetical" to its community policy, which requires that "those who are members of the Airbnb community accept people regardless of their race, religion, national origin, ethnicity, disability, sex, gender identity, sexual orientation, or age."
Not all tech companies have been as swift to act in similar circumstances. Take Facebook, which, following the June terrorist attack in London, allowed a post by Republican Louisiana Rep. Clay Higgins calling for the hunting and murdering of "radicalized" Muslim suspects to remain online.
Yet a status update made in May by DiDi Delgado, a Black Lives Matter activist, that said "all white people" are racist—an extreme argument to some, but not one that called for violence—was removed and her account was suspended for seven days.
And then there's Twitter, where the alt-right has thrived, using the social media site to broadcast outright hate speech and harass people off the platform. Twitter routinely flounders when asked by its users to help protect them. Last year, actress Leslie Jones was driven off the platform by a barrage of racist tweets and Twitter's inaction in stopping them.
Jones' bad experience made headlines because of her celebrity and the involvement of Milo Yiannopoulos. But as almost any Jewish journalist who has ever written critically about Trump or any woman who has ever railed about systemic sexism can attest, harassment on the network is very real—and frequently goes unaddressed by Twitter.
"I just don't understand." — Leslie Jones (@Lesdoggg), July 18, 2016
Airbnb's decision to not allow its community to be an organizing tool for a white supremacist gathering is an example of a tech company taking its commitment to community safety seriously. It's also an example of a company maturing and learning.
In April, Dyne Suh, a woman who booked a cabin in California on Airbnb, was alerted only minutes before arriving that her reservation was canceled after the host sent a text message that read: "I wouldn't rent to u if you were the last person on earth. One word says it all. Asian."
Last year the hashtag #AirbnbWhileBlack proliferated on Twitter as a place for black users to recount their experiences being denied places to stay even when the listings were marked as open. Quirtina Crittenden, who started the hashtag after being routinely rejected by users with open listings, said that when she changed her photo to a generic city skyline and her name to Tina, she had no problem booking a place to stay.
While these incidents reflect racism on the part of Airbnb hosts and not values the company embraces, Airbnb decided that it had a serious role to play in making sure its service wasn't a place that fostered discrimination. In April, the company entered into an agreement with the California Department of Fair Employment and Housing that allows the regulatory agency to perform racial discrimination audits of hosts who have three or more listings on the site, the same kind of audits the state has long conducted of landlords to ensure that fair-housing laws are upheld.
Because of that agreement, the host who canceled Suh's reservation because of her race is now required by the state to pay $5,000 and take a college-level Asian American studies course, as well as agree to comply with the state's fair-housing laws. Airbnb's latest move demonstrates the company is willing to act proactively—as opposed to, say, waiting for one of the partygoers to hang an anti-Semitic banner on the front of the house they booked.
More frequently, platforms only respond to such incidents when someone gets hurt or a news outlet embarrasses the company after a major misstep. Facebook CEO Mark Zuckerberg initially refused to acknowledge that Facebook had a fake-news problem that was causing misinformation to circulate and potentially even influence voters, calling it "a pretty crazy idea" in November after Trump won.
Now his company has "disputed" tags and has teamed up with fact-checking organizations to help combat the spread of misinformation. (Its latest effort, "Related Articles," may end up being the most successful yet at tackling falsified articles on the network.)
Of course, social media websites designed for free-flowing communication are much harder to moderate than a website with a focus as narrow as home-stays. But those sites can still follow Airbnb's lead: When people in your community complain, you need to investigate and take action. That means not being afraid of pissing off thousands of people who use their platforms to share hate speech—which, yes, sometimes bleeds into broader accusations of partisan bias, which social media sites are particularly loath to incur. In an industry where losing users means losing money, booting anyone is a big deal, but it can also send a big message.
Zuckerberg said in May that Facebook was dealing with its content moderation dilemma by hiring 3,000 more people to work on the issue. But adding thousands more staff to deal with the problem is unlikely to change a thing if Facebook refuses to make a clear commitment to protect its users first and foremost.
That means meeting with groups representing people who have been adversely affected by Facebook's current content moderation strategy to get a clear understanding of how the platform isn't working for everyone. But that's not happening. In January, a coalition of 77 civil rights groups wrote a letter to Facebook requesting a meeting to address what the organizers called the "disproportionate censorship of Facebook users of color." Facebook declined.
Twitter, for its part, says it's trying to curb harassment. Last month, the company reported that it's "taking action on 10x the number of abusive accounts every day compared to the same time last year" and now works to "limit account functionality or place suspensions on thousands more abusive accounts each day."
But that's based on internal data, and whether those fixes are truly meaningful depends on how many accounts it sanctioned before. Still, something is better than nothing, even if it has taken years for Twitter to step up.
While Airbnb did the right thing in this case, monitoring situations on a case-by-case basis is unlikely to be very effective. Its platform is run on software that can book reservations much, much faster than a human can erase them. And that means that the company might need to use software to monitor for and flag bigoted interactions on the site. For example, if a black user is repeatedly denied bookings for listings that are otherwise open, technology could flag that.
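The kind of pattern such software would look for can be sketched in a few lines. This is a purely hypothetical illustration—the function names, data fields, and thresholds below are invented for this example and do not reflect any actual Airbnb system, and a real tool would need statistical rigor this toy check lacks:

```python
# Hypothetical sketch: flag a host whose acceptance rate for one guest
# group trails their overall acceptance rate by a wide margin.
# Field names ("guest_group", "accepted") and thresholds are invented.

def acceptance_rate(requests):
    """Fraction of booking requests that were accepted."""
    if not requests:
        return None
    return sum(1 for r in requests if r["accepted"]) / len(requests)

def flag_host(requests, group_key="guest_group", min_requests=10, max_gap=0.5):
    """Return True if any group's acceptance rate falls more than
    max_gap below the host's overall rate, given enough data."""
    overall = acceptance_rate(requests)
    if overall is None or len(requests) < min_requests:
        return False  # too little data to judge
    groups = {}
    for r in requests:
        groups.setdefault(r[group_key], []).append(r)
    return any(
        overall - acceptance_rate(reqs) > max_gap
        for reqs in groups.values()
    )
```

A check like this can only surface a suspicious disparity for human review; it cannot read intent, which is why the reporting and investigation process discussed below still matters.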
There are serious limits to policing hateful activities. It's nearly impossible to read a person's intentions; we also probably wouldn't want Airbnb to wade into political stances that don't involve outright hatred and calls for violence. Which is why having a clear, easy way for people to report problems they are experiencing, as well as a commitment to quickly investigate and act appropriately, is important, too.
And if the racists, sexists, and anti-Semites gathering in Charlottesville don't like it, they can go make their own Airbnb. It's a free country.