Twitter is opening the blue checkmark to everyone. Starting today, the company will let users request a verified account on its website by filling out a form with a verified phone number and email address, a profile photo, and additional information regarding why verification is required or helpful. In defining who will get approved, Twitter still says “an account may be verified if it is determined to be of public interest.” Prior to today, Twitter tended only to verify public figures, brands, and people in media, politics, sports, business, and other high-profile sectors.
It’s unclear why Twitter is opening the process to the public. The company says it has about 187,000 verified accounts, but around 310 million monthly active users. The disparity there, alongside increased pressure to provide anti-harassment tools, means more and more users may only be interacting with those who share their verification status. For instance, Twitter lets verified users filter their notifications to only show replies, mentions, or likes by other verified users. (The new Engage app offers some of those features to regular users as well.)
I submitted the form and was notified about three weeks later that I was verified. Others have gotten quicker approvals and rejections. It is not clear what criteria Twitter is using or why it is limiting certain features to verified users.
According to 10 high-level former employees, the social network’s long history with abuse has been fraught with inaction and organizational disarray. Taken together, these interviews tell the story of a company that’s been ill-equipped to handle harassment since its beginnings. Fenced in by an abiding commitment to free speech above all else and a unique product that makes moderation difficult and trolling almost effortless, Twitter has, over a chaotic first decade marked by shifting business priorities and institutional confusion, allowed abuse and harassment to continue to grow as a chronic problem and perpetual secondary internal priority. On Twitter, abuse is not just a bug, but — to use the Silicon Valley term of art — a fundamental feature.
Talk to enough Twitter insiders and one thing becomes painfully evident: The company’s understanding of its platform hasn’t always been clear to employees, even at senior levels — a problem that has made it difficult to understand how to police harassment. One source recalls that, when asked, Jack Dorsey refused to answer exactly what kind of tool Twitter was. “He said, ‘Twitter brings you closer,’” the former employee recalled. “And I said, ‘To what?’ and he replied, ‘Our users always finish that sentence for us.’ And to me, I thought, Well, it’s going to be really difficult to set policy in place if we can’t define what this thing is.”
Internally, employees have long raised questions about whether Twitter was a media company — a broadcast platform that should be governed by content standards and practices similar to a television network — or a piece of the internet’s infrastructure, like an ISP, that should remain open and free.
They ended up with a system that works kind of like the App Store: unclear rules, inconsistently applied, with certain people getting favorable treatment, and the sense that the platform isn’t providing the tools its users want.
Sources inside the company in the years after Twitter’s IPO also said that product decisions were often scrapped or never advanced out of initial tests if they were thought to inhibit user growth.
Update (2016-08-12): Lee Bennett:
Curious you got verified. They rejected @floridaconf that I control, official account for Adventist denomination in Florida.
Update (2016-08-13): Charlie Warzel:
According to a former senior Twitter employee, Costolo ordered employees to deploy an algorithm (which was built in-house by feeding it thousands of examples of abusive and harassing tweets) that would filter out abusive language directed at Obama. Another source said the media partnerships team also manually censored tweets, noting that Twitter’s public quality-filtering algorithms were inconsistent. Two sources told BuzzFeed News that this decision was kept from senior company employees for fear they would object.
Update (2016-08-18): Twitter:
Last year we began testing a quality filter setting and we’re now rolling it out to everyone. When turned on, the filter can improve the quality of Tweets you see by using a variety of signals, such as account origin and behavior. Turning it on filters lower-quality content, like duplicate Tweets or content that appears to be automated, from your notifications and other parts of your Twitter experience. It does not filter content from people you follow or accounts you’ve recently interacted with — and depending on your preferences, you can turn it on or off in your notifications settings.
While our work is not done, today we are announcing that we have suspended an additional 235,000 accounts for violating our policies related to promotion of terrorism in the six months since our February 2016 post. This brings our overall number of suspensions to 360,000 since the middle of 2015. As noted by numerous third parties, our efforts continue to drive meaningful results, including a significant shift in this type of activity off of Twitter.