This Is How Twitter’s Edit Button Can Actually Work

In June 2021, Twitter told the world “you don’t need an edit button, you just need to forgive yourself.” Twitter founder Jack Dorsey even held firm against Kim Kardashian’s pleas when she cornered him at Kanye West’s birthday party in 2018. For years, the platform has held out against editing tweets. Until now. An edit button is imminent—but tricky questions remain about how it can be implemented without causing chaos.

“Everybody thinks it’s very easy to just put in an edit button,” says Christina Wodtke, a lecturer in computer science at Stanford University. Wodtke, who has worked on product design projects at LinkedIn, MySpace, Zynga, and Yahoo, argues that such a seemingly simple change will require a lot of careful thought. She puts forward a hypothetical situation: Donald Trump—whose return to Twitter some fear may be more likely since Elon Musk’s ascension to Twitter’s board—tweets something shocking or offensive. He subsequently edits his post to blunt its rough edges. But people have already responded to the content of the initial tweet, making their reactions nonsensical.

One obvious solution is a Slack- or Facebook-style change log, where people can view the history of edits to a post. Facebook has let people edit posts since June 2012—but the feature has been regularly abused by scammers since its rollout. Alex Stamos, a former chief security officer at Facebook and now an adjunct professor at Stanford, has noted that Facebook’s post-editing tools helped a cryptocurrency scam page appear legitimate and con users. Editing pages is a core feature of Wikipedia, but that leads to “edit wars” in which individuals argue over the wording of an entry, including an 11-year battle over the origins of the Caesar salad. Similar third-party tools exist for Twitter bios, such as Spoonbill, which can track how a person’s profile has been amended over time.
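At its core, a change log like the ones described above is just an append-only version history attached to each post. The sketch below is a hypothetical illustration of that idea (the `Tweet` class and its methods are invented for this example), not Twitter’s or Facebook’s actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Tweet:
    """A post whose edits are kept as an append-only history."""
    body: str
    history: list = field(default_factory=list)

    def edit(self, new_body: str) -> None:
        # Preserve the old version before overwriting, so readers
        # (and existing reply threads) can still see what was
        # originally said.
        self.history.append((datetime.now(timezone.utc), self.body))
        self.body = new_body

    def versions(self) -> list:
        """Every version of the text, oldest first, ending with the current one."""
        return [text for _, text in self.history] + [self.body]


tweet = Tweet("Teh edit button is coming")
tweet.edit("The edit button is coming")
print(tweet.versions())
# ['Teh edit button is coming', 'The edit button is coming']
```

Because old versions are never deleted, a reply made before an edit can still be shown next to the text it was actually responding to—which is exactly the problem Wodtke’s Trump hypothetical raises.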

Yet such tracking comes with its own problems, says Wodtke. For one, the user who edited the post probably doesn’t want the original text to be accessible. “There’s all this complexity going on around how all the players in the system are going to react to this change,” she says. “You have to think through all these norms you’re now violating and changing.” Simply put, you need to design any new feature of this nature with the worst-case scenario in mind. Even if the majority of people use an edit button to nix typos, if a small minority use it for nefarious purposes it could cause chaos. “A prominent fear is that it just leads to more confusion and exhaustion on Twitter,” she says.

In an attempt to work through the problem, Twitter will start by testing an edit feature among users of Twitter Blue, its paid subscription service, in the coming months. Editing tweets has been Twitter’s most requested feature “for many years,” claimed Jay Sullivan, the platform’s head of consumer product. Twitter has also said that development of the feature has been underway since 2021—debunking any claims that a poll by Musk, asking users whether they wanted an edit button, was behind the decision.

The edit button announcement was welcomed by many—but raised concerns among others. Sullivan admits that ensuring the edit feature is used honestly may require “time limits, controls, and transparency about what has been edited.” So how do you code for honesty? Simply put, the way Twitter designs, tests, and implements the edit feature will determine its success—and could make or break the platform. “Are there risks?” asks Christopher Bouzy, founder of Bot Sentinel, a service that tracks inauthentic behavior on Twitter. “Absolutely. It could change the context of a tweet.” Disinformation and misinformation—the former deliberately sharing incorrect information, the latter accidentally doing so—are not exactly in short supply on Twitter, and the platform’s viral dynamics mean that some posters are loath to amend incorrect information. One 2018 academic study found that false news travels roughly six times faster than the truth on Twitter, in large part because falsehoods are 70 percent more likely to be retweeted than accurate posts.
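Of the safeguards Sullivan mentions, a time limit is the simplest to reason about: edits are allowed only within a fixed window after posting, after which a tweet is frozen. The window length and function below are assumptions for illustration—Twitter has not specified how its limit would work:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical window; Twitter has not announced an actual value.
EDIT_WINDOW = timedelta(minutes=30)


def can_edit(posted_at: datetime, now: Optional[datetime] = None) -> bool:
    """Allow edits only within a fixed window after the tweet was posted."""
    now = now or datetime.now(timezone.utc)
    return now - posted_at <= EDIT_WINDOW


posted = datetime(2022, 4, 5, 12, 0, tzinfo=timezone.utc)
print(can_edit(posted, posted + timedelta(minutes=10)))  # True
print(can_edit(posted, posted + timedelta(hours=2)))     # False
```

A short window limits the scenario Bouzy describes—a tweet going viral and then being edited into something else entirely—because by the time a post has spread widely, it can no longer be changed.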

www.wired.com
