Twitter users may be given the ability to flag tweets containing misleading, false, or harmful information, often referred to as fake news, according to this report. Twitter has not provided a release date for the prototype feature, nor has it confirmed that such a feature will ever ship. However, the company is reportedly exploring the tool as a way to counter rampant abuse on its platform.
Twitter has been facing several interconnected problems with the content posted on its platform. One is the prevalence of fake accounts, which are cheap and easy to purchase in bulk and are used to spread automated messages and false stories. Companies can use these accounts to advertise their products for free and to game the system, boosting the visibility of their own posts, or posts that favor them, at the expense of competitors. Extremists are also able to use Twitter as a recruiting tool, and behind the safety of the screen, trolls spread misinformation or hateful comments threatening women and minorities, either to push a point or simply for the sake of it.
While these have been long-standing problems for Twitter, recent events, specifically the American presidential election and its aftermath, have further exacerbated them. Given the extra layer of concealment that fake accounts provide on top of the anonymity the platform already offers, and the controversy that trolls were able to stir up, much of the content posted seriously poisoned public debate. One study found that two-thirds of American adults believe fabricated news stories on social media have caused “a great deal of confusion,” suggesting both the prevalence of misinformation and the effect it has had.
While little information has been given, the tool could work similarly to Facebook’s anti-spam tool, which allows users to flag content and dispute its authenticity. Should a post accumulate enough dispute reports, it is sent to independent fact-checkers to determine whether it is truthful and supported by evidence, or merely fabricated. The intent is that, regardless of its purpose, any misinformation would be subject to review, which in itself should reduce the spread of fake news; it would also help identify accounts that are notorious for posting such news. This would let Twitter tackle two problems at once, restricting the spread of false or harmful information while also cracking down on fake accounts.
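The report gives no implementation details, but the flow described above can be sketched in a few lines. Everything here, including the threshold value, the class names, and the idea of counting how often an author's posts reach review, is an illustrative assumption, not Twitter's actual design:

```python
# Hypothetical sketch of the dispute-threshold flow described above.
# The threshold, names, and data structures are invented for illustration.

DISPUTE_THRESHOLD = 25  # assumed number of reports before escalation


class Post:
    def __init__(self, post_id: str, author: str):
        self.post_id = post_id
        self.author = author
        self.dispute_count = 0
        self.under_review = False


review_queue = []  # posts awaiting independent fact-checkers
flag_counts = {}   # author -> number of posts escalated to review


def flag_post(post: Post) -> None:
    """Record one user dispute; escalate once the threshold is reached."""
    post.dispute_count += 1
    if post.dispute_count >= DISPUTE_THRESHOLD and not post.under_review:
        post.under_review = True
        review_queue.append(post)
        # Tracking authors whose posts repeatedly reach review could help
        # surface the accounts notorious for posting fake news.
        flag_counts[post.author] = flag_counts.get(post.author, 0) + 1
```

Note that escalation fires only once per post, which also means a single post cannot inflate an author's tally no matter how many further reports it receives.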
One concern with such a tool is that it opens up the possibility of abuse, something Twitter has a history of facing. The feature would be open to all users, which means spammers and fake accounts could also lodge disputes against legitimate content, and depending on the criteria for review, abusers could overwhelm reviewers with a backlog of reports, rendering the tool ineffective. It will be difficult to create a tool that seeks to reduce manipulation without being subject to the very same manipulation it is trying to reduce.
Considering that Twitter has more than 300 million monthly users, managing and restricting abuse at that scale is a difficult task. There is also the issue of policing content and censoring what is often a fine line between abuse and free speech. Drawing that line is a challenge because whether information is harmful is subjective: what one person considers abuse, another may consider free speech. Due to this ambiguity, tech companies have often refrained from making a final judgement, as they do not want to be in the business of policing their users’ free expression. Whether they want it or not, though, the role of arbiter has become a responsibility social media companies are expected to fulfill, given how pervasive social media has become in our daily lives.
Twitter is also reportedly exploring machine learning software that can detect micro-signals from accounts to determine whether they are fake. If either of these features is built and survives internal testing, we may see a combination of manual and A.I. functions working together to reduce the spread of misleading, false, or harmful information.
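How manual reports and a machine-learned score might be combined is pure speculation, but one simple approach is to escalate when either signal is strong on its own or when both are moderately strong. The function, weights, and thresholds below are invented for illustration only:

```python
# Illustrative sketch of combining manual dispute reports with a
# machine-learned "fake account" score, as the article speculates.
# All thresholds here are hypothetical.

def escalate_for_review(dispute_reports: int, model_fake_score: float) -> bool:
    """Escalate when either signal is strong, or both are moderate.

    dispute_reports: number of user-submitted flags on the content.
    model_fake_score: model confidence (0.0-1.0) that the account is fake.
    """
    if model_fake_score > 0.9:   # the model alone is highly confident
        return True
    if dispute_reports >= 50:    # many human reports alone suffice
        return True
    # Moderate evidence from both sources combined
    return dispute_reports >= 10 and model_fake_score > 0.5
```

A blend like this could also blunt the manipulation problem raised earlier: a flood of bad-faith reports alone would not trigger review unless the model independently found the account suspicious.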