One of Elon Musk's first actions as the new owner of Twitter was to announce a change to Twitter's 'verified status'. Previously, the blue tick was restricted to verified real people (and usually to those with some celebrity status); under the new policy, any user would be able to get one for just US$8 per month (or the equivalent in other countries). That change comes with immediate problems, as outlined in this article in The Conversation by Timothy Graham (Queensland University of Technology):
...Musk’s US$8 blue tick proposal is not only misguided but, ironically, likely to produce even more inauthenticity and harm on the platform.
A fatal flaw stems from the fact that “payment verification” is not, in fact, verification...
Although Twitter’s verification system is by no means perfect and is far from transparent, it did at least aspire to the kinds of verification practices journalists and researchers use to distinguish fact from fiction, and authenticity from fraud. It takes time and effort. You can’t just buy it.
Despite its flaws, the verification process largely succeeded in rooting out a sizable chunk of illegitimate activity on the platform, and highlighted notable accounts in the public interest. In contrast, Musk’s payment verification only verifies that a person has US$8.
Payment verification can’t guarantee the system won’t be exploited for social harm. For example, we already saw that conspiracy theory influencers such as “QAnon John” are at risk of becoming legitimised through the purchase of a blue tick.
Allow me to put an economics lens on the problems here. They relate to asymmetric information, adverse selection, and signalling.
First, there is asymmetric information on Twitter. Each Twitter user knows whether they themselves are authentic, and not a bot, a troll, or a scammer. However, no user knows which other users are bots, trolls, or scammers. Whether any Twitter user is a bot, troll, or scammer is private information (known only to the user themselves, and not to others - that's why it is called asymmetric information). That leads to a problem of adverse selection. Knowing that there are lots of bots, trolls, and scammers, but unable to tell for sure whether any other account is authentic, a Twitter user's best (risk-averse) option for avoiding being trolled or scammed is to assume that every other account is a bot, a troll, or a scammer. This is what we refer to as a pooling equilibrium (because all other users are pooled together in the Twitter user's mind, as if they are all the same, and low quality). Since Twitter users don't want to engage with bots, trolls, or scammers, and they are assuming that every other account is one, there is little point in being on Twitter. Authentic Twitter users start to drop off the platform, and eventually the only 'users' left are bots, trolls, and scammers. This is what we call an adverse selection problem - each Twitter user wants to engage with other authentic users, but all they find are bots, trolls, and scammers.
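That unravelling dynamic can be sketched with a toy simulation (all numbers here are invented purely for illustration, not drawn from any real Twitter data): authentic users vary in how many bots they will tolerate, bots never leave, and each authentic departure worsens the mix for everyone who remains.

```python
def authentic_remaining(authentic0=900, bots=100, min_tol=0.8, max_tol=1.0, rounds=6):
    """Toy adverse-selection spiral. Each of the authentic0 authentic users
    has a tolerance threshold, spread uniformly on [min_tol, max_tol]: they
    stay only while the share of authentic accounts on the platform meets
    their threshold. Bots, trolls, and scammers never leave."""
    authentic = authentic0
    history = [authentic]
    for _ in range(rounds):
        share = authentic / (authentic + bots)
        # fraction of the original authentic users whose threshold is still met
        staying = (share - min_tol) / (max_tol - min_tol)
        authentic = round(authentic0 * max(0.0, min(1.0, staying)))
        history.append(authentic)
    return history

print(authentic_remaining())  # the authentic population unravels towards zero
```

With these (made-up) parameters, each round of departures lowers the authentic share, which drives out the next tranche of users, until no authentic users remain - the adverse selection spiral in miniature.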
Of course, Twitter hasn't collapsed as a platform, so it must have found a way to deal with this adverse selection problem. One way is through the blue tick (verified user) status, granted only to authentic users. The blue tick is a signal to other users that the user with the tick is authentic. In order for a signal to be effective, though, it needs to meet two conditions. First, a signal must be costly. The blue tick was previously difficult to obtain, as users had to go through an authentication process (including verifying their identity). So, while there was no monetary cost, there was a cost in terms of time and effort. Second, a signal must be costly in such a way that those with low-quality attributes would not attempt it. Since the authentication process required identity verification, it was a process that bots, trolls, and scammers would be unlikely to attempt. Twitter's blue tick therefore seems to meet the conditions of being an effective signal that a user is authentic (despite some counter-examples), and Twitter users could be fairly sure that they were interacting with authentic users, if those users had the blue tick. This is a separating equilibrium (because Twitter users are able to separate the authentic accounts that they want to interact with from the bots, trolls, and scammers that they don't want to interact with).
That is all about to change. As Graham's article in The Conversation noted, under the new regime all that it will take for a user to obtain Twitter's blue tick is the payment of US$8 per month. While that meets the first condition of an effective signal (costly), it fails on the second condition, because almost any bot, troll, or scammer with US$8 per month would be willing to pay for the tick. The blue tick will cease to be a signal of an authentic account.
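The difference between the two regimes on those two signalling conditions can be shown with a toy payoff comparison (the dollar values here are hypothetical, chosen only to make the point): each user type acquires the tick only if the benefit of appearing verified exceeds their cost of obtaining it.

```python
def acquires_tick(user_type, regime, value=20.0):
    """Return True if this user type would acquire the blue tick.
    `value` is an assumed benefit of appearing verified (hypothetical)."""
    if regime == "identity_check":
        # Old regime: a time-and-effort cost for authentic users; bots,
        # trolls, and scammers cannot pass identity verification at any price.
        cost = 10.0 if user_type == "authentic" else float("inf")
    elif regime == "payment":
        # New regime: a flat US$8 per month for anyone.
        cost = 8.0
    else:
        raise ValueError(regime)
    return value > cost

for regime in ("identity_check", "payment"):
    ticks = {t: acquires_tick(t, regime) for t in ("authentic", "scammer")}
    outcome = "separating" if ticks["authentic"] and not ticks["scammer"] else "pooling"
    print(regime, "->", outcome)
```

Under the old identity-check regime only authentic users end up with the tick (a separating equilibrium); under payment verification, every type with US$8 acquires it, and the tick carries no information (pooling).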
Is that the end of Twitter, though? Signalling is only one way to overcome the adverse selection problem. The alternative is screening - where Twitter users themselves try to reveal whether another account is authentic or not. That requires a bit of detective work on the part of each Twitter user, and it is going to be far from perfect. Perhaps each Twitter user is best off only interacting with people that they know personally, or people they have heard of and can be fairly sure are not fake accounts. Avoiding interaction with new accounts that have few followers and tweet mostly junk has always been a good strategy, but it will become even more important once the blue tick loses its value as a signal.
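A screening rule of the kind suggested above could be sketched as a crude filter (the field names and thresholds are entirely hypothetical, just to make the heuristic concrete):

```python
def looks_authentic(account):
    """Crude screening heuristic: treat an account as plausibly authentic
    only if it is not brand new, has more than a handful of followers,
    and most of its tweets are not junk. `account` is assumed to be a
    dict with keys 'age_days', 'followers', and 'junk_ratio'."""
    return (account["age_days"] > 90
            and account["followers"] > 50
            and account["junk_ratio"] < 0.5)

print(looks_authentic({"age_days": 365, "followers": 1200, "junk_ratio": 0.1}))
print(looks_authentic({"age_days": 3, "followers": 4, "junk_ratio": 0.9}))
```

The point of the sketch is the weakness of screening: every threshold is a guess, and a determined scammer can age an account and buy followers, which is why screening is a poor substitute for a credible signal.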
Twitter probably won't die as a result of the changes to the blue tick. But it's certainly not going to be as user-friendly as before.
I love this article, Michael! Many forget the economic twists that every decision can have. Will be sharing this one around!