When there is some social or economic change, Tyler Cowen (of Marginal Revolution fame) likes to look at who will gain in status, and who will lose in status, as a result. I've been thinking a bit about this in relation to ChatGPT. Who really benefits, and who really loses? A lot of others have obviously been thinking about this as well, especially in relation to the labour market (as in my previous post). However, I want to discuss it in relation to a particular context - that of signalling.
First, we need to understand what a signal is, and why we provide signals. Signalling is a way of overcoming adverse selection, a problem that arises from asymmetric information. Think about each person's quality, as measured by some attribute (intelligence, perhaps). Each person knows how intelligent they are, but the rest of us don't. This is asymmetric information. Since we can't tell who is intelligent and who is not, it makes sense for us to assume that everyone has low intelligence, and to treat them accordingly. This is what economists call a pooling equilibrium, and pooling equilibria create problems: intelligent people get treated as if they have low intelligence, and have no easy way of proving otherwise. That won't be good for anyone.
How can someone reveal that they are intelligent? They could just tell us, "Hey, I'm smart". But, anyone can do that. Telling people you are intelligent is not an effective signal. To be effective, a signal needs to meet two conditions:
- It must be costly; and
- It must be costly in such a way that those with low quality attributes (in this case, those who are less intelligent) would not be willing to attempt the signal.
An effective signal provides a way for the uninformed party to sort people into those who have high quality attributes, and those who have low quality attributes. It creates what economists call a separating equilibrium.
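The two conditions above can be sketched with some illustrative numbers (the payoffs and costs here are my own assumptions, purely for illustration):

```python
# A minimal sketch of a Spence-style signalling model. The payoff and cost
# numbers below are illustrative assumptions, not estimates from anywhere.

def is_effective_signal(payoff_high, payoff_low, cost_high, cost_low):
    """A signal separates types when the high-quality type finds it
    worth sending, but the low-quality type does not."""
    high_signals = payoff_high - cost_high > payoff_low  # condition 1: costly, but worth it for high types
    low_abstains = payoff_high - cost_low < payoff_low   # condition 2: too costly for low types to fake
    return high_signals and low_abstains

# Payoff of being recognised as high quality vs. being pooled with everyone else
payoff_high, payoff_low = 100, 60

# When the signal is much cheaper for high types, it separates the two groups
print(is_effective_signal(payoff_high, payoff_low, cost_high=20, cost_low=50))  # True

# When the cost gap is too small, low types imitate and we are back to pooling
print(is_effective_signal(payoff_high, payoff_low, cost_high=20, cost_low=30))  # False
```

The key is the gap in costs between the two types: it is not enough for the signal to be costly, it must be costly in a way that falls more heavily on low-quality types.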
Ok, now let's come back to the context of ChatGPT. There are a lot of contexts in which writing well provides a signal of high quality for a variety of different attributes. Writing well is costly - it takes time and effort. Writing well is costly in a way that people with low quality attributes would not attempt, because they would be easily found out, or because it would take them a lot more time and effort to write well than people with high quality attributes. Now, because ChatGPT is available (along with Bing Chat and other LLMs), the cost of writing well has fallen for everyone, including people with low quality attributes. That reduces (and may eliminate) the signalling value of writing well, because good writing no longer separates the two groups. That will lower the status of anyone who needs to write well in order to signal quality.
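We can model ChatGPT as a shock that drives everyone's cost of writing well down to roughly the same low level (again, the numbers are illustrative assumptions):

```python
# Sketch: ChatGPT as a cost shock to the signalling model.
# Payoffs and costs are illustrative assumptions, not data.

def separates(payoff_high, payoff_low, cost_high, cost_low):
    # Separating equilibrium: high types signal, low types do not
    return (payoff_high - cost_high > payoff_low) and \
           (payoff_high - cost_low < payoff_low)

payoff_high, payoff_low = 100, 60

# Before ChatGPT: writing well is cheap for high types, expensive for low types
print(separates(payoff_high, payoff_low, cost_high=20, cost_low=50))  # True

# After ChatGPT: everyone can produce good writing at the same low cost,
# so the cost gap that made the signal work disappears
chatgpt_cost = 5
print(separates(payoff_high, payoff_low, chatgpt_cost, chatgpt_cost))  # False
```

With equal costs, the low type's incentive to imitate is exactly as strong as the high type's incentive to signal, so good writing can no longer support a separating equilibrium.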
Now, let's consider some examples. Lecturers use essays to sort students into a grade distribution. Writing well is a signal of a student's quality, in terms of how well they have met the learning objectives for the paper. Students who write well get higher grades as a result. ChatGPT reduces the signalling value of writing well, meaning that an essay can no longer create a separating equilibrium for students. This is why I have argued that the traditional essay is now dead as an assessment tool. Smart students are likely to lose status as a result.
This can be extended to academic writing more generally. Academics get published in part as a result of the quality of their writing. Writing well is a signal of an academic's quality, in terms of the quality of their research. Academics who write well are more likely to get published. ChatGPT reduces the signalling value of writing well, meaning that good academic writing cannot be taken as a signal of the quality of the research. Good academics may lose status as a result.
There are lots of similar contexts, where the explanations are similar to those for students and academics. Think about journalists, authors, poets, law clerks, government policy analysts, or management consultants. Anyone who has ever read a policy document or a management consulting report will realise that the sort of meaningless banality you see in those documents and reports can easily be replaced by ChatGPT. The likes of McKinsey should be freaking out right now. ChatGPT is coming for their jobs. Good journalists, authors, poets, law clerks, government policy analysts, and management consultants will likely lose status.
There is one more context I want to highlight, which is a particular favourite of mine when teaching signalling to first-year students: online dating. It is difficult for a person to signal their quality as a potential date on a dating app. Anyone can write a good profile, and use a stock photo. However, one of the few signals that might be effective is the conversation on the app before a first date. A 'good' date should be able to set themselves apart from a 'not-so-good' date, by the things they say during a conversation. However, with ChatGPT in the picture, the signalling value of what people write in dating app conversations is reduced (in contrast to the assertions in this article in The Conversation). I wonder how long it will be before we end up in a situation where one instance of ChatGPT is talking to another instance of ChatGPT, because both dating app users are using ChatGPT at the same time (it has probably happened already). Anyway, good quality dates will lose status as well.
So, who actually gains status from the arrival of ChatGPT? That depends on what we do to replace the signals that ChatGPT has rendered useless. Perhaps we replace good writing as a signal with good in-person interactions. So, if lecturers use more oral assessments in place of essays, then smart students who are good talkers (as opposed to good writers) will gain status. Academics who are good presenters at conferences or in TED Talks or similar formats will gain status. Podcasters (especially live podcasters, and other live performers) may gain status. The management consultants who present to clients may gain status (as opposed to those who do the writing). And so on.
What about online dating? It would be tempting to say that in-person meet-ups become more important as a screening tool, but one tweet suggests that might not be effective either. If, as demonstrated in that tweet, anyone can have ChatGPT projected onto a pair of glasses in real time and then read from a prompt, then even the people who I suggested in the previous paragraph would gain in status might not do so.
Or perhaps the quality of the underlying ideas becomes more important than simply good writing. The quality of the thinking still provides a good signal (at least, until ChatGPT becomes a lot more intelligent). That would help the top students, academics, journalists, authors, and poets to set themselves apart. However, it is much more difficult for the non-expert to judge the quality of the underlying ideas. It would be tempting to think that this raises the status of peer reviewers and critics. However, they can't easily signal their quality and are at high risk of losing status to ChatGPT. And if the expert judges can't be separated from the inexpert judges, then the quality of ideas can't be a good signal for non-experts to use. This is looking bleak.
Maybe lots of signals are about to be rendered ineffective? I feel like we should be more worried about this than anyone appears to be.