If you read enough empirical research papers, you will eventually notice that the names of countries are sometimes included in paper titles. The practice is far from universal, but it provides an immediate signal of the context of the research. I'm not generally in the habit of putting country names in my research paper titles, although I have had journal editors request it. If you have read many empirical papers, you may also have noticed one striking aspect of the practice: the US appears to be almost entirely immune to it. It is rare to see the US named in a paper title.
Now, a new research paper by Andrés Castro Torres (Max Planck Institute for Demographic Research) and Diego Alburez-Gutierrez (Universitat Autònoma de Barcelona) takes a closer look at country names in paper titles. Specifically, they collected over 1.2 million publication records from Scopus between 1995 and 2020, of which over 560,000 mentioned at least one country in the title or abstract. Castro Torres and Alburez-Gutierrez focus on this narrower set of publications, and classify a paper as 'localised' if it includes the name of the country in its title. They then look at localisation rates (the proportion of publications focusing on a country that include that country's name in the title), and find that:
...although articles about Europe and North America dominate our sample, they have the lowest localization rates. The vast majority of the research articles we study focuses on countries in the global North - more than 60% of the total articles mention a European or North American country in their abstract... but the localization rate of these articles is the lowest, hovering around 42% for papers published between 1996 and 2020... This percentage contrasts with the localization rates in other regions, particularly in Eastern and South-Eastern Asia and Sub-Saharan Africa. The ratio of the localization rates between these regions and Europe and North America ranges from 1.5 to 1.8 throughout the period of analysis.
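To make the localisation-rate calculation concrete, here is a minimal sketch of how such a rate might be computed from publication records. This is not the authors' actual code: the record fields and the toy data are hypothetical, and real matching of country mentions (name variants, adjectives like 'Kenyan', and so on) would be considerably more involved.

```python
# Minimal sketch of a localisation rate: among papers that focus on a country
# (its name appears in the title or abstract), the share that also name it in
# the title. Record fields and data below are hypothetical.

def localisation_rate(records, country):
    focused = [r for r in records
               if country in r["title"] or country in r["abstract"]]
    if not focused:
        return None
    localised = [r for r in focused if country in r["title"]]
    return len(localised) / len(focused)

# Toy example with made-up records
records = [
    {"title": "Fertility decline in Kenya, 1990-2010",
     "abstract": "We study fertility decline in Kenya..."},
    {"title": "Intergenerational educational mobility",
     "abstract": "Using survey data from Kenya, we estimate..."},
]

print(localisation_rate(records, "Kenya"))  # 0.5
```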
So, localisation is far more likely for research outside of the global North (which includes Europe and North America, as well as Australia and New Zealand). Moreover, there is no clear trend in localisation - it is roughly stable over the entire time period that Castro Torres and Alburez-Gutierrez examined:
We find no clear temporal trend. Even though localization rates decreased over time, they did so very slowly (e.g., the coefficient for the last period indicates that, compared to articles published between 2005 and 2010, those published in the last five years are exp(-0.06) = 0.94 times as likely to be localized).
Looking at individual countries, Castro Torres and Alburez-Gutierrez confirm my suspicions, finding that:
...compared to the US, the other top 10 countries of study display higher localization rates. The lowest coefficient among these countries pertains to the UK (0.54), implying that papers about the UK are 1.72 times more likely to be localized than papers about the US. This is a very significant gap, and it suggests that, even at the top of numerical dominance, the lack of localization is not explained by the use of the English language alone. By contrast, the largest coefficient pertains to articles about China. This coefficient implies a relative risk of 3.67, meaning that, while two-thirds of US-focused papers are not localized, more than two-thirds of China-focused articles are.
In other words, papers about China are nearly four times as likely to mention China in the title as papers about the US are to mention the US in the title. And the comparison is qualitatively the same for every other country. I was surprised, though, that as many as one-third of papers about the US mention the US in the title.
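As an aside on the arithmetic: the coefficients quoted above are on a log scale, so exponentiating them gives the relative likelihood of localisation. Here is a quick check of the figures (the China coefficient itself isn't quoted, so I back it out from the reported relative risk of 3.67):

```python
import math

# The paper's coefficients are on a log scale, so exponentiating them
# gives the relative likelihood (rate ratio) of localisation.
print(math.exp(-0.06))  # ~0.94: papers from the last five-year period vs 2005-2010
print(math.exp(0.54))   # ~1.72: UK-focused papers relative to US-focused papers
print(math.log(3.67))   # ~1.30: the coefficient implied by China's relative risk of 3.67
```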
Finally, Castro Torres and Alburez-Gutierrez show that the geographical results hold across 27 subfields of the social sciences, although interestingly:
The localization rate ranges from 0.33 among papers classified as “Applied Psychology” (n = 3,422) to 0.66 among papers classified as “Development” (n = 69,724). Articles on “Political Science and International Relations” (n = 52,961) and “Demography” (n = 21,328) display localization rates very similar to “Development,” namely 0.64, whereas articles on “Psychology (miscellaneous)” (n = 247), “Experimental and Cognitive Psychology” (n = 1,416), and “General Psychology” (n = 3,771) display localization rates below 0.4...
Does any of this matter, though? Castro Torres and Alburez-Gutierrez raise the problem of Eurocentrism:
...Eurocentrism, understood as a worldview that considers Western thought as culturally and intellectually superior, continues to shape the global production of social sciences, including its questions, methods, and approaches...
One problematic aspect of this perspective is that it glosses over the historical contingencies and structural violence that produced and sustain Western hegemony, including the imposition of metrics that makes the West the “default case” and the search for universal, timeless, and context- and value-free knowledge in science. This might result in societal processes observed in countries of the global North, such as market-based economic growth, rising human development, liberal democracy, market/trade integration, and globalization, being considered the “default” cases towards which other nations and societies ought to converge in the mid- or long term...
As I mentioned earlier in the post, having the country name in the title of the paper provides context for the results. It is fair to question the external validity of research beyond its particular cultural context, but those questions are sidelined to some extent when the context is hidden from view. This wouldn't be a problem if researchers and policy-makers carefully read the research, noted the cultural context in which it was conducted, and considered whether it is reasonable to assume that the results apply more widely. However, many do not, and a fair amount of research is cited without having been read (e.g. see here). That leads to a situation where, as Castro Torres and Alburez-Gutierrez conclude, failing to note the country context in the paper title:
...can be misleading, if not outright harmful, in particular when evidence-based policy is involved.
That might be overstating the case a little, but it is certainly something about which we should be more aware.
[HT: David McKenzie, on the Development Impact Blog]