The nature of the proposed measure and its relationship to the objective become crucial in understanding both how a policy will evolve and what the nature of the outcome will be
The human resource development ministry declared in June that the Academic Performance Indicator (API)-based assessment system for college teachers is being removed. The background to this decision is that ever since the Merit Promotion Scheme for college teachers was introduced in the mid-1980s, concerns were raised that people were being promoted without regard to “academic quality”. Thus, with a view to addressing these concerns, the UGC continuously tinkered with the requirements for promotion. Each innovation was introduced and then discarded on grounds of excessive “subjectivity”. Approximately a decade ago, a more objective points-based system was devised, which classified academic work into various “quantifiable categories”, each with its own point scale. Research was sought to be measured by the nature and quality of the publication. The evaluation scheme faced criticism for a variety of reasons which, though they do not concern us here, led to its recent abandonment. What is less appreciated in this debate is that a parallel recent concern of the UGC, on predatory journals, has its roots in the same policy. The UGC, concerned with the rapid rise of predatory journals, has now started a system of listing approved journals. These lists have been criticised both for including the very journals they wish to exclude and for excluding some of the most eminent journals in the field!
This sequence of developments is a classic demonstration of the unintended but wholly predictable consequences of developing a poor indirect measure of an attribute in an interactive and strategic world. In such a world, the logical consequence of a measurement protocol is that agents will act to improve their standing. This is as true of academics trying to publish more as it is of governments seeking to improve the ease of doing business, alleviate poverty, reduce inequality, empower women or pursue any other objective that becomes politically important.
Thus, the nature of the proposed measure and its relationship to the objective become crucial in understanding both how a policy will evolve and what the nature of the outcome will be. To illustrate the challenge, consider target 5.b from the Agenda 2030 of the United Nations, which seeks to “enhance the use of enabling technology, in particular information and communications technology, to promote the empowerment of women”. The expert group constituted by the United Nations Statistical Commission to develop indicators for the Sustainable Development Goals (SDGs) proposed that progress on this target be measured by looking at the “proportion of individuals who own a mobile telephone, by sex”. In other words, if more women own mobile phones then we have made greater progress on this target! Once this is in place, we can expect demands for special schemes to promote mobile ownership among women, or even to give them free mobiles. The causal link between the proposed measure and the desired objective is not obvious. What is likely is that the indicator will merely alter the marketing practices of mobile companies and governments. The issue of empowerment, or of using technology for that purpose, will fall by the wayside.
The proponents of an imperfect measure will agree that the measure is imperfect but will argue that, at a point in time, it gives us an idea of the dimension of the problem. The ready availability of comparable data makes it a useful descriptive tool. In the initial report proposing the measure, it is entirely possible that all these ifs and buts will be duly footnoted. However, the logic of comparability and the 24x7 scrutiny of public policy on social media will reduce a 15,000-word report to a 140-character headline. The discussion thereafter will be defined by this headline. At this point, the pressure will naturally be to improve the score rather than to resolve the underlying problem. A policy to improve the score is often easier to implement than a change in the underlying cause of the problem. The example of mobile phones and women’s empowerment may seem small and limited, but if we look around, examples of misdirected focus abound. This can be in areas as far apart as skill development, literacy and the educational attainment of children, or admitting students to institutions of higher education. In all of them, the goal of improving the measurable attribute often takes us away from the true objectives.
These issues raise a peculiar paradox: if, recognising all this, we stop using an imperfect measure, then we remain in the dark about the problem. If we use the measure and publicise the problem, policymakers will seek to address it by seeking to improve the measure. This leads to a situation where our measure has improved but the outcome has not, creating a second round of ad hoc measurement and equally ad hoc intervention. There are no easy solutions to be found, neither in the measurement space nor in the policy space. What is needed is a return to basics: a focus only on measures and policy instruments that are based on relevance, a clear and unambiguous link to the objective, and simplicity.
Returning to the problem with which we started, we should recognise that identifying and incentivising quality teachers is an art, not a science. Rather than attempting the impossible task of developing an objective measure of an inherently intangible quality, we should return to simpler systems of trust and reputation.
The writer is former Chief Statistician of India
Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper