Who Sets the Rules? Rethinking Global Development Indices

The Cultural Bias Built Into Global Development Indicators
Every now and then in my household, we disconnect the screens and encourage the children to entertain themselves. Invariably, my middle daughter invents a game that draws them all in. Trouble arrives the moment she starts to lose — because that is precisely when the rules change. If the initial rule made the person with the most beads the winner, a sudden reversal ensures she always comes out on top. Her powers of persuasion are formidable. Without adult intervention, she would run the show every time.
Watching them play recently, a thought struck me: who sets the rules in international relations? Because it seems that whoever designs the game consistently wins it.
Why is human development measured primarily through economic parameters like GDP per capita? Why are agricultural indicators benchmarked against machinery per 100 square kilometres of arable land, rather than farmers and workers per 100 square kilometres — a measure that would tell a very different story in labour-abundant economies? Why does the standard measure of women’s status focus on their share of employment in the non-agricultural sector, when for hundreds of millions of women, the agricultural sector is the very centre of their economic and social lives? Who decided that working outside agriculture is inherently more indicative of a woman’s status than working within it?
These are not minor methodological quibbles. They are questions about power — about whose reality gets measured, and whose gets rendered invisible.
Consider health indicators. Why do they not capture the intergenerational transmission of herbal medicine, the community reach of traditional healers, or the proven immunological benefits of extended breastfeeding? Why does the measurement of ‘skilled birth attendance’ exclude doulas and traditional birth attendants, whose knowledge has sustained communities for centuries? Why has contraceptive prevalence been measured almost exclusively through modern pharmaceutical methods, with little recognition of natural family planning approaches that millions of women use deliberately and effectively? And should mortality data not capture something about the quality of death — including the growing reality in ageing societies of people dying alone, without community or kin?
The same questions apply to social development. Should indicators not measure rates of psychopathic crime and teenage delinquency alongside community care of the elderly and the strength of extended family networks? Should environmental indicators not account for the ecological knowledge of communities that have lived in balance with their landscapes across generations — not merely whether governments have biodiversity policies on paper? Should we not pay closer attention to the ratio of species still thriving against those lost, rather than to policy frameworks alone?
If the development community were to set its indicators against genuinely humane parameters — ones that measured human flourishing rather than the throughput of systems designed to perpetuate existing power relations — the rankings would look very different indeed. And that, perhaps, is precisely why they have not been redesigned. We might even stumble upon the accidental bonus of actually improving the human condition.
Getting Down to Specifics
The United Nations is currently consulting, through an open global process, on the indicators the world will adopt to measure progress on the Sustainable Development Goals — the framework set to replace the Millennium Development Goals and define global development priorities through 2030. Inputs have come from governments, civil society, academia, and the private sector. The conversation is live. The decisions are not yet final. This is the moment to engage.
For the first SDG — the goal of ending extreme poverty — the primary proposed indicator is the proportion of the population living below the international poverty line of $1.25 per day. To understand what that figure means and how it was arrived at, a brief history is necessary.
The international poverty line was introduced in 1990 at approximately $1.00 per day, based on the national poverty lines of the world’s poorest countries, expressed in 1985 purchasing power parity terms. When the World Bank updated its PPP methodology using 1993 price data, the line was recalculated to $1.08. The current $1.25 reflects a further update using 2005 PPP data. Each revision represents not a raising of the bar on what poverty means, but an inflation adjustment — an attempt to hold the real purchasing power of the line constant across time and economies.
But here is what those official thresholds have always obscured. Throughout the 1990s, World Bank research — including work led by economists such as Martin Ravallion — consistently found that the mean daily consumption of people living below the $1.00 line was approximately 70 cents. The official line was the ceiling of extreme poverty. The lived reality of the world’s poorest was 30 cents below it. The landmark 1997 UN Human Development Report, which introduced the Human Poverty Index, made exactly this point: the total income gap needed to lift every person below the $1.00 line up to that threshold was relatively modest — roughly 30 cents per person per day — and yet it remained unbridged year after year. Those living on 70 cents were not merely poor. They were, in the language of that moment, destitute: unable to meet even basic caloric requirements for survival. Regional poverty assessments for sub-Saharan Africa and South Asia during the same period, when converted to 1985 PPP terms, frequently placed national poverty floors in the 70 to 75 cent range, reinforcing this picture.
So when this article refers to 70 cents as a prior reference point, it draws on a figure that was very much alive in UN and World Bank policy discourse in 1997. The 70-cent figure was never a formal antecedent poverty line in the official sequence. But it was a real and widely cited measure of the depth of poverty — what people were actually living on, as distinct from the threshold below which they were counted. That distinction between the official line and the lived floor is precisely the critique being made here. The bar has not meaningfully moved. What has moved is the currency used to express it.
The broader methodological critique stands on equally firm ground. Purchasing Power Parity calculations are built on comparisons of standardised baskets of goods and services — baskets that poorly represent the actual consumption patterns of subsistence communities, rural households, or economies where a significant portion of welfare is generated entirely outside formal markets. And the dollar remains the anchor currency. When African currencies depreciate by 30, 40, or 50 per cent against it over the period under review, the measured poverty line shifts without any corresponding change in the lived experience of the people it is supposed to measure.
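To make the mechanism concrete, here is a minimal sketch. All figures are hypothetical, and for simplicity it converts at a market exchange rate rather than a survey-based PPP factor (real PPP conversions are meant to correct for this, but where the survey basket poorly reflects subsistence consumption, a similar distortion creeps in). The point is simply that a depreciation alone can push the same unchanged daily consumption below the $1.25 line.

```python
# Illustrative sketch only: every number below is hypothetical, not real data.
# It shows how a currency depreciation against the dollar can reclassify a
# household as "poor" with no change in what the household actually consumes.

POVERTY_LINE_USD = 1.25

def daily_consumption_usd(local_value, local_units_per_usd):
    """Convert one day's consumption in local currency into dollars."""
    return local_value / local_units_per_usd

consumption_local = 600.0   # the same basket of goods, in local currency units

rate_before = 400.0         # local units per dollar (hypothetical)
rate_after = 600.0          # after a 50% depreciation against the dollar

before = daily_consumption_usd(consumption_local, rate_before)  # 1.50
after = daily_consumption_usd(consumption_local, rate_after)    # 1.00

print(f"before depreciation: ${before:.2f}/day, below line? {before < POVERTY_LINE_USD}")
print(f"after depreciation:  ${after:.2f}/day, below line? {after < POVERTY_LINE_USD}")
```

The household's basket never changes; only the conversion rate does. Yet on paper, it has crossed from above the line to below it.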
African statistical agencies have already raised serious concerns during the SDG consultation process: that disaggregated employment data of the kind required by proposed indicators may be beyond the capacity of many national statistical systems to collect reliably. This is not a minor administrative inconvenience. It means the entire continent risks being placed at a systematic measurement disadvantage — appearing to perform poorly not because its people are faring worse, but because its data systems cannot capture their reality in the form the indicator demands. No prizes for guessing how African countries will eventually fare in the league tables built on these numbers.
What $1.25 a Day Cannot Capture
The measurement problems go deeper than currency distortion. How is $1.25 a day as a standard of living actually assessed? Is it disposable cash income? Daily consumption expenditure? Does it include the value of food grown and consumed from the family farm, never exchanged for money at any point? Does it take adequate account of people who live substantially off the land?
How do we measure the daily economic value of the grandmother who provides full-time childcare, enabling her daughter to return to school, run a small trade, or cultivate a subsistence plot? Her contribution is enormous and entirely real. It appears nowhere in the national accounts.
How do we compute the dollar value of a meal of yams, vegetables, and protein sourced entirely from family land — and compare it honestly against a processed ready-meal purchased from a supermarket? The farm meal may represent superior nutrition, a lower ecological footprint, and a richer social context. It scores zero on formal consumption measures because no cash changed hands. The supermarket meal — nutritionally inferior, individually consumed, ecologically costly — scores on every formal metric because it passed through a market transaction.
Does organic food only count when it is produced by a multinational corporation listed on a stock exchange? Apparently so.
Then there is SDG Target 1.3, which measures the proportion of the population covered by formal social protection systems — pensions, state benefits, institutional safety nets. This target makes no provision for family-derived welfare: the grandmother living with her children and grandchildren, whose quality of life is measurably better than it would be in a state-funded care facility, and who simultaneously provides childcare that reduces the burden on public systems. The extended family as a social institution — far more cost-effective than state provision and often far more humane in its expression — is invisible to this indicator. Worse, policies designed to improve scores on this target may actively incentivise the dismantling of these informal systems, pushing individuals toward formal, monetised alternatives that register on the spreadsheet.
This is not an accident of measurement. It is the logical consequence of indicators designed within a particular civilisational framework — one that defines progress as the movement of individuals from informal, community-embedded arrangements toward formal, monetised, state-mediated ones. Within that framework, the informal sector is always a problem to be solved rather than a resource to be understood. Traditional knowledge is always an absence of professional expertise. The extended family is always a gap in formal provision rather than a social achievement.
What can we learn from the western model of social welfare — specifically from the decades-long shift of welfare from the family to the state, and the inevitable shift in loyalty and social architecture that has accompanied it? What does the United States’s experience of transferring responsibility for child-rearing, discipline, and social formation from families to institutions tell us about where that road leads? These are not rhetorical questions. They are the questions that the architects of SDG indicators should be required to answer before those indicators are finalised.
The Window Is Still Open
The SDG indicators are not yet set. The consultation process is live. This is the time — the only time — to interrogate the rules before they are written into a global framework that will govern how development is defined, measured, funded, and judged for the next fifteen years.
If we do not, the results are already apparent. A game whose rules are written by one player, measuring outcomes that only that player’s strategy can optimise, is not a measure of who is developing and who is not. It is a measure of who wrote the rules. And a game that predictable is, after a while, excruciating to watch.
It is time to engage these questions seriously, publicly, and before the window closes. The big question — the one that should be at the centre of every conversation about the SDG indicator framework — is deceptively simple:
How culturally neutral are these indicators?