When Policy Reports Become Marketing Gimmicks: Indonesia’s Growing Pseudo-Data Problem

In Indonesia’s policy ecosystem, reports have become powerful political weapons. Wrapped in the language of “evidence” — with neat graphs, scores, and technocratic jargon — they are consumed and circulated as though they were scientific evaluations. But behind many of these polished surfaces lie soft methods, unverified claims, and opaque data.

Terms like militeristik (“militaristic”) and tajam ke bawah (roughly, “harsh on those at the bottom”) appear in these narratives as rhetorical framings masquerading as analytic categories. This isn’t a cosmetic issue. When perception surveys pose as neutral evidence, they don’t strengthen democratic accountability. They distort it.

Many local think tanks have mastered what can only be described as a marketing gimmick: wrapping opinion research in the aesthetic of science to project authority without methodological substance.

A credible evaluation has clear standards. Institutions such as the OECD, World Bank, Freedom House, and UNDP disclose methodology in full, combine perception with objective indicators, publish margins of error, and make their data replicable. Many local actors skip every one of these steps.

This dynamic isn’t theoretical. It’s already shaping the way the public interprets government performance.

A Case in Point: CELIOS

A recent “evaluation” by the Center of Economic and Law Studies (CELIOS), ostensibly assessing one year of the Prabowo Subianto–Gibran Rakabuming Raka administration, is a textbook example of this pattern.

CELIOS relies on two perception instruments: an “expert-judgement” panel and a public survey. The panel is composed of 120 journalists selected through purposive sampling. CELIOS states they scored “based on predetermined criteria,” but it publishes no scoring rubric, offers no definition of expertise, and reports no inter-rater reliability checks.
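
An inter-rater reliability check is not exotic. As an illustration only, here is a minimal sketch of Fleiss’ kappa, one common agreement statistic, assuming the panel’s ratings were categorical grades with the same number of raters per item; the numbers below are invented for the example, not taken from the report.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for an (items x categories) table of rating counts,
    where every item is rated by the same number of raters."""
    n_items = counts.shape[0]
    n_raters = counts[0].sum()
    # Share of all ratings falling into each category
    p_cat = counts.sum(axis=0) / (n_items * n_raters)
    # Observed agreement for each item, averaged across items
    p_obs = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    # Agreement expected by chance alone
    p_exp = (p_cat ** 2).sum()
    return (p_obs.mean() - p_exp) / (1 - p_exp)

# Hypothetical data: 5 policy areas, each graded A/B/C by 10 raters
ratings = np.array([
    [7, 2, 1],
    [1, 8, 1],
    [2, 2, 6],
    [5, 4, 1],
    [0, 3, 7],
])
print(f"Fleiss' kappa: {fleiss_kappa(ratings):.2f}")  # ~0.21: low agreement on this toy data
```

Whichever statistic fits the actual rating scale (an intraclass correlation or Krippendorff’s alpha for numeric scores), the point stands: a panel’s level of agreement can be summarised in a single number, and a report that leans on 120 raters should publish it.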

The public survey is a national online poll (N=1,338) recruited through targeted digital sampling, with demographic weighting to BPS (Statistics Indonesia) benchmarks. The questionnaire itself is not appended to the PDF. Fieldwork windows: panel, 30 Sept–13 Oct 2025; public survey, 2–17 Oct 2025.

Across the methodology panels and figures, CELIOS reports category percentages but provides no margin of error (MOE), no confidence intervals (CI), and no detailed sampling frame beyond demographic weighting. To put it plainly: no statistical disclosure.
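
The missing disclosure is not hard to produce. As a minimal sketch, assuming simple random sampling (an assumption that targeted digital recruitment and weighting would only loosen through design effects), the worst-case margin of error for a sample of 1,338 works out to roughly ±2.7 percentage points at 95% confidence:

```python
import math

N = 1338   # sample size reported for the CELIOS public survey
p = 0.5    # worst-case proportion, which maximises the margin of error
z = 1.96   # z-score for a 95% confidence level

# Margin of error under simple random sampling; non-probability
# recruitment and weighting typically inflate this via the design effect.
moe = z * math.sqrt(p * (1 - p) / N)
print(f"Maximum margin of error: ±{moe * 100:.1f} percentage points")
```

One line like that next to each chart is the minimum a reader needs to judge whether a reported gap between categories is signal or sampling noise.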

The PDF provides no questionnaire, raw dataset, weighting schema, or codebook, which means its findings cannot be independently verified or replicated. That kind of disclosure is the foundation of transparency and replicability in serious policy evaluation.

Taken together, these choices place the CELIOS product much closer to advocacy than to transparent, evidence-based policy evaluation. And, crucially, CELIOS is not unique. It is emblematic.

Perception ≠ Performance

Perception surveys can be useful — they capture sentiment. But they are not evidence of actual performance. Public dissatisfaction can reflect real issues; it can also stem from short-term adjustment costs, polarization, or algorithmic outrage cycles.

In credible governance indices, perception data is triangulated with objective indicators. In CELIOS’s case, perception is the entire argument.

Bias exists in every dataset. Bias itself is not the problem — hiding it is. A journalist panel is not a neutral expert body. A targeted digital sample is not a representative national survey. Without clear disclosure of sampling limits or scoring criteria, these methods inflate their own authority — and mislead the public.

Presentation plays a quiet but powerful role. A clean graph can make weak data look credible. This is aesthetic objectivity: numbers, design, and jargon creating the illusion of rigor. We should remember: a pretty PDF is not peer review. A bar chart is not a methodology.

Another structural flaw is temporal. CELIOS evaluates the government at Year One. Serious evaluators know Year One is noisy: reforms take time, perception lags performance, and expectations are at their peak. International indicators use multi-year baselines to avoid this distortion. CELIOS does not. A one-year snapshot tells us more about political noise than about governance.

Good science invites interrogation, replication, and falsification. CELIOS provides no instruments, no raw data, and no documentation. The result is not replicable — yet, like many similar reports, it dominates headlines and shapes opinion. This is not how evidence works; it is how narratives consolidate power.

Raising the Scientific Bar

None of this means critical voices should be silenced. It means critique should meet the same standards it demands of power. If a think tank claims to be “scientific,” it should, at the very least: publish its methodology, disclose its sample and error margins, define its so-called expert criteria and scoring rubrics, distinguish perception from performance, and allow replication.

If it does not, media and policymakers should call it what it is: advocacy, not analysis.

Indonesia deserves better than policy debates built on pseudo-data. If we demand transparency from those in power, we must demand no less from those who claim to hold power accountable.

Because data without discipline isn’t evidence. It’s performance.