Algorithmic Tradeoffs, Applied NLP, and the State-of-the-Art Fallacy

AJ Alvero, Ruohong Dong, Klint Kanopka, David Lang

Published: 2025/9/10

Abstract

Computational sociology is growing in popularity, yet the analytic tools it employs differ widely in power, transparency, and interpretability. In computer science, methods gain popularity after surpassing benchmarks of predictive accuracy, becoming the "state of the art." Computer scientists have their own reasons for prizing novelty and innovation, but prioritizing technical prestige over methodological fit can unintentionally limit the scope of sociological inquiry. To illustrate, we focus on computational text analysis and revisit a prior study of college admissions essays, comparing analyses conducted with both older and newer methods. These methods vary in flexibility and opacity, allowing us to compare performance across distinct methodological regimes. We find that the newer techniques did not meaningfully outperform the earlier results. We also find that using the current state of the art, generative AI and large language models, can introduce bias and confounding that are difficult to disentangle. We therefore argue that sociological inquiry benefits from methodological pluralism that aligns analytic choices with theoretical and empirical questions. While we frame this argument sociologically, scholars in other disciplines may confront what we call the "state-of-the-art fallacy": the belief that the tool computer scientists deem best will work across topics, domains, and questions.
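
To make the abstract's contrast between methodological regimes concrete, here is a minimal, purely illustrative sketch; it is not the authors' pipeline, data, or models. It fits the same toy classification task twice, once with an older and more transparent representation (TF-IDF features with logistic regression) and once with a newer and more opaque one (pretrained transformer sentence embeddings), so the two regimes can be compared on fit to the question rather than on technical novelty.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sentence_transformers import SentenceTransformer

# Toy, placeholder corpus and labels -- NOT the admissions-essay data from the paper.
texts = [
    "I founded a robotics club and led our team to a regional title.",
    "Working two jobs taught me discipline and how to budget my time.",
    "My summer research internship focused on protein folding models.",
    "Caring for my younger siblings shaped how I think about community.",
]
labels = [1, 0, 1, 0]  # hypothetical binary outcome of interest

# Older, more transparent regime: sparse bag-of-words features + logistic regression.
X_tfidf = TfidfVectorizer().fit_transform(texts)
older = cross_val_score(LogisticRegression(max_iter=1000), X_tfidf, labels, cv=2)

# Newer, more opaque regime: pretrained transformer sentence embeddings + the same classifier.
X_embed = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)
newer = cross_val_score(LogisticRegression(max_iter=1000), X_embed, labels, cv=2)

print(f"older regime CV accuracy: {older.mean():.2f}")
print(f"newer regime CV accuracy: {newer.mean():.2f}")
```

The point of such a comparison is not which score is higher on a benchmark, but whether the representation's transparency and flexibility suit the theoretical question being asked.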
