Beyond the Algorithm: The Quiet Disruption of Lara Isabelle Rednik
Digital Humanities / Emerging Voices

What if we are not teaching machines to think, but teaching them to think in only one kind of grammatical cage?

Her central, provocative thesis: the bias in AI is not just social. It is grammatical. This is where Rednik gets interesting. Most critics focus on biased training data; Rednik focuses on mood and aspect, the parts of grammar that deal with time and reality.

Her 2025 experiment found that when asked to generate counterfactual histories (e.g., "What if the printing press had been invented in 100 AD?"), models trained primarily on English produced 40% less creative divergence than models fine-tuned on Romance languages.

Her conclusion was stark: by training our AIs on a global, flattened English corpus, we are not just standardizing language. We are standardizing imagination.

Naturally, the tech world has pushed back. OpenAI’s chief ethicist called her work "linguistic determinism dressed up as data science." A prominent Google DeepMind researcher accused her of "romanticizing non-English syntax."

But the more pointed critique came from literary circles. Critics like Harold Voss (The New Criterion) argued that Rednik reduces literature to a mere wiring diagram. "She treats Proust's subjunctives as engineering schematics," Voss wrote. "The soul is missing."