By Andy Benzo
International Speaker on the Intersection of AI, Ethics, Language & Law | Translating AI Risks for Linguists, Lawyers & Doctors | Bridging Language, Law & Technology.
September 3, 2025
A recent article in @HarvardBusinessReview names a troubling new workplace reality: “workslop.” It refers to AI-generated content that masquerades as competent output: polished on the surface but hollow at the core. It’s content that “looks like work,” but in fact creates more work for someone else: a downstream burden of interpretation, correction, and rework.
Sound familiar?
That’s exactly what’s happening in the world of translation.
Let’s call it the LinguiTrap: the AI-fueled illusion that translation has been “handled” when, in reality, the job has only begun. AI can produce fluent sentences, long reports, and “readable” legal language. But beneath the gloss lie missing context, broken terminology, cultural erasure, and dangerous ambiguity.
The Harvard article explains that workslop doesn’t just waste time; it transfers the effort from the creator to the receiver, often under the illusion that the job is done. In translation, this burden falls squarely on human professionals. What should be a collaborative process becomes a repair job, often performed under tight deadlines and without recognition of the cognitive toll.
In fact, research cited by HBR shows that workers spend nearly two hours cleaning up each incident of AI-generated “slop.” Now imagine applying that to legal contracts, medical records, asylum claims, or pharmaceutical instructions. We should not settle for “good enough,” because the real danger lies precisely in that false fluency.
For translators, this reality is not theoretical. Post-editing poorly generated content means:
- Decoding intent with no access to the author.
- Rebuilding terminology from scratch.
- Rewriting for tone, culture, and audience.
- Carrying the legal or ethical liability for decisions a machine guessed at.
In short: what was meant to save time ends up doubling the work.
The LinguiTrap Is Real
If “workslop” is the corporate term, we propose a linguistic counterpart: the LinguiTrap.
The LinguiTrap happens when AI-generated translations lure readers (and clients) into believing the job is done, when in reality it traps professional translators into untangling errors, filling in missing context, and restoring meaning. The output appears fluent, but beneath the surface it’s incomplete, misleading, or just wrong. It’s what happens when “machine-assisted” quietly becomes “human-rescued,” and the translator is left cleaning up the illusion of accuracy.
Echoing HBR’s warning, we need to:
- Recognize that not all tasks are suited to AI.
- Reaffirm the value of human linguistic expertise.
- Stop treating translation as a checkbox and start treating it as the strategic, ethical, and cultural function that it is.
- Demand transparency, consent, and labeling when AI is involved.
If we don’t, we risk overwhelming the world with bad translations and turning translators into repairers of faulty automation.
Let’s Lead with Intention
AI can be a powerful ally, but only when used purposefully, ethically, and with a human-centered approach. As Harvard Business Review puts it: “What a sender perceives as a loophole becomes a hole the recipient needs to dig out of.”
Translators shouldn’t have to dig anymore.
We deserve better.
So does the world.
Reference: Niederhoffer, K., Rosen Kellerman, G., Lee, A., Liebscher, A., Rapuano, K., & Hancock, J. T. (2025, September 22). AI-generated “workslop” is destroying productivity. Harvard Business Review. https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity