In 11th grade, I sent several emails to local professors begging to intern in their research labs over the summer. These weren’t good emails. Littered with flowery adjectives about what I wanted to delve into, most of them unsurprisingly went unread. And yet a couple of professors read them and replied, even commending my effort in putting down the half-baked ideas they then had to parse through.

I don’t think this strategy works anymore, and large language models are at least partially to blame. I recently received an email from a high school student asking to do research with me. On the surface, the email sounded personal, with mentions of my papers and follow-up ideas. On closer inspection, there were suspicious signs of AI generation: the email’s structure was familiarly formulaic, and the descriptions of ideas were vague and over-the-top. Of course, misguided emails from young students are nothing new, and there is no way I can really be sure whether this student used ChatGPT. But the existence of that possibility means that my reaction to these emails feels different in 2024 than it did just two years ago. The fact that cold emails could be AI-generated casts doubt on the whole enterprise.

In their commentary on deepfakes, Robert Chesney and Danielle Citron forecast a similar dynamic [1]. As the public becomes more aware of deepfakes, they note, people “will be primed to doubt the authenticity of real audio and video evidence” (1785). Bad actors can exploit this skepticism to sow distrust in real events, to their benefit: a politician caught in a genuine clip that paints them in a bad light can now plausibly dismiss it as a deepfake. The spread of deepfakes, or more precisely the public’s belief that deepfakes are everywhere, thus creates what Chesney and Citron call a liar’s dividend: the value that liars gain from the presence and awareness of deepfakes.

While both trends rely on shifting public perception of AI, the dynamic I am pointing to ends up harming innocent actors rather than benefiting an adversarial one. At least currently, those who use AI for tasks like writing essays, applications, and emails are at a competitive disadvantage. While they may achieve a more personally preferable effort-reward tradeoff, their rewards are modest: LLM-written cover letters are likely worse than the top applicants’. However, genAI users alter the ecosystem for everyone else in two ways. First, some types of content, like a 10-page essay or a Shakespearean sonnet, used to reflect some minimum amount of effort, even if the actual substance was mediocre. Now there is no such “proof-of-work,” because generative AI can trivially produce something of the right form. Second, a corollary of genAI’s widespread accessibility is that human systems are easily flooded: a firm that used to receive one good application and one bad application per day may now receive one good application and nine bad ones, making it less likely that the good applicant will be discovered.
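
To put a rough number on that flooding effect, here is a toy calculation. The reviewing setup is my own illustrative assumption, not something from any study: suppose a reviewer reads only one application chosen at random each day.

```python
# Toy model of the flooding effect: a reviewer reads only k applications,
# chosen uniformly at random, out of the n that arrive in a day. The chance
# that the single good application is among those read is min(k, n) / n.

def chance_good_app_is_read(n_total: int, k_read: int) -> float:
    """Probability that the one good application is among the k read at random."""
    return min(k_read, n_total) / n_total

# Before genAI: 1 good + 1 bad application per day. After: 1 good + 9 bad.
for n_total in (2, 10):
    p = chance_good_app_is_read(n_total, k_read=1)
    print(f"{n_total} applications, 1 read at random -> good applicant read {p:.0%} of the time")
```

Under that toy assumption, the good applicant’s chance of even being read drops from 50% to 10% as the daily pile grows from two applications to ten.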

These effects are what economists call negative externalities: costs imposed on third parties who had no say in the activity. Smokers, for example, impose a negative externality on nearby bystanders through secondhand smoke. If enough smokers exist in a community, a bystander may feel compelled to wear a mask, a burden they must take on to preserve their health. Similarly, as the proliferation of AI-generated content (AIGC) invites skepticism towards all content, nonusers of AI are saddled with the burden of authenticity: they must perform additional work to avoid the penalties of an AIGC-filled world.

How does the burden of authenticity manifest for nonusers of AI? First, there is a degradation in how nonusers’ work is perceived, driven by the recipient’s suspicion about whether it is genuine. If someone’s writing or art happens to align with the patterns genAI tends to produce, the quality of their work may be unfairly discounted. If I see an email with a particular structure and phrasing (“I hope this message finds you well,” “Thank you for considering my request”), I assume it is AI-generated, even though formal emails long predate genAI. Perhaps no one is sad about the death of formal emails, but consider art: DALL-E outputs reflect some art styles (and even particular artists [2]) more than others, so those styles risk being especially devalued.

Second, detection of AI-generated content is far from perfect [3], and so it invites bias. This mostly applies to writing. LLM detector tools are often biased against non-native English speakers [4], because non-native speakers tend to write more structured text. But even without a detector, whoever is evaluating a piece of content may hold implicit biases about who is or isn’t likely to be using AI, and so groups like immigrants may disproportionately bear authenticity penalties.

Third, for certain nonusers, there is a penalty to learning. Consider a hobby artist whose work is currently worse than what DALL-E could produce. Before DALL-E, she might have kept creating mediocre art because it was a pathway to improvement. But because genAI inflates what can be produced with minimal effort, mediocre art is less valuable, so she may just rely on DALL-E instead. A similar pattern may show up in education, where the average high school literary analysis might well be worse than what LLMs produce. Of course, students aren’t writing essays for The New Yorker, but rather for pedagogical value. But if their best work would receive a lower grade than their friend’s ChatGPT essay, it’s hard to see why they wouldn’t use ChatGPT as well. Here, the burden one carries is that of others’ (or one’s own) inflated expectations, because AI raises the baseline.

How do we mitigate the burden of authenticity? If an oracle could perfectly determine whether a piece of work is AIGC, that would help: AI-generated emails could be screened out, or teachers could assign lower grades to ChatGPT essays in order to preserve learning incentives. Whether AI emails or essays should actually be screened out is a separate question, but the oracle would at least help mitigate any unintended collateral damage imposed on AI nonusers. Algorithmic AI detectors aim to be this oracle, but they are notoriously imperfect. I think they can still be useful, especially if we coax them to produce calibrated confidence estimates (e.g., 95% of the content for which the detector outputs 95% confidence is actually AIGC). But we also want to avoid a cat-and-mouse game, where better detectors breed better evasion strategies.
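
To make “calibrated” concrete, here is a minimal sketch of the kind of reliability check one could run on a detector’s outputs. The scores and labels below are made up for illustration and don’t correspond to any particular detector tool; the snippet just bins the detector’s confidence scores and compares each bin’s average confidence against the fraction of items in that bin that are actually AI-generated.

```python
# Reliability check for a (hypothetical) AIGC detector: for a calibrated
# detector, items scored around 0.95 should turn out to be AI-generated
# roughly 95% of the time.

from collections import defaultdict

def reliability_table(scores, labels, n_bins=5):
    """scores: detector confidences in [0, 1]; labels: 1 if truly AIGC, else 0."""
    bins = defaultdict(list)
    for s, y in zip(scores, labels):
        b = min(int(s * n_bins), n_bins - 1)  # which confidence bin this score falls in
        bins[b].append((s, y))
    rows = []
    for b in sorted(bins):
        items = bins[b]
        avg_conf = sum(s for s, _ in items) / len(items)
        frac_aigc = sum(y for _, y in items) / len(items)
        rows.append((avg_conf, frac_aigc, len(items)))
    return rows

# Made-up validation data: detector confidence scores and ground-truth labels.
scores = [0.05, 0.12, 0.35, 0.48, 0.61, 0.77, 0.83, 0.91, 0.95, 0.97]
labels = [0,    0,    0,    0,    1,    1,    1,    1,    1,    1]

for avg_conf, frac_aigc, n in reliability_table(scores, labels):
    print(f"avg confidence {avg_conf:.2f} -> observed AIGC rate {frac_aigc:.2f} (n={n})")
```

If the two columns diverge, the detector’s raw scores can be recalibrated (for instance with isotonic regression) before anyone treats a “95% confident” flag as meaningful.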

In the education example, Arvind Narayanan takes a different perspective: if an essay can be written by genAI and still receive a good score, then it’s a bad evaluation [5]. That is, the question of whether something is AIGC shouldn’t be salient for any high-stakes decision, because a task that can be solved by genAI is likely busywork. If the real-or-genAI question is salient, we should switch to a more thoughtful evaluation, which he argues would be a win overall. I worry this is wishful thinking. Calculators can do arithmetic, but arithmetic is still a necessary foundation. Even if you don’t believe that arithmetic or three-paragraph essays are pedagogically useful, it’s only a matter of time until an LLM can complete tasks that everyone agrees are pedagogically useful.

How will things play out longer term? My first prediction is that dynamics will self-correct: “the market” will figure out where there should and shouldn’t be authenticity penalties. For example, if firms begin rejecting formulaic-sounding cover letters because they imply a lack of effort, AI nonusers may incur a penalty in the short term. However, there are forces on both sides that will help things correct: firms will realize they are losing good candidates and relax their policies, while committed applicants will strategically incorporate authenticity cues to avoid being perceived as AIGC, which isn’t much more arbitrary than the other factors used to evaluate candidates.

My second prediction is that as AI-generated content becomes more common, the public will keep getting better at sussing it out. We are still in the early stages of genAI, and there is evidence that human heuristics for detecting LLM-written text are flawed [6], and that humans rate human-written text as human only modestly more often than GPT-4 text [7]. But even these two studies, separated by about a year and one GPT generation, suggest that we might be outpacing AI: the public’s ability to identify human-written text has improved. We also aren’t alone, as community forums like Reddit [8] and crowdsourcing mechanisms like Community Notes [9] will help the truth bubble to the surface. That is, the oracle might not be a detection algorithm but rather our collective knowledge, and this too can dampen the adverse impacts of AIGC. Notably, more reliable detection would reduce both the liar’s dividend and the burden of authenticity.

What’s certain is that genAI will uproot our preconceptions about which content deserves value. It’s helpful to remember that we humans have the agency to decide how that happens.

Thanks to Kenny Peng and Myra Cheng for helpful thoughts on this essay.

  1. Chesney, Robert, and Citron, Danielle. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” California Law Review, 2019. [Link]

  2. Heikkilä, Melissa. “This artist is dominating AI-generated art. And he’s not happy about it.” MIT Technology Review, September 16, 2022. [Link] 

  3. Elkhatat, Ahmed M., Elsaid, Khaled, and Almeer, Saeed. “Evaluating the Efficacy of AI Content Detection Tools.” International Journal for Educational Integrity, December 2023. [Link] 

  4. “AI-Detectors Biased Against Non-Native English Writers.” Stanford HAI, May 15, 2023. [Link] 

  5. Narayanan, Arvind. “Students Are Acing Their Homework by Turning in Machine-Generated Essays. Good.” March 20, 2023. [Link]

  6. Jakesch, Maurice, Hancock, Jeffrey T., and Naaman, Mor. “Human Heuristics for AI-Generated Language Are Flawed.” PNAS, March 14, 2023. [Link] 

  7. Jones, Cameron R., and Bergen, Benjamin K. “Does GPT-4 Pass the Turing Test?” arXiv, October 31, 2023. [Link] 

  8. A Reddit thread on words that distinguish LLM-written text. 

  9. An overview of X’s Community Notes. 

