A recent trend in professional settings involves individuals submitting ideas or documents for review with notes like “here are some thoughts from ChatGPT” or “this draft was created with ChatGPT’s assistance.” Too often the content arrives unedited, or is so generic that it lacks essential context. In other cases the text appears AI-generated but isn’t acknowledged at all. Either way, recipients become unwitting editors, a role they never signed up for.
For instance, one professional was recently asked to review a strategy document. The sender wrote, “I had this idea and I am curious about your thoughts,” noting that “AI was very helpful.” What followed was a 1,200-word memo that appeared to be AI-generated. The recipient spent about half an hour crafting a detailed response, yet the sender never replied. The experience left the reviewer frustrated, feeling they had put more effort into responding to the work product than the sender had put into creating it.
This situation is becoming increasingly common, and it highlights a core challenge in the adoption of generative AI: these tools let users produce content quickly, which shifts the burden of improving it from the creator to the recipient. Because generating content is so easy, individuals may skip a thorough review of the output, offloading that cognitive work onto others.
The need for workplace norms around generative AI is becoming urgent. Because these tools are promoted largely on their time-saving potential, it is easy to see why users may be less meticulous about reviewing AI output. As Erica Greene of Machines on Paper recently put it, “productivity is not about outsourcing the thinking work, but accelerating the execution speed.” The time saved by one person can simply become time spent by another correcting subpar work.
A study from Microsoft and Carnegie Mellon found that generative AI shifts where our critical thinking is applied: away from gathering information and composing content, and toward verifying, editing, and steering AI’s outputs. Leaders now cite critical thinking, judgment, and taste as vital skills for future success. The risk is that individuals skip those verification and editing steps and come to rely on AI recommendations without sufficient scrutiny.
Moreover, some users find verifying AI output less engaging and less creatively fulfilling than producing the work themselves. Research on materials scientists showed that, despite significant productivity gains from AI tools, many felt less satisfied in their roles, believing the AI was doing the creative work while they merely revised the results. That sentiment will resonate with anyone who has been handed AI-generated text to evaluate without clear guidance.
As AI tools become standard practice, organizations need clear guidelines for responsible use, along with norms that keep AI-assisted work from imposing an unfair burden on teammates. People should be judged on the quality of their output, not on the tools they used to produce it. Key practices include sharing raw AI-generated content sparingly, verifying facts before passing information along, and supplying the context that AI cannot provide. With these norms in place, teams can capture the benefits of AI while maintaining quality standards.