A Friendly Reminder: A.I. Work Isn’t Yours

Using A.I. without acknowledging it is a problem that workplaces need to address.
Send questions about the office, money, careers and work-life balance to workfriend@nytimes.com. Include your name and location, or a request to remain anonymous. Letters may be edited.

I am a senior lead and manage large, complex projects and teams. My organization has a thorough review process for all projects, with many quality enhancement steps. I recently noticed colleagues sending ChatGPT-generated responses without acknowledging that their response was generated by artificial intelligence. After a recent review included a suspected A.I.-generated set of bullet points, a team member was able to verify that the response had indeed been generated by A.I.

I support the use of A.I. to improve efficiency, but this sort of review has the opposite effect: The A.I.-generated response does not take into account previous discussions and decisions during the review process, and it can generate unnecessary busywork.

Are other organizations seeing this sort of internal use of A.I.? And what is the best way to broach this subject without causing a negative reaction?

— Anonymous

Many organizations are grappling with how to manage A.I. in the workplace. The next time someone turns in work generated by A.I. without an appropriate acknowledgment, simply tell them that, moving forward, they need to identify all A.I.-generated work. But it’s also important to take a more expansive approach instead of trying to address the issue on a case-by-case basis.

Collaborate with relevant stakeholders to develop A.I. guidelines that reflect the realities of the work your organization does. When is it appropriate for employees to use A.I.? How should they acknowledge and cite A.I.-generated work? When is using the technology not appropriate? What are the consequences for employees who do not follow these guidelines? How are you going to train staff to use A.I.? How are you going to train managers to identify work that is generated by it?

These are the early days of the mass adoption of A.I. It is an imperfect tool, however exciting its potential may be. We need to think carefully about the ethics of using A.I. and remember that artificial intelligence is not human. It lacks a moral code. It lacks judgment. There are limits to what it knows and what it can do.


I’ve been in the same office for more than 25 years, and I’m largely self-taught. Because I’ve shown a willingness to learn independently, whenever we have new tools or changes to existing tools it’s assumed I don’t need training. I’m also expected to train new employees. However, because I’m self-taught I have trouble articulating steps. Are there resources that can help with this? The irony that I’m self-taught but don’t know how to train myself to be a better trainer isn’t lost on me, but I sincerely want to give others the sort of help I haven’t always received myself.

— Anonymous

Don’t be so hard on yourself. Training and instructional design are specific areas of expertise. If your employer expects you to train new employees, ask if they will support your professional development and pay for you to take an instructional design course or workshop. There are also many books that you may find helpful. Cathy Moore’s “Map It: The Hands-On Guide to Strategic Training Design” is well regarded. I’d also suggest “Design for How People Learn” by Julie Dirksen. As you start to develop resources, ask yourself: What are the most important things people need to know about using these tools? How can you best communicate that information to new learners? Good luck!