Company “Sheepishly” Admits Its Employee Handbook Was Generated With ChatGPT, Doesn’t Have Anti-Harassment Policy


So much for saving time and money.

Chief People Offender

Caught with egg on their faces, some human resources departments are copping to using ChatGPT and other generative AI tools to write important policy documents.

In an interview with Forbes, the CEO of the HR consultancy Humani said that one of her clients was faced with a peculiar and self-inflicted fiasco: in the midst of an escalating harassment claim, its AI-generated employee handbook just straight up didn’t have the policy to handle it.

During a discussion about the debacle, the unnamed client “sheepishly” told Carly Holm, the CEO of Humani, that ChatGPT had written its employee handbook and left out the key anti-harassment section.

“If the workplace does not have appropriate policies in place like a zero tolerance policy for sexual harassment, workplace violence, etc, the investigation will then look at the employer, and there will be consequences,” Holm told the magazine. “So they realized, ‘Wow, we should have brought in professionals to write these policies the proper way.'”

This is not, of course, anywhere near the first time AI has been used to write legally important documents.

In the 18 months since OpenAI released ChatGPT in November 2022, we’ve seen headline after headline about the popular software being used to write everything from judicial rulings to legal briefs. As HR professionals told Forbes, this practice is common on the employer side of things as well, and companies have begun using ChatGPT to write offer letters and separation agreements, too.

Those same HR experts noted that while using AI chatbots saves time and effort, it also exposes companies to the risk of leaving out key sections of such important documents — as the harassment-policy-free employee handbook exemplifies.

Employee Failbook

It’s pretty easy to imagine scenarios in which this sort of avoidable oversight could backfire spectacularly for employers. And as the director of the HR consulting firm Iris told Forbes, those sorts of scenarios can and do happen.

According to Iris director Daniel Grace, a company in the UK used Microsoft’s Copilot AI to write a severance agreement that was missing a bunch of essential information. When handed over to the company’s lawyers, those omissions turned out to be very expensive indeed.

“Their lawyer essentially just threw it out the window and said this is useless,” Grace recounted. “It really tarnished their own negotiation and they ended up having to pay a higher settlement amount.”

At the end of the day, Grace told Forbes that AI tools are best left on the drafting desk.

“Let’s face it, people are using artificial intelligence tools to speed things up… and they really shouldn’t,” Grace said. “These things have teething issues.”

More on AI gaffes: Another OpenAI Executive Choked When Asked If Sora Was Trained on YouTube Data
