
Generative AI in HCM: Use Cases, Cautions and Mitigations


In the first blog of this 3-blog series from HRTech veteran Steve Goldberg, “Generative AI in HCM: Innovation and the Double-Edged Sword”, two of the most prominent Generative AI use cases explored were assisted authoring or content suggestions, and summarization to ease information consumption.


This second blog in the series will dive into a broader range of use cases, with some branching off logically from the first two examples highlighted, and others having a basis in other core functions or sources of business value.

We will then briefly discuss ways of mitigating the main concerns or cautions associated with “Gen AI” usage, namely the potential for bias, an absence of empathy in communications when needed, and whether some workers will need to move into other types of jobs.

But first, given the critical role that constructing a “productive prompt” plays in yielding the most helpful Gen AI content, let’s start by outlining the components of such a prompt.

A tool such as ChatGPT, or an alternative like Google Bard or Bing Chat, needs three core elements to generate good-quality results: a clear objective or goal, solid context that guides the output, and a prescribed or desired format.

The following prompt illustrates all three: “Please make a list of items to bring on a camping trip (objective), that a family of two adults and two small children would find useful when camping in a remote mountainous area (context), and present it in bullet form along with whether these items are commonly found in homes (format).”
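For readers who build prompts programmatically, here is a minimal sketch of how those three elements can be assembled into a single prompt string. It uses plain string assembly and assumes no particular vendor API; the function name `build_prompt` and its parameters are illustrative, not part of any tool mentioned above.

```python
def build_prompt(objective: str, context: str, output_format: str) -> str:
    """Assemble a Gen AI prompt from the three core elements:
    a clear objective, guiding context, and a desired format."""
    return f"{objective}, {context}, and {output_format}."

# Rebuild the camping-trip example from the three components.
prompt = build_prompt(
    objective="Please make a list of items to bring on a camping trip",
    context=(
        "that a family of two adults and two small children would "
        "find useful when camping in a remote mountainous area"
    ),
    output_format=(
        "present it in bullet form along with whether these items "
        "are commonly found in homes"
    ),
)
print(prompt)
```

Keeping the three elements as separate inputs makes it easy to vary one (say, the audience in the context) while holding the others fixed, which is often how prompt refinement proceeds in practice.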

Use Cases Only Limited by Imagination

Extensive coverage of the dynamic duo of Gen AI and LLMs (Large Language Models) has already given ample attention to capabilities like generating content in different writing styles and languages, breaking down the components of an issue or problem to move closer to a viable solution, assisting with keyword research for SEO, or even helping software developers by kickstarting complex coding assignments.

That said, since many of us are still learning the boundaries of Gen AI applicability, and since there is little subtlety in the above examples, here are some other use cases (e.g., for ChatGPT) that showcase a bit more creativity:

  • Have content tailored to an audience, providing various points of context.
  • Generate content that is more “inclusive” in nature.
  • Recruiters can have ChatGPT generate interview questions that assess skill level, or that probe into areas not made clear in a candidate’s resume or application.
  • Ask how best practices have evolved in certain challenging business situations, such as giving critical feedback to a team member, or managing a staff reduction.
  • Learn what the typical ramp-up time is for progressing from one proficiency level to another, or for learning a new skill (if AI-powered analysis has not yet determined this). Note: if this information is derived from Gen AI web searches, bear in mind that it will be fairly generalized guidance unless ChatGPT is given considerable context to work with.
  • Ask for help in time management around one’s busy schedule and priorities, again providing solid context for maximum relevance.
  • Let ChatGPT create the first iteration of a great marketing video or web site.

AI is a tool. The choice about how it gets deployed is yours!

As for cautions, let’s start with the most obvious one: the ubiquitous concern of Gen AI bias, rooted of course in human bias. Why is this such a concern?

One answer lies in the fact that of the more than 42,000 CEOs in the United States, only 31.5% are women and only 24% are people of color (Zippia Research).

And what’s the significance of this in the realm of Gen AI usage?

It’s simply that CEOs are the primary architects of corporate strategy and policy, which in turn is a major source of the content, information, and purported facts that Gen AI models draw on.

Moreover, per The Guardian, globally, men are 21% more likely to be online than women, rising to 52% in the least developed countries.

The implication is similar: who has an online presence shapes the content, information, and purported facts that Gen AI models draw on.

Another major concern for anyone focused on change management best practices is the possibility of shortcuts being taken in organizations where Gen AI partially, or worse, fully writes employee communications in (change management) moments that matter.

These might relate to kicking off a business transformation, announcing a major acquisition or relocation of staff, or a change in strategic direction or leadership.

Change naturally connotes uncertainty for many, which therefore suggests personal risk or challenge.

Delegating to a Gen AI tool the task of generating an employee communication in a situation that demands the human qualities of sensitivity and empathy can be a huge mistake.

This arguably holds true even if extensive context and prescriptive instructions are provided to the tool, or if instructions are given to “communicate like a sensitive human.”

Many receiving the communications will discern the lack of authenticity, particularly if this scenario is not a new one in that organization.

Issues of potential bias, or of inauthentic, human-simulated communications, perhaps only become disruptive or destructive forces within organizations when ChatGPT-like tools are treated as an end-to-end solution when they clearly should not be.

Most complicated and/or sensitive situations will require a skilled, thoughtful human to go the last mile, or the last several miles as it were.

Beyond that, simple rules for Gen AI users such as “always review and edit” and “always let others know” will no doubt come in handy.

Additionally, keep in mind that Gen AI has no “real-time” dimension. In other words, it can only be useful when there is some historical record of information or experience to draw on: not ten minutes’ worth, but enough of a record or base of knowledge to pass an appropriate reasonability test.

I’ll conclude by citing new findings from research firm Pearson around the impact of AI technologies in general.

The research revealed that companies in the U.S., for example, will have an opportunity to retrain 23.5 million people for higher-value work that is not automatable, and that some of the most humanlike competencies (communication, collaboration, innovation) will be the most in-demand skills for these newly created job opportunities.

Where will this labor pool probably be coming from? Positions potentially impacted by Generative AI development would seem like one major channel.

Steve Goldberg
HR Process & Tech Leader | HCM Analyst/Advisor

Steve Goldberg's 30+ year career on all sides of HR process & technology includes HR exec roles on 3 continents, serving as HCM product strategy leader and spokesperson at PeopleSoft, and co-founding boutique Recruiting Tech and Change Management firms. Steve’s uniquely diverse perspectives have been leveraged by both HCM solution vendors and corporate HR teams, and in practice leader roles at Bersin and Ventana Research. He holds an MBA in HR, is widely published and is a feature speaker around the globe. He’s been recognized as a Top 100 HRTech Influencer. Steve is also a close advisor to Azilen Technologies, this post’s sponsor.
