28 May 2024

AI and people: what does the future of work hold for HR and hiring?

On 11 April techUK hosted a webinar titled "AI and people: what does the future of work hold for HR and hiring?" as part of our Exploring the Future of Work series.

The panel brought together a range of experienced industry insiders, as follows:

Chair: Nimmi Patel, Head of Skills, Talent and Diversity, techUK

  • James Johns, Head of Corporate Affairs, Workday
  • Ian Lithgow, Managing Director - Health and Public Service, Accenture
  • Riham Satti, Co-founder and CEO, MeVitae
  • Emily Campbell-Ratcliffe, Head of AI Assurance, Responsible Technology Adoption Unit, DSIT

You can watch the full recording of the event here, or read our summary below:

Introduction:


Nimmi Patel:
Nimmi, Head of Skills, Talent and Diversity at techUK, warmly welcomes attendees to a webinar focused on the future of work in HR and hiring. She highlights that it is part of techUK’s "Exploring the Future of Work" series, which gathers experts from the tech sector, policy, and skills landscape to discuss how technology is reshaping employment.

The session will discuss the impact of AI on HR and the people profession, acknowledging both the opportunities and challenges it presents.
 

Individual Introductory Remarks


James Johns:
James from Workday discusses their focus on AI-enabled HR and finance software, which is widely used by customers ranging from Fortune 50 companies to public sector bodies. They've been integrating AI into their products for several years, with applications ranging from simple tasks like scanning expense receipts to more complex functions like analysing skills data to direct people to opportunities and training. Workday's AI aims to ground outputs in structured information to prevent biases and ensure accurate decision-making.

Riham Satti:
Riham from MeVitae shares the company's goal to “mitigate cognitive and algorithmic biases” in the workplace and in hiring processes. MeVitae provides solutions to detect and address imbalances in hiring pipelines, with a particular focus on factors like gender, ethnicity, disability, sexuality, and age. They highlight a successful case where anonymised recruiting increased diversity and widened the talent pool for a client, leading to more diverse hires and improved retention rates.

Emily Campbell-Ratcliffe:
Emily from DSIT emphasises the importance of AI assurance, particularly in areas like HR recruitment where there's no specific sectoral regulator. DSIT aims to develop an effective AI assurance ecosystem to ensure the trustworthiness of AI systems. They have chosen to start with HR recruitment due to its broad impact across different sectors, recognising the need for collaboration between various government departments and regulators to implement effective AI policies.

Ian Lithgow:
Ian from Accenture tells us why the topic matters so much. Recent research conducted across various regions indicates that a significant portion of working hours, ranging from 31% to 48%, will be impacted by generative AI, both augmenting and automating tasks. This impact varies across industries, with percentages ranging from 25% to 73%. HR, like other functions, faces administrative challenges, with areas such as recruitment, training, and workforce management being significantly affected by AI. Concerns arise among employees, with 58% expressing worries about increased stress or burnout due to AI implementation. Questions regarding job security and future career prospects also emerge. Additional research on hidden workers reveals biases in recruitment processes, highlighting the potential for generative AI to address these issues with the right frameworks and assurances in place.

 

Nimmi’s Questions to the Panel:


What are the new innovations in AI technology for HR? Perhaps some examples would be useful.

James Johns:
James elaborates on the potential impact of generative AI on productivity. He provides a tangible example from Workday, where users generate approximately 34 million job descriptions annually, each taking hours to create manually. With the introduction of generative AI, Workday plans to automate the initial draft of these descriptions, significantly reducing the time required. Although human review remains essential, this illustrates the substantial productivity improvements achievable through AI. Beyond job descriptions, James envisions AI drafting knowledge management articles and employee growth plans, and even writing software through Workday Extend. This application of generative AI in a no-code or low-code environment empowers customers to customise and enhance their Workday experience independently.

How might the adoption of such tools change the role of HR professionals?

Ian Lithgow:
Ian highlights how organisations are leveraging AI for ongoing employee engagement surveys and automating up to 70% of recruitment tasks, including job descriptions, adverts, and interview scheduling. He introduces the concept of an AI Navigator tool to predict future roles in HR, emphasising the potential for productivity improvements and freed-up capacity. He envisions HR professionals transitioning from administrative tasks to strategic roles like Talent Acquisition Advisors, focusing on the employee value proposition and the future skills needed within the organisation. Ian emphasises the importance of HR in enabling and managing the cultural impacts of AI adoption, noting the need for leadership, cultural adaptation, and workforce upskilling. He also explains the need for a human-centric approach to AI adoption, where HR leads with value and ensures workers are taken along on the transformation journey. Overall, Ian sees a significant shift in HR towards a more strategic, enabling, and human-centred role within organisations in the face of AI-driven change.

How do you take an employee on that journey of new HR tools and how to use them?

Riham Satti:
Riham emphasises that transparency is key to guiding employees through the adoption of new HR tools. She also stresses the need for clarity regarding the use of data, from candidate applications to self-identification questions, ensuring individuals understand how their information is utilised and stored. Riham highlights the fear often associated with such processes and advocates for open communication to alleviate concerns. Additionally, she underscores the importance of transparency in hiring algorithms, advocating for audits to ensure fairness and accuracy in decision-making.
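As an illustration of what such an audit might check, here is a minimal sketch of one common quantitative test, the adverse impact ratio (the "four-fifths rule"). The group labels, 0.8 threshold, and sample data are hypothetical assumptions for illustration only; this is not a description of MeVitae's product or methodology.

```python
# Illustrative sketch only: one simple check a hiring-algorithm audit might
# include -- comparing selection rates between groups (adverse impact ratio).
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(records, threshold=0.8):
    """Compare each group's selection rate to the highest-rate group."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: (rate / best, rate / best >= threshold) for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical data: group_a selected 40/100, group_b selected 25/100.
    sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
              + [("group_b", True)] * 25 + [("group_b", False)] * 75)
    print(adverse_impact_ratios(sample))
    # group_b's rate (0.25) is 0.625 of group_a's (0.40) -> flagged below 0.8
```

In practice an audit would combine several such quantitative checks with the qualitative review and stakeholder conversations the panel describes.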

How can we ensure the responsible adoption of AI and address these concerns?

Emily Campbell-Ratcliffe:
Emily addresses the question of ensuring the responsible adoption of AI and its associated concerns by emphasising the importance of trustworthy AI tools, assurance mechanisms, and global technical standards. She acknowledges the risks of perpetuating bias and discrimination in recruitment and employment decisions, as well as digital exclusion. Emily stresses the role of assurance tools in measuring and evaluating system trustworthiness, advocating for both qualitative and quantitative techniques. She highlights the need for organisational culture and change management to ensure employee buy-in. Emily also notes the importance of being an informed buyer, asking relevant questions of suppliers, and upskilling users to understand system capabilities and limitations. She suggests running internal pilots post-procurement to familiarise users with the system and identify training needs before full implementation.

Riham expands on the same question:
Riham emphasises the urgency of addressing AI adoption responsibly, citing neurological insights that humans make 35,000 decisions a day, 95% of them unconsciously. She tells us that using machines to aid decision-making can rapidly escalate impact, urging organisations to act promptly. Riham explains the decision-making process, highlighting the importance of rationality over emotional responses and the need for understanding AI logic.

Should businesses disclose use of AI to candidates, and is that an important issue?

James Johns:
James states that it is important to be transparent with candidates about the use of AI, in line with Workday's ethical AI framework. He acknowledges that some AI use cases may warrant disclosure to candidates, advocating for transparency as a general principle. James also highlights the value of piloting AI tools to build confidence among HR professionals and broader business users. He suggests incentivising employees to provide data for AI tools, citing examples of linking data provision to access to talent marketplaces. This approach aims to demonstrate the additive value of AI capabilities in the working environment, fostering confidence and trust among employees.

James you spoke about incentives, what might they be?

James Johns:
James suggests offering meaningful and positive outcomes. These incentives may include access to new assignments, learning opportunities, and training programmes. By contributing data to drive AI functions like Workday's Skills Cloud, employees can access a richer set of experiences, aligning with the company's goals and fostering engagement with AI tools.

A question to all of you – Does the growing prevalence of AI in HR present a problem for workers’ rights?

Ian Lithgow:
Ian acknowledges the potential of AI in HR to offer opportunities for meaningful work but stresses the importance of clearly defining workers' rights and ensuring transparency to mitigate associated risks and concerns.

Emily Campbell-Ratcliffe:
Emily discusses the importance of addressing workers' rights, particularly for those classified as workers, highlighting the need for protection regarding discrimination, data processing, and privacy in the context of AI and automated decision-making systems. She highlights the necessity of including workers in the design and deployment process to mitigate risks effectively.

Who is ultimately responsible for the appropriate use of these technologies?

James Johns:
James notes that while companies bear legal responsibility for the technology they use, he remains optimistic about AI's potential to positively impact workers' rights. He emphasises the importance of transparency and fairness, advocating for a framework that amplifies human potential and places individuals at the forefront of technological development. By focusing on examples such as measuring employee engagement and facilitating career opportunities, he underscores AI's capacity to enhance the working environment and drive positive change. James asserts that with the right approach, AI can not only improve workplace dynamics but also foster a culture of empowerment and growth.

What excites you about the opportunities AI can bring?

Ian Lithgow:
Ian's optimism stems from the transformative potential he sees in the synergy between humans and machines, believing it can greatly benefit productivity and job satisfaction across various roles. He acknowledges concerns about misuse but highlights real-world examples like drug discoveries combating superbugs as evidence of AI's positive impact. Excited by the ongoing advancements, he envisions endless opportunities, even for future generations, expressing regret that he's nearing the end of his career rather than the beginning amidst such promising prospects.

Emily Campbell-Ratcliffe:
Emily acknowledges that some current concerns are overstated and agrees there are numerous opportunities, particularly in terms of more fulfilling work and new job opportunities emerging from the intersection of technology and philosophy. She highlights the potential for upskilling and learning new skills as roles evolve due to automation, while also expressing optimism about the societal benefits and solutions to major problems that AI could offer, preferring this perspective over dystopian scenarios like Terminator.

How should organisations ensure that AI remains within ethical guidelines?

James Johns:
James, like Ian, expresses excitement about AI advancements but isn't concerned about dystopian scenarios. Drawing from "War Games," he emphasises the importance of human oversight in AI systems to prevent unintended consequences, highlighting the responsibility of organisations to choose trustworthy AI suppliers aligned with their ethical values and needs, ensuring the technology meets their requirements and ethical guidelines.

Does anyone have a view on how we can monitor the use of AI, and whether tools are fit for purpose?

Riham Satti:
Riham highlights that if MeVitae's technology doesn't meet their standards, they won't release it to the market. She encourages other organisations to adopt similar auditing practices and engage in transparent conversations with stakeholders to understand the impact of AI systems on different individuals, promoting data-driven decision-making to determine whether the technology is suitable for its intended purpose.

Emily Campbell-Ratcliffe:
Emily recommends continuous monitoring: check whether there is any model drift and, if so, whether the model needs to be retrained. She also recommends feedback loops so that users can report bugs and errors.
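To make that concrete, here is a minimal sketch of one common drift check, the population stability index (PSI) between the score distribution at deployment time and recent live scores. The 0.2 alert threshold, bin count, and simulated data are illustrative assumptions, not guidance from DSIT or the panel.

```python
# A minimal sketch of a model-drift check using the population stability
# index (PSI). Larger PSI means the live score distribution has moved
# further from the reference distribution seen at deployment time.
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare two score distributions; returns the PSI."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log(0) in sparsely populated bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.5, 0.10, 5_000)  # scores at deployment time
    current = rng.normal(0.6, 0.12, 1_000)    # scores observed this month
    psi = population_stability_index(reference, current)
    if psi > 0.2:  # a common rule-of-thumb alert threshold
        print(f"PSI={psi:.2f}: significant drift, consider retraining")
    else:
        print(f"PSI={psi:.2f}: distribution looks stable")
```

A check like this would sit alongside the feedback loops Emily mentions, so that both statistical drift and user-reported errors feed into the decision to retrain.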
 

Conclusion


What is one key takeaway for the audience that will help HR professionals capitalise on AI?

James Johns:

People are encouraged to try it out. If there is already an enterprise-class HCM tool in the organisation, it likely has AI features. Give it a go and verify its functionality and promised benefits; this is the immediate step James advises.

Emily Campbell-Ratcliffe:
Start by engaging with your teams and employees to understand their needs and assess your readiness. Determine whether you're aiming to augment or automate processes, then establish the necessary practices.

Ian Lithgow:
Consider the value that these technologies can bring to both employees and the organisation. Focus on envisioning the future of work processes and outcomes, rather than being solely driven by technology.

Riham Satti:
When piloting new technology, make sure to measure its impact on your organisation. Regularly re-evaluate to confirm the decision still aligns with your goals and be prepared to make adjustments as necessary.

 

Thank you, everyone, for coming. I hope we will see you at another techUK event very soon.


Nimmi Patel


Head of Skills, Talent and Diversity, techUK

Oliver Alderson


Policy and Public Affairs - Team Assistant, techUK