Artificial intelligence has spent the past two years reshaping every industry, and recruitment is no exception. It is often described as a game changer, stepping in to manage the finding, evaluating and hiring of people. In reality, it is not ready to step into human shoes, and probably never should be. As a tool, it should be used to augment recruitment: automating tasks, optimising workflows and lifting the administrative burden. It can, and should, free professionals to focus more effectively on finding the right people.
It should not be allowed to roam unfettered across platforms and data because it introduces ethical and security concerns that have yet to be adequately addressed and managed. As the Responsible AI in Recruitment guidance from the UK Government highlights, there remains a risk of ‘unfair bias or discrimination’ and the technology can introduce ‘digital exclusion’[1]. Audits undertaken by the Information Commissioner’s Office found that some of the AI recruitment tools assessed didn’t process personal information fairly and introduced the potential for discrimination[2].
These are just the baseline challenges that AI presents in recruitment. The technology is also not nuanced or intelligent enough to capture the depth of a person’s expertise, the implications of their work experience, or the value that recommendations and experience bring to the table.
AI does provide efficiency gains and deeper candidate insights on some levels, but when seeking out skilled individuals, particularly at the C-Suite level in highly complex industries like financial services, it is no more than a tool. Against a backdrop of tightening regulatory environments and growing concerns about data protection, technological innovation must be balanced against ethical considerations in talent acquisition strategies.
Understanding the limitations of AI in recruitment
Executive appointments have a substantial impact on an organisation’s performance, which means the stakes for getting the balance between AI and human expertise right have never been higher. The consequences of a misstep go beyond regulatory penalties or privacy issues: it can damage a company’s reputation and erode trust, which in turn shapes how candidates and clients view the organisation and its approach to recruitment.
AI will only take the recruitment process so far, especially when working with talent at the C-Suite level. Experience counts for everything in the C-Suite, and a candidate’s track record at that level is not the only thing that speaks to their suitability for a role. AI cannot explore how a person performs as a leader, how they relate to their team, how they communicate during a crisis, or how they have forged relationships.
While AI tools claim to provide comprehensive insights, they fundamentally lack the nuanced understanding required for executive-level appointments. Such insights are built on a foundation of industry expertise, an understanding of roles and responsibilities – particularly when working with talent acquisition in highly complex industries like finance – and long-term relationships. Responsible recruitment requires contextual judgement and relationship building that algorithms cannot replicate or replace.
A good approach to C-Suite recruitment targets candidates from a number of angles: psychometric testing, background and experience assessment, skills, leadership qualifications and depth of experience. AI certainly helps from a productivity perspective, categorising and sorting CVs, but we bring the human connection to every part of the process.
Social media screening: Raising the spectre of privacy
Using AI to scan people’s profiles or social media accounts introduces numerous privacy concerns.
These questions are being asked across the industry right now. Many of the companies we work with are investing money and research into understanding these privacy issues, because they all recognise this as a global concern.
The growing practice of algorithmic social media screening is a rising red flag. Research has found that these sourcing algorithms introduce issues around fairness, systematic group differences, limited access to opportunities, and discriminatory hiring outcomes[3]. The same research, published in Wiley’s International Journal of Selection and Assessment, found that the more a sourcing algorithm’s bias favoured a specific group, the fewer candidates from minority groups were hired. It seems obvious, but this skew in outcomes has legal and ethical ramifications that have yet to be fully resolved.
The same applies to the harvesting of personal data from social platforms. Financial institutions, already operating in highly regulated environments, face particular scrutiny in this regard. Recruitment needs clear boundaries for technology use while still gathering meaningful insights to inform hiring decisions.
Professional reputation assessment, cultural fit determination, and leadership potential evaluation are where human judgment and experience deliver superior results. The most valuable recruitment approaches combine technological efficiency with human discernment. This is particularly true in financial services, where leadership appointments carry significant strategic implications and where cultural alignment is as important as technical capability.
[3] https://onlinelibrary.wiley.com/doi/10.1111/ijsa.12499
Phryne Williams is the founder and director of Capital Assignments.