Smart tech, smarter people: Why emotional intelligence still matters for employers in the age of AI

As workplaces become more tech-driven, Artificial Intelligence (AI) is transforming how employers manage recruitment, performance, and decision-making.

Its appeal lies in speed, consistency, and the ability to process vast amounts of data, making it a powerful tool for streamlining operations.

But AI lacks emotional intelligence: the human ability to understand feelings, build trust, and lead with empathy. These qualities remain essential, especially when navigating sensitive workplace issues.

What’s more, with the age of AI comes a growing responsibility for employers to ensure that the data they are feeding into their automated systems is handled lawfully, transparently, and in line with data protection obligations.

So, how can employers harness AI effectively while preserving the human insight and data protection safeguards required for fairness, legal compliance, and a people-first culture?

Striking a balance

Platforms such as Microsoft Viva Insights, HireVue, and Workday are among the AI systems entering the market to reshape business operations.

They can automate repetitive tasks, analyse trends, screen CVs, and even handle routine HR queries. Tools like recruitment platforms, productivity trackers, and chatbots offer speed and consistency, helping employers stay competitive and focus on strategic goals.

However, AI has limitations. Algorithms can replicate bias if not carefully managed, and they lack the nuance of human judgment, especially in decisions that require empathy or context. This can result in unfair outcomes or missed opportunities to support individual needs.

Data protection also lies at the heart of AI use in workplaces. Many of the systems listed above rely on large volumes of employee data, sometimes including sensitive or inferred data. Without robust human oversight, employers risk breaching transparency obligations, misusing personal data, or relying on automated decision‑making in ways that contravene data protection law. Ensuring that data is collected lawfully, used proportionately, and subject to meaningful human oversight is essential.

That’s where emotional intelligence plays a vital role. Skills like empathy, self-awareness, and effective communication are irreplaceable in leadership, conflict resolution, and mental health support, not to mention the vital nature of human oversight when it comes to complying with data protection law. While AI can enhance efficiency, emotionally intelligent leadership is essential for navigating the human side of work, and most organisations still have people at their heart.

To build fair and legally sound workplaces, employers must integrate technological capability with emotional insight, leveraging AI without losing the human touch or compromising legal obligations.

Some of the potential legal implications of AI for employers

Whilst AI offers efficiency, it also introduces legal, ethical and data protection risks if not carefully and actively managed. Employers and businesses need to understand these risks and take advice from experts, and that is where the team here at BTO come in.

Discrimination

One major concern that should be at the forefront of employers’ minds is discrimination. Employers must be constantly aware of the protections under the Equality Act 2010 and the duties flowing from it.

If AI tools such as recruitment algorithms aren’t carefully audited, they may unintentionally favour or exclude certain groups of individuals.

Moreover, overreliance on AI (particularly for HR functions) could result in the need for reasonable adjustments for disabled employees being overlooked or missed. Understanding the individual needs of your employees in order to remove or reduce a disadvantage relating to their disability, and responding appropriately, requires emotional intelligence that AI simply cannot replicate.

Duty of Care

Does AI have the emotional awareness needed to meet an employer’s duty of care? Can it spot the signs of stress, burnout, or harassment? If not, opportunities for support or intervention may be missed.

Employers should ensure that any AI tools are risk assessed, that advice is taken at the implementation stage, and that the tools are complemented on an ongoing basis by regular in-person interaction, check-ins, and so on. People management is required and should be carried out by people.

Possible data protection infringement

Aside from potential employment claims, employers relying on AI processes may expose themselves to data protection risks.

Where AI is used in recruitment or HR processes without human input, this may constitute automated decision-making under UK GDPR. Privacy notices should already tell data subjects when automated decision-making is taking place, but more companies than ever are relying on it, so it is essential that employers and businesses make sure their privacy notices reflect this.

Organisations must continue to ensure transparency, fairness, and meaningful human oversight, as overreliance on AI carries a real risk of unlawful processing. Employers who assume that incoming rules give them broader freedom, without updating privacy notices, governance processes, and review mechanisms, could easily fall foul of UK GDPR.

Businesses will want to strengthen their data protection governance. Crucially, they should review privacy notices, ensure transparency about how AI uses employees’ data, and regularly assess whether automated decision-making complies with UK GDPR and upcoming legislative changes.

A future with AI

AI is here to stay, but it’s only part of the picture. Employers who combine smart tech with emotionally intelligent leadership are likely to build stronger, fairer, and more resilient workplaces.

This is, of course, an evolving area, and with its challenges come opportunities for all. Technology is to be embraced, but make sure your business is aware of the risks in embracing new tech and makes informed decisions.

Our expert Employment Law and Data Protection teams can support your business in navigating the opportunities and risks of using AI in the workplace.

We can assist with drafting and reviewing AI policies, managing discrimination and reasonable adjustment risks, and responding to grievances or tech‑related tribunal claims.

We also provide specialist data protection support for employers, including compliance checks, updates to privacy notices and governance frameworks, and guidance on automated decision‑making under UK GDPR and the changes to be introduced by the Data (Use and Access) Act 2025.

Whether you’re adopting new AI tools or refining your leadership approach, our teams are on hand to help you adopt AI confidently, lawfully, and responsibly. Get in touch with our expert Employment Law and Data Protection Teams today.

Caroline Carr, Chair and Partner & Accredited Specialist in Employment Law: cac@bto.co.uk / 0141 673 1015

Lynn Richmond, Partner & Accredited Specialist in IP: lyr@bto.co.uk / 0131 222 2934

Natalie Boal, Trainee Solicitor: nbo@bto.co.uk / 0141 221 8012
