Where do you see the legal risk ‘red lines’ for Scottish Employers using AI in HR right now?

For Scottish employers, the legal “red lines” for using AI in HR revolve around discrimination, data protection and the preservation of meaningful human involvement.

Regulators such as the Information Commissioner's Office (ICO) and the Equality and Human Rights Commission (EHRC) have warned that employers are underestimating AI's role in workplace decision-making.

AI should never have sole, final authority over significant employment decisions. There are four categories where fully automated AI decision-making is unsafe and likely unlawful without strict safeguards.

  1. Hiring, Firing, Promotion and Disciplinary Decisions

These are all high-impact decisions that affect the workplace and staff. Under UK GDPR, solely automated decision-making that produces legal or similarly significant effects is restricted unless specific conditions and safeguards are met.

If AI is effectively determining who gets hired, promoted, or dismissed, employers risk breaching data protection principles unless:

  • There is meaningful human review
  • Individuals can challenge decisions
  • The logic behind the decision is explainable.

Failure to meet these standards could expose employers to claims not only under data protection law but also under unfair dismissal and discrimination legislation.

  2. Discrimination-sensitive decisions (protected characteristics risks)

AI must not be relied upon to make decisions that could directly or indirectly discriminate against individuals with protected characteristics (e.g. sex, race, age, disability, religion or belief).

AI systems trained on historical HR data can replicate or even amplify existing biases. This creates a real risk of indirect discrimination, even where the employer did not intend it.

  3. “Black Box” performance management

AI cannot be the opaque judge of employee performance. Employees must be able to understand how decisions affecting them are reached and have the opportunity to challenge them.

A lack of transparency undermines procedural fairness, which is central to employment law, particularly in unfair dismissal claims. If an employer cannot explain how an AI system reached a conclusion, it will be difficult to defend that decision in a tribunal.

  4. Decisions involving sensitive HR judgment

AI should not determine outcomes where context, credibility or nuance matters. Employment tribunals place significant weight on the reasonableness of the process, not just the outcome. AI cannot (yet) replicate:

  • Credibility assessments
  • Workplace context
  • Proportionality judgments

The Evolving Regulatory Landscape

The regulatory AI landscape is shifting. The UK Government has moved away from purely voluntary principles towards a more interventionist approach, with a focus on regulating advanced AI models in the pipeline for 2026.

For example, the Data (Use and Access) Act 2025 has already amended the UK GDPR rules regarding automated decision‑making, increasing scrutiny on how employers use AI to screen applicants and monitor performance.

Despite Brexit, the EU AI Act (rules in place since early 2025) sets global standards that Scottish employers operating internationally, or using global software, must adhere to.

Practical Takeaway

AI can support HR decision making, but it must not replace human judgment. Employers should treat AI as an assistive tool and ensure robust governance, transparency, training and accountability at every stage.
