The Rise of AI in Legal Practice

The term “Artificial Intelligence” (“AI”) refers to advanced computer systems that perform tasks which would otherwise require human intelligence, such as problem-solving, pattern recognition, and decision-making. In the legal sector, AI encompasses natural language processing, machine learning, and automation tools that analyse vast amounts of legal data, assist with research, and streamline drafting. While AI lacks independent reasoning or judgement, it processes information at unparalleled speed, enhancing efficiency in legal practice.

AI is no longer a futuristic concept. Its rapid advancement is reshaping legal practice in Scotland, with law firms leveraging it to improve operations and performance. However, responsible integration is necessary to address ethical and professional risks and to maintain trust and compliance within the legal profession.

In recent years, there has been much debate surrounding the role of AI in contract law. Because AI can interact with its environment and make autonomous decisions, it is possible for AI to negotiate and agree contracts on behalf of principals. Does this make the AI an agent of the principal, or merely a tool the principal uses to conclude contracts? The ruling of the Singapore courts in Quoine Pte Ltd v B2C2 Ltd has raised interesting questions which are likely to be considered again in the near future.

Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 02

The facts

Quoine operated a cryptocurrency trading platform that allowed traders to place limit orders: instructions to buy or sell Bitcoin at a specified price or better. A limit sell order commits the trader to sell at or above a minimum price they set, while a limit buy order commits the buyer to purchase at or below a maximum price. The platform also supported automated trading, relying on its order-matching system to pair buy and sell orders against those criteria without direct human involvement. Even so, human oversight was still required to ensure the system functioned correctly and to manage trading activity.
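
To make that mechanism concrete, the sketch below illustrates the kind of deterministic, rules-based matching described above. It is a minimal illustration only, assuming a single-sided book of limit buy orders; the names and structure are invented for this article and bear no relation to Quoine’s actual engine.

```python
# A minimal, hypothetical sketch of deterministic order matching.
# Everything here is illustrative; it is not Quoine's engine.

from dataclasses import dataclass
from typing import Optional


@dataclass
class LimitBuy:
    trader: str
    max_price: float  # the most this buyer will pay per Bitcoin


def match_market_sell(order_book: list[LimitBuy]) -> Optional[LimitBuy]:
    """Match a market sell order against the best available limit buy.

    A market sell accepts the best price currently on the book, whatever
    it is. The rule is applied mechanically: nothing here checks whether
    the matched price bears any relation to market value.
    """
    if not order_book:
        return None  # no buyers on the book: the order cannot execute
    return max(order_book, key=lambda order: order.max_price)
```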

B2C2’s algorithm placed buy orders at an extremely low price because it was programmed to buy Bitcoin at any available price, subject to market conditions. The dispute arose when a glitch in Quoine’s platform removed all reasonably priced buy orders, leaving only B2C2’s low-priced orders available for matching. Meanwhile, some traders had placed market sell orders, agreeing to sell Bitcoin at the best available price rather than setting a minimum. The platform’s automated system therefore matched those market sell orders with B2C2’s extremely low buy orders, and the Bitcoin was sold at roughly one 250th of its market value.
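
Running the glitch scenario through the sketch above, with invented prices, shows how mechanically the mismatch occurs:

```python
# Illustrative numbers only. Suppose Bitcoin trades at around 250, and the
# glitch strips every reasonably priced buy order from the book, leaving
# only a deep, low-priced order like B2C2's.
order_book = [LimitBuy(trader="B2C2", max_price=1.0)]  # ~250x below market

fill = match_market_sell(order_book)
print(fill)  # LimitBuy(trader='B2C2', max_price=1.0)
# The engine fires exactly as programmed: the market sell is filled at 1.0,
# a price no rational trader intended, with no sanity check in between.
```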

After the trades were completed, Quoine attempted to reverse the transactions, claiming they were based on a “mistake” and that no rational trader would have agreed to such prices. B2C2 sued, arguing that the trades were valid and should not have been cancelled, as they were executed according to the platform’s rules.

The decision

At first instance, the Singapore International Commercial Court ruled in favour of B2C2, determining that Quoine had no right to cancel the trades after they had been executed. The court found that the trades were legally binding because B2C2 had acted within the platform’s rules, which permitted automated trading and the automated conclusion of contracts. It held that Quoine was responsible for the malfunction and could not undo the trades once executed, a conclusion the Court of Appeal substantially upheld in the decision cited above.

Though a Singapore decision, the case offers food for thought for lawyers elsewhere.

What if AI had been involved?

It is important to note that this case did not directly involve AI. The platform’s order-matching system executed trades according to predefined, deterministic rules; it did not learn from data or adapt its behaviour to market conditions, which is the hallmark of AI and machine learning. The case nonetheless raises the question: if AI were involved in the conclusion of contracts, what would the implications be?
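
The distinction can be made concrete with a deliberately simplified contrast. The “adaptive” bidder below is a toy stand-in, not a real AI model; the point is only that its output depends on accumulated internal state, whereas the fixed rule’s does not.

```python
# A toy contrast between a fixed rule and an adaptive policy. The
# "learning" here is a crude invented stand-in for real AI behaviour.

def rule_based_bid(market_price: float) -> float:
    # Deterministic: the same input always yields the same bid, so the
    # system's behaviour can be predicted (and audited) from its rules.
    return market_price * 0.99


class AdaptiveBidder:
    def __init__(self) -> None:
        self.aggressiveness = 1.0  # adjusted from past trading outcomes

    def bid(self, market_price: float) -> float:
        # State-dependent: two runs seeing the same price can bid
        # differently, which is what makes behaviour hard to trace.
        return market_price * 0.99 * self.aggressiveness

    def update(self, profit: float) -> None:
        # Bid more aggressively after profits, less after losses.
        self.aggressiveness *= 1.01 if profit > 0 else 0.99
```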

AI systems can learn and adapt autonomously, making their behaviour harder to predict and trace. Had B2C2’s trading algorithm been AI-powered, the outcome could have been far more complex: the system might have actively exploited the glitch, executed trades at distorted prices, or acted in ways its operators never anticipated. In contrast to simple automated systems, AI’s capacity to exploit glitches or act on faulty data could have caused even greater financial harm and harder legal questions.

Who is really in control?

The court ruled that the programs in Quoine were deterministic and did not have a mind of their own; because the systems in question were simple, rules-based automation rather than AI, they fell to be treated as mere machines, not legal agents. That allowed the court to focus on human accountability. As a result, the legal status of AI-negotiated contracts remains unresolved, as do the questions of whether an AI system itself could be held responsible for the trades it concludes, or whether its creators and operators bear responsibility for its decisions. Courts are likely to have to confront these questions in the near future.

Although the courts have not yet determined this, there has been much academic debate on the subject. Some scholars argue that AI-negotiated contracts are unenforceable within existing contract law doctrines, and that we must turn to principles of agency law to enforce them. The reasoning is that a legally binding contract requires consent to its specific terms; without human intervention, there is no such consent. To remedy this, the AI would need to be treated as a legal agent capable of consenting on behalf of the principal. The difficulty, of course, is that AI does not have legal personality: it is not recognised as an autonomous entity in its own right, at least in the UK.

Other scholars disagree that agency law must be engaged to remedy the consent problem in AI-negotiated contracts. On this view, such contracts are enforceable where the person operating an AI program makes an open offer to contract on whatever terms the program agrees. This school of thought also criticises the agency approach on the ground that it would allow the human responsible for programming or deploying the AI to escape liability for any issues that arise.

If AI-powered programs were viewed merely as tools for populating specific elements of offers made by contracting parties, rather than as legal agents, courts would not need to examine the system’s inner workings to assess whether it is sophisticated enough to possess independent intent. This perspective suggests that treating AI as a tool rather than an agent simplifies contractual analysis and ensures accountability remains with the human operator.

Conclusion

The growing use of AI in contract negotiation raises critical legal questions about consent, liability, and enforcement. While the Quoine case dealt with automated systems rather than AI, it underscored the need for clear accountability in technology-driven contracts.

If AI were to autonomously negotiate and execute agreements, courts would need to determine whether it can provide valid consent or if legal responsibility should always rest with human operators. Treating AI as a legal agent would require a fundamental shift in contract and agency law, potentially allowing AI to act on behalf of a principal. On the other hand, viewing AI as a sophisticated tool ensures that liability remains with those who design, deploy, and use it. As AI becomes more advanced and autonomous, courts and lawmakers must clarify its legal status to balance innovation with accountability in contract law.
