Artificial intelligence (AI) is used almost everywhere in today’s technologically advanced world. It’s in everything from smartphones to autonomous vehicles.

Essentially, AI tools help to automate and speed up processes, increasing efficiency in the workplace and improving decision-making. Such is the technology’s breadth of scope that its possible applications are almost endless.

However, despite all the clear benefits it brings, there are still concerns about the risks it presents. The consequences of AI systems going wrong have been well documented, in many cases resulting in discrimination or bias.

That has led to growing calls for AI to be regulated, particularly in fintech, where the market for the technology is expected to reach $46.8 billion by 2030. But the biggest problem is regulating a technology with global reach when every country or region has its own set of rules to comply with.

The AI Act

Leading the way on regulation is the European Union (EU), whose AI Act was approved by the European Parliament this month. The landmark legislation, the first of its kind introduced by a major regulator, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability.

It also aims to address key ethical questions and implementation challenges across sectors ranging from healthcare and education to finance and energy. The law uses a simple classification system that ranks the risk an AI technology could pose to a person’s health, safety or fundamental rights as unacceptable, high, limited or minimal.
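To make that tiering concrete, here is a minimal sketch of how a compliance team might record such a classification internally. The example systems and the tiers assigned to them are hypothetical illustrations, not classifications drawn from the Act’s annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted under strict obligations
    LIMITED = "limited"            # transparency duties apply
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical inventory of a fintech firm's AI systems; the tier
# assignments are illustrative guesses, not legal classifications.
ai_inventory = {
    "credit_scoring_model": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.value} risk")
```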

The new rules will also apply to organisations outside the EU that provide AI systems to entities in the Union or use systems whose output is used within the EU. They carry fines of between €10 million and €30 million or two to six percent of a firm’s global annual turnover, whichever is higher.
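As a rough illustration of how that “whichever is higher” structure plays out, the sketch below computes the maximum exposure for a hypothetical firm with €2 billion in global annual turnover, using the top-tier figures quoted above.

```python
# Illustrative sketch of the "whichever is higher" penalty rule quoted
# above. The tier figures mirror the article; the turnover is invented.

def max_fine(turnover_eur: float, fixed_fine_eur: float, turnover_pct: float) -> float:
    """Return the larger of the fixed fine and the turnover-based fine."""
    return max(fixed_fine_eur, turnover_pct * turnover_eur)

# Most severe tier: €30 million or 6% of global annual turnover.
exposure = max_fine(
    turnover_eur=2_000_000_000,   # hypothetical €2bn global annual turnover
    fixed_fine_eur=30_000_000,    # €30m fixed ceiling for the top tier
    turnover_pct=0.06,            # 6% of turnover for the top tier
)
print(f"Maximum exposure: EUR {exposure:,.0f}")  # EUR 120,000,000: the 6% arm wins
```

For a firm of that size, the turnover-based arm dwarfs the fixed ceiling, which is why larger players are paying close attention.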

Impact on fintech

The main impact of the AI Act on fintech will be on how firms use, provide, import or distribute software for biometric identification, human capital management or credit assessment of individuals. It will also prohibit software that exploits subliminal techniques or vulnerabilities due to age or disability and oblige software providers and users to be transparent.

The danger is that the new law will push many startups out of the EU to places such as the US. Reflecting this concern, 50% of AI startups surveyed said the legislation will slow down innovation in Europe, and 16% said they are even considering halting development of the technology or relocating outside the EU.

One of the biggest challenges of complying with the new legislation is determining which software is classed as an AI system and which entities within a group are subject to the obligations, particularly if they operate in multiple countries. At the same time, global firms will also have to contend with country-specific regulations.

Similar moves to introduce new regulations are afoot in other parts of the world. In September 2021, Brazil’s Chamber of Deputies passed a bill creating a legal framework for AI, though it still needs to pass the Senate, while the UK Government has launched a consultation on establishing a regulatory regime for the technology.

With the AI Act’s adoption mooted for June, it’s only a matter of time before AI regulation becomes widespread around the world. That’s why companies need to get up to speed with the latest rules and make sure they remain in compliance with them.