
Navigating the AI Act

What Retailers Need to Know

In a historic decision, the European Parliament has given the green light to the world’s first comprehensive AI legislation – the AI Act. This landmark law aims to foster responsible and ethical AI development while addressing the risks associated with powerful models.
– Think of it as the GDPR for AI, with a strong emphasis on safeguarding human rights, safety, and ethical principles, says Caroline Thorén, Head of Retail, Nexer Group.

Overall, the goal is to protect citizens’ fundamental rights while promoting innovation and EU leadership in the field. Given the severity and scale of the risks involved — including hefty fines for noncompliance with the Act — decisions on the use of AI should be made at the highest level of the organisation.

The strictest provisions of the law will come into force in the autumn of 2024, followed by a gradual implementation to full compliance by 2026. Once the regulation is approved, no further national legislation is needed, and the new rules will apply across all EU countries.

Clear labels and informed decisions
Instead of treating all AI the same, the law differentiates its use based on potential harm. If an AI system poses a high risk, it has stricter rules and quicker implementation. If it’s less risky, the rules are more relaxed.

What, then, should retailers consider when it comes to the new legislation? We asked Arba Kokalari (M), a Swedish Member of the European Parliament who works on tech legislation and AI regulation.

– In general, it’s important to understand that if you’re using AI in your products, you need to be transparent, and inform customers and employees that you’re using AI. Also, make sure to conduct thorough risk assessments when it comes to the law’s stricter risk categories, Kokalari says.

Any AI-generated content will need clear labels, including ads, e-mails, and websites, to help customers make informed decisions.

– Retailers need to carefully consider their recommendations – from product suggestions to personalized offers – and be transparent about whether the suggestions were based on human expertise or AI algorithms, says Caroline Thorén.

Balancing risks and opportunities
The law sets requirements for AI systems based on their potential risks and impact. Key restrictions concern facial recognition, emotion recognition, profiling, and the manipulation of human vulnerabilities. While there are some exceptions for law enforcement authorities, clear limits apply even there.

Employment falls into the high-risk category, so employers need to be extra careful when using AI technology for hiring decisions. Strict rules will apply, covering how CVs are screened and how employee performance is evaluated.

– Overall, you need to consider how to use AI in your business, and really have a strategy going forward, says Caroline Thorén.

AI Risk Levels: From Unacceptable to Minimal
The law outlines four risk levels, with the highest level, “Unacceptable”, prohibiting certain AI applications entirely.

RISK LEVELS

Unacceptable Risk

This category refers to AI systems that pose a “threat to human security, livelihoods, and rights.” Such systems will be outright prohibited. For instance, social scoring AI systems, currently in use in China, fall into this category.

High Risk

AI systems used in critical areas like infrastructure, education, employment, migration, asylum, and border control fall under this category. These systems will be subject to “strict obligations” before being allowed in the market. Law enforcement AI also falls within this risk level.

Limited Risk

Examples of limited-risk AI include marketing, retail processes, and personalisation. Users interacting with chatbots must be made aware that they are interacting with a machine.

Minimal Risk

AI systems falling into this category can be freely used. Examples include AI-supported video games and spam filters.

Overall, the law balances fundamental rights protection with innovation and EU leadership in AI. To secure compliance, collaboration with technology providers will be essential. As AI becomes integral to industry practices, brands will need to team up with AI developers who focus on ethical data sourcing, transparent algorithms, and responsible practices – paving the way for broader discussions and best practices across the industry.

The EU Parliament believes that the AI Act will help guide responsible tech development. Businesses can use it as a framework for creating reliable AI tools, focusing on transparent communication about data usage and consumer consent. Although this law only applies to the EU, it might set a global standard for brands and retailers, similar to the GDPR.