Artificial Intelligence (AI) now plays a far more significant role in customer service than most would have assumed two years ago. Companies increasingly rely on intelligent systems to work more efficiently, serve customers better, and offer innovative solutions. By deploying voicebots and chatbots, automated speech recognition, and personalized recommendation algorithms, companies can provide fast, tailored service around the clock. AI also improves the customer experience by continuously analyzing large volumes of data, identifying trends, and optimizing offers and services. Alongside these opportunities, however, the use of AI carries risks and raises many questions.
Legislative package as product liability law
Advancing digitalization has changed not only the interaction between companies and customers but also the role AI plays in it. With the EU AI Regulation, this area now receives even more attention. The AI Regulation is a legislative package the European Union introduced in response to the rapid development and deployment of AI across industries and areas of life. Its goal is to establish clear guidelines for the use of AI that promote innovation while protecting the rights and interests of citizens. The Regulation is meant to foster trustworthy AI and address risks by setting clear requirements and obligations for developers, providers, and users of artificial intelligence.
Because the AI Regulation chiefly governs the application and operation of AI systems rather than their research and development, it is in essence a product liability law. Together with the revised Product Liability Directive and the AI Liability Directive, it obliges companies to meet compliance requirements diligently, especially when deploying high-risk AI systems.
Key points and risk categories
The Regulation classifies AI systems and their development into risk categories relating to the protection of fundamental rights and human dignity, ranging from minimal to unacceptable. Regulation and requirements vary with the level of risk. Certain applications deemed particularly risky or harmful are prohibited outright. In addition, there are so-called high-risk AI systems, which are not banned but are subject to strict requirements for transparency, traceability, and monitoring.
Key points of the AI Regulation include transparency requirements, risk assessments, liability rules, and ethical principles. Companies using AI in customer service should therefore familiarize themselves with the provisions of the AI Regulation sooner rather than later and ensure that their systems meet the requirements. All companies using AI in their business and decision-making processes are affected by the EU AI Act. Because consumers interact directly with AI in customer service processes, these areas will be under particular scrutiny. The goal of the AI Regulation is to promote a balanced and responsible use of AI, one that strengthens the innovation and competitiveness of EU member states while protecting fundamental rights and values.
EU AI Act: Explanation, objectives, key provisions
The AI Regulation originated as a legislative proposal by the European Commission and establishes clear rules and obligations for developers, providers, and users of AI within the EU. It addresses specific risks, prohibits unacceptable practices, and regulates high-risk applications. So-called high-risk AI systems encompass technologies used in critical areas such as critical infrastructure, education, product safety, employment, essential services, law enforcement, and the administration of justice and democratic processes. Before a high-risk AI system may be placed on the market, strict requirements must be met, including adequate risk assessment and mitigation, high-quality datasets, human oversight, and a high level of robustness, safety, and accuracy.
Implementation and compliance timeline
On March 13, 2024, after a legislative process of roughly three years, the European Parliament adopted the AI Regulation by a large majority. A general transitional period of two years from the Regulation's entry into force is provided, but individual provisions take effect in a staggered manner: the ban on AI systems with unacceptable risks applies six months after entry into force, i.e., well within the transitional period, and the rules and obligations for General Purpose AI (GPAI) after twelve months.
Categorization of AI systems
A central aspect of the AI Regulation is the categorization of AI systems according to the risk they pose to fundamental rights and human dignity. The legal framework establishes a total of four risk levels (a brief sketch follows the list below):
Unacceptable Risks
High Risks
Limited Risks
Minimal or No Risk
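For developers, the tiering can be pictured as a simple classification, sketched here in Python. The example systems and their assignments are illustrative assumptions only, not a legal classification, which always depends on the concrete use case.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels defined by the AI Regulation."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted, but strictly regulated
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no additional obligations

# Illustrative mapping of systems mentioned in this article;
# the real classification requires a case-by-case legal assessment.
EXAMPLES = {
    "social_scoring_system": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value}")
```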
Unacceptable Risks: AI systems in this category are prohibited outright. Existing systems of this kind must be withdrawn from the EU market. This applies to all AI systems considered a threat to the safety, livelihood, and rights of individuals.
This category includes AI systems that use subliminal or manipulative techniques to significantly influence people's behavior and impair their ability to make informed decisions, causing significant harm, as well as systems that exploit the vulnerabilities of certain groups (based on age, disability, or social or economic situation) to influence their behavior to their detriment. Also covered are biometric categorization systems that classify individuals on the basis of sensitive characteristics such as race, political views, or sexual orientation, and AI systems that evaluate or classify individuals according to their social behavior or personal traits where this leads to unjustified discrimination (social scoring systems). Biometric remote identification systems are likewise prohibited except in narrowly defined cases, such as searching for missing children or combating terrorism; their use requires authorization from independent authorities and is subject to clear limitations.
Also prohibited are AI systems that assess a person's risk of committing crimes solely on the basis of personality profiles, AI systems that create or expand facial recognition databases by indiscriminately scraping facial images from the internet or video surveillance footage, and AI systems that infer people's emotions in workplace and educational settings, unless this is done for medical or safety reasons.
High Risks: High-risk AI systems are those deployed in designated areas such as critical infrastructure, education, human resources, or law enforcement, provided certain criteria are met. They are not prohibited, but their use demands strict compliance with the rules. This applies not only to companies that develop these systems but also to those that distribute or use them, including companies that make such AI systems available to their employees to support their work.
High-risk AI systems must meet comprehensive requirements before they can enter the market. This includes adequate risk assessment and mitigation procedures, high-quality datasets, activity logging, detailed documentation, clear usage instructions, human oversight, as well as high robustness, safety, and accuracy. Comprehensive documentation containing relevant information about the system and its purpose is required to facilitate authorities' assessment of compliance.
Clear and understandable information must also be provided to users. Above all, adequate human oversight is crucial to minimize risk. Only when these criteria are met can a high-risk AI system be responsibly introduced to the market.
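What activity logging combined with human oversight could look like in practice is sketched below. The record fields (system_id, reviewed_by_human, and so on) are our own assumptions, since the Regulation defines objectives rather than a concrete log format.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal audit-trail sketch for a high-risk AI system; field names
# are assumptions for illustration, not prescribed by the AI Regulation.
audit_logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_ai_decision(system_id: str, input_summary: str,
                    output_summary: str, reviewed_by_human: bool) -> None:
    """Record one automated decision so it can be traced later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,    # summarize; avoid raw personal data
        "output_summary": output_summary,
        "reviewed_by_human": reviewed_by_human,  # the human-oversight flag
    }
    audit_logger.info(json.dumps(record))

# Hypothetical example: a decision that a human agent signed off on.
log_ai_decision("credit-scoring-v2", "application features",
                "score: 0.82, recommendation: approve", reviewed_by_human=True)
```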
Limited Risks: Limited risk concerns the potential dangers of opaque AI use. The AI Regulation sets clear transparency standards to ensure that people are always informed and can build trust. For example, users interacting with AI systems such as chatbots must be notified that they are communicating with a machine, so they can make an informed decision about whether to continue the dialogue or withdraw.
Providers are also obliged to ensure that AI-generated content is identifiable as such. In particular, published AI-generated texts must be clearly labeled as "artificially generated"; the same applies to audio, image, and video content.
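In customer service, both obligations are straightforward to implement. The following sketch shows one possible approach; the wording and function names (open_chat_session, label_generated_text) are freely chosen for illustration.

```python
def open_chat_session() -> str:
    """Disclose at the start of the dialogue that the user is talking to a machine."""
    return ("Hello! I am an automated assistant (AI). "
            "Type 'agent' at any time to reach a human colleague.")

def label_generated_text(text: str) -> str:
    """Attach the label required for published AI-generated texts."""
    return f"{text}\n\n[This text was artificially generated.]"

print(open_chat_session())
print(label_generated_text("Our new tariff includes unlimited data volume."))
```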
Minimal or No Risk: The AI Regulation allows the unrestricted use of AI posing minimal or no risk, such as AI-driven video games or spam filters.
The good news: the vast majority of AI systems currently used in the EU likely fall into the "Minimal or No Risk" category, and this holds for customer service as well. As far as is known today, no AI systems used in customer communication fall into the category of "unacceptable risks."