November 4, 2024
On August 1, 2024, Regulation (EU) 2024/1689, commonly known as the “AI Act” (or “RIA” in French), entered into force in the European Union (EU). This groundbreaking text makes Europe a pioneer in regulating the use of Artificial Intelligence (AI).
For the first time, states are attempting to define a framework for the ethical, safe, and transparent development of Artificial Intelligence Systems (AIS).
Taking a similar approach to the GDPR, which governs personal data, the AI Act aims to guide how organizations, whether public or private, should design, use, and supervise AI technologies.
But what are the challenges of this regulation? What obligations does it introduce for companies? And how can you prepare for its implementation?
Rcarré provides you with answers to help you align your practices with the AI Act and understand its impact on your business.
Since 2022, Big Tech companies have been embroiled in a series of scandals related to training their AI models on data from European citizens and businesses.
Meta (Facebook, Instagram), Google, and X (formerly Twitter) are all under investigation or facing sanctions for using their users’ personal data without legal basis.
Another striking example is that of the American company Clearview AI, which specializes in facial recognition. The company collected more than 60 billion photos from the web without authorization, including those of European citizens, in order to create its biometric database.
To limit these abuses and respond to a major sovereignty issue, the European Union had to take action. Thus, on August 1, 2024, the AI Act came into force, with immediate application in the EU’s 27 member states.
Complementing the GDPR, the text extends Europe’s data protection framework to the new use cases created by the rise of AI models.
Let’s discuss your AI needs now!
The AI Act considers artificial intelligence systems (AIS) to be products that must meet safety, reliability, and compliance requirements before entering the European market.
The AI Act was designed on the premise that it is necessary to regulate not the technologies themselves, but rather their uses, according to four levels of risk: unacceptable, high, limited, and minimal.
Unacceptable risks:
Article 5 of the AI Act prohibits psychological manipulation, surveillance, or biometric categorization of individuals (mass collection of images without authorization, known as “harvesting,” social scoring, profiling). Except in strictly regulated cases, these uses of AI are considered contrary to the fundamental rights of European citizens.
High-risk uses:
Articles 6 to 27 of the regulation govern AI uses that present high risks.
The European Commission classifies as “high risk” AI systems that are integrated into certain products, or intended for sectors of activity that are already heavily regulated, where marketing is usually conditional on validation by third-party bodies.
Annex III supplements these articles by listing the areas in which the use of AI is considered high risk: biometrics, critical infrastructure, education and vocational training, employment and workforce management, access to essential private and public services, law enforcement, migration and border control, and the administration of justice and democratic processes.
Limited risks:
Article 50 defines uses presenting limited risks, mainly related to AI systems intended for the general public: virtual assistants, chatbots, and other generative tools.
Minimal risk level:
The Commission does not impose any specific obligations on AI systems considered to be very low risk. It specifies that most AI systems currently on the market fall into this category, giving the example of images generated for a video game or AI-enhanced spam filters. However, Article 95 of the regulation encourages the adoption of voluntary codes of conduct and best practices “developed by individual providers or deployers of AI systems.”
The case of general-purpose AI models (GPAI)
GPAI (general-purpose AI) models are versatile AI systems that may present a “systemic risk” according to the Commission’s criteria. The large language models (LLMs) of Mistral AI, Google, and OpenAI are examples.
Difficult to classify due to their multiple uses, these AI models constitute a separate category in the risk classification framework. These systems are subject to transparency requirements and assessments, supervised directly by the European AI Office.
Is your company affected by the AI Act? The answer is: yes, most likely.
The regulation defines the obligations of AI system providers: “a natural or legal person […] or any other body that develops or has developed an AI system […] and places it on the market.” (Article 3)
But it also defines the obligations of deployers, i.e., AI users: “a natural or legal person […] using an AI system under its own authority, except when that system is used in the context of a personal activity of a non-professional nature.” (Article 3, §4)
Your company will then have to comply with the requirements and controls associated with its use of AI (high, limited, or minimal risk).
The AI Act sets out requirements that are more or less restrictive, depending on the severity of the risk incurred by companies and organizations that use an AI system.
High-risk uses are subject to enhanced controls: compliance audits, cybersecurity measures, and the deployment of a risk management system throughout the AI system’s life cycle.
Providers of systems integrated into regulated products must obtain “CE” marking and be listed in the European Union’s database of high-risk AI systems.
With regard to data sovereignty and governance, complete traceability is expected: logs, data used, system version histories, and the reasoning that led to an AI decision.
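As an illustration, here is a minimal sketch of what such a traceability record could look like in practice. The function and field names are ours; the regulation does not prescribe any particular schema.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical structured logger for AI decision traceability.
logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_ai_decision(system_id: str, system_version: str,
                    input_ref: str, decision: str, rationale: str) -> None:
    """Record one AI decision with the traceability elements mentioned
    above: logs, data used, system version, and reasoning."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which AI system decided
        "system_version": system_version,  # version history traceability
        "input_ref": input_ref,            # reference to the data used
        "decision": decision,              # the outcome produced
        "rationale": rationale,            # reasoning behind the decision
    }
    logger.info(json.dumps(record))

# Example call:
log_ai_decision("credit-scoring", "2.4.1",
                "application#18-342", "declined",
                "debt-to-income ratio above model threshold")
```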
Finally, Article 14 acts as a safeguard and introduces the concept of human oversight: high-risk AI systems must be designed so that they “can be effectively overseen by natural persons during the period in which they are in use,” in particular through appropriate human-machine interfaces.
Limited-risk uses are mainly subject to transparency requirements: users must be informed that they are interacting with an AI and that the content generated for them is artificial.
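For a customer-facing chatbot, for example, this disclosure can be very simple. The sketch below is one possible implementation; the wording and function name are illustrative, not mandated by the text.

```python
# Illustrative disclosure notice; exact wording is not prescribed by the AI Act.
AI_DISCLOSURE = ("You are chatting with an automated AI assistant. "
                 "Its answers are generated artificially.")

def reply_with_disclosure(generated_answer: str, first_turn: bool) -> str:
    """Prepend the AI disclosure to the first reply of a conversation,
    so users know they are interacting with an AI system."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{generated_answer}"
    return generated_answer

print(reply_with_disclosure("Our offices are open 9am-5pm.", first_turn=True))
```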
Other uses of AI systems are not formally regulated. However, keep in mind that the GDPR (General Data Protection Regulation) still applies to the collection, processing, and storage of the personal data of your employees, customers, and partners.
In any case, documenting your uses and training your teams will help you align your practices with the requirements of control, transparency, and ethics promoted by the text.
To guide stakeholders in applying the regulation, in July 2025 the European Commission published an initial code of best practices for general-purpose AI, developed jointly by independent experts and more than 1,000 stakeholders (suppliers, microbusinesses and SMEs, academics, and civil society organizations).
In three chapters, entitled “Transparency,” “Copyright,” and “Safety and Security,” this guide seeks to help voluntary signatories comply with the provisions of the regulation.
Assess your compliance with the AI Act now!
For companies and actors that do not comply with the AI Act, administrative fines are provided for in Article 99 of the regulation. Here again, the response is graduated and adapted to the level of risk identified.
Violations of Article 5 of the European regulation (unacceptable uses of AI systems) can be punished with fines of up to €35 million or 7% of global annual turnover, whichever is higher.
Fines of up to €15 million or 3% of turnover, whichever is higher, are provided for breaches of the requirements related to high-risk uses.
The text specifies a fine of up to €7.5 million or 1% of turnover, whichever is higher, for companies guilty of providing false, incomplete, or misleading information to a supervisory authority.
In the case of SMEs and start-ups in breach, the lower of the two amounts will be applied, as illustrated in the sketch below.
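To make this graduated scale concrete, here is a minimal sketch of Article 99’s fine ceilings. The tier names, function name, and structure are our own illustration, not an official calculator.

```python
# Illustrative only: computes the maximum fine ceiling under Article 99.
# Each tier: (fixed cap in EUR, share of global annual turnover).
FINE_TIERS = {
    "prohibited_practice":    (35_000_000, 0.07),  # Article 5 violations
    "high_risk_breach":       (15_000_000, 0.03),  # high-risk requirement breaches
    "misleading_information": (7_500_000,  0.01),  # false info to authorities
}

def fine_ceiling(tier: str, global_turnover_eur: float, is_sme: bool) -> float:
    """Higher of the fixed cap and the turnover share in the general case;
    lower of the two for SMEs and start-ups."""
    fixed_cap, share = FINE_TIERS[tier]
    turnover_cap = global_turnover_eur * share
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A large company with EUR 2bn turnover using a prohibited practice:
print(fine_ceiling("prohibited_practice", 2_000_000_000, is_sme=False))
# -> 140000000.0 (EUR 140 million, since 7% of turnover exceeds the fixed cap)
```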
The legislation considers these penalties to be safeguards, not barriers to innovation.
Member States will therefore be able to offer “regulatory sandboxes” for innovative companies and SMEs. Under the supervision of a regulatory authority, the compliance of products and services under development can be tested in these sandboxes before they are placed on the market.
The AI Act officially entered into force on August 1, 2024, with immediate effect for all member countries of the European Union.
However, to give stakeholders time to prepare, the legislator has provided for gradual implementation in four stages: prohibitions on unacceptable-risk practices apply from February 2, 2025; obligations for general-purpose AI models from August 2, 2025; most remaining provisions from August 2, 2026; and rules for high-risk systems embedded in regulated products from August 2, 2027.
Strengthening your compliance with the AI Act begins with adhering to a few key best practices and requirements introduced by the text: risk management, AI control, transparency, and awareness.
Start by conducting an assessment of your current situation: analyze how your employees and customers currently use AI, as well as how it is used in your production tools. Which systems are already integrated into your business processes (HR, marketing, sales, or customer relations)?
Once you have mapped this out, you will be able to identify the risks associated with each AI system used and identify any discrepancies with the requirements of the AI Act.
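Even a simple machine-readable inventory can support this mapping and gap analysis. The sketch below, with hypothetical field names and example entries, shows one way to record each system and its presumed risk tier.

```python
from dataclasses import dataclass

# Hypothetical inventory structure for the mapping step described above.
@dataclass
class AISystemEntry:
    name: str
    business_process: str   # HR, marketing, sales, customer relations...
    provider: str
    risk_tier: str          # "unacceptable" | "high" | "limited" | "minimal"
    gaps: list[str]         # discrepancies with AI Act requirements

inventory = [
    AISystemEntry("CV screening tool", "HR", "VendorX", "high",
                  ["no human oversight procedure", "no decision logging"]),
    AISystemEntry("Website chatbot", "customer relations", "VendorY", "limited",
                  ["missing AI disclosure to users"]),
]

# List every high-risk system that still has compliance gaps:
for entry in inventory:
    if entry.risk_tier == "high" and entry.gaps:
        print(entry.name, "->", entry.gaps)
```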
Where possible, use sovereign and controlled solutions. Control of AI is a pillar of the regulation. To strengthen the confidentiality of your data, you can host your AI systems on controlled servers and infrastructure that comply with European cybersecurity requirements.
Raise awareness among your teams: several passages in the regulation mention the importance of awareness and training on AI issues.
In Article 4, companies are invited to take “measures to ensure, to their best extent, a sufficient level of AI literacy of their staff […] taking into account their technical knowledge, experience, education and training.”
Rely on experienced technology partners. Approach these changes with greater peace of mind by seeking support. At Rcarré, an IT service provider committed to data protection and cybersecurity, our offerings are designed to strengthen your compliance with European AI regulations.
The publication of the AI Act marks a turning point in the regulation of digital practices in Europe.
For businesses, compliance is not just a box to tick on a form: it means aligning your practices with new requirements and ensuring that your activities remain secure, sustainable, and compliant.
Rcarré supports you in this process by offering cybersecurity solutions, public and private cloud hosting, IT outsourcing, and consulting, including for regulated sectors (PSF).
Contact our experts today.