The European Parliament has overwhelmingly approved a landmark law regulating the use of artificial intelligence (AI) across the European Union. The law, dubbed the "AI Act," is the first comprehensive AI regulation of its kind in the world and is designed to ensure that AI is used in a safe, ethical, and responsible manner.
The AI Act covers a wide range of AI applications, including facial recognition, predictive analytics, and autonomous systems. It classifies AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.
Systems posing an unacceptable risk are banned outright.

High-risk systems include those used for medical diagnosis, autonomous driving, or critical infrastructure management. These systems must undergo rigorous testing and conformity assessment, and they must be deployed in a way that respects human rights and fundamental freedoms.

Limited-risk systems, such as customer service chatbots, are subject to transparency obligations: users must be informed that they are interacting with an AI system.

Minimal-risk systems, such as spam filters, fraud-detection tools, video games, music recommendations, and language translation, do not need to meet any additional requirements.
The AI Act also establishes a new European Artificial Intelligence Board, which will oversee the law's implementation and support its consistent application across member states.
The approval of the AI Act is a major milestone in the development of AI regulation. The law is expected to significantly shape how AI is developed and used in the EU, and it could serve as a model for other countries around the world.