The introduction of the AI Act has triggered debate over its legal status and social benefits. While Silicon Valley has criticized the AI Act as a costly burden, the EU disagrees, maintaining that the legislation applies only to the riskiest AI applications. By the Commission's own estimate, it will cover just five to fifteen percent of AI applications. It remains to be seen how the Act will affect the AI industry and the benefits and risks it offers.
The draft AI Act incorporates a risk-based approach, often described as a pyramid of criticality. AI applications that present minimal or no risk are subject to a lighter legal regime; as the risk increases, so does the stringency of the regulation. Self-regulatory soft-law instruments, such as impact assessments and codes of conduct, may supplement these rules, and companies may also be subject to external audits to verify compliance with the AI rules.
While the European Commission's AI Act is intended to protect people, the legislation also regulates technology meant to benefit the public. The EU Artificial Intelligence Act sets high ethical and technical standards for AI systems. Its goal is to align AI with European values and make Europe a leader in "trustworthy" artificial intelligence. The legislation proposes to sort AI applications into four categories, depending on their risk: unacceptable, high, limited, and minimal.
The Act also stipulates that high-risk AI systems undergo a prior conformity assessment and comply with the essential requirements set out in Title III, Chapter 2 of the Act, which cover data governance, human oversight, robustness, and transparency. Once approved, these systems can be imported and distributed throughout the EU.
The EU's AI Act fails to account for the emergence of new, far more powerful forms of artificial intelligence, which means its rules risk becoming outdated as the technology advances. Many of the world's biggest tech companies are already training their models on colossal datasets, and those models are increasingly capable of completing a broad range of tasks, including ones that involve human judgment.
The AIA bans social scoring systems, a controversial practice most closely associated with China. China's social credit system has not yet reached the level of automation depicted in the television series Black Mirror, and it remains far from the fully automated apparatus that many fear. The AIA also states that AI must protect fundamental human rights, including privacy, and must not be used for social credit assessment.
The European Commission's AI Act aims to address the risks of AI systems while encouraging innovation in the field. Kenneth Propp and Mark MacCarthy have called the proposal comprehensive and thoughtful, arguing that the new law could serve as a basis for trans-Atlantic cooperation. The Act is also meant to safeguard the social benefits that AI can deliver. Whether it will ultimately succeed in achieving these goals is not yet clear.