Undoubtedly, the EU has become a world leader in regulating and managing the tech industry. We all remember the Facebook hearings about how the company handled people’s data and privacy and what Meta was using it for. The EU recently pushed Apple to replace its proprietary Lightning port with the industry-standard USB-C port now seen in the iPhone 15.
The EU is truly interested in holding the world’s largest corporations accountable, unlike other governments, especially the US. The EU knows the kind of power it has and is not afraid of putting its profound strength to good use. Now the EU is turning its attention to a controversial topic with significant global effects: AI.
Legislators in Brussels have now struck a “provisional agreement” on the European Union’s proposed Artificial Intelligence Act (AI Act) following a round of intensive negotiations this week. The EU’s AI Act is poised to become the world’s first comprehensive set of regulations governing artificial intelligence. Other regions aspiring to implement similar legislation may look to it as a model.
According to the press release, negotiators have imposed specific standards and requirements on “high-impact” general-purpose AI (GPAI) systems, such as risk assessments, adversarial testing, and incident reports.
These provisional rules also require these systems to be transparent, which includes producing technical documentation and “detailed summaries about the content used for training their models”; thus far, businesses like OpenAI, the company behind ChatGPT, have refused to comply. Citizens should be able to file complaints against AI systems and request information about judgements made by “high-risk” systems that impact their legal rights.
Implications for AI Developers and Organizations
While the press statement did not go into detail on how it would all work or which benchmarks would be used, it did mention a framework for fines if companies violated these guidelines. The fines vary depending on the offence and the size of the organisation, ranging from 35 million euros, or 7% of global revenue, down to 7.5 million euros, or 1.5% of global revenue.
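To make the numbers concrete, here is an illustrative sketch of how a revenue-linked penalty like this could be computed. The article only gives the range (35 million euros or 7% of global revenue, down to 7.5 million euros or 1.5%); the tier names and the “whichever amount is higher” rule below are assumptions for illustration, not text from the Act.

```python
# Hypothetical fine tiers based on the range quoted in the press release:
# (fixed amount in EUR, fraction of global revenue). Tier names are assumed.
FINE_TIERS = {
    "most_serious": (35_000_000, 0.07),
    "least_serious": (7_500_000, 0.015),
}

def estimated_fine(tier: str, global_revenue_eur: float) -> float:
    """Return the larger of the fixed amount and the revenue-based amount.

    "Whichever is higher" is an assumption here; the final rules may differ.
    """
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * global_revenue_eur)

# A large firm with 1 billion EUR global revenue: 7% dominates the fixed floor.
print(estimated_fine("most_serious", 1_000_000_000))   # 70000000.0
# A smaller firm with 100 million EUR revenue: the 7.5M floor applies.
print(estimated_fine("least_serious", 100_000_000))    # 7500000.0
```

The point of the percentage-plus-floor structure is that fines scale with company size: a fixed amount alone would be trivial for the largest firms, while a percentage alone would be toothless against small ones.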
The Act bans AI in various applications, such as scraping facial images from CCTV footage, categorising people based on “sensitive characteristics” like race, sexual orientation, religion, or political beliefs, recognising emotions at work or school, and creating “social scoring” systems.
The final two prohibited items are artificial intelligence (AI) systems that “exploit people’s vulnerabilities” or “manipulate human behaviour to circumvent their free will.” The guidelines also contain a list of protections and exceptions for the use of biometric systems by law enforcement, either in real-time or to look for evidence in recordings.
It is anticipated that an agreement will be finalised prior to the year’s conclusion. Even then, the law probably won’t take effect until 2025 at the earliest. While other nations have helplessly watched as large corporations rule themselves without oversight and make all the decisions, the EU has made a very positive move.
The goal of the EU’s AI Act, which was first proposed in 2021, is to standardise the regulations governing AI technology among EU member states and define precisely what constitutes AI. That draft, however, was made before the advent of rapidly evolving generative AI tools such as ChatGPT and Stable Diffusion, which led to multiple changes to the law.
The EU’s provisional AI rules are like a breath of fresh air in a world that seems increasingly short of oxygen as AI continues to disrupt industry after industry. These rules take preventive action against “AI” and its developers, who seem concerned only about their bottom lines and less about the real-world implications their technologies pose. It is scary to think of what the world could become if more laws like this aren’t implemented all over the world.
Last time, I spoke of how Amazon is bringing in robots to supplement its warehouse staff in handling packing tasks. I also indicated that while these robots may come in as support staff, it would only be a matter of time before they replaced the human staff entirely. While many of us see working in a warehouse as a part-time or short-term job, many people, especially immigrants, see it as a reliable, and sometimes their only, source of income.
So, while warehouse robots may lower shipping and handling costs, their real-world consequences greatly outweigh their prospective benefits, especially if AI models like Google’s Gemini, which many say is considerably superior to OpenAI’s GPT-4, begin to disrupt additional industries around the world. Then perhaps our jobs will be next. However, with more progressive legislation, such as the EU’s, the influence of AI can be regulated and used to benefit society.
Despite the current provisional agreement, further discussions and votes by the Parliament’s Internal Market and Civil Liberties committees will be required.
Negotiations surrounding the regulations for “general-purpose” foundation AI models such as OpenAI’s ChatGPT and real-time biometric monitoring, including facial recognition, have sparked heated debates. The press conference to announce the agreement was postponed since these were apparently still being discussed this week.
Members of the European Parliament want to ban the use of AI in biometric monitoring, even though governments have pushed for exceptions for the military, law enforcement, and national security. The delays are also said to have been caused by late suggestions from France, Germany, and Italy to permit creators of generative AI models to self-regulate. However, progress is still being made on that front.
Source: The Verge