The U.N. Security Council will hold a first-ever meeting on the potential threats of artificial intelligence to international peace and security, amid major risks posed by AI’s possible use, for example, in autonomous weapons or in the control of nuclear weapons.
UK Ambassador Barbara Woodward announced on Monday that the July 18 meeting will be the centerpiece of the UK’s presidency of the council this month.
Europe has led the world in efforts to regulate artificial intelligence, efforts that gained urgency with the rise of generative AI, which gives chatbots like ChatGPT the power to produce text, images, video, and audio that resemble human work.
The AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere.
The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Third, applications not explicitly banned or listed as high-risk are largely left unregulated.
The Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate on the first-ever rules for Artificial Intelligence with 84 votes in favor, 7 against, and 12 abstentions. In their amendments to the Commission’s proposal, MEPs aim to ensure that AI systems are overseen by people, and are safe, transparent, traceable, non-discriminatory, and environmentally friendly. They also want a uniform, technology-neutral definition of AI, so that it can apply to the AI systems of today and tomorrow.
A risk-based approach to AI – Prohibited AI practices
The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring (classifying people based on their social behavior, socioeconomic status, and personal characteristics).
MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorization;
- Biometric categorization systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location, or past criminal behavior);
- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).