Regulation or Stifling Innovation? Why We Should Regulate Artificial Intelligence Systems

In 2024, the world’s first comprehensive Artificial Intelligence (AI) law was passed by the European Union (EU). The AI Act takes a risk-based approach to regulating how AI systems are used in the EU, and its provisions are expected to become fully applicable within 24 (and in some cases, 36) months.

While the use of AI has tremendous benefits in healthcare, disaster preparedness, agriculture, finance, and many other industries, it also poses serious risks if it is not monitored or used correctly. Discrimination and privacy breaches are among the most serious risks associated with AI systems. Hence, the rise of AI has led to an ongoing race for regulation around the world. The US created the Blueprint for an AI Bill of Rights and issued an Executive Order on safe, secure, and trustworthy AI; China created the Interim Measures for Generative AI; the UK has taken a pro-innovation approach to regulating AI; and global organizations such as the OECD and UNESCO have established principles and recommendations for trustworthy AI that countries can adhere to. Although many countries are taking steps to control the risks associated with AI, the EU is the first to pass legislation as comprehensive as the AI Act.

While some people may think that AI regulation is just another attack on innovation by governments, I beg to differ. There are valid reasons for wanting to control the use of AI and establish suitable consequences for its misuse, just as there are consequences for other crimes committed with or without technology. Let’s consider a few scenarios. Imagine someone recording footage of you saying positive things, then publishing a very convincing altered version in which your speech is changed to something strictly derogatory. Imagine that any random person could take your picture on the street, upload it into a facial recognition system, and find out personal information about you, including your last known address, in real time. Imagine that you’re applying for a job and your potential employer uses an AI system to assess the risk of you committing a crime based on data collected from your profile, and that assessment determines whether you are hired. Technology has become so advanced that none of these scenarios is far-fetched or futuristic; they are already possible with systems that exist today. In many cases, these tools were not created to commit crimes, and some can even help solve them. For example, real-time facial recognition systems can help law enforcement identify criminal suspects. This is why I think the EU’s risk-based approach to AI governance is a smart way to regulate AI usage.

The EU’s AI Act defines four risk levels for AI systems:

  1. Unacceptable risk - Systems that explicitly violate the EU’s fundamental rights and values are prohibited. No one may use an AI system in this category in the EU unless a narrow exception applies, such as special permission granted to law enforcement. Real-time facial recognition in public spaces and biometric categorization systems are examples.

  2. High risk - Systems that have an impact on health, safety, or fundamental rights can be used, but are considered high risk. The EU has defined a filter provision test to properly classify high-risk systems, and products already covered by EU safety legislation, such as toys and cars, can also fall into this category when they incorporate AI. It will be mandatory to register all high-risk AI systems used in sectors such as education, employment, law enforcement, migration, and more in an EU database.

  3. Transparency risk - Systems that pose risks of impersonation, manipulation, or deception can be used, but are subject to transparency obligations. These include deepfakes, chatbots, and generative AI content. The AI Act requires individuals and businesses to disclose that content was generated by AI, design their models to prevent them from generating illegal content, and publish summaries of the copyrighted data used for training. Recently, we have seen social media platforms like Instagram prompt us to specify whether the content we are posting was generated by AI. Disclosure of this kind may well become the global standard as we combat the spread of misinformation and unethical uses of AI.

  4. Minimal risk - Common AI systems such as spam filters and recommendation engines are low risk and can be more helpful than harmful. Spam filters remove harmful email from your inbox to protect you from phishing, and similar systems flag likely scam calls so you don’t answer them. When you’re looking at a product on Amazon and see a section displaying what other people typically buy with that product, or when Amazon suggests products based on your previous shopping history, AI enables that behavior too (a minimal sketch of the co-purchase idea follows this list). Such systems are low risk and will not be subject to any further regulation in the EU.
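To make the “frequently bought together” idea concrete, here is a minimal sketch in Python: count how often pairs of items appear in the same order, then suggest the items most often co-purchased with the one you’re viewing. The order data, item names, and the `frequently_bought_together` helper are all made up for illustration; a production recommender like Amazon’s is far more sophisticated than this.

```python
from collections import Counter, defaultdict

# Hypothetical purchase histories; each inner list is one customer's order.
orders = [
    ["laptop", "mouse", "laptop sleeve"],
    ["laptop", "mouse"],
    ["phone", "phone case", "screen protector"],
    ["laptop", "laptop sleeve"],
    ["phone", "phone case"],
]

# Count how often each pair of items appears together in the same order.
co_counts = defaultdict(Counter)
for order in orders:
    for item in order:
        for other in order:
            if other != item:
                co_counts[item][other] += 1

def frequently_bought_together(item, k=3):
    """Return up to k items most often purchased alongside `item`."""
    return [name for name, _ in co_counts[item].most_common(k)]

print(frequently_bought_together("laptop"))  # ['mouse', 'laptop sleeve']
```

Simple co-occurrence counting like this is the intuition behind item-to-item recommendations: no personal profiling is needed, which is part of why such systems sit at the minimal-risk end of the spectrum.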

Many people around the world are fascinated by what AI can enable them to do, and are strong advocates for integrating AI systems into every aspect of life. Personally, I have built recommendation engines, I have used AI systems, and I am pro-innovation. However, I also believe that our technology tools should ultimately be used to improve the human condition. Sometimes (or more often than not), technology is used in ways it was never intended to be used, so we must do our due diligence to ensure that it does not cause more harm than good. The best way to do that is to carefully assess the impacts, determine the potential risks, and create clear, airtight legislation that protects everyone. I’m excited to see what the rest of the world will implement as we find a balance between advancing technology innovation and protecting our human rights.

Sources:

https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf

https://www.whitehouse.gov/ostp/ai-bill-of-rights/

https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

https://www.linkedin.com/pulse/chinas-generative-ai-rules-anders-c-johansson-floyc/

https://researchbriefings.files.parliament.uk/documents/POST-PN-0708/POST-PN-0708.pdf

https://oecd.ai/en/ai-principles

https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

