Governments Put Measures in Place to Regulate AI Tools

Governments around the world are grappling with how to regulate the rapid advance of Artificial Intelligence (AI). The emergence of tools such as OpenAI’s Microsoft-backed ChatGPT has made it harder for lawmakers to agree on effective rules for the use of AI, and national and international governing bodies are now taking a range of steps to ensure these tools are used safely.

Australia is planning regulations that will require search engines to draft new codes to prevent the sharing of child sexual abuse material created by AI, and that will prohibit the production of deepfake versions of the same material.

In Britain, more than 25 countries signed the “Bletchley Declaration” at the first global AI Safety Summit, held at Bletchley Park. The declaration emphasizes the need for countries to work together and establish a common approach to AI oversight. Britain has pledged to triple its funding for the “AI Research Resource”, comprising two supercomputers that will support research into making advanced AI models safe. Additionally, Britain is setting up the world’s first AI safety institute to understand the capabilities of new models and explore risks ranging from social harms like bias and misinformation to the most extreme scenarios.

In China, temporary measures now require service providers to submit security assessments and receive clearance before releasing mass-market AI products. The country has also published proposed security requirements for firms offering services powered by generative AI, including a blacklist of sources that cannot be used to train AI models. Wu Zhaohui, China’s Vice Minister of Science and Technology, has said that Beijing is ready to increase collaboration on AI safety to help build an international “governance framework”.

The European Union (EU) has agreed on critical parts of new AI rules that will outline the types of systems designated “high risk”. The EU is inching closer to a broader agreement on the landmark AI Act, which is expected in December. European Commission President Ursula von der Leyen has called for a global panel to assess the risks and benefits of AI. 

France’s privacy watchdog is investigating complaints about ChatGPT.

Meanwhile, the Group of Seven (G7) countries have agreed on an 11-point code of conduct for firms developing advanced AI systems, which aims to promote safe, secure, and trustworthy AI worldwide.

Data protection authorities and governments around the world are increasingly concerned about potential breaches by AI-powered tools.

In Italy, the data protection authority is planning to review AI platforms and hire experts in the field, having temporarily banned ChatGPT in March.

In Japan, regulations are expected to be introduced by the end of 2023, and are likely to be closer to the U.S. approach than to the stricter rules planned in the EU. The country’s privacy watchdog has also warned OpenAI not to collect sensitive data without people’s permission.

Poland’s Personal Data Protection Office is investigating OpenAI over a complaint that ChatGPT breaks EU data protection laws, and Spain’s data protection agency has launched a preliminary investigation into potential data breaches by ChatGPT.

On an international level, U.N. Secretary-General António Guterres has created a 39-member advisory body, composed of tech company executives, government officials, and academics, to address issues in the international governance of AI. The U.N. Security Council held its first formal discussion on AI in July, addressing military and non-military applications of AI that “could have very serious consequences for global peace and security”.

In the U.S., the government is seeking input on AI regulations and has launched an AI safety institute to evaluate known and emerging risks of so-called “frontier” AI models. President Joe Biden has issued an executive order requiring developers of AI systems that pose risks to U.S. national security, the economy, public health, or safety to share the results of safety tests with the government. Congress has held hearings on AI, as well as an AI forum featuring Meta CEO Mark Zuckerberg and Tesla CEO Elon Musk, during which Musk called for a U.S. “referee” for AI. The Federal Trade Commission has opened an investigation into OpenAI on claims that it has run afoul of consumer protection laws.