Here’s how the EU plans to target advanced AI models like ChatGPT with new regulations – Firstpost

The European Union has reached a groundbreaking agreement on how to regulate AI, targeting advanced models such as ChatGPT. The rules include restrictions, transparency requirements, and additional obligations for developers of models that pose systemic risk
In a significant step toward establishing the world’s first comprehensive artificial intelligence (AI) regulation, the European Union (EU) has reached a provisional agreement that imposes restrictions on the operations of cutting-edge AI models, including the highly advanced ChatGPT.
As outlined in a document obtained by Bloomberg, developers of general-purpose AI systems, characterized by their versatility across various applications, are mandated to adhere to fundamental transparency criteria. Notably, these requirements do not apply to models offered as free and open-source software.
The stipulations include the implementation of an acceptable-use policy, the maintenance of updated information on model training methodologies, the submission of a comprehensive data summary used in training, and the establishment of a policy to uphold copyright laws.
The regulations escalate for models identified as presenting a “systemic risk,” a determination based on the computing power used during training. Models trained with more than 10 septillion (10^25) floating-point operations fall into this category, with experts noting OpenAI’s GPT-4 as the sole model automatically meeting this criterion. The EU’s executive arm reserves the authority to designate other models based on factors such as dataset size, the number of business users in the EU, and end-user registrations.
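For a rough sense of how that threshold works, training compute for a dense transformer is commonly approximated as 6 × parameters × training tokens. The sketch below uses that rule of thumb to check an illustrative model against the 10^25-FLOP line; the heuristic and the example figures are assumptions for illustration, not part of the regulation.

```python
# Sketch: does a model's estimated training compute cross the AI Act's
# "systemic risk" threshold of 1e25 floating-point operations?
# The 6 * parameters * tokens estimate is a common rule of thumb for
# dense transformer training compute, not a figure from the regulation.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * parameters * training_tokens

def is_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical numbers: a 1-trillion-parameter model trained on 10 trillion
# tokens lands at 6e25 FLOPs, above the threshold.
print(is_systemic_risk(1e12, 10e12))  # True
```

Under this heuristic, only a handful of frontier-scale training runs clear the bar, which is consistent with the reporting that GPT-4 is the lone model automatically captured.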
Highly capable models must commit to a code of conduct while the European Commission refines more cohesive and enduring controls. Failure to sign onto the code compels models to demonstrate compliance with the AI Act to the commission. Notably, the exemption for open-source models does not apply to those posing systemic risks.
Additional obligations for these models include disclosing energy consumption, conducting internal or external adversarial testing, assessing and mitigating systemic risks, reporting incidents, implementing cybersecurity controls, divulging information used in model fine-tuning, and adhering to energy-efficiency standards if such standards are developed.
Despite the tentative agreement, final approval is still pending from the European Parliament and the EU’s 27 member states. Reservations previously voiced by France and Germany, that overly strict rules for general-purpose AI models could disadvantage European competitors such as Mistral AI in France and Aleph Alpha in Germany, still linger.
Spain’s Secretary of State, Carme Artigas, indicated that Mistral AI, currently in the research and development phase, may not be immediately subject to general-purpose AI controls. The evolving landscape of AI regulations remains a focal point as the EU navigates the complexities of balancing innovation and oversight.
(With inputs from agencies)