Global Powers Make 'Landmark' Pledge to AI Safety
Author: Digitalnewspoint | Last updated: Nov 2, 2023, 8:38 PM


Representatives from 28 countries and leading tech companies convened at the historic site of Bletchley Park in the U.K. for the AI Safety Summit, held Nov. 1-2, 2023.

Day one of the summit culminated in the signing of the “landmark” Bletchley Declaration on AI Safety, which commits 28 participating countries — including the U.K., U.S. and China — to jointly manage and mitigate risks from artificial intelligence while ensuring safe and responsible development and deployment.

On the second and final day of the summit, governments and leading AI organizations agreed on a new plan for the safe testing of advanced AI technologies, which includes a governmental role in the pre- and post-deployment testing of models.


What is the AI Safety Summit?

The AI Safety Summit is a major conference held Nov. 1 and 2, 2023 in Buckinghamshire, U.K. It brought together international governments, technology companies and academia to consider the risks of AI “at the frontier of development” and discuss how these risks can be mitigated through a united, global effort.

The inaugural day of the AI Safety Summit saw a series of talks from business leaders and academics aimed at promoting a deeper understanding of frontier AI. This included a number of roundtable discussions with “key developers,” including OpenAI, Anthropic and U.K.-based Google DeepMind, that centered on how risk thresholds, effective safety assessments and robust governance and accountability mechanisms can be defined.

SEE: ChatGPT Cheat Sheet: Complete Guide for 2023 (TechRepublic)

The first day of the summit also featured a virtual address by King Charles III, who labeled AI one of humanity’s “greatest technological leaps” and highlighted the technology’s potential to transform healthcare and various other aspects of life. The British monarch called for robust international coordination and collaboration to ensure AI remains a secure and beneficial technology.

Who attended the AI Safety Summit?

Representatives from the Alan Turing Institute, Stanford University, the Organisation for Economic Co-operation and Development and the Ada Lovelace Institute were among the attendees at the AI Safety Summit, alongside tech companies including Google, Microsoft, IBM, Meta and AWS, as well as leaders such as SpaceX boss Elon Musk. Also in attendance was U.S. Vice President Kamala Harris.

What is the Bletchley Declaration on AI safety?

The Bletchley Declaration states that developers of advanced and potentially dangerous AI technologies shoulder a significant responsibility for ensuring their systems are safe through rigorous testing protocols and safety measures to prevent misuse and accidents.

It also emphasizes the need for common ground in understanding AI risks and fostering international research partnerships in AI safety while recognizing that there is “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”

U.K. Prime Minister Rishi Sunak called the signing of the declaration “a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI.”

In a written statement, Sunak said: “Under the UK’s leadership, more than twenty five countries at the AI Safety Summit have stated a shared responsibility to address AI risks and take forward vital international collaboration on frontier AI safety and research.

“The UK is once again leading the world at the forefront of this new technological frontier by kickstarting this conversation, which will see us work together to make AI safe and realize all its benefits for generations to come.” (The U.K. government has dubbed advanced artificial intelligence systems that could pose as-yet unknown risks to society as “frontier AI.”)

U.K. Prime Minister Rishi Sunak hosted the AI Safety Summit at Bletchley Park. Image: Simon Dawson / No 10 Downing Street

Experts’ reactions to the Bletchley Declaration

While the U.K. government repeatedly underscored the significance of the declaration, some analysts were more skeptical.

Martha Bennett, vice president principal analyst at Forrester, suggested that the signing of the agreement was more symbolic than substantive, noting that the signatories “would not have agreed to the text of the Bletchley Declaration if it contained any meaningful detail on how AI should be regulated.”

Bennett told TechRepublic via email: “This declaration isn’t going to have any real impact on how AI is regulated. For one, the EU already has the AI Act in the works; in the U.S., President Biden on Oct. 30 released an Executive Order on AI; and the G7 International Guiding Principles and International Code of Conduct for AI were published on Oct. 30, all of which contain more substance than the Bletchley Declaration.”

However, Bennett said the fact that the declaration wouldn’t have a direct impact on policy wasn’t necessarily a bad thing. “The Summit and the Bletchley Declaration are more about setting signals and demonstrating willingness to cooperate, and that’s important. We’ll have to wait and see whether good intentions are followed by meaningful action,” she said.

How will governments test new AI models?

Governments and AI companies also agreed on a new safety testing framework for advanced AI models that will see governments play a more prominent role in pre- and post-deployment evaluations.

The framework, which builds on the Bletchley Declaration, will ensure governments “have a role in seeing that external safety testing of frontier AI models occurs,” particularly in areas concerning national security and public welfare. The aim is to shift the responsibility of testing the safety of AI models away from tech companies alone.

In the U.K., this will be performed by a new AI Safety Institute, which will work with the Alan Turing Institute to “carefully test new types of frontier AI” and “explore all the risks, from social harms like bias and misinformation, to the most unlikely but extreme risk, such as humanity losing control of AI completely.”

SEE: Hiring kit: Prompt engineer (TechRepublic Premium)

Renowned computer scientist Yoshua Bengio has been tasked with leading the creation of a “State of the Science” report, which will assess the capabilities and risks of advanced artificial intelligence and try to establish a unified understanding of the technology.

During the summit’s closing press conference, Sunak was questioned by a member of the media on whether the responsibility for ensuring AI safety should primarily rest with the companies developing AI models, as endorsed by Professor Bengio.

In response, Sunak expressed the view that companies cannot be solely responsible for “marking their own homework,” and suggested that governments had a fundamental duty to ensure the safety of their citizens.

“It’s incumbent on governments to keep their citizens safe and protected, and that’s why we’ve invested significantly in our AI Safety Institute,” he said.

“It’s our job to independently externally evaluate, monitor and test these models to make sure that they are safe. Do I think companies have a general moral responsibility to ensure that the development of their technology is happening in a safe and secure way? Yes, (and) they’ve all said exactly the same thing. But I think they would also agree that governments do have to play that role.”

Another journalist questioned Sunak about the U.K.’s approach to regulating AI technology, specifically whether voluntary arrangements were sufficient compared to a formal licensing regime.

In response, Sunak argued that the pace at which AI was evolving necessitated a government response that kept up, and suggested that the AI Safety Institute would be responsible for conducting necessary evaluations and research to inform future regulation.

“The technology is developing at such a pace that governments have to make sure that we can keep up now, before you start mandating things and legislating for things,” said Sunak. “It’s important that regulation is empirically based on the scientific evidence, and that’s why we need to do the work first.”

What are experts’ reactions to the AI Safety Summit?

Poppy Gustafsson, chief executive officer of AI cybersecurity company Darktrace, told PA Media she had been concerned that discussions would focus too much on “hypothetical risks of the future” — like killer robots — but that the discussions were more “measured” in reality.

Forrester’s Bennett held a markedly different opinion, telling TechRepublic that there was “a bit too much emphasis on far-out, potentially apocalyptic, scenarios.”

She added: “While the (Bletchley) declaration features all the right words about scientific research and collaboration, which are of course crucial to addressing today’s issues around AI safety, the very end of the document brings it back to frontier AI.”

Bennett also pointed out that, while much of the rhetoric surrounding the summit was of cooperation and collaboration, individual nations were charging ahead with their own efforts to become leaders in AI.

“If anybody was hoping that the Summit would include an announcement around the establishment of a new global AI research body, those hopes were dashed. For now, countries are focusing on their own efforts: Last week, UK Prime Minister Rishi Sunak announced the establishment of ‘the world’s first AI Safety Institute.’ Today (Nov. 1), US President Biden announced the establishment of the US Artificial Intelligence Safety Institute.”

She added: “Let’s hope that we’ll see the kind of collaboration between these different institutes that the Bletchley Declaration advocates.”

SEE: UN AI for Good Summit Explores How Generative AI Poses Risks and Fosters Connections (TechRepublic)

Rajesh Ganesan, president of Zoho-owned ManageEngine, commented in an email statement that “while some may be disappointed if the summit falls short of establishing a global regulatory body,” the fact that global leaders were discussing AI regulation was a positive step forward.

“Gaining international agreement on the mechanisms for managing the risks posed by AI is a significant milestone — greater collaboration will be paramount to balancing the benefits of AI and limiting its damaging capacity,” Ganesan said in a statement.

“It’s clear that regulation and security practices will remain critical to the safe adoption of AI and must keep pace with its rapid advancements. This is something that the EU’s AI Act and the G7 Code of Conduct agreements could drive and provide a framework for.”

Ganesan added: “We need to prioritize ongoing education and give people the skills to use generative AI systems securely and safely. Failing to make AI adoption about the people who use and benefit from it risks dangerous and suboptimal outcomes.”

Why is AI safety important?

There is currently no comprehensive set of regulations governing the use of artificial intelligence, though the European Union has drafted a framework that aims to establish rules for the technology in the 27-nation bloc.

The potential misuse of AI, whether malicious or the result of human or machine error, remains a key concern. The summit heard that cybersecurity vulnerabilities, biotechnological dangers and the spread of disinformation represented some of the most significant threats posed by AI, while issues with algorithmic bias and data privacy were also highlighted.

U.K. Technology Secretary Michelle Donelan emphasized the importance of the Bletchley Declaration as a first step in ensuring the safe development of AI. She also stated that international cooperation was essential to building public trust in AI technologies, adding that “no single country can face down the challenges and risks posed by AI alone.”

She noted on Nov. 1: “Today’s landmark Declaration marks the start of a new global effort to build public trust by ensuring the technology’s safe development.”

How has the U.K. invested in AI?

On the eve of the AI Safety Summit, the U.K. government announced £118 million ($143 million) in funding to boost AI skills in the United Kingdom. The funding will target research centers, scholarships and visa schemes, and aims to encourage young people to study AI and data science.

Meanwhile, £21 million ($25.5 million) has been earmarked to equip the U.K.’s National Health Service with AI-powered diagnostic and imaging technology, such as X-rays and CT scans.


