David Hubert

Busy week for A.I. governance!

The end of October has seen several big actions on A.I. governance, with the White House publishing a presidential Executive Order on 30 October, followed straight away by the announcement of the G7 Code of Conduct. At the same time, the EU is working relentlessly to adopt its A.I. Act by the end of the year, and the UN Secretary-General announced the creation of a new A.I. Advisory Body. On top of it all, (some) world leaders, tech executives and experts gathered in the U.K. on 1 and 2 November for the A.I. Safety Summit hosted by Prime Minister Rishi Sunak.


So what are these initiatives, how do they compare and how will they impact A.I. in the future?





Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence


The executive order - much of which lacks the force of law - directs a series of federal agencies in sectors such as housing, health and national security to create standards and regulations for the use and oversight of AI. These include guidance on the responsible use of AI in areas like criminal justice, education, health care, housing and labor. The agencies and departments are tasked with developing guidelines that AI developers must adhere to as they build and deploy the technology, and with dictating how the government itself uses AI. The order also creates new reporting and testing requirements for the AI companies behind the largest and most powerful models.


In a nutshell:

  • The Order requires that developers of frontier A.I. share their safety test results with the government.

  • The National Institute of Standards and Technology (NIST) is tasked with developing standards, tools and tests for safe and secure A.I.

  • It addresses the risks of using AI to engineer dangerous biological materials by tasking agencies that fund life-science projects to develop standards for biological synthesis screening.

  • The Department of Commerce is tasked with developing guidance for content authentication and watermarking to clearly label AI-generated content.

  • Equity and civil rights: the Order provides guidance to keep AI algorithms from being used to exacerbate discrimination, creates best practices for investigating and prosecuting civil rights violations related to A.I., and calls for best practices on the use of AI in sentencing.

  • It promotes A.I. for healthcare and education.


The Executive Order has been generally well received, albeit with the caveat that this is not a new piece of law and that Congress needs to get its act together to create binding legislation.



Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (G7 Code of Conduct on A.I.)


On the same day Joe Biden signed his Executive Order, the G7 (the U.S., E.U., Britain, Canada, France, Germany, Italy and Japan) introduced a voluntary AI code of conduct. The new 11-point framework aims to guide developers in responsible AI creation and deployment.


The group of global leaders called on organizations to commit to the code of conduct, while acknowledging that “different jurisdictions may take their own unique approaches to implementing these guiding principles.”


The Code of Conduct lays out 11 principles which it calls on organizations to abide by:


  1. Take appropriate measures throughout the development of advanced AI systems to identify, evaluate, and mitigate risks across the AI lifecycle.

  2. Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.

  3. Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency.

  4. Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia.

  5. Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures.

  6. Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.

  7. Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.

  8. Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.

  9. Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.

  10. Advance the development of and, where appropriate, adoption of international technical standards.

  11. Implement appropriate data input measures and protections for personal data and intellectual property.


While this is only a voluntary code of conduct with no enforcement instruments, the fact that it exists is encouraging. It lays the foundation for greater international regulation and cooperation on A.I. governance.




European Union A.I. Act


The European Union (EU) is working on a new legal framework that aims to significantly bolster regulations on the development and use of A.I. In 2021, the European Commission published its proposal for an A.I. Act, which has been debated ever since. When it is eventually adopted by all parties (either by the end of 2023 or early 2024), it will become the world's first comprehensive legal framework for artificial intelligence.


The proposed legislation focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.


A risk-based approach:


The AI Act proposes a risk-based approach and horizontal regulation. It classifies AI systems into four categories of risk: prohibited, high-risk, limited-risk and minimal-risk.


  • Prohibited AI systems are those that violate human dignity, such as those that manipulate human behavior or exploit vulnerabilities. These systems are banned from being developed, placed on the market or used in the European Union.

  • High-risk AI systems are those that pose significant risk to health, safety, or fundamental rights, such as those used for biometric identification, recruitment, credit scoring, education, or healthcare. High-risk AI systems must comply with strict rules on data quality, transparency, human oversight, accuracy, robustness and security. They must also undergo a conformity assessment before being placed on the market or put into service.

  • Limited-risk AI systems are those that pose some risk to users or consumers, such as those that generate or manipulate content or provide chatbot services. Limited-risk AI systems must provide users with clear information about their nature and purpose and allow users to opt out of using them.

  • Minimal-risk AI systems are those that pose no or negligible risk, such as those used for entertainment or personal purposes. Minimal-risk AI systems are subject to voluntary codes of conduct and best practices.

The AI Act aims to establish a governance structure for the implementation and enforcement of its rules. This includes a European AI Board (EAIB) that will provide guidance and advice on various aspects of the AI Act, such as harmonized standards, codes of conduct and risk assessment methods.




UN high-level Advisory Body on Artificial Intelligence


On 26 October, the U.N. Secretary-General announced the creation of a new Artificial Intelligence Advisory Body on risks, opportunities and international governance of artificial intelligence. That body will support the international community’s efforts to govern artificial intelligence. Bringing together up to 38 experts in relevant disciplines from around the world, the Body will offer diverse perspectives and options on how AI can be governed for the common good, aligning internationally interoperable governance with human rights and the Sustainable Development Goals. The advisory body comprises experts from government, private sector and civil society, and will engage and consult with initiatives and international organizations, to bridge perspectives across stakeholder groups and networks.




A.I. Safety Summit 2023


The summit, hosted by U.K. Prime Minister Rishi Sunak, brings together international governments, leading AI companies, civil society groups and experts in research to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action.


The 5 objectives discussed at the summit are:


  • a shared understanding of the risks posed by frontier AI and the need for action

  • a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks

  • appropriate measures which individual organisations should take to increase frontier AI safety

  • areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance

  • showcase how ensuring the safe development of AI will enable AI to be used for good globally

The Summit will not produce a formal regulatory body on AI. But the U.K. government hopes it will build a consensus on the risks posed by unrestricted AI development and the best way to mitigate them. It is notable, though, that while the Summit had been planned for a while, the U.S., U.N. and G7 all scrambled to publish their initiatives ahead of it, robbing it of the opportunity to make any truly significant announcement.


