IGF 2023 Open Forum #81 Cybersecurity regulation in the age of AI

    Time
    Wednesday, 11th October, 2023 (01:15 UTC) - Wednesday, 11th October, 2023 (02:15 UTC)
    Room
    WS 6 – Room E
    Issue(s)

    New Technologies and Risks to Online Security

    Panel - 60 Min

    Description

    Cybersecurity Regulation in the Age of AI: from dilemmas to practical solutions

    With the rise of AI-powered cybersecurity tools and techniques, there is a growing concern that malicious actors could use these tools to carry out more potent cyber-attacks. As AI increasingly integrates into our daily lives, there is a need to ensure that AI technologies are developed and used safely. Cyber robustness, which refers to the ability of AI systems to withstand cyber-attacks and maintain their functionality in the face of malicious activity, is one of the key principles for trustworthy AI developed by the OECD. The overlap between AI and cybersecurity raises a number of challenges, some of which could be addressed through effective regulation. Thus, in order to ensure that AI systems are cyber robust, it is important to establish clear standards and balanced regulations.

    The question faced by cybersecurity regulators across the globe is what such standards and regulations should include. In this session we hope to contribute to global discussions on AI robustness regulation by offering practical suggestions for cybersecurity.


    Organizers

    Israeli National Cyber Directorate (INCD)
    INCD, Government of Israel. WEOG

    Speakers

    Dr. Bushra Al Blooshi, Head of Research and Innovation, Dubai Electronic Security Center, UAE.

    Mr. Ryan Rahardjo, Government Affairs Team, Google.

    Mr. Daniel Loevenich, AI, General Policy and Strategy, BSI, Germany.

    Mr. Avraham Zaruk, Head of Technological Division, INCD, Israel.

    Mr. Hiroshi Honjo, CISO of NTT DATA, Nippon Telegraph and Telephone Corporation.

    Ms. Gallia Daor, Policy Division, OECD.

    Ms. Daria Tsafrir, Legal Advisor, INCD, Israel - Moderator.


    Onsite Moderator

    Ms. Daria Tsafrir, INCD, Israel

    Rapporteur

    Mr. Cedric Sabbah, MOJ, Israel

    SDGs

    9.1

    Targets: By promoting robust, accountable and regulated AI systems, supported by a multistakeholder global approach, we can contribute to developing a reliable, sustainable ecosystem in which AI systems are cyber resilient and maintain their functionality, supporting economic development and human well-being.

    Key Takeaways

    1. Focus on producers rather than on end users.
    2. Sectoral regulation, data protection and a risk-based approach.

    Call to Action

    1. Cooperation and a multistakeholder approach - harmonized AI certification schemes.
    2. Flexible standards in order to promote the use of new technologies.

    Session Report

    Open forum

    October 11th, 10:15 (Kyoto)


    With the rise of AI-powered cybersecurity tools and techniques, there is a growing concern that malicious actors could use these tools to carry out more potent cyber-attacks. As AI increasingly integrates into our daily lives, there is a need to ensure that AI technologies are developed and used safely.

    The overlap between AI and cybersecurity raises a few challenges, some of which could be addressed through effective regulation. Thus, to ensure that AI systems are cyber robust, it is important to establish clear standards and balanced regulations. The question faced by cybersecurity regulators across the globe is what such standards and regulations should include.

    Key Issues Raised

    1. Is the current cybersecurity toolkit sufficient to deal with threats to AI systems or to the data they use?
    2. How can cybersecurity regulation help promote an ecosystem in which AI systems are cyber resilient and maintain their functionality in the face of cyber-attacks?
    3. What should governments be doing in the regulatory space to improve cybersecurity of AI systems? Is AI too dynamic for regulation?
    4. The risks of over-regulation.

    Presentation summary

    Mr. Zaruk (Israel) – there are three points of connection: the resilience of AI models, using AI for defense, and defending against AI-based attacks. On the resilience of AI models, the INCD focuses on common libraries and models but needs tailored models for AI algorithms, the same way it does in other IT domains. The INCD has established a national lab with Ben-Gurion University; it has an online and offline platform for the self-assessment of ML models, coordinated with academia, government and tech giants. The second domain is using AI for defense – most tools and products use some form of AI. We understand the power of AI and what it can offer, so the INCD promotes innovation in that field. Its role as regulator is not to interfere, but rather to assist the market. AI helps scale critical tasks – we use AI to assist and mediate between human and machine.

    The last domain, maybe the most complex, is defending against AI-based attackers.
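
    As an illustration of what a self-assessment of an ML model's resilience might involve (a minimal, hypothetical Python sketch, not the INCD platform's actual tooling), the snippet below trains a simple classifier and measures how its accuracy degrades as test inputs are perturbed with noise of increasing scale:

        # Minimal robustness self-check: train a classifier, then watch its
        # accuracy degrade as Gaussian noise of growing scale perturbs the inputs.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Synthetic data and model choice are arbitrary illustrative assumptions.
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

        rng = np.random.default_rng(0)
        for eps in [0.0, 0.1, 0.5, 1.0, 2.0]:
            X_noisy = X_test + rng.normal(scale=eps, size=X_test.shape)
            print(f"noise scale {eps:.1f}: accuracy {model.score(X_noisy, y_test):.3f}")

    A production assessment platform would go much further (adversarial examples, poisoning checks, distribution shift), but an accuracy-under-perturbation curve is the basic shape of such a test.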

    Dr. Al Blooshi (Dubai) – we want to enable critical infrastructure to use new technologies. AI model security vs. the security of AI consumers – these are totally different. At the end of the day, an AI model is like any software we used in the past, but the difference is how it is deployed and used. Regarding the security of AI consumers (i.e., end-users): we focus on end-users instead of producers. Here too we need to look at how it is used, data privacy, and in what context.

    Policy standards around AI models – the OECD principles, the NIST AI security standard, recent EU policies – this is progress! We need to develop basic principles and best practices – secure by design, supply chain security: these remain relevant, but we should have one additional layer on top, an AI-specific layer, and then a sector-specific layer (transport, health, banking) – we need to work with the regulators of these sectors. We strongly believe in a risk-based approach: too much control will limit innovation and use, while too little control will not enable across-the-board security. We are developing an AI sandbox for government; we also have clear guidelines on cloud privacy, which include AI. No need to reinvent the wheel.

    There are competing international models; we need a harmonized AI certification scheme, and we put out something in cooperation with the WEF.

    Mr. Hiroshi Honjo (Japan) - the OECD and NIST frameworks help define the risks. We look at privacy. LLMs are getting data from somewhere; the question is where the data is from and who owns it. It is like the cross-border issue of data flows - which laws and regulations apply to that data, as with cloud data. There is also the risk of data being compromised; this requires risk management.

    Harmonization - for a private company, lack of harmonization has high costs.

    Ms. Gallia Daor (OECD) - in 2019 the OECD was the first intergovernmental organization to adopt AI principles. They describe what trustworthy AI is: five principles and five recommendations. These include the principle of robustness, security and safety throughout the AI lifecycle, together with a systematic risk management approach.

    Since then, we have given countries tools to help implement them - we have the AI Policy Observatory, plus metrics, trends, etc.; we also work on gathering expertise - over 400 experts from different countries and disciplines - and a catalogue of national AI tools.

    The OECD's work on digital security operates at several levels - foundational: principles for risk management; strategic: guidance for countries; market: how we can work on misaligned incentives; technical: vulnerability treatment, good practices for disclosure, and protecting vulnerability researchers. At the intersection of the two fields, we need to focus on the digital security of AI systems (e.g. data poisoning) and also on how AI systems can be used to attack (e.g. genAI can be used for large-scale attacks).
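
    Data poisoning, mentioned above as an example threat to AI systems, can be demonstrated in miniature. The sketch below is illustrative only - the dataset, model and flip rates are arbitrary assumptions, not anything cited in the session. It flips a fraction of training labels and shows the resulting drop in test accuracy:

        # Label-flipping poisoning demo: corrupt a fraction of training labels
        # and compare the resulting test accuracy against a cleanly trained model.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

        rng = np.random.default_rng(1)
        for flip_rate in [0.0, 0.1, 0.3]:
            y_poisoned = y_train.copy()
            idx = rng.choice(len(y_poisoned), size=int(flip_rate * len(y_poisoned)),
                             replace=False)
            y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: flip 0 <-> 1
            model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
            print(f"poisoned fraction {flip_rate:.0%}: "
                  f"test accuracy {model.score(X_test, y_test):.3f}")

    Even this crude attack degrades accuracy noticeably, which is why data provenance and integrity checks keep coming up in the regulatory discussion.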

    Fragmentation is a problem; international organizations can help in that regard - mapping different standards and frameworks, finding commonalities, convening different stakeholders, and advancing metrics and measurements.

    Mr. Daniel Loevenich (Germany) - Germany is looking at the EU perspective on AI. The EU standards organizations do a good job focusing on the AI Act standardization request. Germany is looking forward to implementing procedures and infrastructures based on our conformity assessment.

    For technical systems - especially embedded AI - we address risks with engineering analysis; in the case of a distributed IT system, there are special AI components or modules (e.g. cloud-based services) - we need to look at these as part of supply chain security. We do that by mapping out application- and sector-based risks, which may be regulated by standards, down to technical requirements for AI modules. Many stakeholders are competent and responsible for addressing these risks.

    The overall issue is to build a uniform AI evaluation and conformity assessment framework. This is a European approach, and it is the key issue in the AI standardization roadmap. So, what do we do next?

    Based on the existing cyber conformity assessment infrastructure, we try to address these special AI risks as an extension of existing frameworks.

    We want to promote the use of technologies; we do not want to prescribe, and prefer to recommend.

    Standards are good because they give companies flexibility. I would like to offer three schools of thought:

    1 – technical (and sector-agnostic); 2 – sector-specific; 3 – values-based.