Innovation Institute

Response to Government: Establishing a Pro-Innovation Approach to Regulating AI

The Institute of Innovation and Knowledge Exchange (IKE Institute) response to the Government’s policy paper and consultation, announced by BEIS, DCMS and the Office for AI, on Establishing a Pro-Innovation Approach to Regulating AI:

IKE Institute’s Key Messages

  • The IKE Institute and its Innovation Council welcome the government’s announcement that it will adopt a pro-innovation approach to AI regulation, recognising that both the opportunities and challenges presented by Artificial Intelligence (AI) are fundamentally global and multisectoral in nature. In practice, all AI-related Database Management Systems (DBMSs), data integration, data quality, master data, and data governance platforms and tools work across a spectrum of stakeholders. Therefore, any regulatory framework should be proportionate and inclusive to attract wider participation (at European and global levels), and should encourage competition driven by innovation rather than by regulation.
  • There would be merit in considering how to incentivise regulators to take risks. This will require those offering new, innovative products and services to explain clearly the societal benefit of their offering, assuming it works, and to identify the major risks that could cause those benefits to be lost or harm to be created. Regulators can then potentially make broader judgements about acceptability, although public opinion often creates unhelpful pressure: failures are considered unacceptable, while positive outcomes are often perceived as too boring to make social media.
  • Regulation in the field of Artificial Intelligence can help to:
    1. Create a lexicon of shared understanding and common interpretation of descriptions of systems, components and their functions within an AI Ecosystem;
    2. Establish communities of best practice, drawn from multiple sectors, and stronger cross-sector collaborations to develop and exchange Use-Cases;
    3. Identify better ways to establish Trust in AI systems, and assess methods to minimise threats and risks to AI Systems;
    4. Give developers greater accountability for monitoring the “long-term performance” of their AI-based systems in the field;
    5. Develop better measures for Data Quality, Governance and Trustworthiness (e.g. across distributed cloud platforms);
    6. Ensure consistency in Data Sovereignty and Interoperability Standards, and agree on the Assessment of Machine Learning (ML) Classification Performance (e.g. AI Edge Devices and Sensors);
    7. Continue to evolve innovative technologies and allow for “rapid experimentation and discovery” without compromising performance or risk;
    8. Underpin the drive for sustainability and decarbonisation;
    9. Elevate the focus on AI Ethics: Privacy and security, reliability and safety, fairness and inclusiveness, transparency and accountability across economic and social settings and contexts;
    10. Drive educators to emphasise AI skills and competency readiness in a coherent manner;
    11. Grow the innovation profile of AI-based solutions to strengthen productivity and create social and economic value;
    12. Continue to incentivise innovation and allow for the formation of new AI-driven ecosystems that demonstrate value.

Responses to the Consultation Questions

Q1:     What are the most important challenges with our existing approach to regulating AI? Do you have views on the most important gaps, overlaps or contradictions?

  • Establishing a common terminology for AI concepts, to ensure that the organisations involved in Artificial Intelligence use the same language. A lexicon of shared understanding and common interpretation of descriptions of systems, components and their functions within an organisation’s AI ecosystem is key to accelerating AI readiness.
  • There needs to be increased recognition of the importance of human supervision and control to detect anomalies (Human-in-the-Loop). This requires developers to take greater accountability for monitoring the long-term performance of their systems in the field.

Q2:     Do you agree with the context-driven approach delivered through the UK’s established regulators set out in this paper? What do you see as the benefits of this approach? What are the disadvantages?

  • Having a context-driven approach is important, as it is essential not to look at an AI algorithm in isolation. Quite often it is not a case of trying to fix or regulate an algorithm; rather, it is about the Use Case and the wider system that the algorithm is part of (e.g. materiality in AI-based mission- and safety-critical systems; bounded autonomy in medical or military systems).
  • Some regulators are better than others, and although an agnostic view of data is the ideal situation, there are some regulatory environments (e.g. the chemical industry) where data and regulation are used to limit innovation rather than to progress it.
  • Establishing communities of best practice drawn from multiple sectors to share Use-Cases and identify potential issues of multi sector data integration (IoT Devices and sensors, AI Edge; compliance mandated by safety critical market verticals) can improve the overall shared understanding, maturity and readiness of AI across multiple sectors.

Q3:     Do you agree that we should establish a set of cross-sectoral principles to guide our overall approach? Do the proposed cross-sectoral principles cover the common issues and risks posed by AI technologies? What, if anything, is missing?

  • Yes. However, such principles must not become categorical rules; they should act as “guardrails”. Regulators can apply these guardrails when making judgements about the acceptability of new, innovative AI-based products and services. The onus should be on the producers of innovative AI-based products and services to explain clearly the benefits of their offering, and to identify any major risks that could cause those benefits to be lost or harm to be created. Clarity around product liability remains a grey area (e.g. if an algorithm is used inappropriately, does accountability lie with the algorithm designer, or with the user that taught the system to do bad things?). In the context of systems integration, this matter becomes even more unclear.

Q4:     Do you have any early views on how we best implement our approach? In your view, what are some of the key practical considerations? What will the regulatory system need to deliver on our approach? How can we best streamline and coordinate guidance on AI from regulators?

  • Many of the UK Learned Societies and Professional Institutions can offer coordinated cross-sectoral views. To optimise effectiveness, any chosen regulatory framework should focus on key areas of concern such as:
    • Common description of what constitutes a generic AI System
    • Data Quality outlining typical accepted verification and validation methods
    • Data Governance (including guidelines on Data Sovereignty, Security, Cybersecurity and Privacy) and associated data performance metrics
    • Interface protocols (Common API Frameworks allowing visibility)
    • Trustworthiness (guidelines on methods to minimise threats and risks to AI Systems), and a consistent method to uncover software subversion.

Q5:     Do you anticipate any challenges for businesses operating across multiple jurisdictions? Do you have any early views on how our approach could help support cross-border trade and international cooperation in the most effective way?

  • Cloud service providers are located around the world. A common cross-country Stewardship Framework is needed to:
    • Assure data quality and adherence to basic acceptable thresholds (e.g. Governance, Ethics and Observability, as applicable)
    • Enable organisations to build comprehensive analytics applications from multiple vendors. Again, common APIs will help to improve both Data Management Applications and Analytics Applications.
    • Address automated closed-loop generative data management in Relational DBMS and Non-relational DBMS.
  • In terms of international markets, the UK has a unique legal approach with “ALARP” – i.e. the risks of a system should be “As Low As Reasonably Practicable”. The source case law concerns health and safety in mines. Is there merit in a legal backstop that would explicitly limit liability, provided that recognised standards/guidelines have been followed? This is particularly important in innovative AI systems because there are no benchmarks for AI-ALARP.

Q6:     Are you aware of any robust data sources to support monitoring the effectiveness of our approach, both at an individual regulator and system level?

  • There are many discrete sources of insights, which are not necessarily directly applicable. Clearly, the UK has a great opportunity to establish a benchmark for monitoring and optimising the effectiveness of its approach (establishing a maturity level and associated metrics) and to continue to incentivise innovation.

IKE Institute 26 August 2022.

Prof Sa’ad Sam Medhat
Dr Rosie Bryson
Prof Alvin Wilby
Prof Nick Colosimo
Prof Phil Kennedy
Stuart McDowall
