Towards an AI-powered UK in financial services

10 Oct 2019 | Client publication/article

TheCityUK launched a new report today which looks at how AI is impacting UK-based financial and related professional services.

Ben Kingsley, co-head of Slaughter and May’s Emerging Tech group, was part of a work stream that oversaw production of the report by Accenture as part of the International Regulatory Strategy Group.

The report explores the opportunities afforded by AI in these sectors, highlights the perceptions around barriers to adoption and adds to the ongoing debate about the role for government and regulatory bodies in shepherding AI adoption and innovation.

Adoption

While the report found that the financial and related professional services industry is ready to reap the benefits of AI, it highlights the perceived barriers to adoption, which range from a lack of consensus on the definition of AI technologies to negative public perception of AI. It also notes that the approach to adoption and implementation varies according to the industry segment and the type of customers a firm serves, with many firms still at the early stages of adoption. To help with AI adoption decisions, the report recommends that businesses balance speed, agility and scale: they should focus on the basics (such as defining and delivering value, which includes deciding what to prioritise and using off-the-shelf tools that meet their needs) so that they can scale when needed, having prioritised advanced analytics, governance, ethics and talent.

Best practice

The report identifies four key themes for firms to consider when developing and implementing AI solutions, and recommends related best practices. The themes are:

  1. Fairness, transparency and consumer protection – implement robust checks and controls. These should underpin auditable AI systems which are designed within an ethical AI framework and aligned with core corporate values;
  2. Data privacy and security – create a set of data ethics principles for collecting, processing, aggregating and sharing customer data to build trust in the AI, and update risk frameworks to incorporate contingency plans for incorrect outcomes (caused, for example, by inaccurate historic data). Partner with regulatory bodies to explore new data sharing infrastructures such as Data Trusts (a concept the Government is currently exploring as part of its AI strategy);
  3. Governance – improve tech/digital literacy and capabilities across your organisation and at executive board level to ensure accountability. There needs to be a shift in corporate oversight – it is not sufficient to delegate AI to a CTO or equivalent. This builds on a general theme we are seeing in the emerging tech space, where regulators and those providing guidance in areas such as AI and cyber are increasingly looking to boards and their corporate governance structures to manage these new enterprise risks; and
  4. Ecosystem resilience – strong traceability of data and algorithms is needed for in-house, customised and standardised AI systems. Develop business continuity and resilience plans in the event of failure/threat and ensure there is sufficient human oversight and control of AI systems to reduce risks (existing and new) that could result in systemic threats.

AI regulatory framework

The report concludes that the UK’s current regulatory framework remains fit for purpose for AI, and states that its respondents “broadly endorse” the position taken by the UK House of Lords Select Committee on AI that “blanket AI-specific regulation, at this stage, would be inappropriate” (see Will the UK Regulate AI? for more information). Equally, however, the report suggests that UK policymakers can help encourage AI growth. Its suggestions include that:

  1. The UK government can foster an innovation-friendly regulatory environment by adapting existing regulatory solutions where possible (such as the Senior Managers and Certification Regime) and by adopting a principles- and outcomes-based approach to regulation which avoids imposing overly restrictive rules; and
  2. Policymakers can adopt a risk-adjusted approach to AI supervision (i.e. tailoring guidance to sectors and AI use cases) and develop targeted regulatory remedies to address any new, AI-specific risks that emerge (for example, adapting frameworks for customers seeking recourse for AI-derived outcomes).

Speaking at this morning’s launch of the paper, Ben Kingsley said: “TheCityUK report confirms that there’s much to be gained by exploiting the powers of AI, across all aspects of finance and professional services, but in doing so all of us must aspire not only to ‘do new’ but to do better, and to do good.”

Contacts

Ben Kingsley (partner), Natalie Donovan (professional support lawyer)

Ben is co-head of, and Natalie a PSL in, our Emerging Technology Group. The group supports clients in financial services and other sectors to assess and mitigate the risks of implementing emerging technologies such as AI into their business.
