AI & ML Platforming for Government

Overview

As part of a broader data transformation programme, machine learning & artificial intelligence platforming and tools were introduced to modernise data science & analytics capabilities. Introducing AI & ML platform capabilities into a government environment means meeting the highest information security requirements, given the potential sensitivity of the data and the broader cybersecurity threat landscape.

Key challenges

The challenges of bringing AI platforming into a government environment included managing risk from the ecosystem’s rapid pace of change, managing data risk in multi-tenancy environments, and aligning information security with organisational planning and HR. Each is explored in depth below.

Outcome

To overcome these challenges it was important to take an iterative approach. This meant leading hands-on technical delivery with a small innovation team while coordinating the wider organisational work. The innovation team was able to identify immediate, transitional, and final (business-as-usual) operating states.

This collaborative approach achieved consensus across HR, Architecture, Management and Finance, which allowed updated ISO 27001 information security practices to be accepted by security teams and taken forward for certification.


Challenges in depth

Managing risks from the AI & ML ecosystem’s rapid pace of change

Institutional organisations such as those in finance & government have relied on stringent but slow information security processes, often mitigating risk by offloading it to enterprise security vendors.

But with ML & AI, tools change at an ever-increasing pace, often moving from academic and open-source environments directly into the enterprise.

The lifecycle from release to commercialisation, and then through governance teams with long backlogs, can mean that tools are already outdated by the time they reach the hands of data scientists. In the worst cases, ‘temporary workarounds’ to the bureaucracy end up creating shadow IT and the security nightmares that come with it.

This tension can leave organisations feeling they must choose between security and good governance on one side, and keeping pace on the other.

Solutions

The first step in managing continuously emerging risks is recognising that static, gatekeeping processes don’t scale. The second is understanding that information security risks are usually identified at the source, long before they are propagated across the industry and eventually industrialised into vendor products. When dealing with innovative technology, your own people may have the best information; the key is managing it effectively.

A security-in-depth approach allows you to manage ever-emerging risk. This means combining technology, processes, education, and cross-organisational cooperation.

First, a cross-organisational group drawing on the security, architecture, and data science teams was put together to identify known and emerging risks.

Second, processes and roles were developed, with appropriate levels of training, to manage those known risks at different levels of the organisation. By devolving information and some security responsibilities with clear authority and accountability, the organisation was able to multiply its capability. Clear authority and roles also gave people a framework for asking the right questions of the right people at the right time. Most importantly, when new risks or situations outside the agreed boundaries emerged, there were clear lines of escalation. This allowed the organisation to move fast when things looked normal, and to step back and slow down when dangers appeared.
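To make devolved boundaries and escalation paths concrete, here is a minimal, hypothetical sketch of how such rules could be encoded. The roles, risk categories, and thresholds are illustrative assumptions rather than the organisation’s actual framework.

```python
# Hypothetical sketch: devolved risk boundaries with clear escalation paths.
# Role names, risk categories, and thresholds are illustrative only.
from dataclasses import dataclass
from enum import IntEnum


class RiskLevel(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3


@dataclass
class RiskBoundary:
    category: str          # e.g. "open-source ML library"
    max_level: RiskLevel   # highest level the devolved owner may accept alone
    owner_role: str        # who holds day-to-day accountability
    escalation_role: str   # who decides once the boundary is exceeded


BOUNDARIES = [
    RiskBoundary("open-source ML library", RiskLevel.MODERATE,
                 owner_role="Lead Data Scientist",
                 escalation_role="Information Security Manager"),
    RiskBoundary("new external data source", RiskLevel.LOW,
                 owner_role="Platform Engineer",
                 escalation_role="Security Architecture Board"),
]


def route_decision(category: str, assessed_level: RiskLevel) -> str:
    """Return who decides: the devolved owner while inside the agreed
    boundary, otherwise the named escalation point."""
    for boundary in BOUNDARIES:
        if boundary.category == category:
            if assessed_level <= boundary.max_level:
                return boundary.owner_role
            return boundary.escalation_role
    # Nothing agreed for this category: slow down and escalate by default.
    return "Security Architecture Board"


print(route_decision("open-source ML library", RiskLevel.MODERATE))  # Lead Data Scientist
print(route_decision("open-source ML library", RiskLevel.HIGH))      # Information Security Manager
```

The point of the sketch is the shape of the decision rather than the specific rules: anything inside an agreed boundary is handled locally and at pace, while anything outside it defaults to escalation.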

Finally, tools for automation, such as code vulnerability scanning, were trialled to understand how they would perform against the agreed risk posture. This ‘tools last’ approach ensured the organisation wasn’t simply buying technology and hoping for the best, but making informed decisions and managing risk as well as possible.
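As an example of the kind of automation that might be trialled, the sketch below runs a dependency vulnerability scan as a pipeline step. It assumes the open-source pip-audit CLI and its JSON output; the tools actually evaluated, and the pass/fail thresholds, are not specified in this write-up.

```python
# Minimal sketch: run a dependency vulnerability scan and fail the pipeline
# (triggering the escalation process) if anything is found. Assumes the
# open-source `pip-audit` CLI is installed; the policy is illustrative.
import json
import subprocess
import sys


def scan_requirements(path: str = "requirements.txt") -> list:
    """Run pip-audit against a requirements file and return its findings."""
    result = subprocess.run(
        ["pip-audit", "-r", path, "--format", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout)
    # Older pip-audit releases emit a bare list; newer ones wrap it in a dict.
    return report["dependencies"] if isinstance(report, dict) else report


def main() -> int:
    vulnerable = [dep for dep in scan_requirements() if dep.get("vulns")]
    for dep in vulnerable:
        ids = ", ".join(vuln["id"] for vuln in dep["vulns"])
        print(f"{dep['name']} {dep['version']}: {ids}")
    # Non-zero exit fails the build and hands the decision back to a person.
    return 1 if vulnerable else 0


if __name__ == "__main__":
    sys.exit(main())
```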

Managing data risk in multi-tenancy environments

For many high-security environments, such as banks and government, introducing new technology is a one-way street: security and architecture review boards must vet new technologies in a once-and-done process. Systems and the data they carry are often tied very closely together during security and risk-assessment processes.

With platforming technology, this can be near impossible. Tenants will have varied requirements, contexts, and use-cases, and you can never know ahead of time what the data will be. In ISO 27001 terms: the asset cannot be defined, and therefore the risk cannot be managed.

Managing risks and controls in this world can be an administrative nightmare for an organisation.

Solutions

As with the AI & ML risks above, half of the solution wasn’t technical. We separated the data impact-assessment process from the technology process. Generic risks to the platform could be agreed, impact-assessed, and addressed, with the understanding that many of the controls being put onto the platform simply carried a baseline cost.

Separately, we were able to agree that the cost of implementing the highest level of controls (where PII was required for exploratory analysis) could be shared across a smaller portion of the organisation, even where it cut across teams. Because we were developing platforming technology, we were able to design two flavours of platform instance: one for general use and one that was more secure.
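A minimal sketch of the ‘two flavours’ idea follows, assuming a shared baseline control set for every instance plus an enhanced profile for tenants that need PII for exploratory analysis. The control names and provisioning rule are illustrative, not the platform’s real configuration.

```python
# Illustrative sketch: a baseline control set shared by all platform tenants,
# plus an enhanced profile whose extra cost is shared only by PII-handling
# tenants. Control names are hypothetical.
from dataclasses import dataclass

BASELINE_CONTROLS = frozenset({
    "network-isolation",
    "encryption-at-rest",
    "central-audit-logging",
    "dependency-vulnerability-scanning",
})

ENHANCED_CONTROLS = BASELINE_CONTROLS | {
    "fine-grained-access-control",
    "pii-tokenisation",
    "enhanced-egress-monitoring",
}


@dataclass(frozen=True)
class PlatformProfile:
    name: str
    controls: frozenset
    cost_shared_by: str


GENERAL_USE = PlatformProfile("general-use", BASELINE_CONTROLS, "all tenants")
HIGH_ASSURANCE = PlatformProfile("high-assurance", ENHANCED_CONTROLS,
                                 "PII-handling tenants only")


def provision(needs_pii: bool) -> PlatformProfile:
    """Pick the platform flavour for a new tenant based on whether the
    use-case needs PII for exploratory analysis."""
    return HIGH_ASSURANCE if needs_pii else GENERAL_USE


print(provision(needs_pii=True).name)   # high-assurance
print(provision(needs_pii=False).name)  # general-use
```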

Treating information security as a product requirement rather than an administrative barrier can be the difference between moving quickly and securely, and sitting in an endless cycle of uncertainty.

Information Security, Organisational Planning & HR

It’s not enough to create an ISMS or even to achieve a certification such as ISO 27001 or SOC 2. Information security management systems need to be kept up to date and extended to cover new systems and processes. This is important enough that the 2022 update to ISO 27001 added Clause 6.3, which focuses on planning for change. The standard also expects clearly assigned roles and responsibilities and named asset owners, which raises the question of where risk ownership sits as new platforms and teams emerge.
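As a rough illustration of keeping that ownership visible as new platform services appear, the sketch below models a single asset-register entry with a named asset owner, risk owner, and review date. The field names and the example entry are hypothetical, not taken from the organisation’s register.

```python
# Hypothetical sketch: an ISMS asset-register entry that records who owns the
# asset, who owns the residual risk, and when it is next reviewed, so the
# register keeps pace with new platform services. All values are illustrative.
from dataclasses import dataclass
from datetime import date


@dataclass
class AssetRecord:
    asset: str            # system, dataset, or service covered by the ISMS
    asset_owner: str      # role accountable for the asset day to day
    risk_owner: str       # role that accepts the residual risk
    classification: str   # e.g. a government data classification tier
    next_review: date     # prevents the register from going stale


REGISTER = [
    AssetRecord(
        asset="ML experimentation platform (general-use instance)",
        asset_owner="Head of Data Science",
        risk_owner="Chief Information Security Officer",
        classification="OFFICIAL",
        next_review=date(2025, 1, 1),
    ),
]

for record in REGISTER:
    print(f"{record.asset} -> asset owner: {record.asset_owner}, "
          f"risk owner: {record.risk_owner}, next review: {record.next_review}")
```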

But planning for and delivering change can quickly become a problem. HR teams are usually not well equipped to develop new roles. Architecture and development teams are usually too busy implementing to think beyond technical delivery.

Security teams aren’t always thinking about skills development and organisational change; in the worst cases they bring plenty of questions to the process but not many answers.

Solutions

A sound delivery strategy for AI platforming requires looking at all of the variables together: technology, people, and processes. Hands-on experience with all three is essential when there is no reference point to copy. Bringing people together, providing leadership, and building consensus about what works now and what will be done later is the last piece of the puzzle.

Working with the delivery team during a pilot programme identified roles and responsibilities that could initially be handled by a small group within an innovation team. Rather than working in a strategy or project-management silo, the new roles and skills that would be required were identified by doing. Skills gaps between the delivery team of consultants, leaders within the organisation, and the broader available workforce were identified this way and planned alongside the HR team, giving them the feedback they needed to start their work ahead of the final stages of delivery.

Throughout, it helped to communicate in the language of the enterprise: skills, roles, and progression paths.