Huawei’s Cybersecurity Chief calls for more focus on AI security
Interview with John Suffolk, Global Cybersecurity and Privacy Officer, Huawei.
He went from the top of the British Government to the top of one of the world’s biggest tech innovators.
John Suffolk was Her Majesty’s Chief Information Officer, and now reports to the CEO of Huawei, Ren Zhengfei, as his Global Cybersecurity and Privacy Officer. “I have the ability to work with and influence all the very, very big decision-makers,” he reveals.
Suffolk uses his role to set out the threats, challenges and trends that Huawei must adapt to as one of the world’s biggest companies. He sat down with GovInsider to discuss a future powered by Artificial Intelligence, and how global governments can keep up.
An AI-powered future
Artificial Intelligence is “a very broad spectrum”, Suffolk says, “from autonomous killing robots to something simple at the other end - we don’t see it as a separate product”. Huawei will be leaning heavily on the technology to deliver all of its own services, backed up by a $15bn research budget. “We see AI being integrated in our existing products for five years”, he adds.
Take smartphones as an example - Suffolk lifts up the latest model from the desk. “We have an AI chip in here, and you can take pictures and the AI is beginning to say it knows what the object is,” he says. That means it could browse the internet to tell you where you could buy a similar product, or give you tourism advice based on a picture of a local landmark.
It will also power Huawei’s telco networks. “5G infrastructure is actually quite complicated”, Suffolk says, and it used to take months to set up a stable network. Now Huawei’s system has AI built in, and setup takes only a couple of weeks.
Even power consumption can be managed, he says, saving that all-important electricity bill. “We built AI to monitor the traffic going through all the equipment,” he says, which can cut electricity bills by up to 20 percent by reducing usage when traffic through the equipment is low.
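To illustrate the idea (this is a minimal sketch, not Huawei’s implementation), traffic-based power saving boils down to reading how much capacity is in use and powering down spare equipment when the load is light. The controller, thresholds and mode names below are all hypothetical:

```python
# Illustrative sketch only, not Huawei's implementation. Assumes a hypothetical
# controller that can read the share of capacity in use (0.0 to 1.0) and
# switch spare radio carriers into a low-power state when traffic is light.

def choose_power_mode(traffic_load: float, low: float = 0.2, high: float = 0.7) -> str:
    """Pick a power mode for a piece of network equipment based on traffic load."""
    if traffic_load < low:
        return "sleep_spare_carriers"  # power down unused capacity off-peak
    if traffic_load < high:
        return "normal"
    return "full_power"                # keep everything on under heavy load

# Example: overnight traffic at 5 percent of capacity triggers power saving.
print(choose_power_mode(0.05))  # -> sleep_spare_carriers
```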
What if the system develops a bug - or a hack is attempted? AI can also help here. “AI is looking at all the things going on and saying: ‘there is some odd behaviour here’,” Suffolk notes. If the root cause can be worked out, it can be flagged for immediate investigation, saving up to 80 percent of the normal response time to bugs and other incidents.
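At its simplest (a toy sketch, not Huawei’s system), this kind of anomaly spotting compares a new reading against a recent “normal” baseline and flags anything that drifts well outside it. The metric, baseline values and threshold here are made up for illustration:

```python
# Illustrative sketch only, not Huawei's system. A toy check that flags a
# metric sitting far outside its recent "normal" baseline - the kind of odd
# behaviour an AI-assisted operations tool might raise for investigation.
from statistics import mean, stdev

def is_anomalous(baseline: list[float], new_value: float, threshold: float = 3.0) -> bool:
    """Flag a reading more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(new_value - mu) > threshold * max(sigma, 1e-9)

# Example: error rates have hovered around 1 percent, then jump to 35 percent.
recent_error_rates = [0.010, 0.012, 0.011, 0.009, 0.010, 0.011]
print(is_anomalous(recent_error_rates, 0.35))  # -> True: flag for investigation
```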
Ultimately, Huawei views AI as a “pragmatic operational thing” - it will deliver better value to its customers.
Complexity as a challenge
As technology evolves, it increases complexity and opens up greater risks. “Complexity is the mother of evil, it’s as simple as that,” says Suffolk. Governments need to ensure they know what is happening across their supply chains, their infrastructure, and their interconnected apps and hardware.
Senior officials must “step back, look at the whole picture and ask yourself: ‘Where is my risk here?’,” Suffolk advises. “If you’re going to get dragged down into the detail, you’re not going to see the big picture”. Security teams should understand where the greatest risks lie and which areas are the likely hotspots.
They also need “basic hygiene,” he adds. “If you’re not going to do patching, you might as well give the data away.” But half of all data breaches are due to a lack of these basic actions, Suffolk warns.
AI security and regulation
Laws and legal frameworks often have to play catch up with technology as it advances, especially in the AI industry, Suffolk says. Huawei believes that policy makers have to act now to regulate the use of ICT and AI, so tech players can maximise growth opportunities and prevent unexpected consequences. The tech giant has published a whitepaper on issues facing AI security, and what governments can do to protect their people’s privacy in an age of AI.
Yet, there cannot be good regulatory policies without strong government support. Suffolk believes that nations leading the way in digital innovation are well-positioned to lead the way in establishing AI regulation. Singapore, for example, is a prime candidate for kickstarting the discussion on how governments can implement order in cyberspace, he adds.
The all-important legacy
Suffolk’s former employer - the British Government - has been pushing for a new concept for hardware manufacturers called Security by Design. This would see a greater emphasis on security built into everything from smartphones to Wi-Fi base stations.
Huawei agrees with this approach. The problem, Suffolk says, is that there is a lot of older technology around that has greater security risks and is still used in critical infrastructure. “The car that you bought 10 years ago is not as safe as the car that you buy today”.
It’s crucial, then, that manufacturers allow their products to be upgraded easily, without throwing out the whole piece of kit. That means building in spare capacity, such as memory or processing power on a chip, so the hardware can cope with newer software. “One of the things we’re very proud of is that we don’t say to customers, every 18 months if you want an upgrade you need to buy a new PC because the chips don’t fit the new software”.
Suffolk’s own legacy is a lifetime of work on cybersecurity. While in the British Government, he brought together the GCHQ intelligence service, the Cabinet Office, and the Department of Trade to discuss how to deal with new vendors and understand their technologies.
The result was an Evaluation Centre in Banbury that allows intelligence agents to vet Huawei technology and understand how it works, spotting any cyber risks and certifying it as safe to use across the United Kingdom.
Now he’s at Huawei, advising the CEO on how to make its products as secure as possible. The more he sees in Asia, the more inspired he is by the dynamism and innovation. “What I see is innovation across every technology platform: from drones to payment processes, from artificial intelligence to health and nanotech - you just see this activity going on everywhere. And I think that’s hugely exciting.”
Download Huawei’s whitepaper on 'Thinking Ahead About AI Security and Privacy Protection' here.