Quantum computing can help secure the future of AI systems

Muhammad Usman, CSIRO

Artificial intelligence algorithms are quickly becoming part of everyday life. Many systems that require high security either already rely on machine learning or will soon do so. These systems include facial recognition, banking, military targeting, and robots and autonomous vehicles, to name a few.

This raises an important question: how secure are these machine learning algorithms from malicious attacks?

In an article published in Nature Machine Intelligence, my colleagues from the University of Melbourne and I discuss a possible solution to the vulnerability of machine learning models.

We propose that integrating quantum computers into these models could yield new algorithms with high resilience against adversarial attacks.

The Dangers of Data Manipulation Attacks

Machine learning algorithms can be remarkably accurate and efficient for many tasks. They are particularly useful for classifying and identifying image features. However, they are also very vulnerable to data tampering attacks, which can pose a serious security risk.

Data manipulation attacks – which are very subtle manipulations of image data – can be launched in a number of ways. An attack can be launched by mixing corrupted data into a training dataset used to train an algorithm, causing it to learn things it shouldn’t.

Manipulated data can also be inserted during the testing phase (after training is complete), in cases where the AI system continues to train its underlying algorithms during use.

Such attacks can even be launched from the physical world. Someone could put a sticker on a stop sign to fool the AI of a self-driving car into identifying it as a speed limit sign. Or, on the front lines, troops might wear uniforms that trick AI-based drones into identifying them as landscape features.

In any case, the consequences of data tampering attacks can be severe. For example, if a self-driving car uses a compromised machine learning algorithm, it may incorrectly predict that there are no people on the road, even though there are.
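The kind of test-time manipulation described above can be sketched with a toy example (not from the article): a minimal linear "classifier" whose decision flips after each pixel is nudged by a small amount in the direction that most reduces the class score, in the spirit of gradient-sign attacks. The model, the 8x8 "image", and all numbers here are illustrative assumptions, not a real vision system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "image classifier": score = w . x, positive means "person present".
w = rng.normal(size=64)                  # weights over a flattened 8x8 "image"

# An input the model confidently classifies as "person": roughly aligned with w,
# plus a little noise, with pixel values clipped to [-1, 1].
x = np.clip(0.1 * w + rng.normal(scale=0.05, size=64), -1.0, 1.0)

def predict(img):
    """Return 1 ("person") if the score is positive, else 0 ("no person")."""
    return 1 if w @ img > 0 else 0

# Gradient-sign-style perturbation: for a linear model, the gradient of the
# score with respect to the input is just w, so stepping each pixel by
# -eps * sign(w) is the most damaging change of that size per pixel.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(predict(x))      # person detected on the clean image
print(predict(x_adv))  # detection fails after a small per-pixel change
```

Each pixel moves by at most 0.3, a change that would be hard to spot by eye in a real image, yet the decision flips because the small per-pixel changes all push the score in the same direction.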

[Figure: an algorithm correctly identifies people in an input image, but after a few pixels are changed in an adversarial attack, it can no longer identify them. Jan Hendrik Metzen et al., Author provided]

How Quantum Computing Can Help

In our article, we describe how the integration of quantum computing and machine learning could lead to secure algorithms, called quantum machine learning models.

These algorithms would be carefully designed to exploit special quantum properties, allowing them to find patterns in image data that are not easily manipulated. The resulting algorithms would be resilient, and secure even against powerful attacks. Nor would they need the expensive "adversarial training" currently used to teach algorithms how to resist such attacks.

In addition, quantum machine learning could enable faster algorithmic training and higher accuracy in the learning functions.

So how would it work?

Today’s classical computers store and process information in the form of “bits”, or binary digits, the smallest unit of data a computer can process. In classical computers, which obey the laws of classical physics, bits are represented as zeros and ones.


Quantum computing, on the other hand, follows principles of quantum physics. Information in quantum computers is stored and processed as qubits (quantum bits), which can exist as 0, 1, or a combination of both at the same time. A quantum system that exists in several states at the same time is called a superposition state. Quantum computers can be used to design clever algorithms that exploit this property.
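The superposition idea above can be sketched on a classical computer by simulating a single qubit's state vector, which is standard in quantum computing textbooks but not part of the article itself. Here the Hadamard gate takes a qubit that is definitely 0 into an equal superposition of 0 and 1:

```python
import numpy as np

# A single qubit's state is a pair of amplitudes: one for |0>, one for |1>.
zero = np.array([1.0, 0.0])          # the qubit starts definitely in |0>

# The Hadamard gate, a basic quantum operation that creates superposition.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

psi = H @ zero                       # state after the gate: (1/sqrt(2), 1/sqrt(2))
probs = np.abs(psi) ** 2             # measurement probabilities for 0 and 1

print(probs)                         # approximately [0.5, 0.5]
```

After the gate, the qubit is in both states at once; only when measured does it collapse to 0 or 1, each with 50% probability. Real quantum algorithms chain many such gates over many qubits, which is exactly what classical simulation cannot do at scale.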

While there are significant potential benefits to using quantum computing to secure machine learning models, it could also be a double-edged sword.

On the one hand, quantum machine learning models would provide crucial security for many sensitive applications. On the other hand, quantum computers could be used to generate powerful adversarial attacks that could easily fool even state-of-the-art conventional machine learning models.

Going forward, we need to seriously think about how best to protect our systems. An adversary with access to early quantum computers would pose a significant security threat.

Limitations to Be Overcome

Current evidence suggests we are still several years away from practical quantum machine learning, due to the limitations of the current generation of quantum processors.

Today’s quantum computers are relatively small (fewer than 500 qubits) and their error rates are high. Errors can occur for a number of reasons, including imperfect qubit fabrication, errors in the control circuitry, or information loss (known as “quantum decoherence”) through interaction with the environment.

Still, we’ve seen tremendous advances in quantum hardware and software in recent years. According to recent quantum hardware roadmaps, quantum devices manufactured in the coming years are expected to have hundreds to thousands of qubits.

These devices should be able to run powerful quantum machine learning models to help protect a variety of industries that rely on machine learning and AI tools.

Globally, governments and private sectors alike are increasing their investment in quantum technologies.

This month, the Australian government launched the National Quantum Strategy, which aims to expand the country’s quantum industry and commercialize quantum technologies. According to CSIRO, Australia’s quantum industry could be worth around A$2.2 billion by 2030.

Muhammad Usman, Senior Research Scientist and Team Leader, CSIRO

This article has been republished by The Conversation under a Creative Commons license. Read the original article.