Singh Ready to Prove Deep Neural Networks Can Be Safe and Reliable with NSF CAREER Award

Five years ago, Illinois computer science professor Gagandeep Singh began branching out from his initial research focus to work on the reliability and security of deep neural networks (DNNs).

Gagandeep Singh

Despite enjoying previous projects focused on a problem considered unsolvable for more than 40 years – the design of precise and scalable numerical program analysis – Singh has identified DNNs as an area of growing importance in the artificial intelligence community. This is because many academics have come to recognize the limitations of deep learning after years of excitement and promise.

In fact, his research background could serve as a stabilizing force for the future of DNNs, considering that he has developed mathematically rigorous solutions that also offer practical systems useful to society.

The NSF evidently valued Singh’s experience: his recent proposal, titled “Proof Sharing and Transfer for Boosting Neural Network Verification,” won an NSF CAREER Award in early February.

“Verifying the safety and robustness of DNNs is one of the key issues in modern machine learning,” Singh said. “DNN verification is an inherently difficult problem, and scaling to large, realistic DNNs is the biggest challenge. Five years ago, state-of-the-art methods could only verify DNNs with a few hundred neurons, while some of the latest verifiers can handle DNNs with up to a million neurons.

“Despite these advances, existing verifiers are inherently inefficient when used in industrial DNN development pipelines, where the verifier must be run hundreds of thousands of times for different networks and specifications.”

Singh elaborated on the issue, noting that the inefficiency stems from the fact that “the verifier starts from scratch for each new pair of networks and specifications.”


Working with students through his FOCAL lab, Singh noted that this process could benefit from incremental verification enabled by proof sharing and transfer.
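The core idea can be illustrated with a toy sketch. The code below is not Singh’s system; it is a minimal, hypothetical interval-bound-propagation (IBP) verifier in which bounds already computed for layers that are unchanged between two networks are reused rather than recomputed from scratch, mimicking how incremental verification avoids redundant work when a network is re-verified after a small modification.

```python
# Toy sketch of proof reuse in incremental verification (illustrative only,
# not Singh's actual method). We propagate input intervals through a small
# ReLU network and cache per-layer bounds; when re-verifying a network whose
# early layers are unchanged, the cached bounds for that prefix are reused.
import numpy as np

def ibp_layer(lo, hi, W, b):
    """Propagate the box [lo, hi] through an affine layer followed by ReLU."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = Wp @ lo + Wn @ hi + b   # lower bound uses positive/negative split
    new_hi = Wp @ hi + Wn @ lo + b
    return np.maximum(new_lo, 0), np.maximum(new_hi, 0)

def verify(layers, lo, hi, cache=None):
    """Return output bounds, reusing cached bounds for an unchanged prefix."""
    new_cache = []
    for i, (W, b) in enumerate(layers):
        key = (W.tobytes(), b.tobytes())
        if cache is not None and i < len(cache) and cache[i][0] == key:
            lo, hi = cache[i][1]      # proof reuse: layer unchanged, skip work
        else:
            cache = None              # prefix diverged; stop reusing from here
            lo, hi = ibp_layer(lo, hi, W, b)
        new_cache.append((key, (lo, hi)))
    return (lo, hi), new_cache
```

In this sketch, re-verifying a network whose final layer was fine-tuned only recomputes that last layer; the expensive bound computation for the shared prefix is amortized across verification runs, which is the kind of saving that matters when a verifier must run hundreds of thousands of times in a development pipeline.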

The resulting promise of this work makes Singh think big.

“DNNs are currently the dominant AI technology and could potentially have a transformative impact on society and the economy. However, these gains will only be realized if they are perceived as safe and reliable,” Singh said. “To build trust, we need formal guarantees for DNN behavior in unseen scenarios. The inefficiency of existing verifiers hampers their use in real-world environments.

“I anticipate that the methods and systems developed as part of this project will accelerate the adoption of formal verification within the DNN development and deployment pipelines across multiple industries, including agriculture, computing, finance, and healthcare.”

Singh’s confidence in the work ahead stems from how his past work has been received.

Since receiving ACM’s SIGPLAN Doctoral Dissertation Award, Singh has placed great emphasis on grounding his methodology in mathematically rigorous and sound theory. His past work has also produced efficient and easy-to-use systems that other researchers are happy to build upon.

Taking a similar approach to this NSF CAREER Award-funded project should provide further proof that he’s on the right track with his work – a sentiment that friends both within academia and outside of it are beginning to appreciate.

“The NSF CAREER Award is prestigious and highly competitive; I am very happy about this, as the award is a validation of our efforts,” said Singh. “My friends outside of computer science have also come to know about the NSF CAREER Award on a personal level, even more than the SIGPLAN Doctoral Dissertation Award. So now more than ever they are convinced that I am doing something good!”
