The use of Verification and Validation (V&V) in the development of AI/ML software

Ron McFarland PhD
5 min read · May 22, 2024


In light of the major bug in Intel’s Neural Compressor software identified this month (Ramesh & Ross, 2024), there are methods and techniques that AI/ML software developers can use to limit security risks when developing code. The recently identified vulnerability in the Neural Compressor software can allow hackers to execute arbitrary code on systems that run affected versions. This article discusses the vulnerability and briefly addresses the use of Verification and Validation (V&V) techniques to limit security exposure when developing AI/ML code.

Overview

In essence, the Neural Compressor software lowers the memory requirements of AI models while reducing their computational expense and cache miss rate. The software also helps improve a system’s inference performance. Organizations use this open-source Python framework to deploy AI applications on a variety of hardware platforms, including mobile devices with limited processing power.

On Intel machines running affected versions, a maximum-severity issue in the AI model compression software can give hackers an opportunity to execute arbitrary code. The Neural Compressor vulnerability is rated 10 on the CVSS scale, and Intel has released an update that should be applied immediately. The details are discussed below.

Intel’s Max Severity Flaw Affects AI Model Compressor Users

Intel’s artificial intelligence model compression software has a maximum-severity flaw that may enable cyber criminals to run arbitrary code on enterprise systems that are running affected versions. The Common Vulnerability Scoring System (CVSS) assigns this weakness, also referred to as the Neural Compressor flaw, a score of 10, the highest possible, signifying its severity and impact.

What is the Neural Compressor?

The Neural Compressor software is a technology that helps businesses lower the memory requirements of AI models while simultaneously reducing the computational cost and cache miss rate of neural networks. The software is used to deploy AI applications on a variety of hardware platforms, including mobile devices with minimal CPU capacity, and helps achieve greater inference performance.
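
To make the discussion concrete, the sketch below shows roughly how the library is used for post-training quantization. It is a minimal illustration assuming the Neural Compressor 2.x Python API (PostTrainingQuantConfig and quantization.fit) and a patched release of the library; the model and calibration data are placeholders.

```python
# Minimal post-training quantization sketch with Intel Neural Compressor.
# Assumes the 2.x API and a patched release; model and data are placeholders.
import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset

from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# A pre-trained FP32 model to compress (any torch.nn.Module works).
model = torchvision.models.resnet18(weights="DEFAULT").eval()

# Tiny synthetic calibration set; in practice, use representative real data.
calib = TensorDataset(torch.randn(32, 3, 224, 224),
                      torch.zeros(32, dtype=torch.long))
calib_loader = DataLoader(calib, batch_size=8)

# Default INT8 post-training quantization: lowers the memory footprint and
# improves inference performance, as described above.
q_model = fit(model=model,
              conf=PostTrainingQuantConfig(),
              calib_dataloader=calib_loader)
q_model.save("./quantized_resnet18")
```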

Details of the Flaw

The vulnerability, identified as CVE-2024-22476, is the most serious of the 41 security advisories that Intel published the week prior. It stems from user input that is not properly validated or sanitized. Because of this, hackers can exploit the vulnerability remotely without requiring special privileges or user interaction. The vulnerability significantly affects the confidentiality, integrity, and availability of data.
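
Intel has not published the vulnerable code path, so the sketch below illustrates only the general bug class, improper validation of user input leading to code execution, in Python. The function names and payloads are hypothetical, not Intel’s code.

```python
import ast

# UNSAFE (illustrative): eval() on attacker-controlled text executes
# arbitrary code, e.g. payload = "__import__('os').system('id')".
def load_tuning_param_unsafe(payload: str):
    return eval(payload)

# SAFER: ast.literal_eval() accepts only Python literals (numbers, strings,
# tuples, lists, dicts) and raises ValueError on anything executable;
# the result is then validated against the expected type.
def load_tuning_param_safe(payload: str) -> float:
    value = ast.literal_eval(payload)
    if not isinstance(value, (int, float)):
        raise ValueError("expected a numeric tuning parameter")
    return float(value)

print(load_tuning_param_safe("0.01"))          # 0.01
# load_tuning_param_safe("__import__('os')")   # raises ValueError
```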

Other Vulnerabilities

In addition to CVE-2024-22476, Intel also disclosed a second vulnerability in the Neural Compressor software, identified as CVE-2024-21792. This is a time-of-check/time-of-use (TOCTOU) vulnerability of moderate severity that could allow hackers to access unauthorized data. However, an attacker needs local, authenticated access to a vulnerable system to exploit it.
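
Again, Intel’s internal details are not public, but the TOCTOU bug class is easy to illustrate. The hypothetical sketch below, assuming a POSIX system, shows a racy check-then-use pattern and one common mitigation; the function names are illustrative only.

```python
import os

# TOCTOU (illustrative): the file can be swapped for a symlink to a
# sensitive path in the window between the access() check and the open().
def read_report_racy(path: str) -> bytes:
    if os.access(path, os.R_OK):        # time of check
        with open(path, "rb") as f:     # time of use -- race window here
            return f.read()
    raise PermissionError(path)

# Mitigation: drop the separate check and open with O_NOFOLLOW so that a
# symlink planted during the race window is rejected by the kernel.
def read_report_safer(path: str) -> bytes:
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    with os.fdopen(fd, "rb") as f:
        return f.read()
```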

Impact on AI Applications

The effect of the vulnerability can be exacerbated for businesses that use this software as an essential building block of the AI solutions they develop and support. For example, a month ago, researchers from Wiz discovered vulnerabilities, since fixed, at Hugging Face, a well-known AI company. These vulnerabilities allowed attackers to alter existing models and add malicious models to the company’s model registry.

The Need for Software Verification and Validation

The discipline of Artificial Intelligence (AI) is a dynamic one that is constantly evolving, pushing existing limits, and testing the understanding of technology. As AI advances, we must guarantee that these systems are stable, dependable, and secure. In the context of AI development, the role of Software Verification and Validation (V&V) is more vital than ever (Buede & Miller, 2024).

Verification and Validation address two crucial, complementary components of software development and quality assurance. Verification is the process of confirming, throughout the development stage, that a product satisfies its defined requirements; it asks, “Are we building the software correctly?” Validation, in contrast, assesses software during or after development to ensure it meets the user’s actual needs; it asks, “Are we building the right software per user specifications?”
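
As a toy illustration of the distinction (my own example, not drawn from the cited sources), suppose a requirement states that a compression routine must halve model size, while the user’s real need is that the compressed model stays accurate enough to deploy. Verification tests the written requirement; validation tests the user’s need.

```python
def compress(model_size_mb: float) -> tuple[float, float]:
    """Stand-in for a compression routine: returns (new_size_mb, accuracy)."""
    return model_size_mb * 0.4, 0.91

# Verification -- "Are we building the software correctly?"
# Checks the written requirement: size reduced to at most 50%.
def test_verification_size_requirement():
    new_size, _ = compress(100.0)
    assert new_size <= 50.0

# Validation -- "Are we building the right software?"
# Checks the user's actual need: at least 90% accuracy after compression.
def test_validation_user_accuracy():
    _, accuracy = compress(100.0)
    assert accuracy >= 0.90
```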

Because AI systems are complex and intricate by nature, V&V serves a critical role in their development. The decision-making processes of AI models can be difficult to understand, since the models are often opaque. V&V procedures contribute to the accuracy and dependability of these models, promoting confidence in the outcomes that the software produces. This is particularly important in industries such as healthcare, finance, and autonomous vehicles, where poor predictions or decisions could have fatal repercussions.

AI systems are vulnerable to several dangers, such as overfitting, bias, and adversarial attacks, especially given the context of this article. Applied early in the development phase, V&V methods can assist in identifying and mitigating these risks, decreasing the possibility of adverse outcomes later. V&V procedures are also essential for proving conformity with strict regulations in sectors such as infrastructure and the chip industry. V&V offers evidence that an AI system has undergone extensive testing and validation and satisfies all applicable safety and quality standards.

Even though V&V is essential to the development of AI, certain difficulties remain. Given the dynamic and non-deterministic nature of AI systems, traditional V&V techniques might not be suitable. Even if an AI system was verified and validated at the time of deployment, it may evolve in ways that were not originally expected, because such systems typically continue to learn and adapt after deployment.
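
One practical response to post-deployment change is continuous statistical monitoring. The minimal sketch below (an illustration, not a tool named in this article) uses a two-sample Kolmogorov-Smirnov test from SciPy to flag when live model scores no longer match the distribution observed at validation time.

```python
from scipy.stats import ks_2samp

# Compare the score distribution recorded at validation time against live
# traffic; a small p-value signals that the deployed model (or its inputs)
# has drifted away from what was originally verified and validated.
def has_drifted(validated_scores, live_scores, alpha: float = 0.01) -> bool:
    _statistic, p_value = ks_2samp(validated_scores, live_scores)
    return p_value < alpha
```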

Certain AI models can be challenging to test and verify in terms of their internal decision-making processes due to their “black box” nature. Given these challenges, V&V is clearly necessary for AI development, though it is not a perfect solution. Researchers and developers are actively building techniques and tools, such as adversarial training, robustness testing, and explainable AI, to enable efficient verification and validation of AI systems and limit the overall risks in the AI/ML software development realm.
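
As one concrete example of the robustness testing mentioned above, the PyTorch sketch below measures how stable a classifier’s predictions are under a fast-gradient-sign (FGSM) perturbation. It assumes a `model` and a dataloader yielding (input, label) batches, and illustrates the technique generically rather than any specific tool.

```python
import torch
import torch.nn.functional as F

def fgsm_stability(model: torch.nn.Module, loader, eps: float = 0.03) -> float:
    """Fraction of predictions unchanged under an FGSM perturbation of size eps."""
    model.eval()
    stable, total = 0, 0
    for x, _ in loader:
        x = x.clone().requires_grad_(True)
        logits = model(x)
        clean_pred = logits.argmax(dim=1)
        # Perturb each input in the direction that most increases the loss.
        loss = F.cross_entropy(logits, clean_pred)
        loss.backward()
        x_adv = (x + eps * x.grad.sign()).detach()
        with torch.no_grad():
            adv_pred = model(x_adv).argmax(dim=1)
        stable += (adv_pred == clean_pred).sum().item()
        total += x.size(0)
    return stable / total  # closer to 1.0 means more robust
```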

Conclusion

The discovery of these vulnerabilities emphasizes the significance of strong cybersecurity practices, such as software verification and validation (V&V), as well as the necessity of frequent software updates and patches. Companies and individuals using Intel’s Neural Compressor software should ensure they are running the latest version to provide some measure of protection against these types of exposures and vulnerabilities.

References

Buede, D. M., & Miller, W. D. (2024). The engineering design of systems: Models and methods. John Wiley & Sons.

Ramesh, R., & Ross, R. (2024, May 20). Intel’s Max severity flaw affects AI model compressor users. Government Information Security. https://www.govinfosecurity.com/intels-max-severity-flaw-affects-ai-model-compressor-users-a-25275

About the Author

Ron McFarland, Ph.D., CISSP, is a Chief Technology Officer at Highervista LLC in Flagstaff, AZ, and was formerly a Senior Cybersecurity Consultant at CMTC (California Manufacturing Technology Consulting) in Long Beach, CA. He received his doctorate from NSU’s School of Engineering and Computer Science, earned an MSc in Computer Science from Arizona State University, and completed a post-doctoral graduate research program in Cyber Security Technologies at the University of Maryland. He taught Cisco CCNA (Cisco Certified Network Associate), CCNP (Cisco Certified Network Professional), CCDA (Design), CCNA-Security, Cisco CCNA Wireless, and other Cisco courses. He was honored with the Cisco Academy Instructor (CAI) Excellence Award in 2010, 2011, and 2012 for excellence in teaching. He also holds multiple security certifications, including the prestigious Certified Information Systems Security Professional (CISSP). He writes for Medium as a guest author to provide information to learners of cybersecurity, students, and clients.

CONTACT Ron McFarland, Ph.D.

· Email: highervista@gmail.com

· LinkedIn: https://www.linkedin.com/in/highervista/

· YouTube Channel — Smart Cybersecurity: https://www.youtube.com/@RonMcFarland/featured
