By: Dana Neustadter, Product Marketing Manager, Security IP, Synopsys
Artificial intelligence (AI) is bringing new waves of innovation and business models, powered by new deep learning technology and massive growth in investment. As AI becomes pervasive in computing applications, so too does the need for high-grade security at all levels of the system. Security needs to be integral to the AI process. The protection of AI systems, their data, and their communications is critical for users’ safety and privacy, as well as for protecting businesses’ investments. This article describes where security is needed throughout AI environments, as well as implementation options for ensuring a robust, secure system.
AI applications built around artificial neural networks (ANNs, or simply “neural nets”) involve two basic stages – training and inference (Figure 1). The training stage is when the network is “learning” to do a job, such as recognizing faces or street signs. The resulting configuration data set of the neural net (the weights representing the interactions between neurons) is called a model. In the inference stage, the algorithm embodied in the model is deployed to the end application.
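As a minimal sketch of how these two stages relate (illustrative only; scikit-learn stands in here for a production deep learning stack, and all names are hypothetical):

```python
# Minimal sketch of the two stages of a neural network application.
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training stage: the network "learns" a job from the training data set.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Inference stage: the trained model is deployed and applied to new inputs.
predictions = model.predict(X_test)
print(f"Inference accuracy: {model.score(X_test, y_test):.2f}")

# The valuable intellectual property is the learned weights themselves:
weights = model.coefs_  # per-layer weight matrices, i.e., "the model"
```

The fitted weights are exactly the asset the rest of this article is concerned with protecting.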
Figure 1: The training and inferencing stages of deep learning and AI
The algorithms used in neural net training often involve data that requires privacy, such as exactly how faces and fingerprints are collected and analyzed. The algorithm is a large part of the value of any AI technology. In many cases, the large training data sets that come from public surveillance, face recognition and fingerprint biometrics, financial, and medical applications are private and often contain personally identifiable information. Attackers, whether organized crime groups or business competitors, can exploit this information for economic gain or other rewards. In addition, AI systems face the risk of rogue data maliciously injected to disrupt the neural network’s functionality (e.g., misclassification of face recognition images to allow attackers to escape detection). Companies that protect training algorithms and user data will be differentiated in their fields from companies that suffer the negative PR and financial consequences of being exploited. Hence, it is highly important to ensure that data is received only from trusted sources and protected during use.
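One hedged sketch of what “received only from trusted sources” can mean in practice: authenticate each incoming training batch with a message authentication code under a key provisioned for the trusted source, and reject anything that fails. The framing and names below are hypothetical, not a specific product’s mechanism:

```python
import hmac
import hashlib

def verify_training_batch(batch: bytes, tag: bytes, source_key: bytes) -> bool:
    """Admit a training batch into the pipeline only if its HMAC tag
    verifies under the key provisioned for a trusted data source."""
    expected = hmac.new(source_key, batch, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, tag)

# Batches whose tags fail verification are dropped, blocking rogue data
# injected to disrupt the neural network's functionality.
```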
The models themselves, represented by the neural net weights, and the training process that produces them are incredibly expensive to create and are valuable intellectual property to protect. Companies that invest to build these models want to protect them against disclosure and misuse. The confidentiality of the program code associated with the neural network processing functions is considered less critical; however, access to it can aid someone attempting to reverse engineer the product. More importantly, the ability to tamper with this code can result in the disclosure of all assets that are plaintext inside the security boundary.
In addition to protecting data for business reasons, another strong driver for enforcing personal data privacy is the General Data Protection Regulation (GDPR), which came into effect within the European Union on May 25, 2018. This legal framework sets guidelines for the collection and processing of personal information. The GDPR sets out principles for data management and protection and the rights of the individual, and large fines may be imposed on businesses that do not comply with the rules.
As data and models move between the network edge and the cloud, communications need to be secure and authentic. It is important to ensure that the data and/or models are protected and are communicated and downloaded only from authorized sources to devices that are authorized to receive them.
Security needs to be incorporated from product concept through the entire lifecycle. As new AI applications and use cases emerge, devices that run these applications need to be capable of adapting to an evolving threat landscape. To address high-grade protection requirements, security needs to be multi-faceted and “baked in” from the edge devices incorporating neural network processing system-on-chips (SoCs) right through to the applications that run on them and carry their data to the cloud.
At the outset, system designers adding security to their AI product must consider a few security enablers, foundational functions that belong in the vast majority of products, AI systems included, to protect all phases of operation: offline, during power-up, and at runtime, including during communication with other devices or the cloud. Establishing the integrity of the system is essential to creating trust that the system is behaving as intended.
Secure bootstrap, an example of a foundational security function, establishes that the software or firmware of the product is intact (“has integrity”). Integrity assures that when the product comes out of reset, it does what its manufacturer intended – and not something a hacker has altered. Secure bootstrap systems use cryptographic signatures on the firmware to determine its authenticity. While implemented predominantly in firmware, secure bootstrap systems can take advantage of hardware features such as cryptographic accelerators and even hardware-based secure bootstrap engines to achieve higher security and faster boot times. Flexibility for secure boot schemes is maximized by using public key signing algorithms with a chain of trust traceable to the firmware provider. Public key signing algorithms allow the code signing authority to be replaced by revoking and reissuing the signing keys if the keys are ever compromised. The essential feature that security hinges on is that the root public key is protected by the secure bootstrap system and cannot be altered. Protecting the public key in hardware ensures that the root of trust identity can be established and is unforgeable.
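The verification step at the heart of such a scheme can be sketched as follows, assuming an Ed25519 code-signing algorithm (via the Python `cryptography` package; the key and function names are hypothetical placeholders, not a particular boot ROM’s interface):

```python
# Sketch of the signature check in a secure bootstrap flow.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

# In a real system the root public key is immutable (fused or ROM-resident),
# so the root of trust identity cannot be altered or forged.
ROOT_PUBLIC_KEY_BYTES = b"\x00" * 32  # placeholder; pinned at manufacture

def verify_firmware(image: bytes, signature: bytes) -> bool:
    """Return True only if the firmware image carries a valid signature
    traceable to the firmware provider's root key."""
    root_key = Ed25519PublicKey.from_public_bytes(ROOT_PUBLIC_KEY_BYTES)
    try:
        root_key.verify(signature, image)  # raises if the image was altered
        return True
    except InvalidSignature:
        return False  # refuse to boot tampered firmware
```

Key revocation and reissue then amount to distributing a new signing key whose chain of trust still terminates at the protected root key.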
The best encryption algorithms can be compromised if the keys are not protected by key management, another foundational security function. For high-grade protection, the secret key material should reside inside a hardware root of trust. Permissions and policies in the hardware root of trust ensure that application layer clients can manage the keys only indirectly, through well-defined application programming interfaces (APIs). Continued protection of the secret keys relies on authenticated importing of keys and wrapping of any exported keys. An example of a common key management API for embedded hardware security modules (HSMs) is the PKCS#11 interface, which provides functions to manage policies, permissions, and use of keys.
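As a minimal sketch of “wrapping any exported keys,” the standard AES Key Wrap construction (RFC 3394) can be used so that key material only ever crosses the security boundary in wrapped form; a real deployment would route these operations through an API such as PKCS#11 rather than handling raw key bytes as this illustration does:

```python
# Sketch of wrapped key export/import using AES Key Wrap (RFC 3394),
# via the 'cryptography' package; key material here is illustrative only.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)          # key-encryption key, held inside the root of trust
session_key = os.urandom(32)  # secret key an application needs exported

# Keys never leave the security boundary in plaintext; only wrapped blobs do.
wrapped = aes_key_wrap(kek, session_key)

# Re-import requires the KEK, so only the root of trust can unwrap.
assert aes_key_unwrap(kek, wrapped) == session_key
```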
A third foundational function relates to secure updates. Whether in the cloud or at the edge, AI applications will continue to grow more sophisticated, and data and models will need to be updated continuously, in real time. The process of distributing new models securely needs to be protected with end-to-end security. Hence, it is essential that products can be updated in a trusted way to fix bugs, close vulnerabilities, and evolve product functionality. A flexible secure update function can even be used to enable optional hardware or firmware features after the product is in the field.
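One way a trusted update check can be structured, sketched under the assumption of a signed JSON manifest with a monotonic version counter (the manifest format and function names are hypothetical):

```python
# Sketch of a trusted update decision: verify the vendor's signature over
# the update manifest, then enforce anti-rollback versioning.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def accept_update(manifest: bytes, signature: bytes,
                  vendor_key: Ed25519PublicKey, current_version: int) -> bool:
    try:
        vendor_key.verify(signature, manifest)   # end-to-end authenticity
    except InvalidSignature:
        return False                             # untrusted or corrupted update
    meta = json.loads(manifest)
    # Anti-rollback: never install an older (possibly vulnerable) image.
    return meta["version"] > current_version
```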
After addressing foundational security issues, designers must consider how to protect the data and coefficients in their AI systems. Many neural network applications operate on audio, still images or video streams, and other real-time data. These large data sets often carry significant privacy concerns, so protecting them in memory, such as DRAM, or stored locally on disk or flash, is essential. High-bandwidth memory encryption (usually AES-based) backed by strong key management solutions is required. Similarly, models can be protected through encryption and authentication, backed by strong key management systems enabled by a hardware root of trust.
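A minimal sketch of protecting a stored model with authenticated AES encryption (AES-GCM via the `cryptography` package); the key handling here is illustrative only, since in practice the key would stay inside the hardware root of trust rather than in application memory:

```python
# Sketch of model-at-rest protection with authenticated encryption.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # illustrative; HSM-held in practice
aesgcm = AESGCM(key)

model_weights = b"...serialized neural network weights..."
nonce = os.urandom(12)  # must be unique per encryption under a given key

# One pass gives both confidentiality and integrity: any tampering with the
# ciphertext (or the bound context label) makes decryption fail.
ciphertext = aesgcm.encrypt(nonce, model_weights, associated_data=b"model-v1")
plaintext = aesgcm.decrypt(nonce, ciphertext, associated_data=b"model-v1")
assert plaintext == model_weights
```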
To ensure that communications between edge devices and the cloud are secure and authentic, designers use protocols that incorporate mutual identification and authentication, for example client-authenticated Transport Layer Security (TLS). The TLS handshake performs identification and authentication, and, if successful, the result is a mutually agreed shared session key that allows secure, authenticated data communication between systems. A hardware root of trust can ensure the security of the credentials used to complete identification and authentication, as well as the confidentiality and authenticity of the data itself. Communication with the cloud will require high bandwidth in many instances. As AI processing moves to the edge, high-performance security requirements are expected to propagate there as well, including the need for additional authentication to ensure that neither the inputs to the neural network nor the AI training models have been tampered with.
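A sketch of client-authenticated (mutual) TLS using Python’s standard `ssl` module; the hostname, certificate, and key file paths are hypothetical placeholders, and on a real device the private key would be guarded by the hardware root of trust rather than a file:

```python
# Sketch of mutual TLS between an edge device and a cloud endpoint.
import socket
import ssl

# Trust anchor for authenticating the cloud server.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="trusted_cloud_ca.pem")
# The device presents its own credential, so the server authenticates it too.
context.load_cert_chain(certfile="device_cert.pem", keyfile="device_key.pem")

with socket.create_connection(("ai.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="ai.example.com") as tls:
        # Handshake complete: both sides identified, session key agreed.
        tls.sendall(b"model update request")
```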
Building an AI system requires high-performance, low-power, area-efficient processors, interfaces, and security. Figure 2 shows a high-level architecture view of a secure neural network processor SoC used in AI applications. Neural network processor SoCs can be made more secure when implemented with proven IP, such as DesignWare® IP.
Figure 2: A trusted execution environment with DesignWare IP helps secure neural network SoCs for AI applications
Synopsys EV6x Embedded Vision Processors combine scalar, vector DSP, and convolutional neural network (CNN) processing units for accurate and fast vision processing. They are fully programmable and configurable, combining the flexibility of software solutions with the high performance and low power consumption of dedicated hardware. The CNN engine supports common neural network configurations, including popular networks such as AlexNet, VGG16, GoogLeNet, YOLO, SqueezeNet, and ResNet.
Synopsys’ highly secure tRoot hardware security module with root of trust is designed to easily integrate into SoCs and provides a scalable platform offering diverse security functions, including secure identification and authentication, secure boot, secure updates, secure debug, and key management, in a trusted execution environment (TEE) as a companion to one or more host processors. tRoot protects AI devices using unique code protection mechanisms that provide run-time tamper detection and response, and code privacy protection, without the added cost of additional dedicated secure memory. This unique feature reduces system complexity and cost by allowing tRoot’s firmware to reside in any non-secure memory space. Commonly, tRoot programs reside in shared system DDR memory. Due to the confidentiality and integrity provisions of its secure instruction controller, this memory is effectively private to tRoot and impervious to modification attempts originating in other subsystems on the chip or from the outside.
Synopsys DesignWare Security Protocol Accelerators (SPAccs) are highly integrated embedded security solutions with efficient encryption and authentication capabilities that provide increased performance, ease of use, and advanced security features such as quality of service, virtualization, and secure command processing. The SPAccs offer designers unprecedented configurability to address the complex security requirements commonplace in today’s multi-function, high-performance SoC designs, supporting major security protocols such as IPsec, TLS/DTLS, WiFi, MACsec, and LTE/LTE-Advanced.
AI is poised to revolutionize the world. The opportunities AI brings are incredible, some being realized right now, others yet to come. Providers of AI solutions are investing significant R&D capital, and the models derived from training data (unique coefficients or weights) represent a big investment that needs to be protected. With new regulations like GDPR in place, serious concerns about the privacy and confidentiality of people’s data, and huge investments in intellectual property in neural network architecture and model generation, companies providing AI solutions should be leading the charge to put secure processes in place around their AI products and services.
Synopsys offers a broad range of hardware and software security and neural network processing IP to enable the development of intelligent, secure solutions and power the applications of the new AI era.