By: Michael Thompson, Product Marketing Manager, 草榴社区
Artificial intelligence (AI) enables a machine to perceive its environment and respond in ways that make it more useful to us. Recent developments have made AI a hot topic, but it has been around longer than you may think: the term dates to the mid-1950s, when AI was first introduced as an academic discipline. Now, with advancements in processor technology, AI is moving from mainframes to embedded applications while evolving at a rapid pace. 草榴社区 is on the leading edge of this technological transformation, where we are starting to see AI’s full potential. In a few years, as the technology matures, AI will increase productivity, change how we access information and perform repetitive tasks, and profoundly change our lives.
John McCarthy, an American computer scientist, coined the term “artificial intelligence” at a Dartmouth conference in 1956. Today, AI is an umbrella term that encompasses a broad range of processing tasks, from search in the cloud and robotics to speech recognition, translation, and expert systems. AI can be broadly classified as either ‘weak AI’ (also known as narrow AI), which can solve specific tasks, or ‘strong AI’ (also known as artificial general intelligence), which is the ability of a machine to find a solution when faced with an unfamiliar task. Strong AI is not available yet; all existing AI systems are weak AI designed for a specific task.
Research on AI is focused on creating machines that can learn and solve new challenges, and strong AI will start finding its way into applications in the coming years. Much of what we hear about AI today comes from companies like Google, Yahoo, Microsoft, and Amazon, which are developing applications for the cloud. These capabilities are important, but what’s less well known is that AI capabilities are moving rapidly from large server farms to the embedded and deeply embedded applications that run on the devices we carry in our pockets.
It is not surprising that some people feel that AI is not really intelligence, but rather a sophisticated manipulation of data and our emotions by computers that only appears to be intelligent. The argument is not without merit, but is based on a limited interpretation of what intelligence is.
As it relates to humans, intelligence can be defined as the skilled use of reason, especially relational reasoning. But on a broader scale, intelligence can be defined as the ability to perceive the environment and take actions that maximize the chance of successfully completing a goal; this is what artificial intelligence is today. AI will continue to advance and in time will encompass the ability to reason and likely eclipse human intelligence in the process. However, the technology is not there yet due in part to the fact that human intelligence and the ability to reason are very complex processes.
When we think about AI, we tend to conjure up visions of sophisticated humanoid machines, but in reality, today’s AI machines are closer to an Amazon Echo: a device that combines good voice recognition (perception) with fast processing (decision making) and an action (response), such as answering your question, playing music, or switching on the lights. Most AI applications use the same process of perception, decision making, and response.
Perception can take on many forms. It can come from sensors, cameras, a database, a spoken request, or other sources. The decision making is done by processors and can be split between the cloud and a local processor to increase performance. The response is some action that is taken; it can be audio, mechanical, a database update, or many other things, depending on the function being performed.
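To make the pattern concrete, here is a minimal Python sketch of that perceive-decide-respond loop. The function names are hypothetical placeholders standing in for real sensor, model, and actuator APIs, not any particular product’s interface.

```python
# Minimal sketch of the perception -> decision -> response loop.
# read_microphone(), transcribe(), decide(), and respond() are
# hypothetical placeholders for real sensor, model, and actuator APIs.

def read_microphone() -> bytes:
    """Perception: capture a raw signal from a sensor."""
    return b"raw audio samples"  # would come from hardware in a real system

def transcribe(audio: bytes) -> str:
    """Decision, part 1: turn the signal into a usable representation.
    This step could run locally or be offloaded to the cloud."""
    return "turn on the lights"

def decide(request: str) -> str:
    """Decision, part 2: map the recognized request to an action."""
    return "lights_on" if "lights" in request else "unknown"

def respond(action: str) -> None:
    """Response: drive an actuator, play audio, or update a database."""
    print(f"executing action: {action}")

# One pass through the loop; an embedded device runs this continuously.
respond(decide(transcribe(read_microphone())))
```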
There are varying levels of AI in use today. For example, chess programs use a basic level of AI: the program analyzes where you move your chess piece, then through brute-force computation looks at hundreds of thousands of possible positions and determines its next move from the series of moves with the highest probability of winning. Speech recognition has also been available for some time, but it is rapidly evolving with support for natural language recognition and language translation. Natural language recognition takes a higher level of computation because it requires an understanding of what is being spoken. Language translation takes it a step further because it requires an understanding of word structure in the target language. This takes more than brute-force computation and requires the computer to be programmed to understand the context and domain in which the language is used. Today, the concept of AI includes understanding language, interpreting complex data, machine vision, intelligent routing in content delivery networks, and autonomous vehicles.
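The chess example boils down to searching a game tree and choosing the branch with the best outcome. Below is a minimal minimax sketch in Python, demonstrated on a toy take-away game rather than chess; a real engine would swap in chess-specific move generation and position evaluation.

```python
# Minimal minimax search -- the brute-force idea behind classic chess AI,
# shown on a toy game: players alternately remove 1 or 2 stones from a
# pile, and whoever takes the last stone wins.

def moves(pile: int) -> list[int]:
    return [m for m in (1, 2) if m <= pile]

def minimax(pile: int, maximizing: bool) -> int:
    if pile == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    results = [minimax(pile - m, not maximizing) for m in moves(pile)]
    return max(results) if maximizing else min(results)

def best_move(pile: int) -> int:
    """Pick the move whose subtree has the best minimax value."""
    return max(moves(pile), key=lambda m: minimax(pile - m, False))

print(best_move(7))  # -> 1: taking one stone leaves the opponent losing
```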
Machine vision is an area where AI is being used, and significant advancements in neural network technology have dramatically increased its accuracy. Neural networks mimic the way our brain learns and uses learned information to recognize patterns. Over the past five years, this technology has been refined to the point where machines can achieve higher accuracy than humans in image recognition and other tasks. Research in machine vision continues, and new algorithms are being developed that are not only faster and more accurate, but also much simpler. Figure 1 is an example of machine vision used to perform scene segmentation and object identification using a convolutional neural network algorithm.
Figure 1: Example of machine vision scene segmentation
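To give a rough idea of what a convolutional neural network looks like in code, here is a minimal PyTorch sketch of a small CNN classifier. It is a toy model for intuition only, not the segmentation network behind Figure 1.

```python
# A toy convolutional neural network in PyTorch -- illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)  # (N, 32, 8, 8) for a 32x32 input
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
print(logits.shape)  # torch.Size([1, 10])
```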
But vision isn’t the only use of neural networks. Image captioning, text generation, character recognition, language translation, radar, audio, and many other applications are being addressed. For example, NASA is using neural network technology to analyze data from telescopes to find new planets. Not only is the system more accurate than humans, but it can analyze the data many orders of magnitude faster. Using this system, NASA recently found an eighth planet revolving around a star (Kepler-90) that is 2545 light-years away – the first known solar system outside of our own with eight planets.
We hear a lot more about artificial intelligence in the news today than we did a few years ago. This is because applications that use AI are moving closer to mass-market consumers. AI is moving from academia and mainframe computers to embedded applications, including devices we use every day. This shift is driven by advances in process technology, microprocessors, and AI algorithms. It takes a lot of processing power and memory to run AI applications. This is less of an issue in mainframe computers because of the abundance of resources. To move AI into portable and embedded applications, performance and memory capacity need to increase while power consumption is significantly reduced. This is difficult to achieve because as processor performance and memory size increase, so does power consumption. Fortunately, as semiconductor processes progress to smaller geometries, circuit area and power consumption shrink, allowing designers to put the larger memories and advanced processors required for AI on a chip.
The implementation of AI in embedded applications is being facilitated by advances in microprocessor capabilities combined with current process technologies, enabling processors that offer very small size at performance levels that were unattainable a few years ago. For example, the DesignWare® ARC® HS44 with a superscalar pipeline delivers up to 5500 DMIPS per core (16FF, worst case), fits into 0.06 mm² of silicon, and consumes less than 50 µW/MHz. It also scales easily to higher performance with dual-core and quad-core versions. The ARC HS family cores can be used for the application host, communication, control, and the pre- and post-processing tasks that are part of AI applications. Figure 2 shows an AI development platform that can be used for various AI applications.
Figure 2: Artificial intelligence platform using ARC HS processor – from NARL Taiwan http://www.cic.org.tw/aisoc/aisoc.jsp
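To put the ARC HS44 figures quoted above in perspective, here is a quick back-of-envelope power estimate. The 1.5 GHz clock is a hypothetical operating point chosen for illustration, not a datasheet value.

```python
# Back-of-envelope dynamic power estimate from the figures quoted above.
power_per_mhz_uw = 50   # < 50 uW/MHz, from the text
clock_mhz = 1_500       # hypothetical operating point, illustration only

dynamic_power_mw = power_per_mhz_uw * clock_mhz / 1_000
print(f"~{dynamic_power_mw:.0f} mW at {clock_mhz} MHz")  # ~75 mW
```

A budget on the order of tens of milliwatts is one way to see why cores like this are attractive for battery-powered devices.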
While processors like the ARC HS can be used for some AI tasks, specialized processors are available for specific AI tasks, and these offer the highest performance for embedded AI applications. For example, GPUs have been used for machine vision applications, but they are being replaced by newer specialized vision processors, like 草榴社区’ DesignWare EV6x Embedded Vision Processors. The EV6x family, which includes the EV61, EV62, and EV64, can be configured with a programmable Convolutional Neural Network (CNN) engine to perform object detection and classification on up to 4K HD video streams. The EV6x family features integrated heterogeneous processing units (Figure 3) that can be configured with up to 3520 MACs, delivering up to 4.5 tera MACs per second with support for the full range of CNN algorithms, including AlexNet, GoogLeNet, ResNet, SqueezeNet, and TinyYolo. This is an almost unimaginable level of processing power that can be integrated into a single SoC.
Figure 3: DesignWare EV6x Embedded Vision Processors include up to four vision CPUs and an optional CNN engine
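The quoted throughput follows directly from the MAC count and clock rate: MACs per cycle times clock frequency gives MACs per second. The quick check below derives the implied clock from the numbers in the text; it is an inference for illustration, not a datasheet figure.

```python
# Sanity-check the CNN engine throughput: MACs/cycle x clock = MACs/s.
macs_per_cycle = 3_520        # maximum CNN engine configuration, from the text
target_macs_per_s = 4.5e12    # 4.5 tera MACs per second, from the text

implied_clock_ghz = target_macs_per_s / macs_per_cycle / 1e9
print(f"implied clock ~ {implied_clock_ghz:.2f} GHz")  # ~1.28 GHz
```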
Not only are these specialized processors increasing in performance, but the algorithms that run on them are becoming more accurate. Figure 4 shows recent CNN graphs for classification with dramatic improvements in accuracy and capability. ResNet’s 3.6% error rate is lower than any human, including experts, can achieve on image recognition. These algorithms can also run on vision processors, like the DesignWare EV6x, and can be designed into an SoC for embedded applications such as a surveillance camera or a mobile device.
Figure 4: Algorithmic advancement of object classification with CNNs
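For readers who want to try one of these classification graphs, here is a minimal sketch that runs a pretrained ResNet-50 from torchvision on a desktop Python stack. Deploying on a vision processor such as the EV6x would go through a vendor toolchain instead, and "photo.jpg" is a placeholder path.

```python
# Classify an image with a pretrained ResNet-50 from torchvision.
# Desktop illustration only; "photo.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, and normalize

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)

top = probs[0].argmax().item()
print(weights.meta["categories"][top], f"{probs[0, top]:.1%}")
```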
As amazing as AI is today, it will be interesting to see how it develops over the next five to ten years. Certainly, cars will be able to drive themselves, personal assistants will be more intelligent, and natural language translation will be so seamless that those of us who travel will wonder how we ever got by without it. New applications for AI that we haven’t dreamed of yet will be enabled by the ongoing improvement of microprocessors, AI algorithms, and process technology. 草榴社区 continues to be on the leading edge of artificial intelligence and we are just starting to see the capabilities that it offers. From increasing productivity to even changing the way we travel, AI will have a profound impact on our lives.