Teaching Drones to Fly Like Insects
Although unmanned aerial vehicles (UAVs) are becoming increasingly common in such fields as communications and agriculture, their use is hampered by their small size and limited battery capacity. However, a multi-disciplinary team at National Tsing Hua University (NTHU) led by Professors Tang Kea-tiong of the Department of Electrical Engineering and Lo Chung-chuan of the Department of Life Sciences has recently developed an artificial intelligence (AI) chip that mimics the optical nerves of the fruit fly, allowing drones to automatically avoid obstacles while operating in an ultra-power-saving mode. The chip also has potential applications in such areas as unmanned vehicles, smart glasses, and robotic arms.
Most of the UAVs currently in use rely on the transmission and reflection of electromagnetic waves or infrared rays to detect and avoid obstacles, but this consumes a lot of power, and when numerous devices are operating in the same area they tend to interfere with each other. An alternative approach to avoiding obstacles is to use optical lenses to capture and analyze images, but the amount of information involved is too large to be processed quickly, and this approach also consumes a lot of power.
Intrigued by the fruit fly’s uncanny ability to avoid obstacles, Tang, a specialist in neuro-imitation systems, figured that it might be possible to replicate the optical nerve of this tiny insect and adapt it to AI applications. Thus he began to collaborate with Lo on the development of a high-efficiency AI chip that mimics the visual system of the fruit fly.
The first task was to solve the problem of information overload. Dynamic vision involves about 30 frames per second, far too many to process individually in a power-efficient manner. According to Tang, the image sensors currently used in cameras and mobile phones have millions of pixels, whereas the eye of a fruit fly has only about 800. When the fruit fly’s brain processes such visual signals as contour and contrast, it uses a kind of attention mechanism that automatically filters out unimportant information, such as large stationary objects, and pays attention only to moving objects and those it is liable to collide with.
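The filtering idea can be sketched in a few lines. This is only an illustrative stand-in, not the team's actual circuitry: it uses simple frame differencing, and the `attention_filter` helper is hypothetical, but it shows how discarding static pixels shrinks the workload.

```python
import numpy as np

def attention_filter(prev_frame, curr_frame, threshold=10):
    """Keep only pixels that changed between frames (moving objects);
    zero out the static background, loosely mimicking the fly's
    attention mechanism."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    mask = diff > threshold              # attend only where motion occurred
    return np.where(mask, curr_frame, 0), mask

# Toy 30x30 "retina" -- 900 pixels, roughly the scale of the fly's ~800
prev = np.zeros((30, 30), dtype=np.uint8)
curr = prev.copy()
curr[10:15, 10:15] = 200                 # a small moving object appears

filtered, mask = attention_filter(prev, curr)
print(mask.sum())                        # 25 -- only 25 of 900 pixels to process
```

Everything outside the small moving patch is dropped before any further processing, which is the source of the power savings.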
By imitating this attention mechanism, the research team has developed an AI chip which makes it possible to use hand gestures and an image sensor to operate a drone. For example, when the operator displays all five fingers, the drone flies forward, and when they display two fingers, it stops. Because the computer only needs to distinguish the rough contour of the hand, ignoring such details as color and fingerprints, the process is highly efficient.
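As a rough illustration of why coarse gestures are cheap to handle, the mapping from a detected finger count to a flight command might look like the sketch below. The `GESTURE_COMMANDS` table and `command_from_fingers` helper are hypothetical; the article specifies only the five-finger and two-finger gestures.

```python
# Hypothetical command table; the article names only the five-finger
# (fly forward) and two-finger (stop) gestures.
GESTURE_COMMANDS = {5: "forward", 2: "stop"}

def command_from_fingers(finger_count):
    """Map a detected finger count to a drone command; an unrecognized
    gesture leaves the drone hovering in place."""
    return GESTURE_COMMANDS.get(finger_count, "hover")

print(command_from_fingers(5))  # forward
print(command_from_fingers(2))  # stop
print(command_from_fingers(3))  # hover
```

Counting fingers from a rough hand contour is a far smaller problem than full image recognition, which is why details like color and fingerprints can be ignored entirely.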
First the drone is taught to focus on what’s most important, and then it’s taught how to judge distance and the likelihood of a collision with an approaching object. For this purpose, Lo conducted a detailed investigation of how the fruit fly detects optical flow, drawing extensively on the maps of the fruit fly’s neural pathways produced by the Brain Research Center at NTHU. Lo explained that optical flow is the relative trajectory traced across the field of vision by nearby moving objects, which the brain uses to determine distance and avoid obstacles.
Lo added that small drones cannot carry heavy or energy-intensive devices, making sonar and radar impractical for avoiding collisions. Although optical-flow analysis can also serve this purpose, it normally requires complex mathematical algorithms and a high-speed CPU, making it unsuitable for a small drone. By contrast, the AI chip developed by his research team can perform the same operations using only a few dozen “neurons,” while consuming only a microwatt of electricity.
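One reason optical flow is so economical can be illustrated with the classic "tau" estimate from the biological-vision literature (an assumption here, not a description of the chip's internals): an approaching object's time to contact is simply its angular size divided by its rate of angular expansion, so the distance itself never needs to be measured.

```python
def time_to_contact(angular_size, expansion_rate):
    """Classic 'tau' estimate: time to contact with an approaching object
    equals its angular size (rad) divided by its rate of angular
    expansion (rad/s), with no explicit distance measurement."""
    return angular_size / expansion_rate

# An object subtending 0.1 rad whose image grows at 0.05 rad/s
# will reach the observer in about 2 seconds.
print(time_to_contact(0.1, 0.05))  # 2.0
```

Because the inputs are quantities directly observable in the optical flow, a collision warning can in principle be computed by a handful of neurons rather than a high-speed CPU.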
Tang said that the AI chip developed by his research team also represents a major breakthrough in the area of in-memory computing. Computers and mobile phones first move data from memory to the central processing unit (CPU); once processed, the data is moved back to memory for storage, and this shuttling consumes up to 90% of the energy and time of the AI deep-learning process. By contrast, the AI chip developed by the NTHU team mimics neuronal synapses, allowing it to perform calculations within the memory itself, which greatly improves efficiency.
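A minimal sketch of the in-memory-computing idea, assuming an analog crossbar array in which synaptic weights are stored as conductances (a common approach in the field, not necessarily the NTHU team's exact design): by Ohm's and Kirchhoff's laws, applying input voltages to the array produces output currents that are already the multiply-accumulate result, so no data is shuttled to a separate CPU. The numbers below are hypothetical.

```python
import numpy as np

# Weights stored in place as conductances G (hypothetical values);
# input voltages V applied to the rows of the crossbar.
G = np.array([[0.2, 0.5],
              [0.7, 0.1]])   # stored synaptic weights
V = np.array([1.0, 0.5])     # input signals

# The memory array itself performs the multiply-accumulate:
# each output current is I = G @ V, computed where the data lives.
I = G @ V
print(I)                     # [0.45 0.75]
```

The matrix-vector product is exactly the operation that dominates neural-network inference, which is why computing it inside the memory array removes most of the data-movement cost described above.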
The research team was established in 2017 and includes Tang, who developed the artificial neural system; Hsieh Chih-cheng, who developed the smart lenses; Chen Hsin, who developed the bionic system; Lu Ren-shuo, who devised the chip framework; Sun Min, who was responsible for model design; Chang Meng-fan, who developed the memory circuit; and Lo, who specializes in neural models.
Prof. Tang and Prof. Lo showing the AI chip they have jointly developed.