RESEARCH

We investigate innovative computing systems and technologies that deliver high performance, energy efficiency, and adaptivity for a variety of application domains and social needs. In particular, we develop algorithms and techniques for run-time decision-making targeting various optimization goals, such as power/energy efficiency, fault tolerance/reliability, or compute performance, on different architectural platforms, including high-performance computing nodes, reconfigurable systems, and power-constrained edge computing systems. Our recent research focuses on developing a new generation of brain-inspired computing technologies based on digital neuromorphic circuits implemented in conventional CMOS technology.

Neuromorphic Cognitive Systems

Biological neural networks are receiving increasing attention, both to gain a better understanding of the brain and to explore novel biologically inspired computation. Spiking neural networks attempt to mimic the information processing of the mammalian brain using parallel arrays of neurons that communicate via spike events. Unlike typical multi-layer perceptron networks, where neurons fire at each propagation cycle, the neurons in such models fire only when their membrane potential reaches a specific threshold. We are researching a new computing model that combines concepts from neuroscience and machine learning with computer architecture and microelectronics to go beyond the current stored-program computing model. In particular, we investigate how a network of spiking neurons computes and communicates information under different plasticity mechanisms, such as short-term/long-term potentiation, intrinsic plasticity, attention, and neurogenesis. Currently, we are investigating the following: (1) algorithms and novel computing methods for AI (novel application-specific ISA design for neural networks; adaptive/autonomous online learning algorithms), (2) architectures of neuromorphic computing systems (silicon neurons; low-power memory systems; learning circuits; reconfigurable neuromorphic architectures), and (3) communication circuits/networks for neuromorphic systems (on-chip networks; high-speed and low-power AER; high-speed, fault-tolerant spike routing mechanisms).
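The threshold-based firing described above can be illustrated with a simple leaky integrate-and-fire (LIF) model. The following is a minimal Python sketch with illustrative parameter values; it is a toy example, not a description of our silicon neuron circuits:

def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, leak=0.95):
    """Return a binary spike train for a sequence of input current values."""
    v = v_reset
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input current
        if v >= v_thresh:         # the neuron fires only when the threshold is reached
            spikes.append(1)
            v = v_reset           # the membrane potential is reset after a spike
        else:
            spikes.append(0)
    return spikes

# Example: a constant input accumulates until it triggers periodic spikes.
print(lif_neuron([0.3] * 20))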

Hardware and Applications for AI: Adaptive Intelligence for Ultra-low-power Solutions

Artificial intelligence (AI) has many applications in today's society, including robot intelligence, traffic control, data analytics, image recognition, and speech understanding. The growing size and complexity of AI algorithms require high-performance computation with novel memory systems. Traditional machine learning methods were generally built for a single centralized node with easy access to massive datasets and large compute and storage resources (i.e., a data center). This centralized approach is inefficient for several emerging applications that demand energy efficiency or high reliability (e.g., controlling an autonomous vehicle, sending instructions to a surgical robot, or flying a monitoring drone). Boosting the performance of deep learning requires several techniques, including reducing data movement, scaling precision, and exploiting the sparsity in the network. Our research effort in this area investigates on-device and distributed AI from algorithmic and hardware perspectives, targeting applications such as autonomous vehicles, robotics, and drones.
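Two of these techniques can be sketched briefly. The Python fragment below is a hypothetical illustration using NumPy (the function names are ours, not an existing API); it shows symmetric 8-bit weight quantization to scale precision and magnitude pruning to exploit sparsity:

import numpy as np

def quantize_int8(weights):
    """Map float weights to int8 values plus a scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    return np.round(weights / scale).astype(np.int8), scale

def prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights to reach the target sparsity."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max dequantization error:", np.max(np.abs(q * scale - w)))
print("non-zeros after pruning:", np.count_nonzero(prune(w)))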

Reliable 2D/3D Network-on-Chip for Manycore SoCs and Neuromorphics 

Future Systems-on-Chip (SoCs) will contain hundreds of components, including processor cores, DSPs, memories, accelerators, and I/O, all integrated into a single die of just a few square millimeters. Such a complex SoC will be interconnected via a novel on-chip interconnect that is closer to a sophisticated network than to current bus-based solutions. This network must provide high throughput and low latency while keeping area and power consumption low. Our research effort addresses several design challenges to enable this new paradigm in massively parallel many-core systems. In particular, we are investigating fault tolerance, 3D-TSV integration, photonic communication, low-power mapping techniques, and low-latency adaptive routing. We are also porting the outcomes of this work to our ongoing research on spiking neuro-inspired chips.
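As a simple point of reference for the routing work, the sketch below shows dimension-order (XY) routing on a 2D mesh, a common deadlock-free baseline; adaptive routers extend it by choosing among the remaining minimal directions based on congestion or faults. This is an illustrative toy example in Python, not our router implementation:

def xy_route(src, dst):
    """Return the list of (x, y) hops from src to dst on a 2D mesh."""
    x, y = src
    dst_x, dst_y = dst
    path = [(x, y)]
    while x != dst_x:                 # route along the X dimension first
        x += 1 if dst_x > x else -1
        path.append((x, y))
    while y != dst_y:                 # then along the Y dimension
        y += 1 if dst_y > y else -1
        path.append((x, y))
    return path

# Example: a packet travelling from core (0, 0) to core (2, 3) on a 4x4 mesh.
print(xy_route((0, 0), (2, 3)))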
Adaptive Systems Laboratory
Computer Engineering Division
School of Computer Science and Engineering
The University of Aizu
Aizu-Wakamatsu 965-8580, Japan
Contact:
Abderazek Ben Abdallah
Office phone: 0242-37-2574 (3224)
Email: benab@u-aizu.ac.jp
Copyright (c) ABA Lab. All Rights Reserved.