Robust Algorithms, Architectures & Efficient Learning Methods for Heterogeneous & Sparse Data
Deep Neural Networks (DNNs) have shown tremendous progress in many real-world applications (e.g., object recognition, autonomous vehicles). To improve the performance of data-processing systems, designers deploy large-scale models on dedicated hardware platforms such as FPGAs, GPUs, or ASICs. Because collecting datasets, training models, and designing accelerators takes considerable time and effort, designers want to keep the trained models private. However, with the growing complexity of DL acceleration, the hardware implementations of these AI accelerators contain severe vulnerabilities. An attacker who knows nothing about the internal structures and designs of these accelerators can effectively reverse engineer the neural networks by leveraging various kinds of side-channel information. Our goal is to study and develop resilient algorithms and hardware for robust, trustworthy Edge-AI computing systems in various emerging applications (e.g., Edge, IoT, NoV).

Reliability and Fault-Tolerance 
The significant heterogeneity of modern processors/SoCs, which stack logic layers with memory layers and integrate dozens of processing elements over a sophisticated interconnect, increases the probability of faults in a system. This situation is especially relevant for systems operating in harsh environments, where different kinds of interference may induce phenomena that jeopardize the behavior of the whole system. Such faults are mainly due to cross-talk, electromagnetic interference, radiation effects, oxide breakdown, and so on. A single failure (a corrupted message delivery, an unmet timing requirement, etc.) in one module, or even in a single transistor, caused by any of these factors may therefore compromise the reliability of the entire system. We research adaptive information-processing systems-on-chip that guarantee fault tolerance, reliability, availability, usability, and security.

Emerging On-chip/Off-chip Interconnects (si-Photonics, Hybrid, 2D/3D)
The complex integration of semiconductor devices, empowered by emerging interconnect and material innovations, has provided us with tools to connect, analyze, control, and make decisions efficiently. Such complex semiconductor devices/SoCs will contain hundreds of components (processor cores, DSPs, memories, etc.), all connected via a novel on-chip interconnect closer to a sophisticated network than to current bus-based solutions. This network must provide high throughput and low latency while keeping area and power consumption low. Our research effort aims to solve several design challenges to enable this new paradigm in massively parallel many-core systems. In particular, we are investigating fault tolerance, 3D-TSV integration, photonic communication, low-power mapping techniques, and low-latency adaptive routing. We are also investigating the interconnect scalability challenge in large-scale neuromorphic architectures, developing efficient interconnects that support the complex connections between neurons while incorporating correct spike timing into the design.
School of Computer Science and Engineering
The University of Aizu
Aizu-Wakamatsu 965-8580, Japan
Ben Abdallah Abderazek
Office phone: 0242-37-2574 (3224)
Contents ©2020 BIC Lab