Next: Computer Communications Laboratory Up: Department of Computer Previous: Computer Devices Laboratory

Computer Logical Design Laboratory


/ V. I. Varshavsky / Professor
/ Rafail A. Lashevsky / Professor
/ V. V. Smolensky / Research Associate

In 1999, the research themes of the laboratory were "Threshold Logic and Beta-Driven Artificial Neuron", "Artificial Neuron Based on Transistors with Floating Gate" and "Current Sensor for Asynchronous Circuits".

In the first theme, the problems of beta-driven CMOS artificial neural networks (ANN) were studied, along with ways of increasing their implementability. The studies showed that such a neuron can be taught to calculate threshold functions with a threshold greater than 200. New circuit solutions and learning algorithms for increasing the threshold and decreasing the learning time were suggested. Especially important is a synapse circuit that allows teaching the neuron non-isotonic threshold functions.
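For context, a Boolean threshold function outputs 1 exactly when the weighted sum of its inputs reaches the threshold. A minimal sketch in Python (the majority-gate weights are illustrative only, not taken from the reported circuits):

```python
def threshold_function(x, w, T):
    """Boolean threshold function: 1 if the weighted input sum reaches T."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) >= T)

# A 3-input majority gate is the threshold function with
# weights (1, 1, 1) and threshold 2.
w, T = (1, 1, 1), 2
print(threshold_function((1, 1, 0), w, T))  # 1: two active inputs reach the threshold
print(threshold_function((1, 0, 0), w, T))  # 0: one active input does not
```

Teaching such a neuron means adjusting the stored weights until the hardware computes the target function; the difficulty reported above grows with the size of the threshold.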

As a result of the investigation in the second theme, the possibility of designing on-chip learning artificial neurons with analog input weights stored in dynamic analog memory was shown. The sum of the weights can reach many hundreds, with a calculation time of less than 10 ns. A circuit solution was found for a key problem of on-chip learning structures: eliminating the influence of supply voltage deviation. One interesting application of this approach is multiple-valued analog memory.

As a result of studying the problems of the current sensor (CS), its behavior, and the area of its efficient usage, a CS structure was suggested in which only the current through weak transistors is measured. The area where such an approach can be used efficiently was identified.

Through top-down education, undergraduate and graduate students are involved in all steps of the research. They learn the real style of design in the "System-Circuit-Process" environment and the importance of creating a design methodology for circuit optimization, preparing themselves to work in a time of rapidly developing microelectronics and computer science.


Refereed Proceeding Papers

  1. Varshavsky, V., Marakhovsky, V., Beta-CMOS implementation of artificial neuron. SPIE's 13th Annual International Symposium on Aerospace/Defense Sensing, Simulation, and Controls. Applications and Science of Computational Intelligence II, pp.210--221, SPIE, Orlando, Florida, Apr. 1999.

    An improved version of a digital-analog CMOS implementation of an artificial neuron is discussed. This neuron can be taught logical threshold functions, being functionally powerful and highly noise-stable. It is built on the basis of a previously suggested circuit consisting of synapses, a beta-comparator, and an output amplifier. Every learnable synapse contains five minimum-size transistors and a capacitor for storing the result of the learning. It has been shown that higher non-linearity of the beta-comparator in the threshold zone can sharply increase the threshold of the realized functions and the noise-stability of the neuron. To increase the minimum leap of voltage at the beta-comparator output in the threshold zone that is attainable during teaching, it is suggested to use an output amplifier with threshold hysteresis. To this end, the neuron has three output amplifiers with different thresholds. The output of the amplifier with the middle threshold value is the output of the neuron; the outputs of the other two amplifiers are used during teaching. A way is suggested of refreshing the voltages (found during teaching) on the capacitors during the evaluation process. The results of SPICE simulation prove that the neuron can be taught the most complicated threshold functions of 10 and more variables and that it is capable of maintaining the learned state for a long time. In the simulation, MOSIS BSIM3v3.1 0.8 µm transistor models were used.

  2. Varshavsky, V., Marakhovsky, V., Learning Experiments with CMOS Artificial Neuron. Lecture Notes in Computer Science 1625, Computational Intelligence Theory and Applications. Proceedings of the 6th Fuzzy Days International Conference on Computational Intelligence, Bernard Reusch (Ed.), pp.706--707, Springer, Dortmund, Germany, May 1999.

    Learning experiments with a digital/analog artificial neuron by SPICE simulation for limiting values of the parameters (such as the sum of input weights and/or the threshold) require a long time that largely depends on the length of the learning (test) sequence. We suggest using as test functions a class of threshold functions whose minimum forms can be represented in accordance with Horner's scheme (Horner's functions). These functions have the shortest teaching sequences and give neuron parameters close to the largest possible for a given number of variables. For the Horner's function of n variables, the values of the variable weights and the threshold form the Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, ...), with the length of a unit learning sequence equal to n+1.
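The Fibonacci structure of the weights and threshold quoted above is easy to check numerically; a small sketch (the function name is ours, for illustration only):

```python
def fibonacci_weights(n):
    """Input weights of the n-variable test function described above:
    the first n Fibonacci numbers (the threshold is the next one)."""
    w = [1, 1]
    while len(w) < n:
        w.append(w[-1] + w[-2])
    return w[:n]

w = fibonacci_weights(12)
print(w)              # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
print(w[-1] + w[-2])  # 233, the threshold for 12 variables
print(len(w) + 1)     # 13, the length of a unit learning sequence (n+1)
```

Note that for 10 variables the same construction gives a threshold of 89, matching the maximum threshold values reported in the other papers of this list.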

  3. Varshavsky, V., Marakhovsky, V., Beta-CMOS Artificial Neuron and Implementability Limits. Lecture Notes in Computer Science 1607, Engineering Applications of Bio-Inspired Artificial Neural Networks. Proceedings of the International Work-Conference on Artificial and Natural Neural Networks (IWANN'99), Jose Mira, Juan V. Sanchez-Andres (Eds.), pp.117--128, Spain, June 1999.

    The paper focuses on the functional possibilities (the class of representable threshold functions), parameter stability, and learnability of the artificial learnable neuron implemented on the basis of a CMOS beta-driven threshold element. A neuron beta-comparator circuit is suggested with a very high sensitivity to input current change, which allows a sharp increase in the threshold value of the functions. The SPICE simulation results confirm that the neuron can be taught to realize threshold functions of 10, 11, and 12 variables with maximum threshold values of 89, 144, and 233, respectively. A number of experiments were conducted to determine the limits within which the working parameters of the neuron can change while still providing stable functioning after learning the functions for each of these threshold values. MOSIS BSIM3v3.1 0.8 µm transistor models were used in the SPICE simulation.

  4. Varshavsky, V., Marakhovsky, V., The Simple Neuron CMOS Implementation Learnable to Logical Threshold Functions. Proceedings of the International Workshop on Soft Computing in Industry'99 (IWSCI'99), pp.463--468, IEEE, Muroran, Hokkaido, Japan, June 1999.

    The problem of hardware implementation of an artificial neuron in CMOS technology is discussed. The neuron is constructed on the basis of a beta-driven threshold element consisting of a beta-comparator and an output amplifier. The beta-comparator circuit is improved to provide a very high sensitivity to current change, which allows a sharp increase in the function threshold value. The full circuit of the learnable synapse consists of five transistors and a capacitor. The neuron contains three amplifiers with different input thresholds, which improves its learnability. SPICE simulation confirms that the neuron can be taught to realize the most complex threshold functions of 10 and more variables. Some simulation results are given for teaching the neuron a particular threshold function of 10 variables with a threshold of 89.

  5. Varshavsky, V., Marakhovsky, V., Tsukisaka, M., Kato, A., Current Sensor --- Transient Process Problems. Proceedings of the 8th International Symposium on Integrated Circuits, Devices & Systems (ISIC-99), pp.163--166, IEEE, Singapore, Sep. 1999.

    The idea of using a current sensor to produce the signal of transient process completion in asynchronous CMOS circuits has been a subject of both hopes and disappointments for many years. In the paper, the behavior of the current in the transition mode is discussed. It is shown that, unlike the voltage transient process, the duration of the current transient process depends on the number of concurrently switching gates k, increasing approximately as ln k. In general, this casts doubt on the efficiency of using current sensors in circuits with high concurrency. A class of circuits with high concurrency (circuits with weak transistors) is discussed for which efficient usage of current sensors is suggested.
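The logarithmic growth can be illustrated with a simple decay model. The sketch below assumes each gate's supply current decays exponentially after switching; the time constant, initial current, and sensor threshold are illustrative values of ours, not taken from the paper:

```python
import math

# If each of k concurrently switching gates draws a current decaying as
# I0 * exp(-t / tau), the summed current falls below the sensor threshold
# I_th at t = tau * ln(k * I0 / I_th) = tau * (ln k + ln(I0 / I_th)),
# i.e. the current transient lengthens by roughly tau * ln k.
def transient_duration(k, tau=1.0, I0=1.0, I_th=0.01):
    return tau * math.log(k * I0 / I_th)

for k in (1, 10, 100):
    print(k, transient_duration(k))
# Each tenfold increase in k adds tau * ln(10) ~= 2.3 * tau to the duration.
```

The voltage transient, by contrast, is set by the slowest switching path alone and does not pick up this extra ln k term.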

  6. Varshavsky, V., Marakhovsky, V., Implementability Restrictions on the Beta-CMOS Artificial Neuron. Proceedings of the 6th International Conference on Electronics, Circuits and Systems (ICECS'99), pp.401--405, IEEE, Pafos, Cyprus, Sep. 1999.

    The paper focuses on the functional possibilities, parameter stability, and learnability of the artificial learnable neuron implemented on the basis of a CMOS beta-driven threshold element. A neuron beta-comparator circuit is suggested with a very high sensitivity to input current change, which allows a sharp increase in the threshold value of the functions. The SPICE simulation results confirm that the neuron can be taught to realize threshold functions of 10, 11, and 12 variables with maximum threshold values of 89, 144, and 233, respectively. A number of experiments were conducted to determine the limits within which the working parameters of the neuron can change while still providing stable functioning after learning the functions for each of these threshold values. MOSIS BSIM3v3.1 0.8 µm transistor models were used in the SPICE simulation.

  7. Varshavsky, V., Marakhovsky, V., Non-Isotonous Beta-Driven Artificial Neuron. Proceedings of SPIE. Applications and Science of Computational Intelligence III, pp.250--257, Orlando, Florida, Apr. 1999.

    In the paper, we discuss variants of a digital-analog CMOS implementation of an artificial neuron taught logical threshold functions. The implementation is based on the earlier suggested beta-driven neuron circuit consisting of synapses with excitatory inputs, a beta-comparator, and three output amplifiers. Such a circuit can be taught only threshold functions with positive weights of variables, which belong to the class of isotonous Boolean functions. However, most problems solved by artificial neural networks also require inhibitory inputs. If the input type (excitatory or inhibitory) is known beforehand, the problem of inverting the weight sign is solved trivially by inverting the respective variable. Otherwise, the neuron should have synapses capable of forming the weight and type of the input during learning, using only increment and decrement signals. A neuron with such synapses can learn an arbitrary threshold function of a certain number of variables. Synapse circuits are suggested with one or two memory elements for storing positive and negative input weights. The results of SPICE simulation prove that the problem of teaching a neuron non-isotonous threshold functions has stable solutions.





www@u-aizu.ac.jp
July 2000