
Language Processing Systems Laboratory


Harvey Abramson / Professor
David S. L. Wei / Associate Professor
Kyung-Goo Doh / Assistant Professor

This year was the foundation year for the Language Processing Systems Laboratory. Accordingly, our achievements consisted of continuing each lab member's existing lines of research (as reflected in the publication and achievement list) and launching two group projects, namely Towards a Programming Language Designer's Workbench based on Logic Programming and Action Semantics, and Granularity Optimization and Scheduling on Massively Parallel Computer Systems, which are described below.

Towards a Programming Language Designer's Workbench based on Logic Programming and Action Semantics

Language development usually proceeds through the stages of design, specification, prototyping, and implementation (with no strict ordering). However, language specifications are usually written in English, and such specifications can be very misleading for both implementors and users, since the designer's original intention may be misunderstood. Even when a programming language is formally specified (normally within a denotational or operational semantics framework), it is still difficult to convey the designer's intention to others, because such a specification rarely gives hints on how to implement the language. With this in mind, we are developing a Programming Language Designer's Workbench based on action semantics. The workbench allows a language designer to design and implement a language in one setting, so the designer can confirm that the designed language is in fact the one intended. This consistency check is important because it eliminates potential design errors and possible misunderstandings between designers and implementors.

The project has been carried out in three different directions: (1) developing the theories necessary to realize a programming language designer's workbench; (2) designing a methodology for actually building a general-purpose compiler generator based on the action semantics framework; and (3) studying the semantics of the many constructs of programming languages (imperative, functional, logic, object-oriented, etc.) to see how action semantics specifications affect the understanding of programming language semantics as well as the design of new languages.
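
To make the approach concrete, the following minimal sketch (written in Python; the primitive names give, lookup, add, and let are invented for this illustration and do not come from any standard action notation) shows the prototyping style the workbench aims to support: each construct of a toy language denotes an action on bindings and storage, and composing the actions of its subphrases yields a directly executable prototype. It is a sketch of the general idea only, not our compiler generator.

    def give(value):
        # Primitive action: produce a constant value, leaving the store unchanged.
        return lambda env, sto: (value, sto)

    def lookup(name):
        # Primitive action: yield the value currently bound to name.
        return lambda env, sto: (env[name], sto)

    def add(a1, a2):
        # Action combinator: perform a1, then a2, then sum the two values.
        def act(env, sto):
            v1, sto = a1(env, sto)
            v2, sto = a2(env, sto)
            return v1 + v2, sto
        return act

    def let(name, rhs, body):
        # Action combinator: evaluate rhs, bind its value to name, then run body.
        def act(env, sto):
            v, sto = rhs(env, sto)
            return body(dict(env, **{name: v}), sto)
        return act

    # The phrase  let x = 2 + 3 in x + x  written as a composition of actions.
    program = let("x", add(give(2), give(3)), add(lookup("x"), lookup("x")))
    print(program({}, {}))   # prints (10, {})

Because each construct's semantics is given compositionally, analysis functions (for example, the typing and binding-time functions mentioned in the action semantics publications below) can be attached to the same combinators and composed in the same way.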

Granularity Optimization and Scheduling on Massively Parallel Computer Systems

The high communication overhead of existing massively parallel processing (MPP) machines imposes a minimum threshold on program granularity below which performance degrades significantly. Consequently, to obtain maximum performance, a fine-grain parallel program may have to be restructured into an equivalent coarse-grain program by coalescing many fine-grain tasks into a single task. Manual ``fine-tuning'' of a parallel program is, unfortunately, an excessive burden to place on the shoulders of an algorithm designer, be (s)he novice or expert. Not only does (s)he have to deal with the difficult problem of exposing the parallelism in a given application, but (s)he also needs to worry about the equally difficult problem of limiting the parallelism in the algorithm so as to minimize communication overhead. Moreover, the latter problem requires of the designer deep knowledge of the characteristics of the target MPP machine, e.g., the number of processing nodes, CPU speed, local memory size, message transfer rate, and network topology. Finally, the fine-tuned program, while running with maximum performance on the chosen machine, would in all likelihood perform poorly on another machine with different architectural characteristics; the designer would have to rewrite the program to tune it to the new machine.
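
A crude cost model makes this granularity threshold visible. The fork-join program shape and all cost parameters below are assumptions chosen for illustration, not measurements of any particular MPP machine.

    def fork_join_time(n_tasks, work_per_task, msg_overhead, n_procs):
        # Crude model of a fork-join program: the master keeps n_tasks // n_procs
        # tasks locally and pays one message of cost msg_overhead for every task
        # shipped to another processor; computation is perfectly balanced.
        remote_msgs = n_tasks - n_tasks // n_procs
        comm = remote_msgs * msg_overhead
        compute = -(-n_tasks // n_procs) * work_per_task   # ceil(n_tasks / n_procs)
        return comm + compute

    fine   = fork_join_time(n_tasks=1024, work_per_task=1,  msg_overhead=50, n_procs=32)
    coarse = fork_join_time(n_tasks=32,   work_per_task=32, msg_overhead=50, n_procs=32)
    print(fine, coarse)   # prints 49632 1582

Both versions perform the same total computation, but the fine-grain version sends 32 times as many messages and therefore spends almost all of its time in communication; coalescing 32 fine-grain tasks into each coarse-grain task moves the program back above the granularity threshold.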

The thesis of this project is that the task of exposing the parallelism in a given application should be left to the algorithm designer, who has intimate knowledge of the application's characteristics, whereas the task of limiting the parallelism in a chosen parallel algorithm is best handled by the compiler for the target MPP machine. The primary goal of this research is to develop CASS, a Clustering And Scheduling System that can be integrated with existing or future compilers for parallel machines to provide automatic granularity optimization and task scheduling. We have so far developed two efficient clustering and scheduling algorithms, namely CASS1 (without task duplication) and CASS2 (with task duplication). Experimental results show that both algorithms outperform the best previously known algorithms.
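
The following sketch illustrates the kind of pass a clustering stage performs. It is not CASS1 or CASS2, whose details are beyond the scope of this summary, but a simple edge-zeroing rule over a task DAG: whenever an edge's communication cost outweighs the computation cost of the receiving task, the two endpoints are coalesced into one cluster. Task and edge costs are assumed to be expressed in the same time unit.

    class DSU:
        # Tiny union-find structure used to represent clusters of tasks.
        def __init__(self, n):
            self.parent = list(range(n))
        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x
        def union(self, a, b):
            self.parent[self.find(a)] = self.find(b)

    def cluster(task_work, edges):
        # task_work[i]: computation cost of task i.
        # edges: (src, dst, comm_cost) triples describing the task DAG.
        dsu = DSU(len(task_work))
        for src, dst, comm in sorted(edges, key=lambda e: -e[2]):
            # Zero the edge (merge the two clusters) when its communication cost
            # exceeds the computation cost of the receiving task.
            if comm > task_work[dst] and dsu.find(src) != dsu.find(dst):
                dsu.union(src, dst)
        return [dsu.find(i) for i in range(len(task_work))]

    # Four tasks: task 0 feeds tasks 1 and 2, which feed task 3; only the 0->1
    # edge carries heavy traffic, so tasks 0 and 1 end up in the same cluster.
    print(cluster([5, 2, 2, 5], [(0, 1, 10), (0, 2, 1), (1, 3, 1), (2, 3, 1)]))

A real clustering and scheduling stage would also have to estimate the resulting schedule length and account for the target machine's characteristics; this is the role CASS is designed to fill automatically.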

Refereed Journal Papers


  1. M. Palis, S. Rajasekaran, and D. Wei. Packet routing and PRAM emulation on star graphs and leveled networks. Journal of Parallel and Distributed Computing, 20(2):145-157, 1994.

    Abstract: We consider the problem of permutation routing on a star graph, an interconnection network which has better properties than the hypercube. In particular, its degree and diameter are sublogarithmic in the network size. We present optimal randomized routing algorithms that run in steps (where D is the network diameter) for the worst-case input with high probability. We also show that for the -way shuffle network with nodes, there exists a randomized routing algorithm which runs in time with high probability. Another contribution of this paper is a universal randomized routing algorithm that could do optimal routing for a large class of networks (called leveled networks) which includes the star graph. The associated analysis is also network-independent. In addition, we present a deterministic routing algorithm for the star graph which is near-optimal. All the algorithms we give are oblivious. As an application of our routing algorithms, we also show how to emulate a PRAM optimally on this class of networks.


  2. K.-G. Doh and D. Schmidt. Action semantics-directed prototyping. Computer Languages, 19(4):213-233, 1993.

    Abstract: We present a methodology for compiler synthesis based on Mosses-Watt's action semantics. Each action in action semantics notation is assigned specific ``analysis functions,'' such as a typing function and a binding-time function. When a language is given an action semantics, the typing and binding-time functions for the individual actions compose into typing and binding-time analyses for the language; these are implemented as the type checker and static semantics processor, respectively, in the synthesized compiler. Other analyses can be similarly formalized and implemented. We show a sample language semantics and its synthesized compiler, and we describe the compiler synthesizer that we have developed.

Refereed Proceeding Papers


  1. David S. L. Wei. Dynamic tree embeddings in d-way shuffles and star graphs. In Proc. of Joint Symposium on Parallel Processing, pages 57-64, Tokyo, May 1994. Information Processing Society of Japan.

    Abstract: In this paper we give affirmative answers to the conjectures given in [LNRS89]. We show that an arbitrary -ary tree with nodes can be dynamically embedded in an -way shuffle of processors with dilation 1 and, with very high probability, load . We also show that an arbitrary -ary tree with nodes can be dynamically embedded in an -star graph of processors with dilation 2 and, with very high probability, load . For trees with nodes, the embeddings are optimal.


  2. S. Rajasekaran and David S. L. Wei. Selection, routing and sorting on the star graph. In Proc. of 7th International Parallel Processing Symposium, pages 661-665. IEEE Computer Society, IEEE, April 1993.

    Abstract: We consider the problems of selection, routing and sorting on an -star graph (with nodes), an interconnection network which has been proven to possess many special properties. We identify a tree-like subgraph (which we call a `chain network') of the star graph which enables us to design efficient algorithms for the above-mentioned problems. We present an algorithm that performs a sequence of prefix computations in time. This algorithm is used as a subroutine in our other algorithms. We also show that sorting can be performed on the -star graph in time and that selection of a set of uniformly distributed keys can be performed in time with high probability. Finally, we also present a deterministic (non-oblivious) routing algorithm that realizes any permutation in steps on the -star graph. There exists an algorithm in the literature that can perform a single prefix computation in time. The best known previous algorithm for sorting has a run time of and is deterministic. To our knowledge, the problem of selection has not been considered before on the star graph.


  3. Harvey Abramson. Why Johnny Can't Read the Screwiest Writing System in the World and How to Help Him Learn: On the Necessity of Japanese-English Hyperdictionaries. In Proceedings - PACLING '93, Vancouver, April 1993.

    Abstract: The study of Japanese by a non-native is made difficult by the extreme complexity of its writing system, which involves two phonetic syllabaries (hiragana and katakana) and a large collection of characters derived from Chinese characters (kanji). The complex writing system prevents the early and easy learning of reading, and hence blocks an enormous source of linguistic knowledge. Because of the large number of kanji in common use, the (e.g., English-speaking) student of written Japanese must make use of several dictionaries during the course of study: at least one each of a character dictionary, a Japanese-English dictionary, and an English-Japanese dictionary; and eventually Japanese dictionaries in Japanese, including specialised dictionaries of place names, personal names, onomatopoeic words, dialect dictionaries, etc. Much of the information in these dictionaries is common to all of them, but because each dictionary arranges its information in a single linear order, making access in other orders impossible, a deskful of bulky dictionaries is necessary. Also, most if not all Japanese-English dictionaries are written for the adult native speaker of Japanese and are unreadable by beginning learners. Current hardware and software technology makes it possible, however, to introduce the notion of a hyperdictionary, a generalized multi-function dictionary, so that a single multilingual database could provide all the information necessary for learning Japanese, and indeed for general reference. Access to the information could be further improved by making use of some common techniques of computational linguistics, in particular morphological and syntactic analysis, as a front end to (possibly hand-held electronic) hyperdictionaries. The extended notion of dictionary is, of course, applicable to other language pairs and to individual languages, and would be a useful aid to students, translators, and interpreters.


  4. Harvey Abramson. Prolog based aids for the study of Japanese. In Proceedings - Practical Applications of Prolog Conference, London, March 1994.

    Abstract: Japanese is considered a very difficult language for Westerners to learn. Part of the difficulty comes from a vocabulary with an almost empty intersection with that of Western languages, a grammar quite different in structure from that of familiar Western languages, a highly complicated writing system, and vast cultural differences which are reflected in the language. Another major source of difficulty, however, is the lack of the kind of teaching and study aids which simplify (for Westerners) the study of other languages. In this paper we apply a logic programming formalism currently under development for general computational morphology to the development of some aids to the study of Japanese. Specifically, we discuss a drill program for the morphology of the Japanese verb and adjective, and also the use of a morphological processor for Japanese (derived from the above-mentioned logic formalism) as a sophisticated interface to CD-ROM Japanese and Japanese-English dictionaries. This work is part of an ongoing project to develop generalised multi-media dictionaries. Although we have concentrated on Japanese, the methods described here are general and are applicable to producing study aids for any natural language.


  5. K.-G. Doh. Action semantics: a tool for developing programming languages. In International Conference on Information Science and Technology, pages 432-439, Seoul, Oct. 1993. Korean Information Science Society.

    Abstract: We introduce action semantics - a recently developed framework for specifying formal semantics of programming languages. We illustrate the clarity, modularity, and extensibility of action semantics by specifying various constructs of functional programming languages. Following Schmidt, we start from a core language and apply Landin and Tennent's extension principles to build up our functional languages. Finally, we survey the current status of research in action semantics.

Chapters in Books


  1. M. Palis and David S. L. Wei. Massively parallel parsing algorithm for natural language. In Kanal, Kumar, Kitano, and Suttner, editors, Parallel Processing for AI, chapter 14, pages 365-407. North-Holland, 1994.


  2. Harvey Abramson. Logic Programming and Grammars. The Encyclopedia of Language and Linguistics. Pergamon Press, London, 1994.

Others


  1. David S. L. Wei. Biographee, 4, 1993.

    Biographee, Marquis Who's Who in the World.


