Abstract

In the late 1970s and early 1980s there was considerable interest in the use of so-called systolic architectures for the design of cost-effective, high-throughput, high-performance VLSI architectures. At the time there was a great deal of governmental pressure to produce high-performance custom devices to overcome traditional computational bottlenecks such as memory-processor bandwidth. Systolic devices broke with the established practice of separating algorithms from architectures in order to achieve general-purpose, and thus cost-effective, devices. The systolic concept of pumping data around a regular array (or lattice) of primitive cells, producing "memory-less" algorithms in which data arrives exactly where and when it is needed, was a powerful idea; however, the difficulties associated with fabrication made such devices less cost-effective than first envisaged. There were, of course, a number of successes, such as the Pattern Matcher, the GAPP convolver for medical and image processing, bit-serial arrays, and the Edit Distance array, among others. Nevertheless, a growing credibility gap emerged, and researchers began concentrating on the theory of automatically synthesizing so-called regular arrays (a generalization of systolic arrays) and on generic (or programmable) arrays. In hindsight it is apparent that this credibility gap resulted mainly from the lack of a suitable medium for rapid prototyping. The emerging technology of field programmable gate arrays (FPGAs) is beginning to meet the needs of regular array designers, especially with newer architectural support for through-pipelining and higher-density cells, together with high-level CAD tools for generating netlists and performing source-to-source program transformations. This paper identifies the key ingredients of regular array synthesis and shows how they can be integrated into existing work on hardware-software co-design.