Chapter 1: Introduction
Custom integrated circuit design is normally based on a hierarchical specification
structure. High-level modules are composed of submodules, which are in turn formed by
smaller submodules, and so on. At the bottom of this tree-like structure are modules
built only from transistors. These modules are called leaf cells. They are subcircuits
of a complexity comparable to SSI (Small Scale Integration) components such as one-bit
adders, flip-flops or multiplexers.
The traditional way of creating the layout of ASIC (Application Specific Integrated
Circuit) custom chips requires a human designer to interact with a CAD (Computer Aided
Design) layout program, using a cell-based layout methodology. This methodology suits
ASIC designs because the layout process (mainly placement and routing) can be
automated to a large extent, making the turn-around time shorter and manufacturing
reliability high.
A major drawback of this methodology lies in the design and maintenance of the cell
libraries, which must be redone for every upgrade of the manufacturing process.
Additionally, as the number and the variations of cells are both limited, some required
cells may not exist in the libraries. Circuit performance then has to be sacrificed
[10].
As manual layout is a slow and expensive process, due to the large amount of detail that
has to be handled, automatic physical design generation tools have obvious advantages.
If they are capable of generating layout for a great range of SSI circuits across
different manufacturing processes, they can take the place of cell libraries. Because
they would produce leaf cells that fit the design needs exactly and are process
independent, they would address both drawbacks of the cell-based methodology.
1.1 The Agents system
The Agents system is a set of programs designed to automatically generate full-custom
CMOS, BICMOS, bipolar and mixed digital/analogue leaf-cell layout. The system is
formed by four server programs: the Placer, the Router, the Database and the Broker.
The Placer places components in a cell and uses the Router to wire them; the Router wires
the circuits sent to it; the Database keeps all the information that depends on the fabrication process, such as the design rules; and the Broker makes the services of the other servers available and manages the available resources, in this case the servers, to meet the demands of their clients.
These servers communicate over a computer network using the TCP/IP Internet protocol
[52]. The Placer server receives from its client, via the
network, the description and netlist of the circuit to be generated, expressed in EDIF
(Electronic Design Interchange Format) [55]. EDIF was
chosen because it is a standard language for representing electronic designs and is
widely used in commercial CAD tools. Furthermore, because EDIF is a Lisp-like language,
it is easy and fast to parse and extend. After finishing, the Placer uses EDIF again to
output the layout of the circuit to its client.
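To give a flavour of the format, the fragment below is a minimal EDIF 2.0.0-style description of an inverter interface. It is a hand-written illustration: the library, cell and port names are invented for this example and are not taken from the Agents system itself.

    (edif example
      (edifVersion 2 0 0)
      (edifLevel 0)
      (keywordMap (keywordLevel 0))
      (library cells
        (edifLevel 0)
        (technology (numberDefinition))
        (cell inverter
          (cellType GENERIC)
          (view netlist
            (viewType NETLIST)
            (interface
              (port in (direction INPUT))
              (port out (direction OUTPUT)))))))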
The output layout is not virtual-grid based; it is a real layout. The system does not use
a virtual grid for either placement or routing: all operations are performed on the mask
layout. The Agents system is very flexible in the chip technologies it can handle, which
include CMOS, BICMOS and bipolar. In addition, it can handle small analogue cells inside
digital designs.
In the literature, agents are defined as software components that communicate with their
peers by exchanging messages in a communication language
[31]. They are characterized by their ability to
communicate and cooperate with each other. The four servers that form the Agents
system are agents that can run in parallel on different machines to solve a placement
and routing problem cooperatively. They were implemented as a distributed system using a
client/server model to enhance flexibility and portability and to exploit parallelism.
The concept of agents as software components is at the heart of the Agents system;
for this reason they even lend their name to the system. The concept is not only used at
the higher level, in the four servers Placer, Router, Broker and Database. It is also used
at a lower level, inside the Router and Placer servers, where small, relatively simple
agents work together to accomplish complex tasks. These small agents are responsible for
all the reasoning done by the two servers. They hold the basic inference routines and the
particular knowledge needed for a particular application. They employ a reasoning model
based on Goals, Problem Spaces, States and Operators. The design philosophy is that
competence should emerge from the collective behaviour of a large number of relatively
simple agents.
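As an illustration of this reasoning model, the C++ sketch below gives one plausible shape for such a small agent, with its goal, problem space, states and operators made explicit. All class and member names are invented for this example; they are not the Agents system's actual interfaces.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // A state of the problem being reasoned about.
    struct State { int value = 0; bool goalReached = false; };

    // An operator transforms a state within a problem space.
    struct Operator {
        std::string name;
        std::function<bool(const State&)> applicable;  // may it fire here?
        std::function<State(const State&)> apply;      // successor state
    };

    // A small agent: a goal plus a problem space (a set of operators) and
    // an inference loop that repeatedly fires an applicable operator.
    class SmallAgent {
    public:
        SmallAgent(std::string goal, std::vector<Operator> space)
            : goal_(std::move(goal)), space_(std::move(space)) {}

        State solve(State current) const {
            while (!current.goalReached) {
                bool fired = false;
                for (const Operator& op : space_) {
                    if (op.applicable(current)) {
                        current = op.apply(current);  // one inference step
                        fired = true;
                        break;
                    }
                }
                if (!fired) break;  // impasse: no operator applies
            }
            return current;
        }

    private:
        std::string goal_;
        std::vector<Operator> space_;
    };

    int main() {
        // Toy problem space: count up to 3, then declare the goal reached.
        Operator inc{"increment",
                     [](const State& s) { return s.value < 3; },
                     [](const State& s) { State n = s; ++n.value; return n; }};
        Operator done{"declare-goal",
                      [](const State& s) { return s.value >= 3; },
                      [](const State& s) { State n = s; n.goalReached = true; return n; }};
        SmallAgent agent("count-to-three", {inc, done});
        std::cout << agent.solve(State{}).value << "\n";  // prints 3
    }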
In addition to, and integrated with, the small agents' inference machines, the system uses
a genetic algorithm for the placement optimization task. Genetic algorithms are a class of
computational models that mimic natural evolution to solve problems in a wide variety of
domains [65]. They are particularly suitable
for solving complex optimization problems and for applications that require adaptive
problem-solving strategies. The Agents system uses them to improve component placement
before routing.
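A minimal sketch of the idea, using an invented encoding and a stand-in cost function (none of this is the Agents system's actual code): a population of candidate placements is evolved by selection, crossover and mutation against a wire-length-style fitness. In the real task the fitness would be estimated from the actual component geometry and netlist rather than this toy proxy.

    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    // A candidate placement: component i sits at grid slot genome[i].
    // (Toy encoding; duplicate slots are allowed here but penalized.)
    struct Placement { std::vector<int> genome; double cost = 0.0; };

    // Stand-in fitness: a wire-length proxy (adjacent components are
    // assumed connected) plus a penalty when two components share a slot.
    double estimateCost(const Placement& p) {
        double cost = 0.0;
        for (std::size_t i = 0; i + 1 < p.genome.size(); ++i)
            cost += std::abs(p.genome[i] - p.genome[i + 1]);
        for (std::size_t i = 0; i < p.genome.size(); ++i)
            for (std::size_t j = i + 1; j < p.genome.size(); ++j)
                if (p.genome[i] == p.genome[j]) cost += 100.0;  // overlap
        return cost;
    }

    // Evolve the population: the fitter half survives and breeds
    // replacements for the weaker half.
    Placement evolve(std::vector<Placement> population, int generations) {
        for (int g = 0; g < generations; ++g) {
            for (Placement& p : population) p.cost = estimateCost(p);
            std::sort(population.begin(), population.end(),
                      [](const Placement& a, const Placement& b) {
                          return a.cost < b.cost;
                      });
            const std::size_t half = population.size() / 2;
            for (std::size_t i = half; i < population.size(); ++i) {
                const Placement& mum = population[std::rand() % half];
                const Placement& dad = population[std::rand() % half];
                // Crossover: splice the parents at a random cut point.
                std::size_t cut = std::rand() % mum.genome.size();
                Placement child = mum;
                std::copy(dad.genome.begin() + cut, dad.genome.end(),
                          child.genome.begin() + cut);
                // Mutation: occasionally move one component to a new slot.
                if (std::rand() % 10 == 0)
                    child.genome[std::rand() % child.genome.size()] =
                        std::rand() % child.genome.size();
                population[i] = child;
            }
        }
        return population.front();  // fittest placement found
    }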
The major aim of the system is to integrate all its parts and technologies in such a way
that the best of each can be exploited. It tries to be ready to exploit future trends in
computing, such as the spread of parallel machines, and to offer an innovative solution
to the layout generation problem.
1.2 Previous Work
Much work has been done to automate layout generation. The following is a brief
description of some of this work, divided into two groups: knowledge-based systems
(basically rule-based systems) and systems based on intensive search algorithms, such as
Lee's algorithm, simulated annealing or simulated evolution.
1.2.1 Knowledge based systems
Design systems in this group use mainly heuristic expert knowledge, in the form of
production rules, to guide their search for solutions to the layout problem. They
may use algorithmic solutions for some small tasks, but it is the knowledge system that
is in charge of the overall behaviour of the system.
- Talib [1] is a rule-based NMOS module compiler with more
than 2100 rules. It treats algorithmic procedures as subtasks while supervising them
with a knowledge-based control system. Talib relies on its control knowledge to decide
when and how to perform a specific subtask properly. It demonstrates how to use
clusters of circuits with known layouts to complete an NMOS cell layout, and how to take
input boundary conditions into account. Most of Talib's design knowledge is based on
empirical rules used by human experts when working on design examples. Talib is able to
generate a compact layout for small circuits, and its rule-based approach makes it easy
to add new knowledge.
- Topologizer [2] is a rule-based CMOS layout generator.
It uses rules specified by an expert designer to produce a symbolic layout from the
descriptions of the transistor connections and the environment in which the cell resides.
The placement rules include moving transistors between locations, exchanging their
positions, and rotating them. The routing expert consists of a prerouter and a refinement
router. The prerouter produces "rough routing" by assigning a unique track to each pair
of terminals to be connected. The refinement router then improves the rough routing by
applying a set of rules to eliminate bad routing, such as U-turn elimination and row
sharing. The output from Topologizer is a symbolic file of CMOS layout. By using MULGA, a
symbolic layout system [3], Topologizer's outputs can be
translated into a mask layout.
- LES (Layout Expert System) [4] is a random logic module
layout generator targeted at one-metal polysilicon-gate CMOS technology in hierarchical
VLSI designs. It applies rules and algorithms based on a multi-row layout style. LES
takes a top-down strategy that generates leaf cells after, rather than before, placement
and global routing are done. No detailed routing is needed because the cells are laid out
to fit their environment. LES consists of seven expert systems organized in a blackboard
architecture: the Analysis, Architecture, Placement, Characterization, Layout, Evaluation
and Optimization experts.
- AREAL (Automated Reasoning Expert for Analogue Design) [5]
is a system for the generation of analogue circuit layout. It uses knowledge and geometric
reasoning to prune the design space. It expresses topological and geometric constraints,
deduced from analogue and connectivity information, in the form of boolean relations.
These constraints are preserved and imposed throughout the solution using a
boolean-constraint solver to reduce the design space. This space is then explored by a
controlled branch-and-bound process to find an optimal solution.
1.2.2 Search intensive algorithms
Design systems in this group use mainly algorithmic techniques, such as Lee's algorithm,
simulated annealing or simulated evolution, to guide their search for solutions to the
layout problem. They may have some rules embedded in them, but the knowledge in these
rules is secondary to the overall task of the system.
- The PI (Placement and Interconnect) System [6] is a
placement and interconnect system for custom NMOS or CMOS (single metal layer) designs.
The system basically places circuit cells and assigns routing channels; it is not a leaf
cell generator. Cell placement is done using heuristics that take into consideration
the connectivity among neighbouring cells and the relation to the power grid. PI relies
on channel router programs [7] to do its routing: the
program assigns the channels and decides their dimensions, and the channel routers
do the detailed routing. The program is written in LISP.
- ESP (Evolution based Standard cell Placement) [8] is a
program package designed to perform standard cell placement, including macro-block
placement capabilities. It simulates an evolutionary process in order to minimize the
cell interconnection wire length. While achieving results comparable to popular
simulated annealing algorithms, ESP usually requires less CPU time.
- Excellerator [9] is a program which generates CMOS cell
layouts from a circuit specification. The program generates CMOS layout that is
"gate-matrix-like", but not constrained by strict "gate-matrix" design rules. It supports
constraints on layout shapes and port locations. Multi-row transistor placement is
undertaken by identifying groups of serially connected transistors, and then positioning
and ordering the groups. A routing technique based on a recursive version of the A-star
search algorithm is used. Routing priority can be given to critical nets.
- LiB [10] [74] uses a branch-and-bound search strategy to find an optimal chaining of transistors. It folds large transistors into multiple columns to meet the cell height constraint. The whole cell is divided into five routing regions: two regions are in the diffusion islands (the PMOS and NMOS diffusion rows), and the other three are rectilinear routing channels (between the power rail and the PMOS row, between the PMOS and NMOS diffusion rows, and between the NMOS diffusion row and the ground rail). The program uses a graph-theoretical method to select nets (subnets) for routing in the diffusion islands. A global routing algorithm assigns the remaining nets to the three rectilinear channels. For the detailed routing, SILK [11], a router based on a simulated evolution algorithm, is used.
- PROCORE (PRObabilistic COnflict REsolver) [12] is a system based on simulated annealing optimization techniques. Simulated annealing provides a framework for controlling rip-up and reroute transformations within an implementation of Lee's algorithm. Intersections between nets are removed individually by rerouting one net (selected randomly) involved in an intersection such that it does not cross the other intersecting net (crossing other nets is still permitted). The resulting transformation is then either accepted or rejected based on the evaluation of a global cost function.
1.3 Flexibility
The three key issues affecting the decisions taken during the design of the Agents system were flexibility, innovation and speed. It was decided that the first two, flexibility and innovation, were more important than speed; the Agents system therefore does not attempt to be particularly fast.
What differentiates the Agents system from previous ones is its emphasis on a flexible design, which offers more freedom of design, and its use of new ideas, such as the concept of agents as software components. The system tests new trends and concepts in computing to discover how they can be used in layout generation.
As machines get faster, flexibility in use, upgrade and setup is becoming more important than raw speed. Flexibility is achieved at the performance level through scalability (the ability of a program to adapt to the resources the hardware it is running on can offer), and at the integration level by being portable and easy to integrate. More importantly, flexibility is delivered to the user as a richer set of layout options: the Agents system can handle BICMOS and CMOS technologies, can mix small analogue cells with a digital design, and is fabrication-process independent.
1.3.1 Object Oriented Programming
Object oriented programming (OOP) techniques are well suited to system modelling. Systems based on an object-oriented approach are easier to understand because they are more closely related to reality: there is a small semantic gap between the model and reality. OOP makes systems more flexible because the program can be broken up into many small, independent entities (objects) that can be refined and evolved independently [13]. Because of this independence, they can later be reused in other, similar designs, improving code reuse [22].
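A minimal sketch of the idea, using invented layout-domain classes (not the Agents system's actual class hierarchy): each entity hides its own data behind a common interface, so it can be refined or replaced without touching the rest of the program, and reused in other designs.

    #include <memory>
    #include <vector>

    // Each layout entity is a small, independent object behind a common
    // interface.
    class LayoutObject {
    public:
        virtual ~LayoutObject() = default;
        virtual long area() const = 0;  // each object knows its geometry
    };

    class Transistor : public LayoutObject {
    public:
        Transistor(long w, long l) : w_(w), l_(l) {}
        long area() const override { return w_ * l_; }
    private:
        long w_, l_;
    };

    class Wire : public LayoutObject {
    public:
        Wire(long length, long width) : length_(length), width_(width) {}
        long area() const override { return length_ * width_; }
    private:
        long length_, width_;
    };

    // Client code depends only on the interface, so new entity types can
    // be added without changing it.
    long totalArea(const std::vector<std::unique_ptr<LayoutObject>>& objs) {
        long sum = 0;
        for (const auto& o : objs) sum += o->area();
        return sum;
    }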
1.3.2 Scalability
An important trend in the computer industry today is towards system architectures that are fully scalable; that is, when the number of processors in a system is increased, system performance should scale up proportionally [14]. DEC's Alpha, Motorola's PowerPC and Intel's Pentium, the most important new microprocessor architectures, are all scalable [14] [15].
For software, scalability basically means that a program should adapt to take advantage of the computational resources available. Such a program would run on no more than the humble resources a PC can offer, but if more power is made available, for example by adding a Sun Sparc workstation to the setup, the program would use the extra resources to improve performance.
In the Agents system, scalability is achieved mainly through the client-server model, in which server programs provide services that client programs request, communicating over a network. If only one modest computer is available, all servers and clients will run slowly on it. If a more powerful computer is available, they can run faster and in greater numbers. If many computers of many types are available, connected by a network, many servers and clients can work in parallel, using all the power the network can give to improve performance.
This scalability of performance, depending on how much computer power is available, leads to a scalability of application: one can trade design quality against response time by tuning the computer power made available to the program. If high design quality is not needed, or a slow response time is tolerable, a small amount of computer resource can be made available to the application. If, on the other hand, one needs the best quality as fast as possible, a large amount of computing power will deliver the expected results.
Design quality is defined in terms of the area a layout occupies, the length of its wires and the number of vias it uses: the smaller the occupied area, wire length and number of vias, the better the design quality.
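One plausible way to read this definition as a single figure of merit (an illustrative formulation, not the Agents system's actual metric) is a weighted cost in which lower is better:

    // Hypothetical quality score: lower is better. The weights wA, wL and
    // wV are illustrative assumptions, not the system's actual values.
    double layoutCost(double area, double wireLength, int vias,
                      double wA = 1.0, double wL = 0.5, double wV = 2.0) {
        return wA * area + wL * wireLength + wV * vias;
    }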
1.3.3 Portability
Portability measures how easy it is to port a program to a new environment, which could be a new operating system or a new processor architecture. To achieve high portability the Agents system employs three strategies:
- The program is written in C++, a very popular and widely available language.
- It was developed concurrently on three operating systems of the Unix family (SunOS 4.X, Solaris 2.X and Linux 1.X) and on two different computer architectures (the Sun SparcStation and the Intel 386 family). This avoided operating system or hardware dependencies being built into the program.
- Two different C++ compilers were used: the SunSoft C++ and the GNU g++. The SunSoft compiler offered compatibility with AT&T C++ 3.0, the de facto standard for C++. GNU g++ is available on a wide range of platforms, from Cray supercomputers to Commodore Amigas. Compiling with both gave the program both qualities at the same time. It also helped to expose compiler bugs (it is very unlikely that the same bug would plague two different compilers) and to avoid the use of compiler-dependent features.
1.3.4 Ease of embedding
This system is easy to embed in other systems because it combines client-server communication with a standard circuit description language (EDIF).
A server can be seen as a black box with a contact port to which commands can be sent and from which results are received. The way to access this port is standard (in this case the standard is the TCP/IP protocol).
In the Agents system, to ease the process further, this port accepts data in ASCII format and the circuit descriptions are in EDIF. The only things a programmer trying to embed Agents in his or her own system needs to know, apart from these standards, are the commands accepted by the server. These commands are few and simple, and follow the same syntax as EDIF commands.
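The sketch below shows, using POSIX sockets in C++, roughly what such an embedding looks like: open a TCP connection to the server and exchange ASCII text. The host address, port number and command are invented for this illustration; the actual commands are defined by the Agents servers themselves.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7000);                      // illustrative port
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  // server's host

        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) < 0) {
            perror("connect");
            return 1;
        }

        // A hypothetical EDIF-style ASCII command; the real command names
        // follow EDIF syntax but are specific to the Agents servers.
        const char* cmd = "(place (cell inverter))\n";
        write(fd, cmd, std::strlen(cmd));

        char reply[4096];
        ssize_t n = read(fd, reply, sizeof reply - 1);    // read the answer
        if (n > 0) { reply[n] = '\0'; std::fputs(reply, stdout); }

        close(fd);
        return 0;
    }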
The easier a program is to port to, and embed in, other systems, the greater the chance that it will actually be used. There is no point in writing a very good program that nobody will use.
1.4 Innovation
The main conceptual innovation brought into the Agents system is the agents concept itself: the division of the system into smaller entities that can work independently, but together, to solve a placement and routing problem cooperatively. One of the advantages of such a division is that it can better exploit the kind of parallelism available in a client/server system working over a network. Such systems have a relatively small number of powerful processors, compared with the massive number of processors found in big parallel computers; this makes them more suitable for running independent medium-sized programs than a large number of very small ones.
The agents concept is used in Agents at two levels: at the system level, where the four servers that form the system can run in parallel on different machines, and at the program level, where agent objects are used. Agent objects are objects with an expert system embedded in them. They are in charge of control and coordination tasks inside the program.
At the program level, the agent objects innovate because they are small expert systems acting independently, as opposed to the more common solution of a big monolithic expert system with thousands of rules. Complex behaviour comes from the interaction of the small expert systems, not from a bulk set of rules.
Another innovative solution is the use of a genetic algorithm for layout optimization. Genetic algorithms have emerged, in the last few years, as practical, robust optimization and search methods. They use mutation and crossover as search mechanisms, and selection to direct the search towards the prospective target in the search space [17]. In Agents, a routine based on the genetic algorithm is used to improve the quality of the component placement.
1.5 Speed
The Agents system has 18400 lines of C++ and Lisp (Scheme) code, a reasonably big program for one person to write. To reduce the program's complexity, and thus make it manageable by a single coder, some compromises had to be made. Whenever such a compromise arose during the program's design, flexibility and innovation were protected at the expense of speed, in line with the decision to consider flexibility and innovation more important than speed.
During the years it takes to write any big program, the available hardware will probably double in power. A developer who aims mainly for speed runs the risk of being overtaken by this natural rate of improvement in hardware performance: the new program may well be faster, but established, more flexible programs running on new hardware may be fast enough to make switching unattractive. It usually makes more sense to buy new machines twice as fast than to move to a different program that runs in half the time.
In addition, as scalability is one of the goals of the Agents system, if speed really is an issue for some particular application, scalability allows speed to be increased by making more computer power available. In this case, more hardware will buy the necessary extra speed.
1.6 Main original contribution
The Agents system [18] [19] aims to demonstrate that software agents can be used to generate flexible VLSI layout. Such a system can show that VLSI layout generation can be distributed over a network of computers to take advantage of the available computing power and generate good layout faster. Agents can show that, by using a client/server model, layout generation tools can adapt to the trend in mainstream computing towards networks of relatively low-cost workstations (as opposed to big isolated computers).
1.7 Structure of the work
The following chapters explore and discuss the Agents system in detail. They begin by exploring the key technologies used in the system:
- Chapter 2 introduces the concept of object oriented programming and shows the basic class structure of the system.
- Chapter 3 presents the concept of software agents and the problem-space computational model. It explores the idea of distributed reasoning and how the Agents system implements small software agents.
- Chapter 4 discusses the client/server model and how big software agents can be implemented as servers over a network. The Broker and Database servers are discussed.
- Chapter 5 discusses layout optimization techniques and introduces the genetic algorithm.
After the key technologies have been explored the next three chapters present the core implementation of the system:
- Chapter 6 shows how the placement server (Placer) is implemented.
- Chapter 7 shows how the routing server (Router) is implemented.
- Chapter 8 presents and analyses some results obtained using the system.
Finally, a concluding chapter summarizes the work and presents some ideas for future research.