Systems to Silicon

The tension between time-to-market, general-purpose architectures, flexibility, and density created the need for different design methodologies for integrated circuits. “Full custom” allows companies to develop chips from scratch, while “semi-custom” provides a library of pre-defined blocks that can be assembled.

When I started working at Motorola Semiconductor, the new HDC 100K gate arrays – which represented a significant step in microchip technology – heralded a time when complete electronic systems could be shrunk onto a microchip with reasonable design effort and cost.

HDC chips are prefabricated arrays of digital logic gates not yet committed to any logic circuit. Customers use these logic gates to design their functionality, which after completion is mapped by sophisticated design tools onto one or more gate-array interconnection layers. The connection layers are introduced by the chip factory in the final metallization process to create the customer-specific chip.

The design of such complex gate arrays stressed the available design tools to their limits. Routing, the task of finding proper paths to connect gates, was black magic because of the added constraints of signal delays and clock timing. The in-memory representation of designs and their persistence on disk quickly exceeded the capacity of typical workstations and initially required mainframes.

One of the biggest challenges was the power consumption limit set by the maximum allowable operating temperature of an HDC chip. In most cases a system on a chip only worked if all components were on the same chip (on-/off-chip signal delays, clocking); however, too many gates switching at the same time could destroy a chip.
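To give a sense of how tight that limit is, the standard junction-to-ambient thermal model relates a package's power budget to its temperature ceiling. This is a generic illustration, not the actual HDC design rules; all numeric values are hypothetical.

```python
# Illustrative sketch of a thermal power budget (not the actual HDC
# design rules). All parameter values below are hypothetical examples.

def max_power_w(t_j_max_c: float, t_ambient_c: float,
                theta_ja_c_per_w: float) -> float:
    """Maximum allowable power dissipation for a given package.

    Uses the standard junction-to-ambient thermal model:
        P_max = (T_j,max - T_ambient) / theta_JA
    where theta_JA is the package's thermal resistance in degrees C per watt.
    """
    return (t_j_max_c - t_ambient_c) / theta_ja_c_per_w

# Example: 125 degC junction limit, 70 degC ambient, 25 degC/W package.
budget = max_power_w(125.0, 70.0, 25.0)  # -> 2.2 W
```

With a budget of only a couple of watts, a design with too many simultaneously switching gates could easily exceed the allowable dissipation, which is why accurate power estimation mattered so much.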

Determining the power consumption of a design in an accurate and time-efficient way became the make or break of many customer projects. We addressed this problem by introducing a tool called POWCAL that used a library of energy dissipation models to represent a design, determined the toggle frequencies of its gates through logic simulation, and calculated the total power consumption automatically.
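The core calculation can be sketched as follows. This is my reconstruction of the idea, not the original POWCAL code; the cell names and energy values are invented for illustration.

```python
# Minimal sketch of POWCAL-style dynamic power estimation (a
# reconstruction of the idea, not the original tool). Cell names and
# energy-per-toggle values below are hypothetical.

# Energy dissipation models: picojoules dissipated per output toggle.
ENERGY_PJ = {"NAND2": 0.8, "DFF": 2.5, "BUF": 0.5}

def total_power_mw(gates):
    """Estimate total dynamic power in milliwatts.

    gates: list of (cell_type, toggle_frequency_mhz) pairs, where the
    toggle frequency of each gate would come from logic simulation.
    Total power is the sum of energy-per-toggle times toggle rate:
        P = sum(E_i * f_i), with pJ * MHz = microwatts.
    """
    microwatts = sum(ENERGY_PJ[cell] * f_mhz for cell, f_mhz in gates)
    return microwatts / 1000.0  # microwatts -> milliwatts

design = [("NAND2", 10.0), ("DFF", 20.0), ("BUF", 5.0)]
power = total_power_mw(design)  # roughly 0.06 mW for this toy design
```

Summing per-gate contributions like this is fast compared to analog simulation, which is what made an automated, design-wide estimate feasible at all.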

Silicon Compilers

I have always been interested in working on problems that involve scale, either from a technology or a business perspective. VLSI (Very Large Scale Integration) semiconductor circuits fit this category perfectly.

I started to work on algorithms to synthesize VLSI chips from formal specifications as a researcher at Siemens AG in Munich. Automating the design of application-specific integrated circuits (ASICs) promised to open the door to more economical hardware solutions, but in general it remains a hard problem even today.

By focusing on solutions with embedded processor cores we made significant progress and developed a “silicon compiler” called SMART (Synthesis of Modular Architectures with Test Support).

The toolset was used to develop smart sensors for automotive applications such as the measurement of air volume in the intake of combustion engines (MAF sensors). Smart sensors combine analog circuitry to capture physical signals with a processor core to process these signals. Smart sensors are standard in today's cars and typically connect to the digital communication bus that links all of a car's electronic systems.

An interesting aspect of our approach was the use of a rudimentary processor architecture, SIC (Simple Instruction Computer), that pre-dated the era of RISC (Reduced Instruction Set Computer) processors.

SIC was optimized for automatic synthesis from a specification, while the RISC processors' claim to fame was the heavy use of compiler technology to optimize code written in a higher-level programming language. In both cases, the simplified or reduced instruction set of the processor enabled more efficient optimizing transformations in the compiler and therefore reduced the time to market and the time to execute a program, respectively.