International Symposium on Quality Electronic Design (ISQED)

Views on Design for Quality & Manufacturing
Summary of Past ISQED Keynote Speeches from Industry Leaders

Industry Must Turn Its Attention to Quality

Dr. Ali A. Iranmanesh
ISQED Founder and Chairman

February 20, 2000

During the past 20 to 30 years, we have witnessed a phenomenal increase in the level of device scaling as well as in semiconductor manufacturing quality. This has enabled the industry to provide ever more complex electronic products. However, the advancement in semiconductor technology has drastically surpassed the progress in the capability and quality of EDA tools and design methodologies. The result is a fast-growing disparity between the available capability and what can be realized in design practice. The industry's focus should therefore be not just on the traditional areas of performance, timing, area and power specification, but also on product yield, reliability, and manufacturability. Design-for-reliability, design-for-yield, and design-for-manufacturing should be integral parts of the modern design methodology. These, together with time-to-market considerations and performance, form the foundation of Quality Electronic Design. To achieve these targets, close coordination and cooperation among the various disciplines involved in the design, development, and manufacturing of integrated circuits and systems are essential. In the late 20th century, the quality revolution in manufacturing led to phenomenal progress in semiconductor technology. The early 21st century will usher in the age of industry maturity, where these principles are successfully applied to EDA tools, design methodologies and design processes.

Slap It Together And Ship It!

Dr. Aart J. de Geus
Chairman & CEO, Synopsys Inc.

March 21, 2000

In today’s world of e-commerce and dot-com instant business successes, time to market constraints have taken the upper hand in almost all product decisions. In that scenario, what happens to the role of quality in the design of semiconductors and electronic systems? In his keynote, Aart de Geus addressed the trade-offs of “market” needs vs. “quality” needs and showed that what appears to be a trade-off may not be one at all.

The Practical Side of Quality

John East
CEO, Actel Corporation

March 21, 2000

My practical definition of quality is getting it right the first time, on time. The downsides of poor quality work need no explanation. Unfortunately, though, the consequences of being late to market can doom any potential market advantage. The only sure win comes when the product is both high quality and on time. To help assure on-time delivery of working ICs, I advocate “two-handed management.” This means with one hand, do the job as best you can using the tools and techniques available, but with the other hand, take steps to see that similar jobs are done better and faster the next time. An example of two-handed management in the distant past was the development of various simulation techniques. The two-handed manager of the future will look for silicon with advanced capabilities in the areas of “observability,” “tweakability” and incremental specification techniques as well as inherent improvements in speed, power and cost.

Design for Quality and Manufacturing

Prakash Agrawal
CEO, NeoMagic

March 21, 2000

This presentation will discuss from a CEO’s perspective the process needed to design a quality chip for manufacturing. It will cover the milestones necessary for bringing a successful chip to market. Discussion highlights will focus first on a well thought out analysis of market requirements, taking into account the product roadmaps and feature requirements of your major customers, the competition, the potential market size, and the delivery schedule necessary to hit the window of opportunity to sell the new product. Next, it will focus on how to proceed with a thorough evaluation of your company’s internal variables, such as your technology roadmap, cost analysis, capability of strategic partners, capacity requirements, and return on investment. Finally, it will give tips on evaluating the results of matching the market requirements with your company’s internal capabilities. It will mention some of the well-known design tools and practices used in the industry that can help you assure the built-in quality necessary to meet manufacturing standards and market needs.

Ramping New IC Products in the Deep Sub-micron Age

John Kibarian
CEO, PDF Solutions

March 21, 2000

It is well known that the majority of the potential profits come early in a product's life. This is especially true in product segments such as system on a chip, graphics accelerators, microprocessors, and memory. The spoils in these segments go to the company that gets its product to market first. At the same time, the investment required to produce next-generation products is going up at an accelerated pace. As a result, companies are sharing the investment by working with more third-party suppliers. Today, a chip will be designed with 3rd-party EDA tools and using commercial IP. It is often manufactured in commercial foundries, and tested and assembled by a separate company. When the product is not meeting yield and performance targets, how are the issues resolved? Eventually, these yield issues are resolved, but often not before the profitable part of the product's lifecycle is complete. In this presentation we describe new methodologies, tools and services which can help turn designs into products. We will summarize the key technical issues which make performance and yield targets difficult to meet given the product's lifecycle constraints and demonstrate how these new methodologies can greatly change the production ramp. Examples of these methods applied to advanced products such as microprocessors, embedded DRAM, and systems on a chip will be provided.

Platform-based Design: A Path to Efficient Design Re-Use

Prof. Alberto Sangiovanni-Vincentelli
Prof., UCB

March 22, 2000

System design is undergoing a series of radical transformations to meet the performance, quality, safety, cost and time-to-market constraints introduced by the pervasive use of electronics in everyday objects. An essential component of the new system design paradigm is the orthogonalization of concerns, i.e., the separation of the various aspects of design to allow more effective exploration of alternative solutions. Since mask-set and design costs for Deep Sub-Micron implementations are predicted to be overwhelming, it is important to find common architectures that can support a variety of applications. In this talk, we will explore methods for selecting families of software and hardware architectures that allow substantial design re-use and some paradigms for embedded system designs that are likely to become the pillars of future tools and flows.

Embedded-Quality for Test

Yervant Zorian
Chief Technology Advisor, LogicVision

March 22, 2000

The basic concept of embedding test functions into the IC design itself is a simple one. However, the complexity introduced by emerging system-on-chip and very deep submicron technologies has created difficult challenges and quality risks. A new wave of embedded quality assurance functions is needed to address this level of complexity. This talk will discuss such design-for-quality trends and solutions and will analyze their impact not only on go/no-go test, but also on an expanded set of quality assurance functions to support debug, measurement, diagnosis and repair.

Deep Submicron ULSI Design Paradigm: Who is writing the future?

Prof. Kamran Eshraghian
Prof., Edith Cowan University

March 22, 2000

The concept of “technology generation” attributed to Gordon Moore has created a plausible method for predicting the behavior of the technology roadmap, which has seen silicon CMOS grow to exceed 75% of the world's production of electronics-related materials. A feature of such progress is the complexity factor that predicts the emergence of a new generation of technology every three years. A reasonable method of comparison would be to observe the parallel between CMOS-based systems and biologically inspired systems. Deep submicron, synonymous with Ultra Large Scale Integration, suggests that by the year 2010 the number of transistors per chip will be on the order of 0.5×10⁹, with an intrinsic clock speed of 3GHz. At this level of integration the classic MOS transistor would have only a few ‘electrons’ in the channel to direct. Thus, the Quantum MOS (QMOS) transistor becomes a plausible possibility. In the meantime, the question remains how we are going to cope with the design and quality of the new system complexity. ULSI design requires a shift in the design paradigm from current evolutionary thinking about system integration to more revolutionary approaches, as depicted by attributes of “brain architecture”.

Future Platform for Mobile Communication

Hajime Sasaki
Chairman of the Board, NEC

ISQED 2001, March 27, 2001

This keynote explores three driving forces in the IT revolution that are actualizing an Information Society: first, the Internet's global, ever-expanding nature; second, the ability to create the ultimate personal information tool; and last, at the heart of these forces, the cutting-edge semiconductor device. Especially in mobile, where products are composed primarily of semiconductors, we see that the creation of advanced semiconductor devices controls to a large degree the superior nature of the mobile product. Mobile products must balance many constraining criteria, such as size and weight, against functionality such as low power consumption. There is also a wide array of technologies involved, such as low-power circuit design, flash memory and RF power devices. Additionally, intellectual property has become even more important. Moreover, the harmonization of semiconductor technology and peripheral technologies such as small-scale, light-weight packaging technology, long-life rechargeable batteries and flat panel displays has become an important factor.

Delivering Quality Delivers Profits

Joe Costello
CEO, think3

ISQED 2001, March 27, 2001

The future of electronics is SoC design. SoC design complexity is accelerating due to rapid change on multiple dimensions: design content, deep sub-micron (DSM) electrical and physical effects, and the sheer scale of SoC projects. At the same time, market windows are dramatically decreasing. These fundamental technology trends and economic forces underscore the need to rethink conventional design methodology and conventional business practices for SoC design delivery. An SoC design foundry, combining a fast and scalable mixed-signal SoC design methodology with innovative design technology and electrical engineering expertise, enables not only the timely delivery of SoC designs, but also robust design quality through electrically correct silicon engineering.

The Expanding Use of Formal Techniques in Electronic Design

Dr. Raul Camposano
CTO/GM, Synopsys, Inc.

ISQED 2001, March 27, 2001

Although Electronic Design Automation (EDA) tools allow some tolerance for features having only limited scope or not working in all cases, there is no tolerance for error in their final results. Since the beginning, EDA tools have included so-called "formal" techniques to ensure such error-free results. More and more, formal verification tools are being adopted as a necessary part of mainstream design flows to tackle the exploding verification challenge. In this keynote address, we will focus on some of these formal techniques; in particular, equivalence checking, property checking, and the combination of simulation with formal techniques -- all of which play an important role in creating zero-defect results in state-of-the-art electronic design.

IC Design Methodology in the Foundry Era: Introducing ‘Heads-Up’ Design

Edward C. Ross
President, TSMC, USA

March 27, 2001

The emergence of the foundry as a primary semiconductor manufacturing resource has created a sea change in the way EDA companies interact with manufacturers. Since the key concern for many foundry customers is time-to-volume, EDA companies are now focused not just on system-level design, but on “heads-up” design, i.e., bringing to designers the ability to build whole systems at the speed of thought. Dr. Ross discusses emerging trends in the EDA, IP, library and design center communities, wherein deep collaboration with foundries is producing a variety of Internet-based solutions that are revolutionizing IC design methodologies.

Quality of Design from an IC Manufacturing Perspective

Prof. Wojciech P. Maly
Professor, Carnegie Mellon University

March 28, 2001

Many credible sources (including the ITRS) now see the cost of IC manufacturing as a potentially negative factor that may affect the future of the IC industry. There are also a number of answers to the growing-cost-of-manufacturing challenge. One of them is IC design for efficient manufacturing -- measured by such indices as yield, time-to-volume, etc. The first objective of this presentation is to analyze publicly discussed visions for the IC industry and derive from them manufacturability conditions that must be met for these visions to materialize. We will focus our discussion on the recent version of the ITRS. It will be shown that ITRS predictions cannot be fulfilled by design or manufacturing approaches alone. Only by solving complex trade-offs at the design-test-manufacturing interface can one hope to overcome the rising-cost-of-manufacturing problem -- the main stumbling block on the ITRS horizon. The second objective of the presentation is to propose a redefinition of the notion of the quality of IC design, so that it can accommodate manufacturability measures as primary design goals in addition to the traditional die size, performance and time-to-first-silicon design quality indices. Such a redefinition is a possible, and maybe necessary, contribution of the IC design community to addressing the rising-cost-of-manufacturing problem.

Embedded Test Leads to Embedded Quality

Dr. Vinod Agrawal
CEO, Logic Vision

March 28, 2001

The concept of embedded test, wherein physical test engines are built right on to the semiconductor chip, has a very strong quality value throughout the lifecycle of the chip. These embedded testers can be reused throughout the lifetime of the chip from silicon debug, to characterization, to production testing (both wafer probe and final test), to board prototyping, to system integration and then finally to the diagnosis in the field. More than 50 semiconductor and system companies world-wide are already using embedded test in their complex chips, to gain significant quality, cycle time and economic competitive advantage. This talk will explore how embedded test is becoming a standard choice for IC and system developers.

Quality on Time

Aki Fujimura
COO and President, Simplex. (Simplex was acquired by Cadence in 2002)

March 28, 2001

How is it that a group of talented, highly motivated, hard-working software engineers consistently produce low-quality software, late? It is the speaker's view that schedule management and quality management go hand in hand. The traditional thinking that quality and schedule are tradeoffs is exactly the approach to engineering management that starts the downward spiral, resulting in organizations that can deliver neither quality software nor on-time releases. The talk discusses the notion that schedules are probability distributions, and presents several practical quality and schedule management techniques.
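As a rough illustration of the idea that schedules are probability distributions (an editorial sketch, not material from the talk; the task list and spread values below are hypothetical), a few lines of Python estimate the 50th- and 90th-percentile completion times of a serial task chain by Monte Carlo simulation:

    import random

    def simulate_schedule(tasks, runs=10000):
        # Each task is (nominal_days, spread); durations are drawn from a
        # lognormal distribution, so overruns are more likely than underruns.
        totals = []
        for _ in range(runs):
            totals.append(sum(nominal * random.lognormvariate(0.0, spread)
                              for nominal, spread in tasks))
        totals.sort()
        return {"p50": totals[runs // 2], "p90": totals[int(runs * 0.9)]}

    # Hypothetical serial tasks: (nominal duration in days, spread)
    print(simulate_schedule([(20, 0.3), (15, 0.5), (30, 0.4)]))

Committing to, say, the p90 of such a distribution rather than the sum of nominal durations is one way to manage schedule and quality together instead of trading one against the other.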

Quality of SoC designs through quality of the design flow: Status and Needs

Philippe Magarshack
Vice President, Central R&D Group and Director, Design Automation, STMicroelectronics

March 28, 2001

It is now universally recognized that System-on-Chip (SoC) is the appropriate product solution to meet the demand of cost and volume for many electronics markets. The increasing pressures coming from shrinking market windows, accelerating process roadmaps and increasing mask costs make it necessary that SoCs be correct at first silicon. This is becoming a considerable challenge due to the complexity of systems that can be built on the same chip: current process capabilities are approaching 100 million devices. Additionally, this level of integration comes at the price of renewed parasitic effects, such as crosstalk, voltage drop and electromigration. A complex design flow is necessary to resolve these conflicting trends, combining executable specifications, isolating function from communication, exploring architectures and trading off speed, power, area and schedules, and finally a fast route to implementation, be it in software running on embedded processors, dedicated digital hardware, or dedicated analog cells. The successive levels of abstraction of the system description warrant the need for extensive verification of the SoC, both at the functional level and at the timing, power and reliability levels. Building such a design flow calls for mixing very good point tools, coming from established EDA vendors as well as start-ups and academia. But above all, it requires well-defined and structured interfaces between tools at key hand-off points in the design flow. Standard design languages and Application Programming Interfaces (APIs) are fundamental to the success of SoC.

IP REUSE QUALITY: “Intellectual Property” or “Intense Pain”?

John Chilton
Sr. VP and General Manager Synopsys, Inc.

March 19, 2002

As systems on a chip become more complex, reuse of third-party intellectual property (IP) becomes more necessary to meet time-to-market deadlines. However, issues surrounding IP quality are very much unresolved. Poor IP quality is the key reason why many IP users feel that “IP” is actually an acronym for “Intense Pain.” There are major inconsistencies surrounding basic quality, including fully synchronous design, registered inputs and outputs for IP blocks, and completion of full specifications before design. All these inconsistencies contribute to difficulties in using the IP and integrating it into a chip design. One of the key reasons why quality is still such an issue within the IP community is the issue of “reuse” versus “salvaging.” Much of the IP sold over the last few years wasn’t really designed for reuse. Instead, it was designed for use in a single chip, then later repackaged (i.e., salvaged) as IP. There has also been tremendous interest in creating IP repositories—fancy Java-based, Web-accessed, multi-featured custom products meant to hold the wealth of IP. Along the way, though, we forgot to create enough fully reusable IP to warrant these repository investments. Although the challenges in the IP business may seem daunting (and there are many more besides just those that concern quality), they are well worth the effort when you consider the rewards. There’s a tremendous need for IP to address the growing productivity gap, which represents a great opportunity for the third-party IP industry.

Why Integrated Yield Management is a Necessity

Y. David Lepejian
President, CEO and Chairman HPL

March 19, 2002

Improving semiconductor yield is a multi-faceted process that must include design, manufacturing, and test. An integrated approach enables companies to rapidly reach higher levels of revenue and profitability. Incorporating design-for-yield concepts early, improving the quality of the test programs, and applying new technology to accelerate the measurement and correction of failure sources in the production process combine to have a powerful effect on company profits, product quality, and time to volume.

Design Success: Foundry Perspective

Jim Kupec
President, UMC USA

March 19, 2002

Leading-edge foundries are rolling out new process technologies every two years, with today's advanced processes capable of producing a quarter-billion transistors on a thumbnail-sized chip. The growth of the fabless business model has enabled many companies to organize and build value on the strength of their design capabilities. Quality is often reflected in the continued success of design practices that result in market success. The many styles of design implementation provided by a large number of companies sharing a common process help provide a Darwinian view of quality practices. The interactions among design flows, libraries, special-purpose IP, and memory types are important considerations. This talk will address the trade-offs and successful design technologies used in foundries.

What you don’t know CAN hurt you: Designing for survival in a sub-wavelength environment

Dr. Y.C. (Buno) Pati
President and CEO, Numerical Technologies

March 19, 2002

The semiconductor industry’s promise to deliver an endless array of chip designs to match the voracious appetite for smaller, faster, cheaper devices is in danger of ringing hollow. Until recently, we could make this commitment with confidence. But lately we’ve hit the wall. We’re crashing through the sub-wavelength barrier and feeling our way toward designing and manufacturing chips in a challenging new environment without the benefit of some key process technologies. Now, to survive and thrive, chipmakers are turning to phase shifting—just a novel, clever concept a few short years ago—as a critical and necessary enabler for producing integrated circuits at dimensions of 0.13 micron and below. Inevitably, chip designers are following suit, not just to match the chipmakers in their march to smaller feature sizes, but to polish their own competitive edge with high-performance chip designs that are easy to produce. They’re breaking out of a somewhat isolated mold, knowing that shrinking design times and increasing layout complexity call for new tools and expertise. Most acknowledge that the success of their designs, and indeed their future viability, depends on quickly adopting the tools and expertise that their chip-making customers are using so effectively.

The Role of ICs in the Creation of a Connected World and the importance of Product Quality

Atiq Raza
Chairman and CEO, Raza Foundries Inc.

March 20, 2002

Human beings, being social, have always had a need to communicate. The modern chapter in enabling large-scale communication has been aided by intelligence in the transport, distribution, protection, traffic management, decoding, analysis and display of communication content. This intelligence has been embedded in an explosive confluence of software, systems and integrated circuits. This has resulted in the most amazing transformation of the way we live our lives, work, and engage in all other necessary and capricious activity. It has also created a huge economic footprint on the Gross Domestic Product of the United States of America. With a massive transformation having occurred in such a short time, this throbbing network across the planet has to operate reliably because of the precious payload it carries.

Wireless Systems-on-a-Chip Design

Prof. Bob Brodersen
Dept. of EECS, University of California, Berkeley

March 20, 2002

There is a fundamental shift occurring in the implementation of wireless systems. Not only is the underlying technology shifting to mainstream CMOS technology, but the applications and specifications of the supported links are also rapidly evolving. The multiple inter-related technologies required for the implementation of such wireless systems require a co-design strategy spanning communication algorithms, digital architectures, and the analog and digital circuits required for their implementation. Critical to good design of these chips is the definition of energy and area performance metrics that can facilitate trade-offs such as the cost of providing flexibility or the amount of parallelism to exploit. These design decisions can result in differences of orders of magnitude in the metrics between what is possible in the technology and what is often achieved if the costs are not fully understood. A design infrastructure for wireless systems that supports metric-optimizing architectures and provides a fully automated chip design flow from a high-level system specification will be described.
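To make the kind of metric referred to here concrete (an editorial formulation, not one taken from the talk), energy and area efficiency can be written so that architectures are compared independently of clock rate:

    E_{\mathrm{op}} = \frac{P_{\mathrm{avg}}}{\text{throughput}} \;[\mathrm{J/op}],
    \qquad
    \eta_{A} = \frac{\text{throughput}}{\text{area}} \;[\mathrm{op/s/mm^2}]

where P_avg is average power and throughput is measured in operations per second. The orders-of-magnitude differences mentioned above show up directly in E_op when the same algorithm is mapped onto a flexible processor versus a dedicated, parallel datapath.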

Microwave III-V Semiconductors for Telecommunications and Prospects of the III-V Industry

Dr. Chan Shin Wu
President & CEO WIN Semiconductors

March 20, 2002

Microwave III-V semiconductor IC technology (primarily GaAs) has emerged as a powerful enabling technology for wireless and optical communications in the past 5 years. It has been dominating, or making substantial penetration into, the market for handset power amplifiers and switches, advanced wireless LAN RF front-ends and various other key RF components for broadband wireless, wireless infrastructure, satellite telecommunications, high-data-rate fiber-optic communications and automotive radar applications. The microwave III-V semiconductor IC industry has grown dramatically in the past 2-3 years. It is worth noting that the majority of the recently formed GaAs fabs are located in Taiwan. Their intent is to provide pure-play foundry services following the silicon foundry business model developed by TSMC and UMC. In this presentation, we will discuss the key III-V microwave transistors (HBT, pHEMT, MESFET, etc.) and their RFICs/MMICs, their electrical performance, major applications, market status, trends and opportunities. We will describe the current status of the global III-V semiconductor industry, the rapidly growing GaAs MMIC fab industry in Taiwan and its advantages for providing a one-stop, total solution for wireless and optical communication component customers.

Tomorrow's High-quality SoCs Require High-quality Embedded Memories Today

Ulf Schlichtmann
Senior Director, Infineon Technologies AG

March 20, 2002

Embedded memories increasingly dominate SoC designs, whether chip area, performance, power consumption, manufacturing yield or design time is considered. ITRS data indicate that the embedded memory content of ICs may increase from 20% in 1999 to 90% at the 50nm node by the end of the decade. Therefore, even more than today, the success of tomorrow's SoC designs will depend on the availability of high-quality embedded memories. Advanced process technologies pose new challenges for meeting these quality criteria. Some of the challenges are: providing flexible redundancy solutions for embedded SRAMs; designing competitive memories despite ever-increasing leakage currents; and reducing the SRAM soft-error rate (SER). These challenges are bringing about the need for significant innovations in the design of embedded memories, much more so than in recent process generations. In the presentation, the challenges will be outlined and solutions will be proposed. The focus of the discussion will be on SRAM/ROM, but other technologies such as eDRAM and 1T-SRAM will also be addressed.

Platform Leadership in the Ambient Intelligence Era

Bob Payne
US CTO and Senior Vice President/GM of System ASIC Technology, Philips Semiconductors

March 25, 2003

Design reuse has become essential to cope with ever-increasing design complexity. IP-level reuse alone has proven insufficient. Platform-based design allows the validation of a robust combination of IP blocks and provides a reference HW and SW baseline which can be supported with an integrated development environment. Several years ago we transitioned into the streaming-data era, with most systems serving as content generation appliances, content consumption appliances or content distribution equipment. Now we have entered the age of ambient intelligence, where the streaming data is served up through wireless links. What will platform leadership look like in this new era? How will the SoC infrastructure change as we move to 90nm technology with more than 30M gates per square centimeter of integration capacity? How are usage patterns changing, and what is the killer application that enhances the user's quality of life by enabling more advanced interaction with the ambient intelligence? What is it going to take to make a step-function improvement in system-level design productivity? What happens when power optimization becomes the dominant design consideration? What about SoC affordability? What will the SoC design of the future look like? These are just some of the thought-provoking issues that will be addressed in Bob Payne's keynote.

Quality SoC Design and Implementation for Real Manufacturability

Susumu Kohyama
Corporate Senior Vice President, Toshiba Corporation

March 25, 2003

Device miniaturization near the 100nm node and beyond, together with extreme multi-level interconnect, has started to create fundamental economic and engineering challenges. In particular, the past success model of “Layer Masters” has had difficulty filling the gaps between the separate layers to deliver an integrated result that meets performance and yield on a reasonable schedule. At the same time, the classic IDM model has proved inefficient, since the inevitable separation and standardization of the various aspects of design and technology have not been established adequately. These issues are even more significant for complex SoCs at the 90nm and 65nm nodes, where design and implementation commingle in various ways. A solution to these challenges is a new open IDM model in which open collaboration and strong differentiators are essential.

This presentation will discuss, from a “SoC-centric open IDM” perspective, the whole flow of design and implementation for real manufacturability, where true knowledge of integration and management skill function to enhance differentiators on top of open platforms.

Quality Challenges of the Nanometer Design Realm

Ted Vucurevich
Senior Vice President and Chief Technical Officer, Cadence Design Systems, Inc.

March 25, 2003

It is commonly agreed that nanometer design is electronic design technology’s next big challenge. With the economic stakes higher than ever, the vendors of electronic design solutions must put themselves into their customers’ shoes through comprehensive, high-quality programs. My understanding of the differences designers face at geometries below 100 nanometers has led to my discussion of some of the challenges the industry faces in the nanometer realm. These include the domination of wires in digital design, which requires the ability to design the best-quality wires through continuous convergence, a wire-centric methodology. In the nanometer world, the front-end and back-end disappear, leaving the prototype as the chip. This includes detailed wiring and a new full-chip iteration every day. Most nanometer ICs and SoCs will be digital/mixed-signal. This leads to custom design issues, such as integrating sensitive circuits with massive digital and mixed-signal design, productivity and the foundry interface. Nanometer SoC verification includes digital, analog and software, and a 70 percent silicon re-spin rate because of associated functional errors. At nanometer geometries, design-in becomes a major bottleneck, especially across a design chain, which can only be solved by silicon-package-board co-design.

Addressing the IC Designer’s Needs: Integrated Design Software for Faster, More Economical Chip Design

Rajeev Madhavan
Chairman & CEO, Magma Design Automation

March 26, 2003

Electronic design automation continues to attract a great deal of investment from the venture community, fostering the creation of startup companies focused on developing unique point-tool solutions. While many innovative new technologies come from this, the industry must consider the increasingly critical need of IC designers and manufacturers: integrated design flows that enable the design and production of chips with fewer resources and in less time, without compromising the quality of results. Increasingly evident is the advantage of integrated design and the economies it brings while delivering the same quality of results as point-tool-based approaches. The future of EDA depends on its ability to deliver solutions that integrate electronic design tools and processes, as the IC industry relies on EDA to provide the means for producing the next generation of semiconductor products.

Closing the Gap Between ASIC and Full Custom: A Path to Quality Design

Michael Reinhardt
President & CEO, RubiCad Corporation

March 26, 2003

Although process technology has shrunk down to nanometer features over the last decade, the gap between ASIC design and full-custom IC design has widened. This gap includes significant differences in performance, price, and profit between the two design styles. It is also revealed by huge differences in quality between the two styles in speed, power distribution and consumption, yield, and reliability, in some cases as much as an order of magnitude. To fully utilize the latest process technologies, a full-custom design approach with the productivity of an ASIC flow is necessary.

A VLSI System Perspective for Microprocessors Beyond 90nm

Shekhar Borkar
Fellow & Director of Circuit Research lab, Intel Corporation

March 26, 2003

Microprocessor performance increased by five orders of magnitude in the last three decades. This was made possible by continued technology scaling, improving transistor performance to increase frequency, increasing integration capacity to realize complex architectures, and reducing the energy consumed per logic operation to keep power dissipation within limits. The technology treadmill will continue to fulfill the demand for microprocessor performance; however, adverse effects now pose barriers—set by power delivery and dissipation, not by manufacturing or cost. Therefore, performance at any cost will not be an option; significant improvements in the efficiency of transistor utilization will be necessary. This talk will discuss potential solutions in all disciplines, such as micro-architecture, circuits, design technologies & methodologies, thermals, and power delivery, to overcome these barriers for microprocessors beyond 90nm.
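A first-order power model (the standard textbook decomposition, not a formula specific to this talk) makes the power barrier concrete:

    P \;\approx\; \underbrace{\alpha\, C_{\mathrm{sw}}\, V_{dd}^{2}\, f}_{\text{dynamic}} \;+\; \underbrace{V_{dd}\, I_{\mathrm{leak}}}_{\text{static}}

where \alpha is the switching activity, C_sw the switched capacitance, f the clock frequency, and I_leak the leakage current. Scaling keeps raising C_sw (more transistors) and f, while V_dd reduction has slowed and leakage grows as thresholds drop, so total power rises unless the efficiency of transistor utilization improves, which is exactly the argument made above.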

Simplify: Enable Quality, Enable Innovation

John Chilton
Sr. VP and General Manager, Synopsys, Inc.

March 23, 2004

As the old sales adage goes, “Nothing happens until somebody sells something.” For the semiconductor-based electronics industry, the never-ending challenge is to find and sell the next IC-based new “something” (or “somethings”) that consumers just can’t live without. It’s an immense and extremely expensive undertaking to find/create/deliver a killer app, requiring a single-minded, undistracted business focus and an immense amount of creative design innovation. Fortunately, there is a wealth of business-savvy, creative systems companies able to meet that challenge, as long as they are free to concentrate on what drives their core competency: designing and selling exciting new business systems and consumer devices. What they need from their chip manufacturers is an agreement on specs, models, pricing, and delivery. Simple. Fortunately, there are semiconductor firms up to the challenge, as long as they are free to exercise their core mission: designing and selling faster and slicker chips, often now with software and boards attached. They need to focus on taking their customer’s performance specifications and turning out a system on a chip that does exactly those things, on time and on budget. What they need from their EDA vendors are tightly integrated design tools that allow them to meet their goals of performance, price, and predictability. Simple.
Unfortunately, these industries don’t reflect this simplified, rosy picture… yet. The hard reality is, however, that they do have to get there, and soon, or live with dwindling prospects for the future. This presentation will discuss strategies for simplifying the semiconductor value chain, thereby enabling each segment to focus on doing well what it does best, for the sake of the future of the entire electronics industry.

Design for Manufacturing? Design for Yield!!!

Marc Levitt
Vice President and General Manager
Cadence Design Systems, Inc.

March 23, 2004

Today’s nanometer-scale designs are two orders of magnitude more complex than designs were in the early 1990s and are commonly manufactured with processes at or below the 130nm feature size. This has brought about a fundamental change in the way design teams must approach the release of their design data to their manufacturing partners. In the past, once a design was taped out and proven to be functional, ramping yield and enhancing the profitability of a design were primarily the responsibility of the manufacturing partner. This is no longer possible at 130nm and below. Once a manufacturing process has stabilized, direct action must be taken by each and every design team to "tune" their design for yield. Design-specific yield enhancement is the new frontier in EDA, and while it includes the traditional Design for Manufacturing (DFM) technologies, it also covers much more. Failure to consider yield-degrading effects such as IR drop, signal integrity, electromigration, and process variation will result in severe downstream problems in timing closure, functional errors during system bring-up, and the inability to achieve silicon yield and quality targets.
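As one concrete way of seeing how design choices enter yield (a standard first-order defect-limited yield model, not part of the Cadence flow described above), random-defect yield is often written as

    Y \;=\; \left(1 + \frac{A_{\mathrm{crit}}\, D_{0}}{\alpha}\right)^{-\alpha} \;\approx\; e^{-A_{\mathrm{crit}} D_{0}} \quad (\alpha \to \infty)

where D_0 is the process defect density, A_crit is the design's critical area (the portion of the layout in which a defect of a given size actually causes a fault), and \alpha is a clustering parameter. The fab owns D_0; spacing, wire widths, and via redundancy determine A_crit, which is why yield tuning has become a design-team responsibility.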

Nanotechnology-Nanoscale Molecular Memory

Shih-Yuan (SY) Wang
Senior Scientist Quantum Science Research, Hewlett Packard Laboratories

March 23, 2004

An overview of nanotechnology research at HP Labs' Quantum Science Research group will be given, with a focus on nanoscale molecular memories. Densities of 100 Gb/cm2 are within reach. QSR's aim is to break away from the current scaling-down approach and to look for and/or invent innovative approaches that have the potential for high-volume manufacturability, to break through the ITRS Red Brick Wall and push the limits of the technology. Nanofabrication challenges and possible applications will also be discussed.
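To put the 100 Gb/cm2 figure in perspective, a back-of-the-envelope calculation (editorial, not from HP) gives the implied cell size:

    \frac{1\ \mathrm{cm^2}}{10^{11}\ \mathrm{bits}} \;=\; 10^{-11}\ \mathrm{cm^2/bit} \;=\; 10^{3}\ \mathrm{nm^2/bit} \;\Rightarrow\; \text{a square cell roughly } \sqrt{1000} \approx 32\ \mathrm{nm}\ \text{on a side}

which is well below the feature sizes optical lithography was delivering at the time, and explains the emphasis on breaking away from conventional scaling.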

Digitally Named World: Challenges for New Social Infrastructures

Hiroto Yasuura
System Research Center
Kyushu University, Fukuoka, Japan

March 24, 2004

In the last three decades of the 20th century, many information and communication technologies were developed and introduced into the social infrastructures that support our daily lives. Since information technologies have progressed very rapidly, the basic structure of each social infrastructure, mostly designed in the 19th or early 20th century with little possibility of information technology, should be redesigned on the assumption that advanced information technologies exist. Based on high-performance SoCs (Systems-on-a-Chip) connected by wide-band networks, we can design the next generation of social systems, which are directly related to the quality of our society, including individual rights and national security. In this talk, two social infrastructure information technologies are introduced. The Personal Identifier (PID) system is an infrastructure for bidirectional mutual authentication, which will be used for electronic commerce and governmental services. The RF-ID tag system is also an important technology for implementing efficient management of products and economic activities. Using PID and RF-ID tags, we can automatically bridge the gap between the real world and the virtual one on computers. We call the society in which all persons and goods have their own digital names (identifiers) and are recognizable in both the real and virtual worlds the Digitally Named World. These systems require advanced technologies in SoC, networking, security and software. Here, technical challenges and social requirements for the new technologies are discussed. Some people are afraid of the infringement of their privacy in the digitally named world. Our discussion therefore also includes technology to protect privacy and individual rights as well as the efficiency and stability of our society.

Designing High Quality, Scaleable SoC’s with Heterogeneous Components

Pierre G. Paulin
Director, SoC Platform Automation
Central R&D, STMicroelectronics

March 24, 2004

Today's SoCs combine an increasingly wide range of heterogeneous processing elements, consisting of general-purpose RISCs, DSPs, application-specific processors, and fixed or configurable hardware. Five to ten processors on an SoC are now common. A bottom-up assembly of these heterogeneous components using an ad-hoc interconnect topology, different instruction sets and embedded S/W development tools leads to unmanageable complexity and low quality. This talk will present an approach to effectively integrate heterogeneous parallel components – H/W or S/W – into a homogeneous programming environment. This leads to higher-quality designs through encapsulation and abstraction. This approach, supported by ST's MultiFlex multi-processing SoC tools, allows for the combination of a range of heterogeneous processing elements, supported by high-level programming models. Two programming models are supported: a distributed system object component (DSOC) message-passing model, and a symmetrical multi-processing (SMP) model using shared memory. We present the results of mapping an Internet traffic management application running at 2.5Gb/s. We demonstrate the combined use of the MultiFlex multi-processor compilation tools, supported by high-speed hardware-assisted messaging, context switching and dynamic task allocation in the StepNP platform.
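To make the two programming models concrete (a generic sketch using only the Python standard library, not ST's MultiFlex API), the first half below passes messages over a channel in the DSOC style, while the second half has workers update shared state under a lock in the SMP style:

    import queue
    import threading

    # Message passing: producer and consumer share nothing but a channel.
    channel = queue.Queue()
    results = []

    def producer():
        for packet in range(5):
            channel.put(packet)          # send a message
        channel.put(None)                # end-of-stream marker

    def consumer():
        while True:
            packet = channel.get()       # receive a message
            if packet is None:
                break
            results.append(packet * 2)

    # Shared-memory SMP: workers update a common counter under a lock.
    counter = {"value": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(1000):
            with lock:
                counter["value"] += 1

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    threads += [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results, counter["value"])     # [0, 2, 4, 6, 8] 4000

The value of a homogeneous programming environment like the one described above is that the application writer chooses between these two styles at the model level, while the mapping onto processors, hardware messaging, and context switching is handled by the platform tools.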

Performance Limitations of Devices and Interconnects and Possible Alternatives for Nanoelectronics

Prof. Krishna Saraswat
Rickey/Nielsen Professor of Engineering
Stanford University

March 24, 2004

For over three decades, there has been a quadrupling of transistor density and a doubling of electrical performance every 2 to 3 years. Si transistor technology, in particular CMOS, has played a pivotal role in this. It is believed that continued scaling will take the industry down to the 35-nm technology node, at the limit of the ”long-term” range of the International Technology Roadmap for Semiconductors (ITRS). However, it is also well accepted that this long-term range of the 70-nm to 35-nm nodes remains solidly in the “no-known-solution” category. The difficulty in scaling the conventional MOSFET makes it prudent to search for alternative device structures. This will require new structural, material and fabrication technology solutions that are generally compatible with current and forecasted installed semiconductor manufacturing. In addition, new and revolutionary device concepts need to be discovered and evolved. These can be split into two categories: one is the continued use of silicon FET-type devices but with additional materials, e.g., Ge, and innovative structural aspects that deviate from the classical planar/bulk MOSFET, e.g., the double-gate MOSFET. The second category is a set of potentially entirely different information processing and transmission devices from the transistor as we know it, e.g., silicon-based quantum-effect devices, nanotube electronics and molecular and organic semiconductor electronics. Continued scaling of VLSI circuits can pose significant problems for interconnects, especially those responsible for long-distance communication on a high-performance chip. Our modeling predicts that the situation is worse than anticipated in the ITRS, which assumes that the resistivity of copper will not change appreciably with scaling in the future. We show that scaling-induced increases in electron surface scattering, the fractional cross-sectional area occupied by the high-resistivity barrier, and realistic interconnect operating temperatures will lead to a significant rise in the effective resistivity of Cu, and hence in interconnect resistance. As a result, both the power and the delay of these interconnects are likely to rise significantly in the future. In light of the various metal interconnect limitations, alternative solutions need to be pursued. We focus on two such solutions: optical interconnects and three-dimensional (3-D) ICs with multiple Si layers.
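A minimal sketch of the resistivity effect described above (standard first-order wire formulas with illustrative assumptions, not numbers from the talk): the resistance of a wire of length L, width W, and thickness H is

    R \;=\; \frac{\rho_{\mathrm{eff}}\, L}{W\, H},
    \qquad
    \rho_{\mathrm{eff}} \;\approx\; \rho_{\mathrm{Cu}}
    \cdot \underbrace{\frac{W H}{(W - 2t_{b})(H - t_{b})}}_{\text{barrier area loss}}
    \cdot \underbrace{\left(1 + k\,\frac{\lambda}{\min(W, H)}\right)}_{\text{surface scattering}}

where t_b is the essentially non-scaling barrier thickness, \lambda \approx 40 nm is the electron mean free path in Cu near room temperature, and k lumps the scattering parameters. Because neither t_b nor \lambda shrinks with the wire, \rho_eff rises as W and H approach these lengths, so the resistance and RC delay of a fixed-length global wire grow even faster than the geometric 1/(WH) term alone -- hence the interest in optical interconnects and 3-D integration mentioned above.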

Enabling True Design for Manufacturability

Dr. John Kibarian
President & CEO, PDF Solutions

March 22, 2005

Without any doubt, Design for Manufacturability has been the hottest buzzword for the last couple of years. This is quite justified by the enormous challenges at nanometer technology nodes and the ever-increasing design-process interactions. As a result, virtually all EDA companies have focused on providing "DFM solutions". Since the concept of DFM covers an extremely broad spectrum of tasks, from the system level all the way to the manufacturing process, many of these DFM solutions are just relabeled design verification tasks.

Recent progress and remaining challenges in pattern transfer technologies for advanced chip designs

Ashok K. Sinha
Sr. VP & GM
Applied Materials

March 22, 2005

Even as Moore's law continues to drive "tiny technologies" through relentless scaling, the main technology driver for semiconductor chips has evolved from DRAMs to microprocessors to FPGAs. The underlying metrics have evolved from bits per chip and cost per bit for computers to functions per chip and cost per function for consumer products. This talk will review the remarkable progress that has been made in enabling pattern transfer technologies, including mask design, lithography enhancements and precision etching on the new 300mm wafers for an increasingly wide variety of new materials. However, there is a cost associated with all this, and the cost-benefit tradeoffs will almost certainly drive new inflections in the entire food chain, which I will try to identify.

Shifting Perspectives on DFM

Janusz Rajski
Chief Scientist, Design Verification and Test Division, Mentor Graphics

March 22, 2005

Nanometer technology has ushered in new and significant yield and manufacturing considerations and constraints. The lack of a major improvement in yield between the 350nm and 180nm nodes suggests that the yield-loss mechanisms are not only increasing in number, magnitude, and complexity at each successive generation, but are increasing at a rate fast enough to largely offset ‘cosmetic’ improvements in tools and methodologies. If EDA tools are to assist the semiconductor industry at the 90nm and 65nm nodes, there must be profound changes to existing tools, and the introduction of new technologies that allow designers to consider and optimize for manufacturing at each stage of the design, verification, tapeout and test process.

Collaboration, Quality and Value in the Design Chain

David Courtright
Vice President and Chief Technology Officer, Cadence Design Systems

March 23, 2005

Meeting the challenges of product development cycles and managing the complexity of increased semiconductor functionality has created a demand for a virtual re-aggregation of the silicon design chain. These challenges continue to multiply with the concurrent move to smaller geometries. This presentation will address the challenges by calling for an industry commitment to collaborate across the design chain. It will show how collaboration is necessary for companies to meet time-to-market demands and manage the increasing complexity in chip design, while preserving the intellectual property of each member of the design chain. The speaker will also discuss the need for developing higher-quality EDA tools as a means to the end of developing the next generation of electronics products.

IP Quality: A New Model that Faces Methodology and Management Challenges

Kurt A. Wolf
Director, Library Management Division
TSMC

March 23, 2005

The promised value and productivity from re-aggregating the IC design chain isn’t always delivered, in part because of isolated IP product development and quality practices, and in part because of an inability, from a design management perspective, to see “big picture” issues in the IP marketplace. However, these challenges are not insurmountable. The concern over IP quality has rightfully grown over the past years, as the future growth of the IC industry depends on two factors: a) achieving higher levels of design productivity and b) shifting internal resources toward creating and delivering value-added user benefits that stimulate increased end-product consumption. While the second factor is not discussed in this presentation, there’s a presumption that higher IP quality and productivity enable a shift of resources to more applications-oriented design.

A prerequisite to achieving the productivity gains is a substantial improvement in the level of IP quality, coupled with increased forethought during product development. This presentation describes a methodology to evaluate IP for SoC integration. The focus is on development and quality verification practices that also account for the issues of IP integration. Additionally, the long-term growth of the semiconductor industry may be limited by the lack of value placed on collaboration, support, quality verification, and due diligence between SoC design teams and their IP partners. This presentation also describes improvements in the hard-IP business relationship between these groups that enable dramatic growth through slight changes in communication models. By developing reasonable expectations and focusing on open discussion between the groups, perspective begins to shift. The true value of the design team and IP partnership is a function of successful collaborations – not of the user squeezing the last drop out of NRE, royalty, per-use, or other financing models. And the value-add of the partnership is realized when that collaboration includes additional real, shared incentives that more fully value the IP industry, rather than focusing purely on lowest cost.

SoC Engineering Trends as Impacted by New Applications and System Level Requirements

Bernard Candaele
Department Head, SoC, IC & EDA
Thales, Paris Colombes, France

March 23, 2005

The increasing SoC integration scale, as well as system and customer requirements, are important factors driving a complete revisit of the development models for electronic products. New customer models ask for software-driven electronics. Software engineering is moving to a component-based and MDA development approach to be applied to embedded applications. Hardware engineering is moving to SSDI System Level Development and Reuse methodologies. The two approaches now have to be further developed and combined for next-generation SoCs to obtain high-quality, adaptable designs at a reasonable development cost. New application-level quality standards also have to be part of the complete development flow. These new methodologies are demonstrated through several examples: a system engineering methodology for software radios (UML, PIM Platform Independent Model and PSM Platform Specific Model) and its current extension to the hardware parts (SCA, potential OCP extensions); system engineering in line with the Common Criteria development and qualification process for new security products (PP Protection Profile and ST Security Target, …); and a development and validation methodology in line with the DO-254 standard for new safety products in avionics (formal verifications, …). Impacts on SoC architectures and design techniques will be discussed during the talk.

Modular service-oriented platform architecture - a key enabler to SoC design quality

Dr. Risto Suoranta
Principal scientist and Research Fellow
Nokia

March 28, 2006

In the digital convergence era, new products are created at an ever faster pace by combining and integrating existing and new technologies in innovative ways. Designers of these products are already facing immense productivity and quality demands. We need new architectural thinking to address the demands of the future. We need to achieve a very high level of reuse and at the same time manage the system complexity. Platformization and modularity are the key to doing this. The key to true system modularity is an architectural model where the functional and physical architectures are aligned. One solution is a uniform, interconnect-based device architecture consisting of sub-systems with services and applications. Reuse takes place at multiple levels, from requirements, through service specifications and interconnect nodes, to sub-systems. The design flow provides verified, formal sub-system requirements to be used in internal and/or external purchasing. Finite energy, device heating and peak-power limitations call for a well-defined EPM architecture. This solution eases system integration, scales from device to SoC implementations, allows efficient horizontalization, matches performance with requirements, and speeds up the innovation-to-product cycle, while still taking into account the fact that we operate in an energy-, power- and heat-limited world.

Deep sub-100 nm Design Challenges

Dr. T. Furuyama
GM
SoC Research Center, Toshiba

March 28, 2006

Moore's law and the scaling theory have been the guiding principles for the semiconductor industry to accomplish its rapid progress and persistent growth. Semiconductor chips continuously benefited from device scaling, simultaneously achieving higher density, higher performance and lower power consumption, until they reached the 100 nm technology node. However, once silicon technology exceeded this point, i.e. at sub-100 nm nodes, some important device parameters, such as threshold voltages and leakage currents, started to diverge from the scaling theory. As a result, we now enjoy only higher density from device scaling, but no longer higher performance or lower power, especially at deep sub-100 nm nodes. In addition, the increase in device density has created various new problems. A single LSI chip can accommodate more gates than engineers can properly design and integrate in a reasonable time period. This gap causes a serious design-efficiency problem. Another problem is the large power consumption of the chip and a power density that reaches the range of a nuclear reactor. Not only efficiently cooling the chip but also properly delivering the huge current to the chip is becoming extremely difficult. Even when the LSI design and initial development are successful, highly integrated chips often face yield problems in the fabrication line at the beginning of volume production. Yield learning and a quick yield ramp are crucial, especially when the product lifetime is short, as is usually the case for digital-media-related SoCs (Systems on Chip). This presentation will introduce and discuss several approaches to counteract these problems, such as high-level-language and platform-based design flows, various low-power technologies from devices and circuits to architectures, and DFM (Design for Manufacturing) related technologies.

Successful IP Business Models

Dr. Di Ma
Vice President, Field Technical Support
TSMC

March 28, 2006

The onward march of Moore's Law obviously brings with it a host of challenges, not the least of which is how to design to the space that is now available on an average leading edge semiconductor device. With millions of available transistors and a variety of technology options to choose from, it's small wonder that IP providers have managed to continue to build a viable industry. But that industry is changing, with numerous business models and sources of IP emerging. How will the interaction between IP providers and semiconductor manufacturers change? How will the industry benefit, in terms of quality and availability of IP?

Adding Manufacturability to the Quality of Results

Dr. Raul Camposano
Sr. VP, GM and CTO
Synopsys

March 29, 2006

Quality has many definitions: conformance to specifications; customer satisfaction; delivery divided by expectations; etc. EDA's sense of quality is determined by what its customers want. Do we have a virtuous cycle in the quality relationship between EDA and its customers? EDA is also the supplier of the quality tools that help electronic systems and semiconductor companies produce quality products on time. The speaker will examine both aspects of the quality issue from an EDA perspective.

Future Memory Technology Trends and Challenges

Dr. Changhyun Kim
Fellow, Samsung

March 29, 2006

As the memory market enters the Gigabit and GHz range, with consumers demanding ever higher performance and diversified applications, new types of devices are being developed in order to keep up with the scaling requirements for cost reduction. Among these devices are well-known ones such as recessed-channel transistors, but also FinFET and vertically stacked transistors for DRAM and charge-trap devices for Flash memory. The latter are not yet at a manufacturable stage. Even more exotic memories implement new materials and stacked architectures at the cell, chip and package level. On the performance side, increasing speeds require higher time resolutions. The future difficulties of process control far exceed those of conventional planar devices. Therefore, device characteristics are expected to show ever-increasing PVT variations. As these variations become more and more inevitable, especially as dimensions approach the atomic scale, negative effects on circuit and device performance have to be prevented by new, appropriate methods of 3D device modeling and circuit design that take these parameter variations into account. In this talk such challenges will be discussed, as well as some approaches to overcome them. An outlook will also be given on memory technology trends in the next decades.

Device and Technology Challenges for Nanoscale CMOS

Dr. H.-S. Philip Wong
Center for Integrated Systems and Department of Electrical Engineering
Stanford University

March 29, 2006

With the introduction of 90 nm node technology, silicon CMOS is already at the nanoscale. There is no doubt that the semiconductor industry desires to stay on the historical rate of cost/performance/density improvement as exemplified by the International Technology Roadmap for Semiconductors (ITRS). The challenges for continued device scaling are daunting. At the highest level, they are: (1) delivering cost/performance improvement while at the same time containing power consumption/dissipation, (2) control of device variations, and (3) device/circuit/system co-design and integration. New devices and new materials offer new opportunities for solving the challenges of continued improvement. In this talk, we give an overview of the device options being considered for CMOS logic technologies from 45 nm to 22 nm and beyond. Technology options include new device structures (multi-gate FETs) and transport-enhanced channel materials (strained Si, Ge). Beyond the 22 nm node, research is underway to explore even more adventurous options such as III-V compound semiconductors as channel materials and metal Schottky source/drain junctions. Beyond that time horizon, there is the question of whether new materials and fabrication methods such as carbon nanotubes, semiconductor nanowires and self-assembly techniques will make an impact on nanoscale CMOS technologies. We survey the state of the art of these emerging devices and technologies and discuss the research opportunities going forward. We conclude with a discussion of the interaction between device design and the circuit/system architecture and how this interaction will change the landscape of technology development in the future.

A perfect world, on a chip

Sanjiv Taneja
VP and General Manager
Cadence Design Systems

March 27, 2007

The business impact of product quality is reflected in key metrics such as market share and revenue growth, profit margin, brand equity and customer satisfaction. Achieving and maintaining quality in complex products is an increasing challenge as time to market shrinks and performance increases in nanometer technologies. EDA has a significant role to play in enabling high product quality, including quality-by-design with full manufacturing awareness in the design flow from micro-architecture to mask layout; the validation of product quality as products are tested on the manufacturing floor; and the closed-loop corrective action to the DFM infrastructure driven by yield diagnostics.

Tipping point for new design technologies: DFM, low power and ESL

Jeong-Taek Kong
Sr. Vice President, IP
Samsung Electronics

March 27, 2007

Semiconductor scaling has been driven by advances in both transistor and process technology, and may continue for the next decade down to 5~7 nm gate lengths. In this context, it is necessary to predict how design technology will continue to exploit the added capability afforded by semiconductor scaling in coming years. Although predicting the future is extremely difficult, the best approach is to review the past and assess the key factors behind its successes. This talk will first review how three main design technologies - DFM (Design for Manufacturing), low power design, and ESL (Electronic System Level) design - achieved success in industry. In particular, such industry success will be considered from the tipping-point perspective: how a technology comes to be widely adopted by designers. Success factors include the inherent quality of each design technology as well as the effort required to adapt and disseminate the technology for product differentiation. As evidence of success, the talk will cite several innovative examples in which Samsung designers applied the new design technologies to differentiate their products. Finally, this talk will conclude by anticipating another tipping point of emerging design technologies for the next decade, while considering semiconductor business aspects as well as the future direction of SoC technology.

Programmable engines for embedded systems: the new challenges

Marc Duranton
Senior Principal
Embedded Modem & Media Subsystems
NXP Semiconductors

March 28, 2007

Programmable embedded systems are ubiquitous nowadays, and their number will increase even further with the emergence of Ambient Intelligence. One of the first challenges for embedded systems is mastering the increasing complexity of future Systems on Chip (SoC). The complexity will increase inexorably because applications become more and more demanding and algorithmic complexity grows exponentially over time. For example, coming TV image-improvement applications will exceed a tera-operation per second for High Definition images, not counting the requirements for new features like content analysis and indexing, 2D-to-3D conversion, mixed synthetic and natural images, and other requirements for Ambient Intelligence. The increasing complexity is even more present in mobile devices: mobile phones combine the functions of still and video camera, audio/video player, TV receiver (also in HD), videophone, payment system and internet interfacing, on top of their basic function of audio communication. This is achieved by assembling heterogeneous IP blocks. It will bring the challenges of designing multi-core systems that use all possible levels of parallelism to reach the required performance density, of extracting all the parallelism from the application(s), and of mapping it efficiently to the hardware. Major breakthroughs will be required in compiler technology and in mapping tools, both in terms of correctness and in terms of performance, to achieve an efficient use of resources (performance- and power-wise), but also for the debug, validation and test of the system. Systems cannot be completely verified with simulations, and we will need new validation approaches; otherwise the unpredictability and unreliability due to the combination of use cases will make the systems practically unusable. Assembling systems from "unpredictable" elements will increase the global system unpredictability. Also, systems are not really designed with "separation of concerns" in mind, and due to shared resources, a slight change can have a drastic impact. For most embedded systems, however, a main challenge is sustained performance, not peak performance: guaranteed performance and predictable timing behavior are important, together with Quality of Service, safety, reliability and dependability. The notion of time is key in embedded systems, and most current methods and tools, inherited from mainstream computer science, do not really cope with this extra requirement. The new technology nodes (65nm, soon 45nm) will also bring their own challenges: global interconnect delay that does not scale with logic, increasing variability of components (leading to design problems - such as timing closure - and yield problems), and leakage power will be major problems for the design of complex systems. To cope with all these challenges, methodologies that bridge the gap between design tools and processes, integrated circuit technologies, processes and manufacturing are required to achieve functional and quality designs. Guided by the principles of predictability and compositionality, we will increasingly need to design reliable systems from uncertain components, use higher abstraction-level specifications, formal methods that allow "correct by construction" designs, and virtualization of (shared) resources allowing a "separation of concerns".

Soft-error phenomenon impacts on Design for Reliability technologies

Marc Derbey
CEO, iRoC Technologies

March 28, 2007

We will mainly address here the "alter ego" of quality, which is reliability, a growing concern for designers using the latest technologies. After the DFM nodes at 90nm and 65nm, we are entering the DFR era, or Design for Reliability, spanning 65nm to 45nm and beyond. Because of the random character of reliability - failures can happen anytime, anywhere - executives should mitigate reliability problems in terms of risk, whose costs include recalls, warranty expenses, and loss of goodwill. Taking the soft-error phenomenon as an example, we show how the industry first responded to this new technology-scaling problem with silicon test to measure and understand the issue, but should quickly move to resolving reliability issues early in the design. In this field, designers can benefit greatly from new EDA analysis tools and specific IP to overcome this new hurdle in a timely and economical manner.

Forging Tighter Connections between Design and Manufacturing in the Nanometer Age

Joseph Sawicki
Vice president and general manager
Design to Silicon Division, Mentor Graphics

March 28, 2007

As we dive deeper into nanometer technologies, we must rethink the way we design. Tools, techniques, and methods that once worked without fail cannot hold up at the 65 and 45 nanometer depths, making it more challenging than ever to achieve yield. In nanometer technology, DRC is not enough. We must redefine the sign-off process itself to include a spectrum of new methods that assess design quality. Sign-off must include not only fundamental, rule-based physical verification and parasitic extraction but also a set of automated technologies that help improve yield by enhancing the design itself. DFM solutions must deliver these automated technologies to the designer in a practical and easy-to-use way. This includes new ways to visualize and prioritize the data produced by the analytical tools. It also requires that existing tools expand their architectures to provide yield characterization and enhancement capabilities. Finally, the most successful DFM methodologies in the nanometer age will apply these new capabilities throughout the design flow - not just at the point of sign-off.

Shrinking time-to-market through global value chain integration

Drew Gude
Director, High Tech & Electronics Industry Solutions
Microsoft Corporation

March 18, 2008

The product development challenges for high-tech companies are even greater than in most industries, thanks in large part to their dependence on an increasingly distributed and complex global value chain and to extreme pressure to deliver innovation to market more quickly than their fierce competition. That chain of frequently independent companies collaborating on these shrinking project timelines stretches from product conception to chip design, product development, production, assembly, testing, packaging, and delivery. Central to addressing these challenges are solutions and interoperable IT enterprise architectures that can streamline this innovation pipeline. In this presentation, Drew Gude of Microsoft discusses the opportunities to shrink product time-to-market by collaborating and integrating more quickly, efficiently, and securely with product development value chain partners.

Bounding the Endless Verification Loop

Robert Hum
Vice President & General Manager, Design Verification and Test Division
Mentor Graphics Corporation

March 18, 2008

Although more and more engineering resources are being focused on verification, most of the effort is expended on re-simulating what has already been simulated. And once the effort is through, only 20% of the state space has been verified, at best. Verification today is a frustrating, open-loop process that often doesn't end even after the integrated circuit ships. In response, the whole verification methodology infrastructure is undergoing major changes - from the adoption of assertion-based and coverage-driven verification to new approaches in testbench generation/optimization, integrated hardware acceleration and more.
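
As a toy illustration of the coverage-driven idea - stop stimulating once functional coverage closes rather than endlessly re-simulating - the following hedged Python sketch uses made-up coverage bins and random stimulus; real flows rely on constrained-random testbenches and coverage engines, not a script like this.

    import random

    # Hypothetical functional-coverage bins: (opcode, operand width).
    OPCODES = ["ADD", "SUB", "MUL", "LOAD", "STORE"]
    WIDTHS = [8, 16, 32, 64]
    bins = {(op, w) for op in OPCODES for w in WIDTHS}

    random.seed(0)
    hit, stimuli = set(), 0
    while hit != bins and stimuli < 10_000:
        stimulus = (random.choice(OPCODES), random.choice(WIDTHS))
        stimuli += 1
        hit.add(stimulus)   # in a real testbench, a monitor samples the DUT

    print(f"{len(hit)}/{len(bins)} coverage bins hit after {stimuli} stimuli")
    # The loop bounds the verification effort by coverage closure instead of
    # by an open-ended count of re-simulated tests.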

Consumerization of Electronics and Nanometer Technologies: Implications for Manufacturing Test

Sanjiv Taneja
Vice President, Encounter Test business unit
Cadence Design Systems

March 19, 2008

Test has long been recognized as the bridge between Design and Manufacturing. However, innovation and deep integration in design and test tools have not kept pace with the consumerization of electronics and the rapidly evolving nanometer IC design and manufacturing. As a result, the full potential of Test has not been harnessed by the mainstream semiconductor community. The consumerization of electronics places significant new demands on low power, correctness and time-to-volume production. The rapid advances in nanometer technologies pose an additional set of challenges due to advanced physics effects and higher scales of transistor integration. The EDA industry needs to establish a new paradigm and a "deep integration" to meet these challenges. During the design phase, a power-aware DFT architecture must integrate tightly with the low-power design and implementation flow. Later, during the manufacturing phase, the benefits of DFT must be seamlessly harnessed for rapid scan-diagnostics-based yield learning, using not only logic information from the design database but also layout, timing and power information.

Statistical Techniques to Achieve Robustness and Quality

Chandu Visweswariah
Research Staff Member
IBM Thomas J. Watson Research Center

March 19, 2008

Variability due to manufacturing, environmental and aging uncertainties constitutes one of the major challenges in continuing CMOS scaling.

Worst-case design is simply not feasible any more. This presentation will describe how statistical timing techniques can be used to reduce pessimism, achieve full-chip and full-process coverage, and enable robust design practices. A practical ASIC methodology based on statistical timing will be described. Robust optimization techniques will be discussed. Variability makes post-manufacturing testing a daunting task. Process coverage is a new metric that must be considered. Statistical techniques to improve quality in the context of at-speed test will be presented. Key research initiatives required to achieve elements of a statistical design flow will be described.
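
To illustrate why statistical timing is less pessimistic than worst-case corners (a minimal sketch with assumed numbers, not the ASIC methodology described in the talk), consider a 10-stage path whose independent per-stage variations add in root-sum-square fashion rather than all sitting at +3 sigma simultaneously:

    import math, random

    # Assumed path: 10 stages, 100 ps nominal each, 8 ps independent random
    # variation per stage (numbers are illustrative only).
    stages, nominal, sigma = 10, 100.0, 8.0

    # Worst-case corner analysis puts every stage at +3 sigma at once.
    worst_case = stages * (nominal + 3 * sigma)

    # Statistically, independent variations add in root-sum-square fashion.
    statistical_3sigma = stages * nominal + 3 * math.sqrt(stages) * sigma

    # Monte Carlo check of the statistical bound.
    random.seed(1)
    N = 100_000
    samples = sorted(sum(random.gauss(nominal, sigma) for _ in range(stages))
                     for _ in range(N))
    mc_3sigma = samples[int(0.9987 * N)]

    print(f"worst-case corner  : {worst_case:.0f} ps")
    print(f"statistical 3-sigma: {statistical_3sigma:.0f} ps")
    print(f"Monte Carlo check  : {mc_3sigma:.0f} ps")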

The Greening of The SoC - How Electrical Engineers Will Save The World

Rich Goldman
Vice-President, Strategic Alliances for Synopsys and CEO of Synopsys Armenia, USA

March 19, 2008

Global Warming is hot! Climate change is occurring all around us, and the scientific evidence pointing to man's hand in the phenomenon is increasingly overwhelming. We are already seeing huge impacts of Climate Change, much faster than anybody predicted only a few short years ago. What can we do about it? How can we slow and even reverse our impact on Climate Change? The key man-made contributing factor is carbon emissions, primarily from coal-fired power plants. We need to reduce the number of plants that we are building, and ultimately the number of power plants that we require. The key to this is a reduction in power consumption. There are many everyday actions we can take individually to help.
Al Gore states that Global Warming is an engineering problem that will be solved by engineers, who should address the issue as an opportunity rather than an additional cost. We will explore how engineers will impact Climate Change. Low-power IC design techniques will play a role here, just as powerful new techniques are coming into vogue.

Beyond CMOS: The Age of Partnerships

Chi-Foon Chan
President and COO, Synopsys Corporation

March 17, 2009

In less than 10 years our industry has matured considerably.  We no longer chase performance through scaling but design sophisticated multi-disciplined systems.  Design challenges such as power and signal integrity that were only secondary concerns are now primary. Critical dimensions are measured in multiples of atoms, and the design process is costly and resource hungry across all stages.  Today we are pushing the limits of physics at 22nm and below. 

Beyond CMOS we see increasingly complex designs that are merging analog with digital and wireless, including MEMS and other organic-based materials encapsulated in SIPs.  How will we accomplish this level of design complexity while maintaining high quality, reliability and low cost development in ever-narrowing market windows?  The designs of tomorrow require broad-based partnerships and alliances that surpass today’s practices and incorporate all players working closely from product inception to completion.

Predictability Is Key to Quality Design

Rajeev Madhavan
Chairman and CEO, Magma Design Automation

March 17, 2009

In semiconductor design, predictability provides the ultimate metric of the design process. The highest quality can never be achieved without a truly predictable design flow. What does such a flow require? Various approaches, ranging from wireload models to topological analysis, have been used to try to estimate during logic synthesis what will happen during physical design - but they have always come up short. Systems on Chip (SoCs) always contain large macros, which make estimates based on wireload models or topological analysis useless. A truly predictable flow should be able to take clock and power into account throughout placement, optimization and routing. Finally, when assembling the SoC, the flow needs to be able to handle mixed-signal blocks in a predictable fashion. Engineering is an exact science. Estimates are for statisticians, yet that is all that today's "state-of-the-art" flows deliver. While heuristics are useful, a solution built using only heuristics will not deliver high-quality IC design. The Electronic Design Automation industry should develop exact methods and algorithms that enable predictable flows. This is especially important in turbulent economic times, when time, money and manpower are in extremely short supply.

The P*3 of ESL: Productivity, Performance, Power

Simon Bloch
Vice President and General Manager of the Design and Synthesis Division, Mentor Graphics Corporation

March 17, 2009

ESL is steadily making inroads into production design flows to meet the surging challenges of SOC design. Overall, designer productivity is threatened by ever increasing challenges in design and verification due to the demands of deep submicron design. At the same time, the transition of more functionality to software and multi-core processors is threatening IC performance, while architectural decisions regarding power are being made too late in the design process. In all these areas, ESL can and is making a major impact. In this keynote, Simon Bloch will address the new and maturing ESL tools and methodologies, combined with the emerging TLM standards, that are providing a powerful approach for improving design quality, providing faster verification and design times, validating hardware dependent software and optimizing for low power. In other words, ESL offers a powerful and tangible ROI where IC designers need it the most: in productivity, performance and achieving lower power goals.

High Performance Graphics in a Consumer SoC: How to fit it into your system without squeezing everything else out

Peter McGuinness
Imagination Technologies  

March 18, 2009

High performance graphics has gone mainstream in a range of embedded devices. Driven in large part by the fact that, for a consumer, the user interface is the device and the primary user interface is visual, in less than a year the discussion has moved on from whether to include hardware acceleration to how much acceleration is needed to compete; whereas the level of graphics was previously defined by the limitations of the system, now the system is being redefined by the need to provide compelling graphical interfaces. Given the task of coming up with a graphics-enabled device, the first problem the architect must confront is simply fitting the technology into the system: the driving forces in embedded design are still cost, power efficiency and simplicity, so how can a technology which has developed in an environment largely insulated from these forces be modified to meet these unchanging criteria? This talk will look at some of the ways in which performance-efficient techniques can also deliver power and bandwidth efficiencies sufficient to minimize some of those problems. We will look at what sort of power-efficiency improvements can be delivered by hardware and what can be expected in terms of bus and memory interface loadings from some typical graphical operations. Next, we will turn to the effect on firmware of adding a new accelerator to the system. With a simple CPU-driven interface, the software task is manageable without the need for an extensive infrastructure, but with the user expectation of fancy animated graphics that mix in video and other multimedia, the need for a uniform framework of interoperable firmware becomes critical. The second theme of the talk will be a survey of the standard APIs which make this possible, focusing on the open standard APIs managed and promoted by Khronos. Last but equally important is the ecosystem of tools and applications available to OEMs which will allow them to differentiate and add value to the graphical platform you have given them. This, for the consumer, is where the rubber hits the road and the extra cost of graphics is repaid by the improved usability and functionality of the end device. We will look at the kind of tools you should expect from a graphics vendor, and we will survey the content creation and management applications out there which will help to deliver that extra value.

45nm and Beyond: Why the Process Technology Drove the Paradigm Shift in Design

Mike Smayling
Senior Vice President, Tela Innovations

March 18, 2009

The semiconductor industry has been cost-driven from the start, as evidenced by its adherence to Moore's Law for the past 40 years. Three waves of effort, with some overlap, have brought us to logic and memory production at sub-50nm feature sizes. The first wave was scaling, enabled by the continual availability of better and better process equipment at slowly escalating costs. Especially in lithography, new exposure tools with higher resolution, and new etch tools (remember the "writing in stone" part?) with better selectivity and uniformity, enabled the relentless reduction in feature sizes by over 100x. The second wave was materials, which permitted electrical performance to keep pace with scaling. Lower-resistivity conductors such as copper and various silicides, and lower-permittivity dielectrics such as CDO and air, have allowed device interconnects to scale. Higher-permittivity dielectrics are getting MOS transistor scaling back on track. But now we are seeing the third wave. This is the paradigm shift in design caused by process technology reaching economic and physical limits. Deep subwavelength optical lithography won't be economically replaced by EUV, in spite of 15 years of forecasts to the contrary. On-wafer structures are less and less like what is drawn in a CAD tool, making predictive simulation less accurate. Device dimensions are reaching atomic limits, and new device models are needed to reflect the reality of variations like random dopant fluctuation. IC design has been successful in part because different levels of abstraction could be isolated and dealt with individually, with only minimal interactions with factors more than one level away. Without this kind of isolation, we will end up with system design engineers worrying about the numerical aperture of scanner lenses. In order to achieve yieldable designs while preserving the layered abstraction of complex system-level design, new components need to be created at multiple levels. This presentation will highlight some of the process technologies that are being implemented at 45nm and beyond. This will include the why and what of device, cell, macro, and routing architecture changes and their impact on the design methodologies in use today to create and validate a design.

Weathering the Storm: Fortifying Memory Storage through the Global Recession

Jim Elliott
Vice President of Memory Marketing, Samsung Semiconductor  

March 18, 2009

Given challenging market conditions, Samsung is looking at 2009 as a pivotal year in which to promote memory storage growth sectors that are less prone to market volatility. This keynote by Samsung Semiconductor Vice President Jim Elliott will highlight what is working for memory storage providers today and the opportunities that are beyond the horizon. Mr. Elliott will examine up-and-coming memory storage solutions including the rapidly expanding solid state drive (SSD) market and the increasing use of embedded NAND flash chips. In particular, his presentation will highlight how market conditions, higher densities and improved performance are pushing SSDs to the forefront of new storage solutions. The presentation will also discuss why the market is ripe for consolidation, and underscore the increased importance of vertical integration and strategic alliances.

High Rel by Design - Creating Enterprise Class Memories

Ramanan Thiagarajah
Sr. Director of Product and Test Engineering, Inphi Corp   

March 23, 2010

Enterprise-class memory requirements are quickly outpacing the cost/density value curve. While DRAM $/bit is cheap, enabling high-density solutions presents a technical and manufacturing challenge. With shorter product development and technology cycles for enterprise memory solutions, design quality and system integration are becoming increasingly important. The talk will take a holistic look at the role of DFM as it applies to product definition through end-of-life, and at the intricate balance between product robustness and time-to-market.

The New Challenges of Advanced SoC Implementation

Shankar Krishnamoorthy
Chief Scientist, Place & Route Division, Mentor Graphics   

March 23, 2010

With the advent of advanced process nodes, IC design teams have an increasing ability to pack more functionality and performance into state-of-the-art SOCs. However, design challenges are growing as we push the limits of complexity, size, power reduction, and manufacturing scaling. These challenges, if left unaddressed, can have an adverse impact on schedule and designer productivity. Designers need a new generation of physical design tools to effectively address issues such as multi-mode multi-corner design closure, optimization for low power, compensating for manufacturing variability and handling the sheer magnitude of billion-transistor designs. They also need tools that take advantage of the latest multi-core processors for rapid turnaround, and that enable seamless chip assembly at the full-chip level. Mr. Krishnamoorthy's talk will describe the new requirements for physical design tools and how innovative approaches can make designers successful in the era of "manufacturing aware" design. Examples from a variety of design applications, including HDTV, graphics and mobile processors, illustrate this highly informative presentation.

Beyond Endless Verification: Delivering High Quality at Low Expense

Mark Gogolewski
CTO & CFO, Denali Software  

March 23, 2010

Everyone is familiar with the skyrocketing costs of verification. Denali Software faced the same challenge when tackling the verification of its configurable controller for PCI Express, one of the most complex and widespread interface protocols. As a commercial provider of IP, quality could not be sacrificed. At the same time, the business model could support neither a huge design and verification team nor an open-ended schedule. In this keynote, Denali CTO Mark Gogolewski will focus on how a small group of talented, highly motivated engineers was able to consistently deliver one of the industry's most complex IP cores, reliably and on time.

Test of the Future - Some Thoughts for the Next Decade

Antun Domic
Senior Vice President and General Manager, Synopsys 

March 24, 2010

Over the last 40 years, test has moved from being a fab tool to being a design tool, and has become an integral part of the design flow. This move has allowed better (QOR), cheaper (COR), and faster (TTR) test. As the vanguards of the semiconductor industry approach the 32-nanometer node and start planning the jump to the 22-nanometer node, a number of fundamental challenges are emerging which force a thorough rethinking of the role of test. Like drugs, which often have contraindications and side effects, nanometer design and manufacturing are not immune to drawbacks. This requires that test assume a station equal to nanometer design and manufacturing, be accounted for by them, and interoperate thoroughly with them. Both implementation and yield-management tools may feed test with the design- and manufacturing-related information it needs to keep problems manageable, while guaranteeing the desired quality and cost of results. At the same time, test can feed implementation and manufacturing with a great deal of information, which can help identify, locate, fix and/or prevent yield issues. In this keynote, Dr. Domic will describe how design, manufacturing, and test can join forces and "collaborate" to battle the nanometer challenges.

Design for eBeam: Getting the Best Wafers Without the Exploding Mask Cost

Aki Fujimura
CEO, D2S & eBeam Initiative 

March 24, 2010

DFM (design for manufacturing) and RET (reticle enhancement technologies) have garnered much attention, highlighting the need for designers to consider the effects of the physics of light in the semiconductor manufacturing process. Largely ignored, but equally important are the effects of the physics of electron beams used both in ebeam direct write lithography and in ebeam mask writers. The talk discusses how important it is for designers to understand Design for eBeam.

Cost-Aware System LSI Design

Steve Glaser
Corporate VP, Cadence Design Systems 

March 24, 2010

Based on traditional cost factors, SoC design costs are forecast to approach $50-100M for a single chip. Only in the very largest markets can a company gain an adequate return on that level of design cost. At the same time, package, test and IP royalty costs are growing relative to die costs and must be managed to maintain adequate gross margins. Finally, given the time-to-market-sensitive nature of consumer markets, companies are experiencing an unacceptable cost of delay. Companies are seeking strategies to become 'cost aware' as they proceed with design and implementation, in order to meet their profitability goals. As a result, the largest companies are making internal investments in ad hoc, unconnected extensions to their design flows. Worse yet, many companies continue to drive blind with little hope of hitting cost and profitability targets. There are specific best practices associated with managing unit costs. This starts with technical chip planning and IP evaluation combined with cost analysis, followed by optimization techniques for die, test, packaging, and IP royalties, coupled with tracking against the original chip and cost plan. To manage design costs, there are emerging best practices to develop, qualify and instrument IP in a way that makes it 'integration ready', followed by methods and automation to speed up and lower the cost of SoC functional integration, verification, and implementation. Glaser will show how a cost-aware approach addresses each of these challenges to manage costs and improve profitability, while delivering competitive products on schedules that are tighter than ever before.
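
As a rough illustration of the unit-cost side of 'cost aware' planning, here is a minimal sketch using textbook-style dies-per-wafer and yield approximations; every number below is an assumption for illustration, not data from the talk.

    import math

    # Assumed inputs (illustrative only).
    wafer_cost = 4000.0        # $ per processed 300 mm wafer
    wafer_diameter = 300.0     # mm
    die_area = 80.0            # mm^2
    defect_density = 0.25      # defects per cm^2
    package_cost = 1.20        # $ per unit
    test_cost = 0.60           # $ per unit
    ip_royalty = 0.50          # $ per unit

    # Classic approximations: gross dies per wafer, negative-binomial-style yield.
    dies_per_wafer = (math.pi * (wafer_diameter / 2) ** 2 / die_area
                      - math.pi * wafer_diameter / math.sqrt(2 * die_area))
    die_yield = 1.0 / (1.0 + defect_density * die_area / 100.0) ** 2

    die_cost = wafer_cost / (dies_per_wafer * die_yield)
    unit_cost = die_cost + package_cost + test_cost + ip_royalty

    print(f"dies/wafer ~ {dies_per_wafer:.0f}, yield ~ {die_yield:.2%}")
    print(f"die cost ~ ${die_cost:.2f}, total unit cost ~ ${unit_cost:.2f}")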

Design for Manufacturing (DFM) Innovations and Advances for IC Design

Vinod Kariat
Fellow, Cadence Design Systems 

March 15, 2011

In advanced technology nodes, design for manufacturing (DFM) is critical to address throughout the IC design flow. Traditional DFM signoff-based approaches have long turnaround times and are often employed too late in the design process. This talk describes how advanced-node design requires starting with the end in mind; it requires in-design DFM that scales across diverse design needs, from blocks and IP to large SoCs.

Multitechnology Hyperintegration Platform: the technology crystal ball

Kamran Eshraghian
President, Innovation Labs, Perth, Australia
& Distinguished Professor, World Class University (WCU), Korea

March 15, 2011

The science fiction of yesterday, depicted by such characters as Dick Tracy and Captain Kirk of the spaceship Enterprise, has stretched the minds of researchers, developers and industry into futuristic programs that span a range of activities: from multifunctional nanoparticle-based smart pills for image sensing and targeted therapy, in which monitoring agents are encapsulated and activated or monitored with either electromagnetic or light waves, through to new materials conjectured to change current design practices and product development frameworks. To gain an economic advantage, and in the quest for intelligent integrated systems, both industry and academia have invested over a trillion dollars of research funds in the last decade towards the integration of radically differing technologies. Although many such programs are in their embryonic phase, they continue to be the catalyst for future generations of products, showing new possibilities for a variety of intelligent systems such as i-Health Care and i-Aged Care, i-energy management, i-environmental monitoring, i-security, etc. The revolutionary marriage of nanoelectronics with photon- and bio-based sciences, driven by new materials, is becoming the enabler of novel circuits and systems with extraordinary new properties relevant to every sector of the economy. The presentation will provide an overview of the inevitability of heterogeneous integration using technologies that are either in their infancy or yet to be developed, and will focus on new developments in a number of technology domains conjectured to challenge the perspective and mind-set that researchers and industry currently hold.

For how much longer can Moore's Law hold?

R. Fabian W. Pease
William Ayer Professor (Emeritus) of Electrical Engineering
Stanford University

March 15, 2011

When Moore first described his 'law', the rate-limiting factors to cramming more transistors onto a chip were defect density and circuit design. But for the last 30 years it has been lithography that has been the key pacing item. Now the awful cost of pushing lithography to finer features leads us to look at alternatives. These mostly take some form of exploiting the third dimension so that we can stack transistors as well as the wiring. The techniques vary from simply putting one thinned chip on top of another to monolithically fabricating a circuit in which transistors as well as the interconnects are formed at each level.

Virtual Prototyping for HW-SW co-verification and co-debugging in multi-processor, complex chip designs

Eshel Haritan
Vice President, Engineering, System Level Solutions
Synopsys

March 15, 2011

Complex chips with multiple processors are being designed today to achieve aggressive performance and low power targets. However, multi-processor chips create a huge quality challenge around HW and SW co-design. How do we develop, debug and verify HW and SW functionality in a multi-processor design? How do we manage memory coherency and integrate multiple, dependent SW stacks? In the last few years, virtual prototypes have emerged as a way to run system-level tests and perform hardware/software co-simulation and co-debugging before silicon or hardware prototypes are available. This presentation will introduce virtual prototyping and its merits for quality hardware/software design.

Ensuring Known Good IP for Successful IC Development

Juan C. Rey
Senior Engineering Director
Mentor Graphics

March 16, 2011

With consumer electronics driving the IC industry, opportunity windows are shorter than ever, while the scale and complexity of IC design continues unabated following Moore’s Law. One of the key areas of concern is the rising cost of design creation, and there are many Doomsday predictions of design costs becoming unaffordable. While most of the industry buzz is around system level design improvements, the pressure keeps growing for teams working closer to silicon manufacturing. This talk will focus on the challenges of physical design, and EDA advances required to shorten design cycles. For example, reuse of physical IP is a necessity to meet the future challenges of IC design, but integration of IP from many sources is a huge problem because it can introduce “fragility” into the manufacturing process, resulting in unpredictable yield and performance. A viable solution requires improvements in technical implementations as well as operational agreements across multiple supply chain members. Another example is how to reduce the number of iterations required for design closure by bringing more accurate signoff models to bear earlier in the design flow.

The State of 3D Circuit Integration and Its Effect on Design

Robert Patti
Chief Technology Officer
Tezzaron Semiconductor

March 16, 2011

3D integrated circuits are starting to make their way into the market place. The new technology offers great rewards in terms of power savings, density, and performance, but also carries new design and manufacturing challenges. The speaker will review various 3D integration technologies and discuss the current state of the art. He will also cover the impact on design and test.

Enabling the Smart-Connected Home

Manas Saksena
Sr. Director of Technology
Marvell Semiconductors

March 16, 2011

While home automation technologies have been around for many years, they have so far seen adoption only in very expensive homes or among do-it-yourself hobbyists. The rollout of the smart-grid infrastructure has generated renewed interest in the dream of mass adoption of smart-home capabilities with smart appliances, networked lighting controls, and smart HVAC systems. In this talk, we will explore the changes happening in the marketplace to bring about these smart connected devices and systems in the home and to connect them to the smart-grid infrastructure and services. We will discuss the enablers for this vision to become a reality in the near future, as well as the challenges that need to be overcome.

Taming the Challenges in Advanced Node Design

Tom Beckley
Senior Vice President, Research and Development, Custom IC and Signoff, Silicon Realization Group
Cadence Design Systems, Inc.

March 20, 2012

As process technology marches relentlessly forward producing multi-billion-transistor integrated circuits, there is much discussion about best design techniques and power consumption strategies in the digital community. But what does this mean for the custom and analog design worlds? Is 20nm the final frontier? How about 14nm? Are there insurmountable problems due to the exacting and power-hungry devices that make up the analog world? Well, the custom dinosaur isn't extinct quite yet. Join this session to hear how circuit design, physical implementation, and verification are fusing into a new advanced-node methodology that copes with layout-dependent effects, complex interconnect rules, and lithography/colorization challenges so that custom and digital design can flourish together.

Beyond 28nm: New Frontiers and Innovations in Design For Manufacturability at the Limits of the Scaling Roadmap

Luigi Capodieci
Director DFM/CAD - R&D Fellow
GLOBALFOUNDRIES

March 20, 2012

The introduction of 28nm high-volume production for IC semiconductor devices will usher in the era of "extreme low-k1" manufacturing, i.e. an unprecedented situation in the long history of the silicon technology roadmap, where computationally intensive (and EDA-driven) Design-Technology Co-Optimization will become the key enabler of product success in terms of yield, time-to-market and profitability. This talk will provide a review and technical analysis of the methodological innovations in Design Enablement flows that are being introduced for early production at 28nm, particularly advanced DFM physical verification and DFM-aware router implementations. Rule-based, model-based and the newly released pattern-matching-based hybrid verification, pioneered as an industry first at GLOBALFOUNDRIES, are prominent examples of these new enablement flows. DFM methodologies are complemented by a set of novel foundry-based flows identified as Design-Enabled Manufacturing (DEM). While DFM provides process awareness within the design cycle through accurately calibrated models and verification flows (DFM sign-off), DEM enables manufacturing/design co-optimization, using automated physical design analysis and characterization, which in turn drives process optimization fine-tuned to specific customer product designs. The presentation will conclude with a preview of the "variability challenge" intrinsic to the 20nm node and a look ahead at the innovative EDA solutions currently being developed in the new foundry-supported collaborative eco-system.
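
To give a flavor of what pattern-matching-based checking means, here is a toy sketch only: production flows match polygon geometries against calibrated hotspot libraries inside the physical-verification tool, not bitmaps in a script, and the layout grid and 2x2 "hotspot" pattern below are invented for illustration.

    # Toy pattern-matching DFM check on a rasterized layout snippet.
    layout = [
        "111000",
        "010010",
        "001001",
        "111100",
    ]
    # Hypothetical 2x2 hotspot pattern: an isolated diagonal adjacency.
    hotspot = ["10",
               "01"]

    def find_matches(grid, pattern):
        ph, pw = len(pattern), len(pattern[0])
        hits = []
        for r in range(len(grid) - ph + 1):
            for c in range(len(grid[0]) - pw + 1):
                if all(grid[r + i][c + j] == pattern[i][j]
                       for i in range(ph) for j in range(pw)):
                    hits.append((r, c))
        return hits

    for r, c in find_matches(layout, hotspot):
        print(f"hotspot-like pattern at row {r}, col {c}: flag for review")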

Resistive switching concepts: towards a paradigm change in using non-volatile memories?

Christophe Muller
Professor
Aix-Marseille University

March 20, 2012

Currently, the microelectronics industry faces new technological challenges in continuing to improve the performance of memory devices in terms of access time, storage capacity, endurance and data retention. The main issues to overcome are the downsizing of the memory cell, necessary to embed an increasing number of elementary devices, and the fulfillment of increasingly aggressive application specifications. Regardless of its NAND or NOR architecture, Flash technology still dominates the non-volatile memory market. Nevertheless, since the downscaling of the conventional floating gate appears ever more complicated below the 22 nm technology node, opportunities open up for alternative devices relying on resistive switching. As a result, emerging memory concepts are being explored to satisfy the growing needs for storage capacity while complying with drastic application specifications (lower power consumption, smaller form factor, longer data retention, zero-defect products...). In addition, beside "conventional" standalone or embedded memories, resistive switching concepts pave the way towards the design of innovative electronic functions such as field-programmable gate arrays (FPGA) or logic devices (e.g. flip-flops) in which non-volatile memory cells are distributed.

Applications Driven Analog Technology Development and Innovation

Venu Menon
Vice President, Analog Technology Development, Technology and Manufacturing Group
Texas Instruments

March 20, 2012

Every electronic product in the world has analog semiconductor content and this content is growing in areas such as wireless communications, medicine, energy, transportation and security. Because analog chips have diverse and differentiated performance and power specifications, manufacturing technologies have to be tuned to meet these requirements. In this talk we will discuss some of the end market trends and the requirements they drive in analog technology development and manufacturing. 

Tech and Space: A Symbiotic Relationship

Rich Goldman
Vice President, Corporate Marketing & Strategic Alliances
Synopsys

March 21, 2012

This keynote examines the close linkage between the development of the semiconductor and space industries from the '50s to the present, and juxtaposes the incredible advances of computing power enabled by the semiconductor industry, and the amazing achievements of the manned space program utilizing such tiny computing power. In this insightful retrospective, Rich will explore how the achievements of both innovative industries have paved the way for astounding new technology and quality advances. Rich will also offer a glimpse into the future of what may change the way we inhabit our own world and travel beyond it.

The End of Performance Scaling?

Dean Tullsen
Dept of Computer Science and Engineering
UC San Diego

March 21, 2012

Moore's Law is not done yet. We continue to get more transistors to work with in each processor generation. Traditionally, these transistors have translated directly into performance gains. However, two phenomena have the potential to seriously derail that performance scaling, even in the face of increased transistor counts: the parallelism crisis and the power crisis. We'll talk about each phenomenon and why we need to address them at every level, from software down to circuits. We'll talk in particular about what we have been doing in our group to address these issues, especially the parallelism crisis, at the architecture, compiler, and programming language levels.

10x Power Reduction with 10x More Variability: Does it Make Sense?

Jos Huisken
Principal Scientist, Imec Netherlands
Holst Centre, Eindhoven, The Netherlands

March 21, 2012

A key challenge of wireless sensor nodes for personal health is their energy-efficient design. Compared to design for mobile appliances, where performance is still the main driver to enable more features, in sensor nodes energy efficiency is the prime objective. The field of wireless sensor nodes, in which we consider personal health monitoring as a representative example, may become the next development wave fuelling the semiconductor industry. Currently a lot of research is being carried out to conceptualize such systems, where integration is a real challenge and not just an "engineering activity", especially for very small form-factor sensor nodes using an energy harvester. Looking at the digital design, creating an energy-efficient architecture remains a challenge. In the quest for energy-efficient, small, ubiquitous wireless sensor nodes, low operating voltages are required. When reaching near- or sub-threshold supply voltages, however, performance becomes more unpredictable, for instance due to variations in the manufacturing process and due to noise. Does this still allow us to use modern technologies for high-volume production of wireless sensor node ICs?
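
To illustrate why variability blows up near threshold, here is a minimal Monte Carlo sketch using the widely used alpha-power-law delay model; the supply voltages, threshold voltage and sigma below are assumptions, not data from the talk.

    import random, statistics

    # Alpha-power-law delay model: delay ~ Vdd / (Vdd - Vth)^alpha.
    # All parameters are illustrative assumptions.
    alpha = 1.3
    vth_nom, vth_sigma = 0.35, 0.03     # volts
    N = 50_000
    random.seed(0)

    def gate_delay(vdd, vth):
        return vdd / max(vdd - vth, 1e-3) ** alpha   # arbitrary time units

    for vdd in (1.0, 0.6, 0.45):
        d = [gate_delay(vdd, random.gauss(vth_nom, vth_sigma)) for _ in range(N)]
        spread = statistics.pstdev(d) / statistics.fmean(d)
        print(f"Vdd = {vdd:.2f} V: relative delay spread ~ {spread:.1%}")
    # The same Vth variation that costs a few percent of delay at nominal
    # supply becomes tens of percent of delay spread near threshold.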

The Changing Device Technology

Chenming Hu
TSMC Distinguished Professor of the Graduate School
University of California, Berkeley

March 5, 2013

IC device technology has entered a new era of bold changes. FinFET may be the best-known new technology; ultra-thin-body is another attractive one. Even bolder changes are envisioned, and needed, to empower the semiconductor industry.

Sustaining Innovation for Smarter Computing in Data Centers

Brad L Brech
Member of the IBM Academy of Technology
IBM

March 5, 2013

Better business economics and accelerated business velocity are the two most important factors for the CxOs of clients moving forward. They see technology as a key to their success in meeting both goals in this fast-moving world. Smarter Computing is about successfully overcoming the challenges of new analytics, cloud, big data and security requirements through the use of appropriate technologies. In the end, doing things smarter and faster is the driving factor for the Next Generation of Data Centers.

System Level Perspective on Semiconductors for Intelligent Networks

Bill Swift
Vice President of Engineering
Cisco Systems

March 5, 2013

The impact of the internet on our lives is accelerating, and the innovation required to build the technologies and products for these networks is accelerating with it. Innovations at the semiconductor, board, and system level in support of new requirements on signaling, packaging, operation, quality, and reliability are taking analytical, simulation, and compute technologies to new limits. In this keynote, Cisco VP of Engineering Bill Swift highlights technology and business trends, as well as innovation drivers for semiconductor technology in the industry and at Cisco, enabling products and solutions for intelligent networks.

Trends in Analog/Mixed-Signal Design Tools

Ed Petrus
Director of Custom Architecture, DSM division
Mentor Graphics

March 5, 2013

Designers who are creating analog/mixed-signal intensive designs are faced with a complex set of challenges. They need to have a high degree of confidence that their designs will be manufacturable and perform to specification in the foundry process before they even consider completing a design in an advanced process node. These ICs are often assembled using multiple resources and various design methodologies including IP reuse, top-down design, and bottom-up design. In the keynote, Ed Petrus discusses the unique challenges of designing custom ICs targeted for smaller manufacturing geometries, and talks about the tools being successfully deployed today while giving insights into what is on the horizon in terms of new functionality.

Physical-Aware, High-Capacity RTL Synthesis for Advanced Nanometer Designs

Sanjiv Taneja
Vice President, Product Engineering
Cadence Design Systems

March 6, 2013

The small world of sub-20nm is already upon us and has brought a new set of challenges for RTL designers as the race for best PPA (performance, power, and area) continues unabated. Challenges include giga-scale integration of new functionality, new physics effects, new device structures such as FinFETs, interconnect stacks whose resistance characteristics vary vastly and non-linearly from the bottom to the top layers, and process variation. These challenges raise several questions. Can RTL synthesis handle giga-scale, giga-hertz designs in a timeframe of market relevance? Can logic synthesis perform accurate and predictive modeling of the interconnect stack, vias and other physical effects at RTL? How do new device structures affect the dynamic and leakage power tradeoff and library choices? How do logic structuring, cell selection, clock gating, and DFT choices change to anticipate and handle routing congestion? And how do we ensure strong correlation between logic synthesis and P&R/signoff? This talk will explore these challenges and provide an overview of state-of-the-art technology to address them in a predictive and convergent design flow.

The Lifecycle of Audio Products: Consumer versus Professional

Perry Goldstein
Director of Sales & Marketing
Marshall Electronics

March 6, 2013

Most electronics will last many years if they are used in their intended manner. Professional electronics are not necessarily built to last longer, but to perform better. When they are built to meet the needs of the professional user, they will stay in use for many more years than a consumer product pressed into a professional environment. This keynote provides a review of the electronics lifecycle process, and of the elements that make up that process, from a sales and marketing perspective. It will compare the design and lifecycle of consumer and professional electronics, and will further explore case studies of actual product applications.

A New Era of Computing: Are You "Ready Now" To Build A Smarter and Secured Enterprise?

Jacqueline Woods
Global Vice President of Growth Solutions, STG
IBM

March 4, 2014

We are experiencing fundamental changes in how we interact, live, work and succeed in business. To support the new paradigm, computing must be simpler, more responsive and more adaptive, with the ability to seamlessly move from monolithic applications to dynamic services, from structured data at rest to unstructured data in motion, and from supporting standard device interfaces to supporting a myriad of new and different devices every day. IBM understands this need to integrate social, mobile, cloud and big data to deliver value for your enterprise, so join this discussion and learn how IBM helps customers leverage these technologies for superior customer value.

Foundation for Trustworthy Platforms

Sridhar Iyengar
Director of Security and Privacy Research
Intel

March 4, 2014

In the last decade, malware attacks on our platforms have exploded in frequency, sophistication and virulence. Billions of smart devices connect to servers in the cloud, all transporting, processing and storing our most sensitive data. All this forms a large threat surface that is becoming increasingly hard to protect against a determined adversary. This talk will address key security research leading to silicon features on Intel platforms. These provide a strong foundation to proactively combat malware, provide the necessary acceleration for encryption and enhancements for better isolation and data protection. The talk will also address how software can and should take advantage of the hardware features to address the challenges of security in a connected world.

Cloud-Delivered Security for the Mobile Enterprise

Amit Sinha
EVP Engineering and Operations, Chief Technology Officer
Zscaler

March 5, 2014

The adoption of Mobility, Cloud and Social Media is driving businesses to spend more on the deployment of costly appliance-based point solutions for Internet security that still leave significant gaps in coverage. Appliances also accrue additional costs due to administration overhead and the backhauling of traffic to central locations for inspection and policy enforcement. Cloud-delivered security can provide a safe and rich Internet experience for users on any device at any location, without requiring organizations to manage any hardware or software. Enterprises can eliminate appliance and backhauling costs by enabling their traffic to go directly to the Internet via a Security Cloud. This talk will focus on how to build and run a global Security Cloud that can protect millions of users across thousands of organizations.

Carbon Nanotube Computer: Transforming Scientific Discoveries into Working Systems

Subhasish Mitra
Professor and Director of the Robust Systems Group
Stanford University

March 5, 2014

Carbon Nanotube Field Effect Transistors (CNFETs) are excellent candidates for building highly energy-efficient future electronic systems. Unfortunately, carbon nanotubes (CNTs) are subject to substantial inherent imperfections that pose major obstacles to the design of robust and very large-scale CNFET digital systems: It is nearly impossible to guarantee perfect alignment and positioning of all CNTs. This limitation introduces stray conducting paths, resulting in incorrect circuit functionality. CNTs can be metallic or semiconducting depending on chirality. Metallic CNTs cause shorts resulting in excessive leakage and incorrect circuit functionality. A combination of design and processing techniques overcomes these challenges by creating robust CNFET digital circuits that are immune to these inherent imperfections. This imperfection-immune design paradigm enables the first experimental demonstration of the carbon nanotube computer, and, more generally, arbitrary digital systems that can be built using CNFETs. Monolithically integrated three-dimensional CNFET circuits will also be discussed. This research was performed at Stanford University in collaboration with Prof. H.-S. Philip Wong and several graduate students.

Rethinking Design Creation, Verification and Validation for the Internet of Things

George Zafiropoulos
Vice President of Solutions Marketing in the AWR Group
National Instruments

March 3, 2015

The proliferation of IoT devices is driving the design process to produce ever smaller, lower cost, and more highly integrated systems, with shorter time to market. To meet these challenges, we need to streamline the process of design and test and look for opportunities to become much more efficient. New methodologies will be necessary to improve the design-to-test flow and system validation and to improve test IP re-use. In this presentation we will explore some ideas on how to improve the overall design and test process.

What's Really Driving the Internet of Things? – Insights on the Market, Technology and Challenges

Mike Ballard
Sr. Manager, Home Appliance Solutions and Smart Energy Groups
Microchip Technology

March 3, 2015

The Internet of Things is an embryonic market that is currently driving a great deal of interest within the semiconductor industry, not only as a compelling and potentially huge market that will create new sources of revenue for years to come, but also as a platform for technology innovations. But what is really behind this interest? What benefits could be derived from adding technology to many everyday products? What challenges do designers face when architecting these innovative systems? This keynote will answer these questions while providing a high-level overview of the market and insight into its differing communication standards. It will also discuss the challenges that extend beyond the embedded product, such as security and connecting to the cloud.

From Cluster to Cloud: How to Harness the Internet of Things

Clodoaldo Barrera
Distinguished Engineer and the Chief Technical Strategist
IBM

March 4, 2015

The Internet of Things and smart sensors are generating ever-increasing amounts of data. Organizations are using public and private clouds to transfer and process this data, but this brings challenges such as data duplication, underutilized disk storage, and idle CPUs, with little or no prioritization. New developments in data-aware scheduling will be described; these work with IBM Platform Computing workload management tools and intelligent caching to schedule and deliver data using the cloud. The talk will highlight how organizations can take advantage of intelligent data scheduling, enhanced workload management, and efficient compute utilization, including case studies and client examples. It will also show how these technologies are making cloud computing less expensive and more secure.
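
Data-aware scheduling, in the general sense used above, means placing a job on the compute node that already holds or caches most of its input data, so that the least data has to move. A minimal, hypothetical illustration follows; the node names, block IDs, and cost model are my own assumptions and not IBM Platform Computing's actual interface:

```python
# Minimal sketch of a data-aware scheduler: prefer the node whose cache
# already holds the most of the job's input blocks, minimizing transfer.
# Node names, block IDs, and the cost model are illustrative assumptions.

from typing import Dict, Set

def pick_node(job_blocks: Set[str], node_caches: Dict[str, Set[str]]) -> str:
    """Return the node that minimizes the number of blocks to transfer."""
    def blocks_to_fetch(node: str) -> int:
        return len(job_blocks - node_caches[node])
    return min(node_caches, key=blocks_to_fetch)

if __name__ == "__main__":
    caches = {
        "node-a": {"b1", "b2"},
        "node-b": {"b2", "b3", "b4"},
        "node-c": set(),
    }
    job = {"b2", "b3", "b4", "b5"}
    print(pick_node(job, caches))  # node-b: only block b5 must be fetched
```

The same idea generalizes to weighting transfer cost against queue length or CPU load, which is where workload management tools come in.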

Connecting the Dots to Achieve High Reliability and Quality

Raj N. Master
General Manager, Reliability, Quality and Silicon Operations
Microsoft

March 4, 2015

The ubiquitous trend toward a connected, digital, always-on lifestyle is driving the development of electronic devices with smaller form factors and higher performance requirements, along with rising expectations of reliability and quality. Consumers do not buy products because of high quality; they simply expect it. The result is constant upward pressure to deliver high reliability at the silicon, package, and system level. Although this general trend holds for computing, gaming consoles, consumer electronics, and portable electronics alike, the relative constraints on the cost, size, and complexity of these products make this a challenging task. It is made more difficult still by the unpredictable ways consumers may use a product. The talk will describe the evolution of Microsoft hardware and the reliability challenges of Surface in achieving a highly reliable, high-quality product.

New Frontiers in Hardware Security and Trust

Mark M. Tehranipoor
Intel Charles E. Young Professor in Cybersecurity
Florida Institute for Cybersecurity

March 15, 2016

The hardware security domain has received significant attention from researchers in academia, industry, and government, due mainly to the globalized design, fabrication, and assembly of integrated circuits and systems. The complexity of today's electronic component and system supply chain has made it increasingly vulnerable to malicious activities, security attacks, and counterfeiting. In this talk, we will first analyze these vulnerabilities and threats. We will then present the challenges of dealing with emerging attacks and potential solutions to address them. Finally, we will present the opportunities that securing hardware can provide in different application domains, at different levels of abstraction, and from nano-devices to systems.

Avoiding The Dark Side Of The Cloud Using Secure And Reliable IoT Devices

Navraj Nandra
Senior Director of Interface IP
Synopsys

March 15, 2016

Keeping the enormous amounts of data generated by billions of smart connected devices, and ultimately stored in the cloud, secure is a hotly debated topic. The number of connected devices is expected to reach 50 billion by the end of this decade. Today, an estimated 70% of IoT devices contain serious security vulnerabilities, and security flaws have been reported in 100 car models. This presentation will propose integrated silicon solutions that help prevent a wide range of evolving security threats to connected devices, such as theft, tampering, side-channel attacks, malware and data breaches.
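
Side-channel attacks, one of the threat classes listed above, exploit physical or timing leakage rather than flaws in the algorithm itself. As a simple software-level illustration, not one of the silicon countermeasures the talk proposes, a naive byte-by-byte secret comparison leaks how many leading bytes match through its running time, whereas a constant-time comparison does not:

```python
# Illustration of a timing side channel in secret comparison, and a
# constant-time alternative. This is a generic software example, not a
# description of any specific silicon countermeasure.
import hmac

def naive_equal(secret: bytes, guess: bytes) -> bool:
    """Leaks timing: returns as soon as the first mismatching byte is found."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    """Comparison time does not depend on where the bytes differ."""
    return hmac.compare_digest(secret, guess)

if __name__ == "__main__":
    key = b"0123456789abcdef"
    print(naive_equal(key, b"0123xxxxxxxxxxxx"))          # False; time reveals matching prefix
    print(constant_time_equal(key, b"0123xxxxxxxxxxxx"))  # False; roughly constant time
```

Hardware-level defenses pursue the same goal, removing any dependence of observable behavior (timing, power, electromagnetic emission) on secret data.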

Driverless Vehicles: Looking Ahead

Raj Rajkumar
George Westinghouse Professor
Carnegie Mellon University

March 14, 2017

Self-driving vehicles seem to have become quite the rage in popular culture over just the past few years, triggered in good part by the DARPA Grand Challenges. Self-driving vehicles indeed hold the potential to revolutionize modern transportation. This talk will provide some insights on basic questions that need to be addressed for the revolution to take place in practice. What are the technological barriers that currently prevent vehicles from being driverless? What can or cannot be sensed or recognized? Can vehicles recognize and comprehend as well as, if not better than, humans? Does connectivity play a role? Will the technology be affordable only for the few? How do issues like liability, insurance, regulations, and societal acceptance impact adoption? The talk will be based on road experiences and will add some speculation.
