Plenary Session 2P
Wednesday, March 28
Eindhoven University of Technology
University of Patras and Computer Technology Institute, Patras, Greece
Plenary Speech 2P.1
Wednesday, March 28
Programmable engines for embedded systems: the new challenges
Embedded Modem & Media Subsystems
Programmable embedded systems are ubiquitous nowadays, and their number will increase even further with the emergence of Ambient Intelligence. One of the first challenges for embedded systems is mastering the increasing complexity of future Systems on Chip (SoC). This complexity will inevitably grow because applications become ever more demanding and algorithmic complexity increases exponentially over time. For example, upcoming TV image-improvement applications will exceed one tera-operation per second for High Definition images, not counting the requirements of new features such as content analysis and indexing, 2D-to-3D conversion, mixed synthetic and natural images, and other requirements of Ambient Intelligence. The increasing complexity is even more pronounced in mobile devices: mobile phones combine the functions of still and video camera, audio/video player, TV receiver (also in HD), videophone, payment system and internet interface with their basic function of audio communication. This is achieved by assembling heterogeneous IP blocks. Doing so brings the challenges of designing multi-core systems that use all possible levels of parallelism to reach the required performance density, of extracting all the parallelism from the application(s), and of mapping it efficiently onto the hardware. Major breakthroughs will be required in compiler technology and mapping tools, both in terms of correctness and of performance, to achieve an efficient use of resources (performance- and power-wise), but also for the debug, validation and test of the system. Systems cannot be completely verified by simulation, and we will need new validation approaches; otherwise the unpredictability and unreliability arising from the combination of use cases will make systems practically unusable. Assembling systems from "unpredictable" elements increases the unpredictability of the system as a whole.
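The tera-operation claim can be checked with back-of-envelope arithmetic. The figures below are illustrative assumptions, not numbers from the talk: 1080p video at 60 frames per second and roughly 10,000 operations per pixel for a high-end enhancement chain (motion-compensated upconversion, noise reduction, sharpening).

```python
# Back-of-envelope check: why HD image improvement crosses 1 tera-op/s.
# All figures are illustrative assumptions, not data from the abstract.
width, height, fps = 1920, 1080, 60   # assumed: 1080p at 60 fps
ops_per_pixel = 10_000                # assumed: high-end enhancement chain

pixel_rate = width * height * fps     # pixels processed per second
total_ops = pixel_rate * ops_per_pixel  # operations per second

print(f"pixel rate : {pixel_rate / 1e6:.1f} Mpixel/s")
print(f"compute    : {total_ops / 1e12:.2f} tera-ops/s")
```

Even with these conservative assumptions the workload lands at roughly 1.2 tera-ops/s, before any of the additional features the abstract lists.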
Moreover, systems are not really designed with "separation of concerns" in mind, and because of shared resources a slight change can have a drastic impact. For most embedded systems, however, a main challenge is sustained rather than peak performance: guaranteed performance and predictable timing behavior are important, together with Quality of Service, safety, reliability and dependability. The notion of time is key in embedded systems, and most current methods and tools, inherited from mainstream computer science, do not really cope with this extra requirement. The new technology nodes (65nm, soon 45nm) will also bring their own challenges: global interconnect delay that does not scale with logic, the increasing variability of components (leading to design problems such as timing closure, and to yield problems), and leakage power will be major problems in the design of complex systems. To cope with all these challenges, methodologies that bridge the gap between design tools and processes, integrated-circuit technologies, and manufacturing are required to achieve functional, high-quality designs. Guided by the principles of predictability and compositionality, we will increasingly need to design reliable systems from uncertain components, use higher-abstraction-level specifications, apply formal methods that allow "correct by construction" design, and virtualize (shared) resources to allow a "separation of concerns".
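One classic way to virtualize a shared resource so that clients stay independent is a time-division-multiplexed (TDM) slot table; the sketch below is an assumed, minimal illustration of the idea, not a mechanism described in the talk. Because each client owns fixed slots in a repeating frame, its worst-case service share is guaranteed regardless of what the other clients do, which is exactly the combination of predictability, compositionality and separation of concerns the abstract calls for.

```python
# Minimal sketch (assumed design, purely illustrative) of resource
# virtualization via a TDM arbiter: a repeating frame of slots, each
# statically owned by one client of the shared resource.
FRAME = ["cpu", "dsp", "cpu", "display"]  # slot table, repeats forever

def guaranteed_share(client: str) -> float:
    """Fraction of the resource this client can always count on,
    independent of the behavior of the other clients."""
    return FRAME.count(client) / len(FRAME)

def slot_owner(t: int) -> str:
    """Owner of time slot t -- fully predictable by construction."""
    return FRAME[t % len(FRAME)]
```

Adding or changing one client's traffic cannot disturb another client's budget, so components can be verified in isolation and composed without re-verifying the whole system.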
Plenary Speech 2P.2
Wednesday, March 28
The impact of the soft-error phenomenon on Design For Reliability technologies
CEO iRoC Technologies
We will mainly address here the "alter ego" of quality, namely reliability, which is becoming a growing concern for designers using the latest technologies. After the DFM nodes at 90nm and 65nm, we are entering the DFR (Design For Reliability) era, straddling 65nm, 45nm and beyond. Because of the random character of reliability failures - they can happen anytime, anywhere - executives should mitigate reliability problems in terms of risk, whose costs include the cost of recalls, warranty costs, and loss of goodwill. Taking the soft-error phenomenon as an example, we demonstrate how the industry first responded to this technology-scaling problem with silicon tests to measure and understand the issue, but should quickly move to resolving reliability issues early in the design. In this field, designers can benefit greatly from new EDA analysis tools and specific IPs to overcome this new hurdle in a timely and economical manner.
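The risk framing the abstract argues for usually starts from soft-error-rate arithmetic in FIT (failures in time, i.e. failures per 10^9 device-hours). The sketch below uses purely illustrative numbers, not iRoC data, and the naive FIT sum ignores derating factors a real analysis would apply:

```python
# Hedged sketch of soft-error risk arithmetic; every number below is
# an illustrative assumption, not measured data.
# FIT = failures per 10**9 device-hours (the standard soft-error unit).
component_fit = {"sram_mb": 1000, "flipflops": 200, "logic": 50}

system_fit = sum(component_fit.values())  # naive sum, no derating applied
mtbf_hours = 1e9 / system_fit             # mean time between failures, per device

# Expected field failures for a deployed fleet (assumed fleet size/lifetime):
devices, years = 100_000, 5
hours = years * 365 * 24
expected_failures = system_fit * devices * hours / 1e9

print(f"system SER        : {system_fit} FIT")
print(f"MTBF per device   : {mtbf_hours:,.0f} hours")
print(f"expected failures : {expected_failures:,.0f} over {years} years")
```

A per-device MTBF of tens of years can still mean thousands of field failures across a large fleet, which is why the abstract frames reliability as a business risk (recalls, warranty, goodwill) rather than a per-chip statistic.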
Plenary Speech 2P.3
Wednesday, March 28
Forging Tighter Connections between Design and Manufacturing in the Nanometer Age
Vice president and general manager
Design to Silicon Division, Mentor Graphics
As we dive deeper into nanometer technologies, we must rethink the way we design. Tools, techniques, and methods that once worked without fail cannot hold up at the 65 and 45 nanometer depths, making it more challenging than ever to achieve yield. In nanometer technology, DRC is not enough. We must redefine the sign-off process itself to include a spectrum of new methods that assess design quality. Sign-off must include not only fundamental, rule-based physical verification and parasitic extraction but also a set of automated technologies that help improve yield by enhancing the design itself. DFM solutions must deliver these automated technologies to the designer in a practical and easy-to-use way. This includes new ways to visualize and prioritize the data produced by the analytical tools. It also requires that existing tools expand their architectures to provide yield characterization and enhancement capabilities. Finally, the most successful DFM methodologies in the nanometer age will apply these new capabilities throughout the design flow - not just at the point of sign-off.