AI's Growing Impact On Chip Design And EDA Tools

Key Takeaways

  • Many workflows in the data center are customer-specific, which is part of the reason there is so much interest in agentic AI-enabled tools.
  • Large systems companies are pressing EDA vendors for performance improvements to keep pace with their AI workflows.
  • The makeup of design teams is changing as AI infiltrates more of the chip design process.

Experts at the Table: Semiconductor Engineering sat down to discuss the impact of AI on chip design, and the subsequent demand for changes in EDA tooling, with Thomas Andersen, vice president for AI & Machine Learning at Synopsys; Sridhar Boinapally, senior director of analog/mixed signal tools/flow at Intel; Alex Starr, corporate fellow at AMD; Stuart Oberman, vice president for GPU hardware engineering at Nvidia; Silvian Goldenberg, partner and general manager for silicon engineering infrastructure at Microsoft; and Borivoje Nikolic, professor of electrical engineering and computer science at the University of California at Berkeley. What follows are excerpts of that panel discussion, which was held in front of a live audience at the recent Synopsys Converge conference. Find part 1 of this discussion here.

SE: Are we heading toward more generic hardware, due to the rapid changes in AI algorithms, where software is more important than in the past? Or do you foresee even more customization?
L-R: Synopsys’ Andersen; Intel’s Boinapally; Microsoft’s Goldenberg; Nvidia’s Oberman; AMD’s Starr; and UC Berkeley’s Nikolic.

Oberman: A major component of Nvidia systems is to have a software environment that we can optimize for and build the highest performance per area, performance per watt. For high-performance data centers, you don’t talk about performance per area. You talk about performance per gigawatt. That already is a standardization in the industry. As a designer, I need to be able to scale the rate at which I put out more performance per watt, year after year after year. That is a differentiating factor, and there are different ways that companies can achieve that.

Starr: EDA companies are the experts in the tools and their domains, and the skills and the capabilities they’ve built over the years are just tremendous. Those are the things that are really going to move the needle. Big semiconductor companies have a lot of internal know-how and techniques — heterogeneous flows, internal data that’s very bespoke. The focus from an EDA point of view is how to leverage all of that. And EDA companies need to realize how to leverage all of the expertise they’ve got in those tools. You’ll see some variation of where a semiconductor company is on the spectrum of internal infrastructure and AI IQ, and where they’re going to be on that threshold. Will they take full agentic solutions from an EDA vendor versus more focused detail about specific individual tools?

SE: So what do you need that you don’t have today?

Boinapally: A lot. EDA companies are slightly behind in this agentic AI revolution. There are a lot of people working on it, including the hyperscalers. The traditional mindset has been, ‘I’m going to give you tools, I’m going to give you engines, I’m going to give you capabilities.’ Some people are strong in execution. ‘If I give you the tools, you designers will fix it. You own execution.’ But the agentic revolution takes this one step higher, where you can provide more automated tools. The experts in these tools know what they were meant for, but you can go a step further with agentic tools, which can iterate the design by itself based on some higher-level spec. That’s where we need to go.

Andersen: Both parties have some knowledge and expertise to bring to the table. It isn’t really any different than what it is today. It’s just a different mechanism to deliver it. Today, we make the best possible tools with the algorithms. We have support people who explain how to debug and how to solve specific problems that are very close to the tools. Our customers build workflows — chip design flows — and they’re the experts in how the chips evolve. Both parties have some expert knowledge. Now the mechanism changes. We provide expertise not through people, but through databases and AI agents that can debug problems, while they focus on the chip flow through automation. They do that manually today. They run scripts that run their CAD flows, and tomorrow they’ll write agents that run their CAD flows. The knowledge and the expertise exist on both sides, and that’s not going to change. We have our part to do, and our customers have their part to do. If they didn’t do their part, then all the products would be the same. Of course, that’s not the case. But at this point in time, everything we’re doing to train our agents comes from humans. Humans describe, ‘Here’s how I do these things, here’s how I debug this thing, here’s how I set up this flow.’ At some point we need to get to a self-learning system, where the system discovers something better than was ever done before. That is not the case today. Today, I’m relying on the experts. In the future, maybe there will be less differentiation. We don’t know.

Goldenberg: The first thing we realize today is that we need good data. Organizing the data and getting access to it — having to search through it and build databases — used to be difficult. We also have a lot of tools and all this creativity to do the right thing. Then you had to go through multiple vendors and put the whole thing together into a coherent design. For that you need infrastructure and standards. Now we can start building workflows, where each design house can bring its own views and talent. This is what makes my chip, my style, successful.

Boinapally: Even if EDA gets to more automation, you will still have different products. An analogy is to think of it like Hollywood movies. They use the same movie cameras, the same everything, but there are lots of different movies.

Oberman: We see it every day with Claude, which is able to directly go after the enterprise applications. They’re in Excel, they’re in PowerPoint. So the question arises, ‘What is the integration path going forward with some of these tools and the model makers?’ Today, a lot of the output we have stays on the front end. It’s complicated to feed that into an LLM, but you can manually write logs and feed those into an agentic flow. That’s part of what we’re already seeing as a solved problem in the enterprise space. But is there a good path working with the model makers to have that directly integrated?

Nikolic: When AI is deployed in various phases of design, what will be the cost? The problem today is that unlike many other fields, there is no surrogate simulation. The simulation tools have just been tracking high-performance computing, not the speed of AI inferencing. While AI inferencing keeps getting faster, EDA tools have been slow to take advantage of GPUs. There are academic papers where people are trying to speed up these processes. Parallelizing simulations could bring much faster designs.

Andersen: I'm sure you've all heard about AlphaEvolve. The problem in the EDA world is that the runtimes are too long. That is an unsolved problem. Of course we have surrogate models, but they may not be accurate enough. We do have parallelization in some areas like simulation. GPU acceleration does exist, but unfortunately not for every part of the problem space.

SE: How does AI work with design teams? There typically are multiple people working on these designs. And how do different teams interact with AI?

Starr: We've enabled our engineers with the latest tools available, pushing out some agentic frameworks with different guidance on that. A lot of that is 'off to the races,' so to speak. They're building these flows. They'll typically look at the big problem areas they have — a lot of the task-based functions, like where all the energy is going, and the more routine work done by the engineers. So they'll think about trying to solve those problems and go after them. This includes things like brute-force analysis and debug, which are the low-hanging fruit. But every design is different. We're seeing general approaches appearing, and we've been very successful here. We're also seeing very bespoke and detailed use cases that are very specific to a team's problems.

SE: Is the makeup of these teams changing with that?

Goldenberg: We share data. We capture the results. There's a lot of sharing of solutions that happens across the team because everyone is learning. The dynamics of the team have changed a lot. In the past we developed the flows. Now the design teams develop the flows and do the experimentation themselves. There's a melding of roles all across the design teams. Those old barriers were artificial. And the team is definitely becoming more efficient on the larger problem.

Boinapally: Some of this is very similar for us, creating a platform and giving everyone the same set of tools. AI is a little bit scary and a little bit exciting. There are unknowns with lots of things changing all the time, so it’s a little bit intimidating, too. What we’re seeing is there is a mindset change that needs to happen for both the user and the development community. So we’ve been working on large training programs to get more people familiar with it and to learn new skills. This is something that is fast-changing, but it’s what we have to do.
