CAD simulation: even better than the real thing?
CAD and simulation software are getting good – so good that they could take prototyping and prototype test out of the design loop. Or could they?
Design and simulation software is becoming increasingly clever and accurate, with the latest versions plugging holes and further automating design on an annual basis, making design engineers’ jobs easier and easier. Followed to its logical conclusion, the design rules and facilities included in the computer-aided design (CAD) package, allied with reliable and accurate simulation data, should mean first-time-right products every time. Prototypes would be a thing of the past and, as a consequence, the test function would no longer be needed as part of the design loop.
We are not there yet. Bjorn Sjodin, VP of product management at simulation company COMSOL, comments: “In general it is probably too risky to eliminate testing altogether. However, in many cases testing can be reduced. Tests are expensive, so if you can reduce the number of prototypes created or tests run you can save money. Sometimes the number of prototypes or test runs can be reduced by an order of magnitude or more thanks to simulations.”
As Joe Langston, key account manager at Tektronix, points out: “You can simulate all you like but you’re still going to come across problems where, for example, the components are too close together or you may need to move the power supply.” A physical test is the only way to pick up such issues.
The key consideration is that simulation or design tools such as topology optimisation are only ever going to be as good as the information fed in at the start of the process. If every physical phenomenon the product could be subjected to were known, along with reliable specifications for all parts used, then it might be a different story.
In reality, this is not the case. Sjodin gives one example: “Material specifications provided by a supplier may have errors, which will propagate into the definition of the model and simulation. Only testing would reveal this.”
Paul Sagar, director of product management for PTC Creo, takes a similar stance to physical testing: “The problem is if my requirements aren’t 100 per cent exact then I am designing for the wrong use-case and the wrong environment,” he says. “What testing does is to help understand exactly what the use case is for my product and help me ensure that I am building to the right set of engineering requirements. So what I am doing is replacing assumptions in my design with real facts.”
Prototyping is here to stay
Right from the early days of CAD, 40 or more years ago, one of its big selling points was its ability to reduce the number of prototypes. “As more and more companies are doing structural analysis, computational fluid dynamics (CFD) analysis and so on, that is helping to reduce the amount of prototyping and prototype testing that needs to be done, because it allows you to have a degree of confidence in your design before you go to the prototype stage,” says Sagar.
“In reality we are a long way off, and maybe we will never get to, the point when we don’t need to do prototypes and prototype tests. You still need an element of that and there is still significant value in it.”
In fact it is the nature of the project that can dictate the design strategy. For example, the design of a digital electronics product cannot realistically be done without CAD/electronic design automation (EDA) tools, particularly at the chip scale. As Mart Van Gijsel, marketing brand manager at Keysight, says: “The cost of manufacturing (including for a prototype) can be enormous; a lithographic mask for a silicon integrated circuit can run up to millions of dollars. So running multiple iterations is simply not practical and the designers rely more heavily on their EDA tools to get it right first time.”
Even in this extreme design environment, physical testing is still vital to gain confidence in the simulation and discover reasons for discrepancy between the virtual and physical results.
Despite Keysight’s prominent position in the test equipment market, though, Van Gijsel explains that there are times when tests are not practical. “Often simulation tools can be more flexible than test equipment. For example, if you look at development of new standards (e.g. 5G), in simulation tools, you have all the freedom to create signals according to a new standard or even to a standard which is in development. But test equipment may not be ready as the standard has not been finalised yet.
“Another challenge in advanced comms is that real world environments are not always easily replicable in a lab setting or do not match the theoretical models. In this case an actual physical test is often required; this could include over-the-air test inside a chamber, or field test using a portable or drive test solution.”
Analogue electronics is a very different animal. Yokogawa sells the majority of its benchtop instruments to analogue electronics engineers, the automotive sector being the largest market. “For the analogue guys the first stages of development will still be a breadboard where they see if real-life components can create the effect they are looking for,” says Clive Davis, European marketing manager for the test and measurement division. “In analogue it is still very much trial and error, while it is the digital side where the advanced tools come into their own.”
EDA and T&M working together
The EDA and test and measurement industries have co-existed for several decades now and will continue to have a close relationship. Joel Woodward, oscilloscope product planning at Rohde & Schwarz, believes there are several factors driving continued test needs, the first being that simulation is so ‘compute intensive’ that it is impractical to simulate for long periods of time. Even with the increasing power of computers, increased design complexity ensures that simulation is not a practical alternative for long-term real-world testing.
“Simulation relies on users creating test cases,” says Woodward. “It’s not practical to anticipate all real-world dynamics that create a large number of varied test cases. A mobile design team I recently visited produced >100 prototypes primarily to equip the software team. Lowering power consumption was a primary goal with each software iteration; the software team used oscilloscopes to determine how the latest software algorithms impacted overall power consumption over several hours of use in sleep mode, deep-sleep mode, and with periods of having the device awake. Trying to simulate this is unrealistic.”
All environmental considerations would have to be accounted for. “How do you model a cell phone operating while the user drives under a bridge in a rain storm in Hong Kong?” observes Woodward. “Sometimes, it’s just easier to test in the real world. The ability to change FPGA designs quickly plays into the advantage of testing – a few seconds in the real world compared to what would have taken weeks of simulation time.”
In reality the whole design process is iterative, with physical test and simulation used together: what influences a device, part or process may be complicated and can only be understood by iterating tests and simulations, thereby calibrating the simulation.
“The physics and mathematics behind certain effects may only be known in an approximate sense with ‘free parameters’ that can only be determined from tests,” says Comsol’s Sjodin. “A typical example is mechanical damping, which usually is determined by tests and then fed into the computer-aided engineering (CAE) analysis.” Fluid dynamics, including aerodynamics, is a classic example where tests frequently have to complement simulations in order to calibrate the model.
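Sjodin’s damping example can be illustrated with a short calculation. The sketch below, using entirely hypothetical test numbers rather than anything from the article, applies the classic logarithmic-decrement method to extract a damping ratio from a prototype’s measured ring-down peaks, which is exactly the kind of ‘free parameter’ that is then fed into the CAE model:

```python
import math

# Hypothetical ring-down test data: successive vibration peak
# amplitudes (mm) measured on a prototype structure.
peaks = [10.0, 7.4, 5.48, 4.06, 3.00]

# Logarithmic decrement between two successive peaks
delta = math.log(peaks[0] / peaks[1])

# Damping ratio: the free parameter the simulation cannot predict,
# now determined from the physical test and fed into the CAE model
zeta = delta / math.sqrt(4 * math.pi**2 + delta**2)

print(f"log decrement = {delta:.3f}, damping ratio = {zeta:.4f}")
```

In practice the decrement would be averaged over many peak pairs to reduce measurement noise, but the principle is the same: the test calibrates the model, and the calibrated model then reduces the number of further prototypes needed.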
“I think what is changing is that we are still making prototypes, but ‘sensoring them up’ to understand exactly the behaviour of the product,” says PTC’s Sagar. In other words, testing is moving away from the benchtop to out in the field, not just for prototypes but also for end products. “That provides a couple of things,” continues Sagar. “One is the data to see how the product is behaving and monitor performance to see if anything is failing or if maintenance is required. The second is being able to understand how my product is being used. Knowing how people are using it can help me design for this usage and maybe if there are new business opportunities.”
Most engineering challenges, from combatting global warming to making devices more efficient, will involve increasingly small improvements that in turn require increasingly precise measurement. “If you made a difference and it is lost in the noise of your product you can’t say if you have actually made a difference or not,” says Yokogawa’s Davis. “That is where the drive for us to measure more accurate and smaller and smaller amounts becomes important. For things like a solar inverter where efficiency is at 98 per cent, one manufacturer is trying to get competitive advantage to try and get 0.1 per cent difference – he needs us to prove that.”
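Davis’s point about improvements being lost in noise can be framed numerically. A minimal sketch, with all figures hypothetical, of how many independent readings an instrument would need before a 0.1-percentage-point efficiency gain stands out from measurement noise, using the standard square-root-of-n averaging argument:

```python
# Hypothetical figures: resolving a 0.1 percentage-point efficiency
# gain (98.0% vs 98.1%) with an instrument whose per-reading noise is
# 0.2 percentage points (1 sigma) - the signal sits below the noise.
eff_diff = 0.1      # percentage points to resolve
noise_sigma = 0.2   # per-reading noise, percentage points

# To claim the gain at roughly 3-sigma confidence, the standard error
# of the averaged result must be eff_diff / 3. Averaging n independent
# readings shrinks the error by a factor of sqrt(n).
target_se = eff_diff / 3
n_readings = (noise_sigma / target_se) ** 2

print(f"about {n_readings:.0f} readings needed")  # roughly 36
```

The real driver, as Davis notes, is instrument accuracy: halving the per-reading noise would cut the required averaging by a factor of four.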
Design and simulation software is obviously getting more capable, but the requirement for oscilloscopes, signal generators and the like remains strong, a fact summed up by Sjodin: “Simulation will continue to reduce the need for testing and we will see more cases where the design is ‘right’ from the start. However, the need for tests will probably always be there.”