Today’s advanced simulation capabilities can eliminate the need for many physical tests. But they can also lead to overconfidence and grossly incorrect results, adding cost, schedule and performance problems. This is not a new situation; it’s really an updated embodiment of the “garbage in, garbage out” (GIGO) software principle. The numerical precision for which computers and algorithms are so well suited is no substitute for verifying the simulation’s critical assumptions and parameter values, and in fact, such precision can induce false confidence and unpleasant surprises. Put another way, it’s like reporting results to 10 significant figures when your raw experimental data was only accurate to two digits.
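The significant-figures mismatch can be made concrete. As a rough, hypothetical illustration (the function and numbers are ours, not from the article), the relative uncertainty implied by quoting a result to a given number of significant figures is half a unit in the last digit:

```python
def implied_rel_uncertainty(sig_figs: int) -> float:
    """Worst-case relative uncertainty implied by quoting a value to
    `sig_figs` significant figures: half a unit in the last significant
    digit, relative to a leading digit of 1."""
    return 0.5 * 10 ** (1 - sig_figs)

# Two-digit experimental data implies roughly 5% relative uncertainty,
# while a 10-digit simulation output implies one part in two billion --
# precision the underlying data cannot support.
two_digit = implied_rel_uncertainty(2)    # ~0.05
ten_digit = implied_rel_uncertainty(10)   # ~5e-10
```

The point is not the arithmetic itself, but that the displayed precision of a simulation result says nothing about how well its inputs were known.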
What’s driving the emphasis on simulation? The obvious answers are cost and time, along with design complexity. Clearly, a properly structured, well-executed simulation can enhance those critical aspects of success, while allowing exploration of “what if?” design variations and tolerance assessment. Organizations such as NASA have put extreme emphasis on simulation in place of test because the challenges of system-level test are so formidable, while the quality of the simulations has improved dramatically, notes Dr. Mary Baker, president and technical director at ATA Engineering Inc., San Diego. She maintains there’s a mindset asking, even demanding, “Can we save some money by avoiding those test costs?” Even in the earliest stages of your design process, don’t assume there is nothing yet to be tested. You may have a previous product that you are using as a starting point for an improved, next-generation design. Perhaps there are material samples, configurations or partial prototypes that you need to characterize more fully before you start down what may be a wrong path.
| Fig. 1: When validating a military command, control, communications and intelligence system shelter design for nuclear blast survivability, testing proved the corner welds would remain intact at their weakest point on the long side of the shelter. |
Driving Better Design
There’s a downside to placing too much reliance on design by analysis and simulation without the right injections of design by test at critical junctions in the program. Every analysis and its associated simulation incorporates many assumptions about system parameters and dynamics. Some of these assumptions are fairly solid and come with a high level of confidence. But many others may be well intentioned but wrong, or little more than sophisticated guesses. Of course, it’s one thing if you are doing analysis and simulation on well-known and fully characterized materials and designs, such as metal structural elements and conventional joints, whose properties and attributes are long established and understood. It’s a very different situation when you venture into new areas such as components and complex joints made of composite materials, or joints between composites and metals.
For example, Baker cites a damping factor in a rocket design, where unavoidable lift-off and launch vibrations could have combined with insufficient damping and led to catastrophic failure. One area of concern was a joint composed of advanced composite materials. Based on initial analysis and designer experience, the predicted damping was around 1.5%, while the ATA team said that at least 2.5% was needed for this design. Before continuing with the simulation, the ATA team set up a test for this joint alone, which showed the actual damping was 0.2%, an order of magnitude less than what was needed.
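Damping of this sort can be measured rather than assumed. The sketch below uses the standard logarithmic-decrement method on successive peak amplitudes from a free-decay (ring-down) test; the function and the numbers in the comments are illustrative, not ATA’s actual procedure or data:

```python
import math

def damping_ratio_from_decay(peaks):
    """Estimate the viscous damping ratio (zeta, as a fraction of
    critical damping) from successive peak amplitudes of a ring-down
    test, via the logarithmic decrement:
        delta = ln(x_0 / x_n) / n
        zeta  = delta / sqrt(4*pi**2 + delta**2)
    """
    n = len(peaks) - 1                            # full cycles spanned
    delta = math.log(peaks[0] / peaks[-1]) / n    # log decrement per cycle
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)
```

At the 0.2% damping the joint test revealed, successive peaks shrink by only about 1.3% per cycle, so the decay record must span many cycles to distinguish 0.2% from the 1.5% that was predicted.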
| The FlashCal system is a piece of hardware developed by ATA engineers for the calibration of aircraft strain sensors. |
Even in the case of standard materials, it may be necessary to set up localized, specific tests to verify the design at its critical points. Baker gives the example of a military command, control, communications and intelligence (C3I) system shelter, designed to be deployed into service via truck, ship or airdrop. It also had to withstand the overpressure of a nuclear blast. The concern was whether the structure, built of standard materials, could survive the transport, delivery and blast conditions. Not only was relevant blast-related test data inadequate, but setting up a full test to prove overall performance was obviously impractical under any realistic cost and time budget.

The major points of uncertainty were the complex joints of the corner welds: One analysis showed they would crack; another pointed to the opposite conclusion. All of these analyses were admittedly burdened by severe modeling uncertainties in the assembly of aluminum skin panels, corner extrusions, welds and rivets. Any weld failure would have mechanical consequences, of course, but would also degrade the shelter’s electromagnetic interference (EMI) integrity and electronic performance. Designers were fairly confident that the greatest stress and resultant corner moments would occur at the corner welds along the mid-span of the long side of the shelter (see Fig. 1).

The solution was to build a test fixture representing the corner welds alone, a process that took just three days. Tests and data confirmed (via penetrating dye and other techniques) that the corner welds would remain intact. With this information, the shelter design team could proceed with confidence on the remainder of the analysis and simulation.
Targeting the Cost of Testing a Design
You hear it over and over: Simulation saves money, and doing design by analysis and simulation is cheaper and quicker than doing it via developmental tests.
Well, yes and no. Certainly, for some project areas, such as aerospace, testing is a major undertaking. But as simulation advances in capabilities and corresponding expectations, and increasingly employs multiphysics to combine disciplines into one overall model (electrical, mechanical and thermal aspects, for example), the cost and time of such analytical cycles and runs increase as well.
Further, the model is charged with integrating so many functions and roles that it can become unwieldy in construction, degrees of freedom, and implementation. At the same time, such models embed a large number of assumptions, all of which will have a ripple effect as they propagate through the simulation.
That’s why a carefully targeted developmental test can be a vital part of the design, simulation and modeling process. It can simplify some aspects of the model, replacing complex equations with simplified ones or actual data, and it can reduce the uncertainties that accompany many assumptions, thus increasing the overall level of confidence.
Tests can also drive design in the right direction. In another case described by Baker, an improved shake-table test fixture was needed so that both modal and qualification testing could be combined for a satellite, to reduce overall test time (see “Shaken, Not Stirred,” page 36).
| Fig. 2: By testing specific areas of uncertainty first, you can direct component-design specifications toward outcomes with a higher likelihood of success. In this example of satellite testing on a shake table, Fn = shaker system’s lowest natural frequency; Fo = satellite target mode frequencies. |
Can We Talk?
It’s important for the analysis and test teams to talk early and often. While this may seem obvious, Baker notes that in many companies they are actually separate, isolated groups. As a result, the analysis group may not flag the areas where it is making assumptions without a firm basis or high confidence. In contrast, when there is team overlap and integration, the test engineers can question assumptions early in the cycle, and also suggest specific, limited-scope tests to verify those assumptions or provide better data, so the simulation can continue with a higher degree of confidence.
The role of test is not to replace simulation, but to support it. In fact, Baker says it’s a recursive process: The project goes from analysis to test, back to analysis, and even back to test as needed. She also points out that it is critical to get testing to happen sooner in the cycle, before problems become too costly to set up for or to correct. Test is not a blind trial-and-error experiment; you focus it to understand what you are doing. In crude terms, you build it, then try to break it.

Doing this validates your underlying assumptions at critical points of the design and at inflection points in the project timeline, where the impact of changes escalates rapidly once you start filling your bill of materials (BOM), ordering long-lead items, and committing to tooling, documentation, fixtures and more. Get the tests to happen sooner, when they don’t cost too much, and you’ll minimize the need for tests at the end of the project, when it’s too late and very costly to change the design approach. That final test should be for approval and sign-off, not for developing additional insight into the design’s real parameters and characteristics. It makes more sense to test at points where incorrect assumptions, subtle unknowns, and perhaps an excess of sophisticated guesswork would have major negative consequences, because the test results can be used to refine and enhance the simulation-centered analysis.

While simulation can provide overall performance verification, a well-defined, focused, properly constructed physical test has its own virtues. In blunt terms, test provides more credibility than analysis alone. “For almost any complex design, analysis may appear to be less costly than test, but test has much more credibility,” Baker points out. “Strategic test is well worth the cost.”
Shaken, Not Stirred
To properly evaluate the satellite in a combined modal and qualification test, the first natural frequency of the shake table (shaker) must be higher than the target-mode natural frequencies of the satellite itself. In addition, the shaker’s cross-axis coupling and the need for a uniform excitation level dictate a very light, yet stiff shaker structure.
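The frequency-separation requirement can be sketched as a simple single-degree-of-freedom check. The familiar formula f = (1/2π)·√(k/m) also shows why a high stiffness-to-mass ratio matters: raising stiffness or cutting mass raises the fixture’s first natural frequency. Note that the 1.5 margin factor below is a hypothetical illustration, not a figure from the article:

```python
import math

def natural_freq_hz(stiffness_n_per_m, mass_kg):
    """First natural frequency of an idealized single-DOF structure:
    f = (1 / (2*pi)) * sqrt(k / m). A higher stiffness-to-mass ratio
    yields a higher natural frequency."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

def fixture_clears_target_modes(fixture_fn_hz, target_mode_freqs_hz,
                                margin=1.5):
    """Check that the fixture's lowest natural frequency (Fn) sits above
    all of the test article's target mode frequencies (Fo), with an
    assumed (illustrative) safety margin."""
    return fixture_fn_hz > margin * max(target_mode_freqs_hz)
```

In Fig. 2’s notation, the check is simply Fn > margin · max(Fo); a fixture that fails it would let its own dynamics contaminate the satellite’s modal data.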
The standard approach of design, analyze, build and test would eventually work, but would likely require many iterations. This procedure may also have other negative consequences: Developing detailed finite element analysis (FEA) models takes time, which discourages investigating design changes and larger innovations.
|Testing and analysis of the shaker design showed that an innovative head expander would produce a high stiffness-to-mass ratio. |
Rather than begin the new design immediately, engineers in this application first performed a modal test on the existing system and built a simple FEA model to represent the dynamics of the primary modes. The test verified the simple model, and the results were used to direct the new design by providing bearing stiffness data that was hard to predict by analytical means alone. Fig. 2 on page 34 shows this design approach, which differs from the usual design/test/redesign approach.
The tests also showed that the shaker’s head expander controlled the lowest natural frequency and cross-axis coupling. As a consequence, designers realized that the head expander would need a high stiffness-to-mass ratio. To achieve this, they used a magnesium weldment, which allows the head expander to move vertically via eight bearings (see image above). The outsides of the bearings, in turn, are supported by steel “pedestals” for stiffness, but because these pedestals are static and not part of the moving structure, their weight is not a concern.
Bill Schweber is an engineer with electronic and mechanical design experience. He can be reached at email@example.com.
More Info: ATA Engineering Inc.