Every crash test is unique: the chaotic nature of a crash means it can never be repeated exactly.
Simulating a car crash with a computer is one of the most demanding tasks performed by engineers in the automotive industry. The size and cost of computer models for car-crash analysis are steadily growing; models with millions of finite elements are common. Plus, huge compute power is needed to run such models in acceptable turnaround times.
But does this growth in model size and computing power really pay off? Is it really possible to build safer cars by building larger and more sophisticated computer models? Are there intrinsic limitations to the crashworthiness of automobiles, dictated by the physics involved, that make it impossible to improve performance beyond today’s already high standards?
The answer might be articulated along the following lines:
- Models are only models. A model, no matter how refined, tells only part of the truth. Using a model that misses some physics introduces a limitation as to how much the real product can be improved.
- Signals recorded by accelerometers during car-crash tests contain a non-negligible amount of chaos. Chaotic systems (such as the atmosphere) are not fully predictable. While certain broad patterns can be observed (e.g., the seasons), detailed long-term predictions are essentially impossible.
- While a crash is an exquisitely stochastic phenomenon, the car industry ignores this fundamental fact of physics, and insists on building deterministic models.
- Lab tests are not fully representative of real crash conditions. Moreover, imperfections in the manufacturing process make it practically impossible to build two identical cars. Therefore, a single crash test can be very misleading.
Many people overlook the fact that models are already the fruit of sometimes quite drastic simplifications of the actual physics. For example, the familiar Euler-Bernoulli beam differential equation is a result of the following assumptions: the material is a continuum, the beam is slender, the constraints are perfect, the material is linear and elastic, the effects of shear are neglected, rotational inertial effects are neglected, and the displacements are small.
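For reference, those assumptions buy a remarkably compact governing equation (standard textbook form, with $E$ Young's modulus, $I$ the area moment of inertia, $\rho$ the density, $A$ the cross-sectional area, $w$ the transverse deflection, and $q$ the distributed load):

$$ EI\,\frac{\partial^4 w}{\partial x^4} + \rho A\,\frac{\partial^2 w}{\partial t^2} = q(x,t) $$

Plasticity, shear deformation, rotary inertia, and large displacements are all absent from this equation by construction; that is exactly the physics the simplifications discarded.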
While these assumptions might at first seem reasonable, their combined effect could, in many cases, be responsible for a substantial “loss of physics.” Models never represent reality completely, and this incompleteness is always a source of unexpected errors.
Experience shows that models that correctly capture more than 90 percent of reality are quite rare. A car crash is a highly nonlinear phenomenon, which has very little in common with the bending of long and slender beams. The phenomena involved in a crash (material and weld failure, buckling, crushing, etc.) are so broad that a fidelity of 90 percent is, most probably, unreachable with contemporary simulation technology. But the question remains: even if models were “perfect,” would it be possible to build safer cars than those already on the roads today?
Figure 1: Here is an example of a filtered crash pulse obtained in a crash test.
Chaos and Crash
Tests for chaotic content in a time history can be performed with a variety of mathematical techniques. The basic tests for chaos are: Poincaré sections (or return maps), log-linear power spectrum, Hausdorff dimension, correlation dimension, and Lyapunov characteristic exponents. For the purpose of this discussion, we won’t go into the details.
One example of a typical crash pulse is illustrated in Figure 1 (above). A few years ago I tested a real crash signal (i.e., not one generated in a computer simulation) for chaotic content. The tests revealed that the signal contained a fair amount of chaos. In fact, the crash pulse had a fractal dimension of 1.8; a noninteger dimension points to a fractal, confirming that there is hidden chaos in the crash response. But what does this mean? Why is the presence of chaos a nuisance?
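To give a flavor of how such a fractal dimension is estimated, here is a minimal Grassberger-Procaccia sketch in Python. It is applied to a chaotic logistic-map series as a stand-in for a measured crash pulse; the function names, embedding parameters, and radii choices are my own illustrative assumptions, not those used in the original study.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Build m-dimensional time-delay vectors from a scalar series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def correlation_dimension(x, m=2, tau=1):
    """Crude Grassberger-Procaccia estimate of the correlation dimension D2:
    the slope of log C(r) versus log r, where C(r) is the fraction of
    point pairs closer than r in the embedded space."""
    v = delay_embed(x, m, tau)
    diff = v[:, None, :] - v[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    d = dist[np.triu_indices(len(v), k=1)]          # upper-triangle pairs only
    radii = np.logspace(np.log10(d.max()) - 2.0,    # sample a mid-range of r
                        np.log10(d.max()) - 0.5, 8)
    c = np.array([(d < r).mean() for r in radii])   # correlation integrals C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

# Fully chaotic logistic map as a surrogate signal; its attractor in a
# 2-D embedding is a curve, so the estimate should come out near 1.
x, series = 0.4, []
for _ in range(1500):
    x = 4.0 * x * (1.0 - x)
    series.append(x)

print(round(correlation_dimension(series), 2))
```

A measured crash pulse would replace `series`; a clearly noninteger slope, such as the 1.8 quoted above, is the signature of a fractal attractor.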
Contrary to popular belief, chaos does not imply randomness. Chaos is a deterministic phenomenon, governed by well-defined equations. Chaotic phenomena and systems have the nasty characteristic of being extremely sensitive to initial conditions. Ultimately, this means that every crash test is a unique event: it is practically impossible to repeat the same initial conditions, even in a lab. Even if it were possible to manufacture two identical cars, the chaotic nature of a crash would still make each crash an unrepeatable event.
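That sensitivity is easy to demonstrate with any chaotic map. In the sketch below (a toy stand-in, not a crash model), two runs of the fully chaotic logistic map start from initial conditions that differ by one part in ten billion; the same deterministic equation nonetheless drives them to order-one separation within a few dozen steps.

```python
def step(x):
    # fully chaotic logistic map, a standard deterministic-chaos example
    return 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10   # "two crash tests" with almost identical setups
steps = 0
while abs(a - b) < 0.1 and steps < 500:
    a, b = step(a), step(b)
    steps += 1

# the 1e-10 discrepancy has been amplified to macroscopic size
print(steps, abs(a - b))
```

The gap roughly doubles per iteration (the map's Lyapunov exponent is ln 2), so a few dozen steps turn 1e-10 into 0.1. The same mechanism makes two nominally identical crash tests diverge.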
In a stochastic crash experiment I performed with colleagues in 1997 at the University of Stuttgart’s computing center, on a 512-processor Cray T3E, we discovered that the angle of impact in a car crash is a key variable in determining how the structure deforms. This apparently innocent variable was found to be more important than design details or material properties. In fact, the response bifurcated around the nominal impact angle of 90 degrees; in other words, hitting the barrier at 89 or 91 degrees made a huge difference in the response. For a finite element model to show this difference, the model must capture enough of the physics.
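A deliberately crude Monte Carlo sketch shows why such a bifurcation defeats a deterministic analysis. Every number and function here is invented for illustration (this is not the Stuttgart model): a response that bifurcates at the nominal angle splits a tight distribution of impact angles into two distinct output clusters, only one of which the single nominal run can ever see.

```python
import random

def toy_peak_deceleration(angle_deg):
    """Hypothetical bifurcating response (illustration only): the structure
    folds into one of two buckling modes depending on which side of the
    nominal 90-degree angle the impact falls."""
    delta = angle_deg - 90.0
    branch = 8.0 if delta >= 0.0 else -8.0   # two distinct collapse modes
    return 35.0 + branch + 0.5 * delta       # "peak deceleration" in g (made up)

random.seed(7)
nominal = toy_peak_deceleration(90.0)                    # certification condition
real = [toy_peak_deceleration(random.gauss(90.0, 1.0))   # scattered real impacts
        for _ in range(1000)]

print(nominal)                 # the single deterministic answer
print(min(real), max(real))    # the two branches the real scatter explores
```

A deterministic run lands on one branch and reports one number; sampling the impact angle exposes both branches and the spread between them.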
Figure 2: Example of Process Map obtained using OntoSpace to analyze USNCAP crash-test data.
Because of manufacturing imperfections, assembly tolerances, material property scatter, etc., it is impossible to manufacture two identical cars. But this is only one side of the coin. Once you put a new car on the road, it starts to age. Certification crash tests are performed on brand-new cars in clinical conditions, while in reality it is aged cars that suffer crashes (and not precisely in a lab). While one could accept this because of cost constraints, what is not easy to accept is that, with very rare exceptions, cars are designed neglecting the above-mentioned uncertainties.
A deterministic model is built in which everything is nominal: all welds are in place, all dimensions are within manufacturing tolerances, and all materials are perfect. This model is then used in computer simulations to come up with a design. As if that were not enough, these deterministic (optimistic) models are then used to optimize the design, i.e., to deliver the best possible crashworthiness.
Using OntoSpace, we recently processed the results of more than 800 U.S. New Car Assessment Program (USNCAP) crash tests performed between 1979 and 2005 (see Figure 2, below). The analysis revealed that the data has grown increasingly complex over that period. In other words, deterministic crash models will become increasingly less credible.
The answer to our question, at this point, is evident. The underlying buckling-dominated nature of car crashes makes their behavior impossible to predict precisely, even in a lab. The chaotic component in crashes adds extreme sensitivity to initial conditions, such as the angle of impact. As a result, the physics of a crash makes optimizing crashworthiness a futile exercise. You cannot optimize a car for a crash any more than you can design an optimal asset portfolio for a future stock-market crash. One thing you can be sure of, though, is that real crashes rarely, if ever, occur against a standard barrier at 31 mph at exactly 90 degrees.
Jacek Marczyk, founder and chief technical officer of Ontonix, has more than 20 years’ experience in CAE. He holds an MS in aeronautics engineering from the Polytechnic University of Milan, Italy; an MS in aerospace engineering from the Polytechnic University of Turin; and a Ph.D. in civil engineering from the Polytechnic University of Catalonia in Barcelona, Spain. Send your comments about this article to DE-Editors@deskeng.com.