Phenomena in Engineering Materials

Workshop

April 2-4, 2012

Perspective

Materials, whether natural or man-made, are a ubiquitous feature of our built and natural environment. They appear as the geologic media through which our evolving planet interacts with us, as load-bearing structural members, as machine parts, and as microelectromechanical systems (MEMS) in the devices used to sense our environments. Understanding the mechanics of materials is important for many reasons: 1) mitigating damage from natural disasters, such as earthquakes, by predicting the areas most likely to be damaged and thus providing early warning; 2) aiding the design of more earthquake-resistant structures and materials; 3) avoiding catastrophic landslides caused by the failure and rapid flow of granular media such as saturated soils; 4) enabling better, more cost-effective design with existing materials in structural, energy, and sensing applications, such as NiAl and TiAl alloys for lightweight, high-temperature operation in turbine engines; and 5) designing new materials with targeted functionality, such as combined high strength and high ductility, as in bulk metallic glasses or advanced high-strength steels.

All of the above applications require a fundamental understanding of the mechanisms of stressing and deformation of the materials and structures involved, at the appropriate length and time scales. An important realization is that the required understanding cannot be acquired from experimentation alone, if at all. Even in cases where this may be possible in principle, present-day cost constraints make a purely experimental paradigm impractical; a decade-or-more design cycle to introduce a material for a specific application is simply not viable today. Thus, advanced physical modeling, along with robust numerical simulation of the resulting models, is key to success in the applications described and many others.

At the scale of angstroms and femtoseconds, all of our materials of interest can be described by quite reliable physical theories (molecular dynamics/quantum mechanics) whose solutions can, in principle, be computed at arbitrarily larger length and time scales. However, except for a very few applications related to the design of new materials at the nanoscale, the applications of interest occur at length and time scales that are simply not computable with these theories, because of the number of computations needed, even with the most powerful computers that may be expected to emerge in the next decade or more. For example, consider the deformation of an automobile frame during a traffic accident, or a landslide in saturated soil after heavy rains. For the auto frame, the most fundamental micro-scale objects governing the response are the atoms that make up the metal; for the landslide, the micro-scale objects are the grains of soil on the hillside. The laws governing the micro-scale are well known, and one could, in principle, use 'brute force' to simulate every elementary object at the micro-scale directly. However, such simulations will not be possible in the foreseeable future, even with the most powerful computers on the planet: a very rough estimate gives as many as 10^{24} soil grains in a hillside and 10^{28} atoms in an auto frame; moreover, molecular simulations of metals must resolve atomic vibrational time scales of ~10^{-15} seconds, while the time scale over which the auto frame deforms in a rapid impact is ~1 second, a difference of 15 orders of magnitude.
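The order-of-magnitude argument above can be made concrete with a few lines of arithmetic; the inputs below simply restate the rough estimates from the text (10^{28} atoms, femtosecond time steps, a one-second impact):

```python
# Back-of-the-envelope check of the scale gap quoted above; the inputs
# simply restate the rough estimates given in the text.
atoms_in_frame = 1e28        # atoms in an automobile frame (rough estimate)
atomic_timestep = 1e-15      # s, atomic vibrational time scale resolved by MD
impact_duration = 1.0        # s, duration of a rapid impact event

steps_needed = impact_duration / atomic_timestep   # time steps per atom
atom_updates = atoms_in_frame * steps_needed       # total state updates

print(f"time steps per atom: {steps_needed:.0e}")  # ~1e+15
print(f"total atom-updates:  {atom_updates:.0e}")  # ~1e+43
```

Even at 10^{18} updates per second (an exascale machine fully devoted to the task), ~10^{43} updates would take on the order of 10^{25} seconds, which is why brute force is ruled out.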

Thus, such practical constraints force us to study existing, and develop new, techniques for consistent and robust "coarse-graining" of fine length- and time-scale dynamical response. Here coarse-graining refers to the development or discovery of adequate approximations to self-contained laws governing phenomena at coarse scales; once developed, these laws can be exercised to understand meso- and macroscale phenomena without reference to the microscale theory. Such study hinges on careful attention to the details of specific applications, coupled with serious study of, primarily, continuum mechanics, statistical physics, dynamical systems, and mathematical homogenization theory.

A natural question that arises is whether such self-contained laws at coarse scales exist at all. Fortunately, the answer is in the affirmative. The mere fact that Newton's and Euler's laws for rigid-body motion work exceptionally well for macroscopic bodies (e.g., the precessing top) made up of billions of rapidly oscillating atoms (nuclei, sub-nuclear particles, ...) following a completely different set of dynamical equations is physical proof that

- the same fundamental physical phenomena can look very different when viewed at different scales
- it is possible to construct accurate mathematical theories of the behavior at two very different scales, where the coarser theory can even be analyzed efficiently and in a great degree of detail.

Multiscale modeling comes in when a fine dynamical theory is available, one has some idea of which coarse variables are of interest, and the question becomes how to set up a dynamical theory for the response of those coarse variables. In such situations, neither experimental observation nor theoretical understanding of the fine system alone yields enough insight to write down the right coarse theory. An example from engineering that illustrates the situation clearly is the still-unsolved problem in mechanics of materials of predicting the stress response of a single crystal as a function of strain, strain rate, and crystal orientation.

Some natural questions come up with respect to the development of a coarse theory:

- Is an arbitrary, physically motivated set of coarse variables adequate to yield a deterministic coarse response, with initial conditions specified only on this coarse set?
- If not, can one understand the nature of the stochasticity that arises from an underlying deterministic fine theory?
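The second question can be given a toy illustration (an assumption for the sake of example, not a model from the text): a deterministic pair of coupled oscillators whose coarse description retains only the first oscillator's state. Trajectories with identical coarse initial conditions but different hidden fine-scale initial data then disagree, so the coarse response appears stochastic:

```python
# Toy illustration (not from the text): a deterministic two-oscillator
# system whose coarse description keeps only oscillator 1's state.
import random

def coarse_x1(x2_0, t_end=1.0, dt=1e-3, k=1.0, c=0.5):
    """Evolve two linearly coupled oscillators (symplectic Euler) and
    return the coarse observable x1(t_end). The coarse initial condition
    (x1, v1) = (1, 0) is fixed; x2_0 is the hidden fine-scale data."""
    x1, v1, x2, v2 = 1.0, 0.0, x2_0, 0.0
    for _ in range(int(t_end / dt)):
        v1 += dt * (-k * x1 - c * (x1 - x2))
        v2 += dt * (-k * x2 - c * (x2 - x1))
        x1 += dt * v1
        x2 += dt * v2
    return x1

random.seed(0)
samples = [coarse_x1(random.uniform(-1.0, 1.0)) for _ in range(200)]
spread = max(samples) - min(samples)
# Identical coarse initial data, yet the coarse response is scattered:
print(f"spread of x1(1.0) over hidden initial data: {spread:.3f}")
```

Averaging over a distribution on the hidden data is one way such an underlying deterministic fine theory manifests as stochasticity in the coarse description.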

While examples in nature suggest that multiscale modeling should be possible, it is equally clear that much remains to be done even to delineate the scope of the question. Moreover, the mathematics involved is a subtle interplay of ideas from dynamical systems theory, differential geometry, and the theory of nonlinear PDE and homogenization. It can only be carried out by the best talent in these fields, with practitioners such as engineers providing the physical intuition and context, transforming rigorous mathematics and proofs into workable algorithms, and bringing the theoretical advances back to engineering applications. Even within the realm of applied mathematics, issues such as the emergence of memory variables and stochasticity in the coarse response arise in what appear to be completely different approaches to the problem: PDE homogenization on the one hand and the Mori-Zwanzig projection-operator technique on the other. A great hope is to understand the relationships between the ideas involved and possibly combine them into a larger whole.

- In the context of autonomous systems of ordinary differential equations (representative of fine-scale material behavior, e.g., molecular dynamics), is it possible to find a systematic way to define 'slow' variables - even when the system does not come in an a priori scale-separated form - and practically computable *autonomous* dynamics (i.e., meso/macroscale models) for their evolution?
- In the same context, is there a systematic way to approximate the slow dynamics of a large ODE system in terms of the interaction of the slow dynamics of suitably defined parts of it?
- What are the prospects for averaging time-dependent PDE that represent transport phenomena through wave propagation, and is there hope to develop practical computational tools for such problems?
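As a minimal illustration of the flavor of the first question, consider a scalar ODE with a rapidly oscillating coefficient (an illustrative right-hand side chosen here, not one from the text). Averaging the fast oscillation yields an autonomous slow model whose solution tracks the full fine-scale dynamics:

```python
# Illustrative fast-slow ODE (an assumed right-hand side, not from the
# text): dx/dt = -x * (1 + cos(t/eps)). Averaging the rapidly oscillating
# coefficient over the fast time scale leaves the autonomous slow model
# dx/dt = -x, whose solution is x(t) = exp(-t).
import math

def full_dynamics(eps, t_end=1.0, dt=1e-6):
    # Forward-Euler integration of the fine-scale equation.
    x, t = 1.0, 0.0
    while t < t_end:
        x += dt * (-x * (1.0 + math.cos(t / eps)))
        t += dt
    return x

x_full = full_dynamics(eps=1e-3)
x_avg = math.exp(-1.0)   # exact solution of the averaged model at t = 1
print(f"fine-scale: {x_full:.4f}   averaged model: {x_avg:.4f}")
```

Here the scale separation is put in by hand through eps; the open questions above concern systems where no such separation is given a priori and the slow variables themselves must be discovered.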