Originally published July 1998
The development of new materials can be a resource-intensive process. Ideas are tested experimentally in the lab and refined before being tested again. This iterative process leads to better products, but can require significant time and labor, as well as cost.
In an effort to cut the demand for such resources, many companies are turning to modeling and simulation to help short-circuit the research process. Although the terms modeling and simulation are often used interchangeably, there is a subtle difference in meaning. Simulation usually refers to a case in which little or no lab experimentation is required; the user inputs parameters into a computer program that mimics, or simulates, what happens in a real-life situation. The term modeling is more generic, and has been applied, for example, when structure-activity or structure-property relationships are obtained statistically, or when statistics or neural nets are used to develop a cause-and-effect relationship or model between experimental input data and measured experimental properties.
Today's faster and more powerful computers mean that it can be quicker to obtain answers on a computer than in the laboratory, so that more ideas can be tested and refined before experimentation is needed. This means that valuable lab time can be focused on the most promising ideas. Increasingly, too, modeling and simulation can provide insights that are difficult, perhaps even impossible, to arrive at experimentally. In this way, modeling and simulation are beginning to play a role that is fully complementary to that of experimentation.
Of course, it is a prerequisite that the models be validated, that is, shown to reproduce experimental results. Assuming that this has been accomplished, the benefits of modeling have been identified as the ability to:
- Develop products more quickly, through a focus on the best ideas of researchers.
- Minimize wasted effort by redirecting research projects earlier using the increased understanding gained through modeling.
- Check out processes before expensive test rigs are built or costly chemicals purchased.
- Enhance the creativity of researchers, who become more willing to test ideas at the periphery of their experience and who begin to think of their problems in a new way.
- Organize information and create an "information infrastructure" relating to specific research problems, thereby saving time and effort that would otherwise go toward reproducing results that are already available.
- Create a high-tech image with clients, which can be useful in positioning products in the marketplace.
These benefits of modeling can relate both to a company's core product and to the establishment of that product as a brand of choice.
Modeling and simulation have long been used in the field of drug design, where they help to determine the best active molecules for synthesis and testing.1 In the molecular design area, the connection between molecular structure and activity is a direct one, since the properties of the molecule determine its activity.
For materials, on the other hand, the connection is often more tenuous. Molecular properties may determine material behavior, but it is often more crucial to understand how molecules are organized on a larger scale, and how this organization affects the overall behavior. Therefore, an awareness of the macro and, more recently, the "meso" scale is important in understanding material properties. On the macro level, for example, computational fluid dynamics (CFD) can be an important tool for looking at extrusion of polymers and their flow within dies. Calculations on the meso scale are a newer development, and will be covered in more detail in this article as they relate to commonly used medical plastics.
At the molecular level, it is possible for researchers to look at the detailed electronic structure of molecules in order to determine properties such as dipole moment, charge distribution, color, and reactivity. Because such methods are time-consuming, however, they are generally restricted to molecules with relatively few heavy atoms.
It is also possible to use a classical model, so-called "molecular mechanics," to describe the molecule as a sophisticated collection of balls and springs, with interatomic forces, angle-bending forces, torsional forces, and so on. These forces are usually obtained by fitting to experimental data, and dynamics can be allowed for by giving the atoms an initial impetus and then using Newton's equations to see how they will move.
For the meso scale, the same techniques can, in principle, be used. However, this requires examining a very large number of atoms, which becomes too demanding of computer resources. An additional problem arises in that one "snapshot" of the system is unlikely to represent reality: because many configurations of the atoms within a material will be similar in energy, it is usually more important to look at a statistical average.
For this reason, different types of models are often used at the mesoscopic level. Among these are the so-called Monte Carlo methods, which select sites at random and therefore reflect the statistical randomness in real processes. Thus, Monte Carlo methods are often classified among the simulation techniques. Two of the examples discussed below rely on Monte Carlo methods. For the example of polyurethane polymerization, monomers are selected at random for reaction. In the case of polyethylene deformation and fracture, bonds are selected at random within an entangled network, and examined to see whether they will break when a certain strain is applied.
These algorithm-based simulation methods are not the only ones used successfully in materials modeling. Pattern-recognition techniques, such as those provided by neural nets, have been used to develop cause-and-effect models, searching for relationships within data and using the models for "what if" predictions and (together with optimization techniques like genetic algorithms) for producing the best properties.
This article discusses three specific examples of computer-based modeling: the sequence distribution of monomers in polyurethanes, which impacts mechanical properties; the effect of entanglement spacings on the processing of polyethylene; and the use of neural nets to spot cause-and-effect relationships, which enable the user to optimize material properties.
Polyurethanes are formed by reacting polyols with isocyanates. They have been used in numerous medical and surgical devices, where they offer comparatively good performance as blood-contacting and tissue-implantable materials, at least for short-term use. Polyurethanes are versatile materials, and can be produced as cross-linked systems (for rigid and flexible foams, for example) or, when difunctional monomers are used, as linear chains that have elastomeric properties. For biomedical applications, the polyol is typically a polyether of molecular weight between 600 and 2500. Aromatic di-isocyanates are often used to impart rigidity to the material.
Elastomeric polyurethanes can be either thermoplastic polyurethanes (TPUs) or cast polyurethanes (CPUs). Except for differences in preparation, these can have similar properties. In both cases, they are formed from difunctional polyethers and difunctional isocyanates, polymerized into linear polymer chains. The resulting material can be processed by heating, and therefore behaves as a thermoplastic. TPUs and CPUs owe their properties at physiologically realistic temperatures to the formation of domains within the polymer system, caused by phase separation of "hard blocks" and "soft blocks." Hard blocks in one chain will interact by hydrogen bonding with hard blocks in another chain, forming semicrystalline regions that act as mechanical cross-links and contribute to the tensile strength. In order to develop this phase segregation, the hard blocks must be of a reasonable length. The amorphous soft blocks contribute to the elasticity of the material. To predict whether phase separation of hard blocks and soft blocks will occur, it is useful to examine the sequence of monomers in the polymer chains to see if hard-block sequences of sufficient length will be formed. Employing a simulation permits researchers to explore, for instance, the effect of changing the mole or weight ratios of the materials, or to determine how relative material reactivities are affected by catalysts, without the need to carry out laboratory-based experimentation and characterization.
As an example, consider the case when pure MDI (methylene diphenyl di-isocyanate) is reacted with 1,4-butanediol and a polyether diol of molecular weight 2000. It is known that hard blocks are formed when the number of MDI-butanediol links reaches a sequence length of 3 or 4 pairs. With fewer pairs than this, the molecules do not develop regions that are sufficiently rigid to "lock in" to adjacent chains. Therefore, predicting the sequence distributions and sequence lengths becomes an important factor in understanding what effects will be induced by changes in chemistry.
To investigate this, we can set up a "virtual reaction pot" that contains both the MDI and a mixture of polyols. Assuming that all hydroxyl groups are equally reactive, a Monte Carlo simulation can be carried out to mimic what will happen in the polymerization of a real system.2 The basis of the Monte Carlo procedure is the selection of two monomers, followed by an assessment of whether these monomers can (and will) react. Their ability to react depends on their chemical nature. For example, a hydroxyl can react with an isocyanate, but experimentally it is known that two hydroxyls will not react in polyurethane systems, so this "reaction rule" is excluded. In the simulation, if two molecules react, a record is kept of their connectivity, and the reacted product is put back into the reaction pot for subsequent selection. If the molecules do not react, then they are also put back into the pot. In this way, a detailed picture of the chemical architecture of all the polymer chains can be built up, and a sequence analysis lets the user look at the architecture with a level of detail that can only be accomplished experimentally via destructive testing.
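A minimal sketch of this select-test-react loop is shown below. The data structures and function names are illustrative only, and the pot is a toy mixture; the actual DryAdd package differs in detail.

```python
import random

# Sketch of the "virtual reaction pot" described above. Each molecule is a
# tuple (monomer sequence, left end group, right end group), where
# A = butanediol, B = di-isocyanate, C = polyether diol.

def polymerize(pot, steps, seed=0):
    """Repeatedly select two molecules at random and apply the reaction rule:
    a hydroxyl (OH) end reacts with an isocyanate (NCO) end; OH + OH does not."""
    rng = random.Random(seed)
    pot = list(pot)
    for _ in range(steps):
        if len(pot) < 2:
            break
        i, j = rng.sample(range(len(pot)), 2)
        seq1, l1, r1 = pot[i]
        seq2, l2, r2 = pot[j]
        if {r1, l2} == {"OH", "NCO"}:          # reaction rule satisfied
            joined = (seq1 + seq2, l1, r2)     # record the connectivity
            for k in sorted((i, j), reverse=True):
                pot.pop(k)
            pot.append(joined)                 # reacted product returns to the pot
        # if no reaction occurs, both molecules simply remain in the pot
    return pot

# illustrative scale: 3 butanediol, 3 MDI, 2 polyether diol molecules
start = [("A", "OH", "OH")] * 3 + [("B", "NCO", "NCO")] * 3 + [("C", "OH", "OH")] * 2
chains = polymerize(start, steps=500)
```

Because every joining event is recorded, the final pot contains the full architecture of every chain, ready for the sequence analysis described below.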
By selecting different mole ratios of butanediol to polyether diol, different formulations can be investigated quickly. For a ratio of 2 moles of butanediol and 3 moles of polyether diol, polydispersity is approximately 1.8. Number-averaged and weight-averaged molecular weights (which can be compared with GPC results) are predicted to be 16,000 and 30,000, respectively. Changing to 3 moles of butanediol and 2 of polyether diol gives the same polydispersity, but lower molecular weights, corresponding to the higher proportion of the low-molecular-weight butanediol. Molecular weight will be reflected in the rheological properties. However, the more important differences occur in looking at the sequence distribution.
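The quoted averages follow from the standard definitions, which can be applied directly to any simulated chain population. A minimal sketch, using a hypothetical two-chain population for illustration:

```python
def molecular_weight_averages(population):
    """population: list of (molecular_weight, count) pairs for the chains.
    Mn = sum(N*M)/sum(N); Mw = sum(N*M^2)/sum(N*M); polydispersity = Mw/Mn."""
    n = sum(count for mw, count in population)
    nm = sum(count * mw for mw, count in population)
    nm2 = sum(count * mw * mw for mw, count in population)
    mn = nm / n
    mw_avg = nm2 / nm
    return mn, mw_avg, mw_avg / mn

# hypothetical population: one chain of MW 1000, one of MW 3000
mn, mw, pd = molecular_weight_averages([(1000, 1), (3000, 1)])
```

With the values quoted above, the same definitions give Mw/Mn = 30,000/16,000 = 1.875, consistent with the stated polydispersity of approximately 1.8.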
Table I summarizes the results of a sequence analysis for the case in which 5 moles of diol are reacted with 4.5 moles of di-isocyanate, but the proportions of short-chain and long-chain diols are varied. The search has been carried out for sequences CABABABAC, CABABABABAC, and CABABABABABAC, where A = butanediol, B = isocyanate, and C is the polyether diol. Thus, the analysis searches over sequences of 3, 4, and 5 repeat units, respectively. Clearly, there is a significant increase in the hard-block sequences as the amount of butanediol is increased. Perhaps more interestingly, as the amount of butanediol is increased, the sequences with three AB repeats actually fall off in number as the longer sequences are formed.
[Table I: hard-block sequence counts for sequence lengths 3, 4, and 5]
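The sequence analysis itself amounts to counting substring matches over each simulated chain. A sketch, reusing the A/B/C labels defined above (the function name is illustrative):

```python
def count_hard_blocks(chain, n):
    """Count (possibly overlapping) occurrences of the hard-block pattern
    C(AB)^n AC, where A = butanediol, B = di-isocyanate, C = polyether diol."""
    pattern = "C" + "AB" * n + "AC"
    count, start = 0, 0
    while True:
        idx = chain.find(pattern, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1          # advance by one so overlapping matches count
```

For n = 3 the pattern is "CABABABAC", corresponding to the sequence-length-3 search described above.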
Computer simulation can also be used to investigate the case in which one type of hydroxyl group is significantly more reactive than another. For example, secondary hydroxyl groups are generally found to be only 10% as reactive as primary hydroxyl groups.3 Therefore, secondary hydroxyls will react only when the primary hydroxyls are becoming scarce, a fact that has a significant effect on the polymer chain architecture and, consequently, on observed properties. Table II repeats the study carried out in Table I, but assumes that there are secondary hydroxyl groups on the polyether diol. In this case, the "blockiness" of the polymer is increased significantly.
[Table II: hard-block sequence counts for sequence lengths 3, 4, and 5, with secondary hydroxyls on the polyether diol]
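One simple way to fold unequal reactivities into such a Monte Carlo step is to accept a candidate reaction with a probability proportional to the relative rate constant. A sketch with illustrative numbers:

```python
import random

# Illustrative relative rate constants: secondary hydroxyls are taken to be
# about 10% as reactive as primary hydroxyls, as described above.
REACTIVITY = {"primary": 1.0, "secondary": 0.1}

def reacts(hydroxyl_type, rng):
    """Accept a candidate OH + NCO reaction with probability proportional
    to the hydroxyl's relative reactivity."""
    return rng.random() < REACTIVITY[hydroxyl_type]

rng = random.Random(42)
trials = 10_000
secondary_fraction = sum(reacts("secondary", rng) for _ in range(trials)) / trials
```

Early in the simulated polymerization, secondary hydroxyls are therefore rarely selected for reaction, and they react in numbers only once the primary hydroxyls are depleted, which is what produces the increased blockiness noted above.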
These two examples show how hard-block sequence length, and hence the mechanical properties of the TPUs, can be manipulated through different ratios of short-chain and flexible diols, and also how choosing diols of different reactivity can have very significant effects. For such an analysis, computer simulation offers two advantages over conventional experimentation. First, it allows ideas to be explored quickly: each of the simulations takes only 1 or 2 minutes on a modern PC. Second, the simulation gives information about the chemical architecture that would be very hard to determine experimentally, and would probably require time-consuming and expensive destruction of the polymer chain.
When polymer fibers are drawn, they can undergo a number of yield processes. Simple fracture is the most obvious, occurring when the fiber is unable to support any further load. Phenomena such as necking and micronecking may also be observed.
The behavior of drawn fiber depends on a number of variables, including the draw rate, draw ratio, temperature, molecular weight (and its distribution), and entanglement spacing. The interchain and intrachain interactions can also be important. Measuring certain of these variables is a straightforward process, but determining the effects of others, such as entanglement spacing, poses a bigger challenge.
Methods for simulating amorphous polymers as a regular array of entanglements have been developed by Termonia.4 Chains of monodispersed polymer, of a defined molecular weight, are put down at random on a lattice. Bonds between the different chains, and within a single chain, can be given bond strengths that depend on the chemistry of the system in question. Most of the published calculations concentrate on polyethylene, although some work has been undertaken for poly(methyl methacrylate).
In Termonia's model, a small deformation is applied to the system. Using a Monte Carlo statistical process, each bond is visited in turn, to see if its energy exceeds a specific threshold. If it does, it is deemed to "break." Once all the bonds have been examined, the elongation is incremented, and the process repeated. Bonds that break are not allowed to form again. Entanglements between chains are represented as friction points, so that chains can slip past each other with an associated energy penalty.
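The overall loop of such a simulation can be sketched as follows. The bond-energy expression here is a deliberately crude placeholder, not Termonia's actual potential, and chain slippage at entanglement friction points is omitted for brevity.

```python
import random

def draw_simulation(n_bonds, strain_step, n_steps, threshold, seed=1):
    """Toy Monte Carlo draw loop: at each elongation increment, every surviving
    bond is visited and 'breaks' if its energy exceeds the threshold.
    Broken bonds are never allowed to re-form."""
    rng = random.Random(seed)
    intact = [True] * n_bonds
    strain = 0.0
    history = []                      # number of intact bonds after each step
    for _ in range(n_steps):
        strain += strain_step         # increment the elongation
        for i in range(n_bonds):
            if not intact[i]:
                continue
            # placeholder energy: rises with strain, plus a small thermal term
            energy = strain + 0.1 * rng.random()
            if energy > threshold:
                intact[i] = False
        history.append(sum(intact))
    return history

surviving = draw_simulation(n_bonds=100, strain_step=0.05, n_steps=30, threshold=1.0)
```

Even in this toy form, the essential behavior emerges: the bond count decays monotonically with elongation, and the strain at which it collapses depends on the threshold, which in the real model encodes the chemistry of the system.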
These simulations of polymer draw and fracture are computer intensive, and may take more than a day to run on a Unix workstation. For parameters like draw rate, the results are entirely as expected, and might in fact be obtained more quickly through experimentation: the faster the draw rate, the more likely the system is to fracture rather than stretch.
Figure 1. Stress-strain curves for monodispersed polyethylene of molecular weight 475,000 at a temperature of 348K. The top curve shows entanglement spacing of 1900, in which fracture occurs; the bottom curve shows an entanglement spacing of 3800.
For other parameters, the simulation offers insights that would be hard to obtain in any other way. For example, changing the entanglement spacing affects the behavior of drawn polyethylene dramatically, as illustrated in Figure 1. For systems in which the entanglement spacing is relatively low (that is, highly entangled systems), fracture is inevitable; for larger entanglement spacings, the material is much more likely to pull out smoothly, perhaps showing micronecking. This is consistent with experimental observations that gel-spun polyethylene (e.g., Dyneema from DSM Performance Polymers, Sittard, The Netherlands), which has high entanglement spacings, demonstrates good mechanical properties. Simulation offers the ability to determine which factors are important for controlling the mechanical properties of polymer fibers. The process can also factor out different effects through a series of "what if" simulations in which, unlike in the lab, just one variable is changed.
The final example of modeling presented in this article does not rely on simulation to mimic a real system. Instead, it uses neural-net technology to find cause-and-effect relationships within experimental data, developing a model that can be used to determine optimum properties from a known formulation. Neural nets are well-established pattern-recognition techniques that can "learn" from data presented to them. They have had significant application in product formulation and in process optimization.5,6
Typically, for formulation examples, a neural-net architecture with a single hidden layer (illustrated in Figure 2) is adequate. The number of nodes in a hidden layer can be varied, depending on how many inputs there are in the problem and on the number of experiments that have been carried out. A number of excellent textbooks on neural nets are available.7,8
Figure 2. Schematic diagram of single-hidden-layer neural-net architecture.
Unlike the previous examples, which involved simulation, this type of modeling is specifically data driven. Experimental data are collected and presented to the neural net in the form of inputs (which can be ingredients and/or process variables) and outputs (which can be any measured property). The neural net "learns" from the data, so that it can predict what effect changes in the formulation would have. The model developed uses one or more "hidden layers": nodes that are simply used as connections between the inputs and outputs, and that are weighted according to the strength of the connection. These models, although they cannot be expressed mathematically in a simple manner, are exceptionally quick in operation and very effective in modeling nonlinear relationships.
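The forward pass of a single-hidden-layer net is easy to sketch. The weights below are hand-picked to reproduce the classic XOR relationship, purely to show that a hidden layer can capture nonlinear interactions that no linear model can; in a formulation application the weights would instead be fitted to the experimental data during training.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """Single hidden layer: each hidden node is a weighted sum of the inputs
    passed through a sigmoid; the output is a weighted sum of the hidden nodes."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# hand-picked weights implementing XOR, a simple nonlinear relationship
W_H, B_H = [[20, 20], [-20, -20]], [-10, 30]
W_O, B_O = [20, 20], -30

predictions = [forward([a, b], W_H, B_H, W_O, B_O)
               for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

The net outputs values near 1 only when exactly one input is on, a mapping that no single weighted sum of the inputs could produce.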
The field of adhesive formulation, which has significant impact in the medical plastics market, is one area in which such models have been successfully applied. Properties of interest include peel strength, shear strength, resistance to moist environments, ease of application, and ability to adhere to different surfaces. The ingredients often include adhesion promoters, which are added to form stable bonds to different substrates; getting the right amount of these can be a challenge, especially when many other design variables can be altered.
Using either specially collected or historical data, the neural net searches for relationships that connect changes in the formulation to observed differences in measured properties. The model developed by the neural net can be used in a "what if" mode to explore the effect of making changes in the formulation. Even more powerfully, however, it can be used to find the optimum properties, using sophisticated search algorithms to hunt for the global minimum in the design space. The optimum depends on the particular problem being addressed: for example, one might wish to maximize the shear strength while minimizing the amount of adhesion promoter required. In other words, the optimum must be defined in terms of the relative importance of each property, and the desired value that each property should take. The search may be constrained, as when, for example, specific restrictions are put on the input conditions. With this information, a mathematical representation, the "objective function," can be set up.
Once the model has been developed, a number of trial solutions are generated. A criterion of fitness is determined, taking into account how well each solution matches the objectives set for the search. Each of the trial solutions is assessed for its degree of fitness, and the more fit solutions are given more "children" in the next generations of trial solutions. In this way, the global optimum can be determined reliably in a complex, multidimensional design space.
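The generate-assess-breed cycle described above can be sketched with a toy objective. The objective function, bounds, and algorithm parameters here are all illustrative, and are not those of any commercial package.

```python
import random

def genetic_optimize(objective, bounds, pop_size=30, generations=60, seed=2):
    """Toy genetic algorithm over a 2-D design space: fitter trial solutions
    (lower objective values) receive more 'children' in each new generation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                 # assess fitness
        parents = pop[: pop_size // 2]          # fitter half survives
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # crossover
            k = rng.randrange(len(child))
            child[k] += rng.gauss(0, 0.1 * (hi - lo))        # mutation
            child = [min(hi, max(lo, v)) for v in child]     # enforce constraints
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

# toy objective: maximize a shear-strength proxy while penalizing the amount
# of (hypothetical) adhesion promoter; lower values are better
def objective(x):
    promoter, cure = x
    strength_loss = (promoter - 0.3) ** 2 + (cure - 0.7) ** 2   # peak at (0.3, 0.7)
    return strength_loss + 0.1 * promoter

best = genetic_optimize(objective, bounds=(0.0, 1.0))
```

Because the fitter half of each generation survives unchanged, the best solution found so far is never lost, and the population converges reliably toward the constrained optimum.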
The neural-net technology required for model development has been integrated with the genetic-algorithm optimization techniques into a commercially available software package, which adds significantly to the ease of use.9
With rapid advances in computer capabilities and widespread availability of commercial software, it is now feasible to perform computer experiments on materials so as to complement procedures carried out in the laboratory. Typically, computer simulation and modeling allow for a greater understanding of a system, including the ability to explore variables independently and to determine which are the most important in establishing specific properties. Information difficult to obtain experimentally can often be gleaned from these "model solutions." In many cases, modeling is now able to play a full role in complementing experimentation. These techniques are increasingly being used in the design of novel materials and processes, and have the potential to significantly impact the design of new medical plastics.
1. Richards WG, Computer-Aided Molecular Design, London, IBC Technical Services, 1989.
2. These calculations were carried out with DryAdd, a computer simulation package copyright ICI plc and The Glidden Co. (1988–1993), and Oxford Materials (1994–1998).
3. Woods G, ICI Polyurethanes Book, Chichester, UK, Wiley, 1987.
4. Termonia Y, "Molecular Models for Polymer Deformation and Failure," chap 6 in Computer Simulations of Polymers, Colbourn EA (ed), Harlow, UK, Longman Scientific and Technical, 1994.
5. Gill T, Shutt J, "Optimising Product Formulations Using Neural Networks," Scientific Computing and Automation, September 1992.
6. Rowe RC, Roberts RJ, Intelligent Software for Product Formulation, London, Taylor and Francis, 1989.
7. Pao Y-H, Adaptive Pattern Recognition and Neural Networks, Reading, MA, Addison-Wesley, 1989.
8. Wasserman PD, Neural Computing: Theory and Practice, New York City, Van Nostrand Reinhold, 1989.
9. CAD/Chem Custom Formulation System, developed by AI Ware, Cleveland.
Elizabeth A. Colbourn (PhD) is managing director of Oxford Materials Ltd., based in Tattenhall, Cheshire, UK. Trained in theoretical chemistry at Queen's University in Canada and Oxford University, she worked for 16 years as a computational chemist with ICI plc in the UK, where she established and led the materials-modeling team. She is the author of numerous publications in the open scientific literature.