In this study, we review the development and application of multi-physics and multi-scale coupling in the construction of whole-heart physiological models. Through an examination of recent computational modelling developments, we analyse the significance of coupling mechanisms for the increased understanding of cardiac function in the areas of excitation–contraction, coronary blood flow and ventricular fluid mechanical coupling. Within these physiological domains, we demonstrate and discuss the importance of model parametrization, imaging-based model anatomy and computational implementation.
Inherent to the physiome and virtual physiological human (VPH) vision, and characteristic of the constituent research programmes of the participating scientific community, is a shared interest in the concept of multi-scale and multi-physics coupling of previously distinct mathematical models. The VPH style of modelling, using explicit representations of the biophysical mechanisms within model equations, at each level in a modelling hierarchy, allows the coupling of equations between models to be cast in terms of the existing state variables of each model. This in turn facilitates the rapid development of models that link across physiological function and/or scales. When effectively applied, the resulting coupled model enables a valid analysis of an increased range of function, incorporating an enlarged set of the complex cause and effect relationships that exist in many physiological systems.
Currently, computational models of the heart collectively represent an advanced framework for capturing the integrated and coupled function of a single organ system. As such, the field offers a tangible example with which to present and discuss some of the challenges and opportunities associated with coupling across scales—from subcellular to tissue—and functions—from electrophysiology to organ-scale mechanics. It is the large number of available component cardiac models and their history of development which provide many of the opportunities and challenges of quantitatively capturing coupled function.
The history of the cardiac modelling field is, to a significant extent, defined by the development of cellular electrophysiology models in the last 40 years (Nickerson & Hunter 2006). This class of models has provided a successful paradigm for integrating individual datasets into a common framework and for interpreting the ensemble behaviour of electrical action potential recordings in altered ionic conditions. Applying the same methodology, cellular models of myocardial contraction have subsequently been developed (Niederer et al. 2006). This has permitted transient Ca2+-induced excitation–contraction to be characterized by coupling electrophysiological and mechanical models at the cellular level (Niederer & Smith 2007).
Advances in high-performance computing have, more recently, facilitated the development of finite-element-based models of the heart, which accurately represent both cardiac anatomy and microstructure to enable a multi-scale approach to simulating whole-heart function. These mathematical descriptions serve as spatial frameworks for embedding functional cellular models of electrical activation and tension generation. Through this mathematical embedding, the cellular model drives the spread of electrical waves and cardiac contraction within simulations of whole-organ electromechanics. In parallel with the electromechanical models, coupled fluid mechanical representations of chamber and coronary blood flow have been developed. These frameworks seek to capture the influence of myocardial contraction, using linear and nonlinear mechanics theory, on fluid flow governed by the Navier–Stokes equations (Lee & Smith 2008; Nordsletten et al. submitted a,b). Coupled fluid mechanical models have been specifically applied within the chambers to quantify the two-way fluid–structure interaction, between ventricular fluid dynamics and contraction, which is fundamental for understanding the cardiac pump function. Within the coronary vasculature, the interactions between contraction and the dynamics of perfusion are equally important to understand the energy supply–demand relationship of the heart.
Although offering an array of simulation opportunities, the coupling of models, such as those outlined above, introduces a number of new challenges into the model development process. Among these challenges are the relationships between model parameters and the experimental and/or clinical data from which they are derived. The integrative nature of coupled models, and indeed the physiological systems themselves, means that the introduction of inappropriate new parameters, or model components, has the potential to compromise all elements of the ensemble framework (Niederer et al. in press). Furthermore, where a model parametrization is valid for only a specific range of function, the replacement of boundary conditions with the output of another model is potentially problematic.
A number of projects are currently underway to provide tools that will enable researchers to address these issues. In particular, these efforts include the development of model exchange formats to unambiguously define models using markup languages (Beard et al. 2009). These formats have been used to support the coupling of individual models (Terkildsen et al. 2008) and we are now working towards the linking of model parameters to experimental sources within established databases (Ribba et al. 2006). However, as we will demonstrate in this study, new methods for the analysis of the models themselves are also now required to support the coupling of models in a robust and transparent way.
Of further concern, when linking model parameters to systems of interest, is the methodology employed for processing the increasing quantity, and quality, of data available from advances across a range of in vivo and in vitro imaging technologies. Specifically, the challenges lie in integrating multimodal data into a common reference frame, and developing robust validation methods in the context of wide physiological variability within populations. Inside the clinic, such integration presents the opportunity to develop truly personalized models with which to optimize both treatment delivery and patient selection within groups. In basic science, as this study will aim to illustrate, model customization has the potential to provide the data from which fundamental structure–function relationships can be deduced, analysed and ultimately validated.
From a computational perspective, a separate set of challenges emerges, related to the development and implementation of computational solution methods that preserve accuracy and robustness in all components. A major computational challenge stems from the range of spatial and temporal scales present in physiological systems. The diverse behaviours of the heart and blood, coupled with these disparities in scales, directly impact the model's representation of the underlying physics—often requiring varied discretizations or numerical techniques. Computational techniques to address these issues are equally varied, ranging from weak unidirectional coupling (Kerckhoffs et al. 2003b), where the output of one model is used to drive another, to strong coupling where different equation systems are solved simultaneously and in some cases are assembled into a single monolithic set (Nordsletten et al. submitted a,b). The computational challenge with the development of coupling methods, as we will demonstrate, is to balance the transfer of information between components with computational tractability.
It is the coupling of these four components—deformation, excitation, ventricular fluid flow and coronary haemodynamics—as a progressed set of exemplars for multi-physics and multi-scale coupling which we will focus on in this paper. Our goal is to briefly review the current state of the art, outline the coupling methodology and then present results in a relevant context that both demonstrates the potential and highlights the particular challenges, introduced above, in using coupled models for providing insights into cardiac function. Specifically, this will include (i) issues of model inheritance and capacity for in silico imaging using models of cardiac electromechanics, (ii) the challenge of image analysis and potential to elucidate structure–function relationships by combining imaging and models of coronary haemodynamics and (iii) the computational issues of coupling fluid and solid solution methods to simulate ventricular fluid dynamics, and the potential of this method to analyse energy dissipation in the cardiac chambers. Finally, we will discuss the implications of coupling additional functional models in the heart, and, where relevant, other integrated models of organ systems.
2. Electromechanical coupling
The synchronous and efficient contraction of the heart is achieved via the coordinated spread of electrical activation across the myocardium. The electrical and mechanical systems are intimately linked by the anisotropic microstructure of the heart that regulates the spread of the electrical wave and the resulting deformation. Each system also regulates the other via a complex web of feedback mechanisms, with the extent of deformation dependent on the frequency of electrical activation and the electrical properties of the cell dependent on deformation.
Upon electrical activation, a myocyte releases Ca2+ from intracellular stores. The Ca2+ then binds to troponin C, which acts as a switch activating the generation of tension in the heart. Figure 1 shows the transduction of the Ca2+-binding signal (white arrows) and multiple feedback mechanisms (grey arrows) involved in tension development. As the muscle generates tension, it also deforms, altering its capacity to generate tension (arrows labelled (a) in figure 1). The tension developed by the myocyte alters the buffering of Ca2+ by troponin C (arrows labelled (b) in figure 1), thus altering both the myocyte Ca2+ dynamics and electrophysiology. The significance of each step in this multi-looped feedback system is beyond analysis by intuition alone, yet it is widely acknowledged that the development of a quantitative understanding of electromechanical coupling is fundamental for predicting cardiac function. This need to quantitatively represent the important component functions of electromechanical coupling has motivated recent modelling efforts.
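The tension-dependent Ca2+ buffering loop described above can be sketched as a single ordinary differential equation for the fraction of Ca2+-bound troponin C, in which developed tension slows unbinding (feedback (b) in figure 1). All rate constants and transient shapes below are illustrative choices, not parameters of any published model.

```python
import numpy as np

# Sketch of Ca2+-troponin C binding with tension-dependent unbinding,
# mimicking feedback (b) in figure 1: developed tension slows the release
# of Ca2+ from troponin C. All rates and shapes are illustrative.
K_ON, K_OFF0 = 100.0, 200.0     # binding/unbinding rates (1/s)
GAMMA = 2.0                     # strength of the tension feedback

def ca_drive(t, t_peak=0.03):
    """Normalized intracellular Ca2+ transient (dimensionless)."""
    return 0.1 + (t / t_peak) * np.exp(1.0 - t / t_peak)

def simulate(tension_feedback=True, dt=1e-4, t_end=0.3):
    """Forward-Euler integration of the bound-troponin fraction."""
    n = int(t_end / dt)
    trpn = np.zeros(n)          # fraction of Ca2+-bound troponin C
    for i in range(1, n):
        tension = trpn[i - 1]   # crude proxy: tension ~ bound fraction
        k_off = K_OFF0 * (np.exp(-GAMMA * tension) if tension_feedback else 1.0)
        dtrpn = K_ON * ca_drive(i * dt) * (1.0 - trpn[i - 1]) - k_off * trpn[i - 1]
        trpn[i] = trpn[i - 1] + dt * dtrpn
    return trpn

with_fb, without_fb = simulate(True), simulate(False)
# slower unbinding under load keeps more troponin C in the bound state
print(with_fb.max(), without_fb.max())
```

Even in this toy form, closing the feedback loop changes the peak bound fraction appreciably, illustrating why the significance of each step is beyond analysis by intuition alone.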
(a) Review of recent work
The first model of coupled electromechanics (Kaufmann et al. 1974) combined a ventricular action potential model (Krause et al. 1966) with a model of tension generation, implemented by physically wiring together two analogue computers. Similar methods are applied in current electromechanics models, where mechanics models are coupled to electrophysiology models via common variables, such as intracellular Ca2+, Ca2+ bound to troponin C, stretch/strain and tension. Linking multiple models together allows for the investigation of larger and more complex systems, while benefiting from existing knowledge captured in earlier models.
More recently, in silico electromechanics investigations have been performed by Rice et al. (2000), who examined the regulation of contraction for short interstimulus intervals using an electromechanical cell model that combined earlier electrophysiology (Jafri et al. 1998) and mechanics (Rice et al. 1999) models. Similarly, Crampin & Smith (2006) combined two earlier models (Hunter et al. 1998; Faber & Rudy 2000) to isolate the role of individual proton dependencies on tension generation in a cellular model. A recent model by Niederer & Smith (2007) has combined three earlier models (Pandit et al. 2001; Hinch et al. 2004; Niederer et al. 2006) to develop a species- and temperature-consistent model of the rat ventricular myocyte. The model was then used to investigate the feasibility of different feedback mechanisms proposed to regulate the slow force response to stretch.
At the organ scale, detailed cellular electromechanics models have been embedded in mathematical descriptions of the cardiac anatomy to simulate whole-organ electromechanical function. Two methods have been developed for coupling the electrical and mechanical systems at the organ scale: weak and strong coupling (Niederer & Smith 2008). In weak coupling, the electrical activation is solved separately and activation times or a Ca2+ transient for each point in the heart wall are passed to a model of contraction, which is then used to calculate deformation. In these simulations, electrical activation is simulated using Eikonal mapping (Tomlinson et al. 2002) or ionic cell models to provide an initial time for tension generation in a mechanics model. These models have provided insight into the effects of pacing on cardiac synchrony (Kerckhoffs et al. 2003a,b; Usyk & McCulloch 2003), and progressed the coupling of multiple physical systems within the heart (Watanabe et al. 2004).
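In outline, weak coupling amounts to computing an activation map first and then using the local activation time to start a prescribed tension transient at each material point, with no feedback from strain to electrophysiology. The following sketch illustrates this on a one-dimensional fibre; conduction velocity, transient shape and all other values are illustrative.

```python
import numpy as np

# Weak (one-way) electromechanical coupling on a 1D fibre.
# Stage 1: an eikonal-style activation map (here trivially distance /
# conduction velocity). Stage 2: the local activation time triggers a
# prescribed active-tension transient. Parameter values are illustrative.
L, N = 0.05, 51                 # fibre length (m), number of nodes
CV = 0.5                        # conduction velocity (m/s)
x = np.linspace(0.0, L, N)
t_act = x / CV                  # activation time at each node (s)

def active_tension(t, t_act, t_peak=0.15, t_max=50.0):
    """Tension transient (kPa) started at the local activation time."""
    s = np.clip(t - t_act, 0.0, None)
    return t_max * (s / t_peak) * np.exp(1.0 - s / t_peak)

t = 0.2                         # sample the fibre at one instant
Ta = active_tension(t, t_act)
# nodes activated earlier are further through their transient
print(Ta[0], Ta[N // 2], Ta[-1])
```

The tension field `Ta` is what a weakly coupled mechanics solve would receive as its input at this instant; in a full model it would drive the finite-element deformation step.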
In strongly coupled models, the electrical activation and mechanics models are solved simultaneously, with the mechanics model feeding back a strain field to alter any stretch-dependent components of the electrophysiology model. These types of frameworks have been used in simulations that have demonstrated the importance of deformation in describing the morphology and timing of the T-wave in the electrocardiogram (Smith et al. 2003), the effect of heterogeneous electrical properties on the distribution of fibre strain in the ventricle (Nickerson et al. 2005) and the role of deformation in arrhythmias (Nash & Panfilov 2004).
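The essence of strong coupling, by contrast, is that tension and deformation are made mutually consistent at each solve. The following toy example, with illustrative stiffness, load and tension values (not any published constitutive law), iterates a length-dependent active tension and a force balance at a single material point to their common fixed point.

```python
# Strong coupling at a single material point: active tension depends on
# stretch (length-dependent activation) and stretch depends on tension
# (equilibrium against a passive stiffness and an external load), so the
# two are iterated to a consistent fixed point. Values are illustrative.
def active_tension(stretch, t_ca=20.0, beta=1.5):
    """Length-dependent active tension (kPa); t_ca is the Ca2+-driven drive."""
    return t_ca * max(0.0, 1.0 + beta * (stretch - 1.0))

def coupled_stretch(load=10.0, k_passive=100.0, tol=1e-10, relax=0.5):
    """Fixed-point iteration on: passive stress + active tension = load."""
    stretch = 1.0
    for _ in range(200):
        # mechanics solve: stretch consistent with the current tension
        new = 1.0 + (load - active_tension(stretch)) / k_passive
        if abs(new - stretch) < tol:
            return new
        stretch += relax * (new - stretch)   # under-relaxation for stability
    raise RuntimeError("fixed point did not converge")

s = coupled_stretch()
residual = 100.0 * (s - 1.0) + active_tension(s) - 10.0
print(s, residual)
```

At convergence the force balance is satisfied with both sub-models agreeing on the stretch, which is precisely the consistency that distinguishes strong from weak coupling.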
(b) Challenge and opportunity: parametrization of models
As demonstrated above, electromechanics models routinely combine the existing detailed models of electrophysiology and contraction (Terkildsen et al. 2008). This model reuse is the strength of the VPH modelling philosophy. However, there are a number of important considerations when combining existing models of cardiac electromechanics. The first challenge comes from the reuse of earlier components in coupled electromechanics models, without refitting the parameters of the ensemble framework. In this situation, the new model inevitably inherits the experimental data dependencies of the reused components. As this process is repeated across multiple generations, these dependencies can become obfuscated, potentially resulting in the introduction of inappropriate data into a model (Smith et al. 2007).
A related difficulty comes from combining two existing models that may not represent the same species or temperature (e.g. models that have combined a model of contraction fitted to rat data recorded at room temperature with an electrophysiology model fitted to guinea-pig data at body temperature or others that have used a guinea-pig cell model in a rabbit heart geometry). Combining models that represent different species and temperatures introduces errors into the model that may be significant, given the degree of interspecies and temperature variability observed experimentally.
The risks of component reuse outlined above remain difficult to quantify, although they can be highlighted by examining the changes in model function when one component is replaced with the same component from another model of the same system. This investigation provides a functional test of the two model components, which are typically represented with a different set of equations, but have the common goal of representing the same system in the same environment. Figure 2 shows a specific example where three distinct models of the human ventricular myocyte (Priebe & Beuckelmann 1998; Iyer et al. 2004; ten Tusscher et al. 2004) are solved using the Na+/Ca2+ exchanger model component from each of the two other models. This example demonstrates that, even when the species and temperature are the same, component reuse can result in significant changes in model function. We suggest that this type of analysis provides a means of determining whether the functions in a given set, or class, of models have converged to the point where building models through inheritance of components is possible.
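The component-swap test can be expressed concretely. The toy models below are deliberately simple linear membrane currents, not the published human myocyte models cited above; the point is the procedure of exchanging one component (here a stand-in for the Na+/Ca2+ exchanger current) between two model variants and comparing the resulting ensemble behaviour.

```python
# Conceptual illustration of the component-swap test: two toy membrane
# models of the same "system" are built from interchangeable current
# components, and one component is exchanged between them. The linear
# currents are purely illustrative, not any published myocyte model.
def make_model(components, cm=1.0):
    """Return dV/dt for a membrane with the given current components."""
    def dvdt(v):
        return -sum(f(v) for f in components.values()) / cm
    return dvdt

def steady_voltage(dvdt, v0=-80.0, dt=0.01, steps=20000):
    """Relax the membrane to its resting potential by forward Euler."""
    v = v0
    for _ in range(steps):
        v += dt * dvdt(v)
    return v

model_a = {"I_K":   lambda v: 0.5 * (v + 90.0),
           "I_Na":  lambda v: 0.02 * (v - 50.0),
           "I_NCX": lambda v: 0.05 * (v + 40.0)}
# a second model of the same cell, differing only in its NCX fit
model_b_ncx = lambda v: 0.15 * (v + 20.0)

v_a = steady_voltage(make_model(model_a))
v_swapped = steady_voltage(make_model(dict(model_a, I_NCX=model_b_ncx)))
# swapping a single component shifts the resting potential of the ensemble
print(v_a, v_swapped)
```

Comparing `v_a` and `v_swapped` is the one-dimensional analogue of comparing the full action potentials in figure 2: if the swap moves the ensemble behaviour substantially, the components have not converged enough for safe inheritance.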
(c) Potential to elucidate structure–function relationships
Whole-organ electromechanics models provide a wealth of information on the complex tension and strain fields present in the heart, as well as providing new information on the distribution of work across the myocardium. This in turn provides the capacity to derive parameters that are not directly observable but play a key mechanistic role in the disease process (for example, tissue stress and measures of pump efficiency) to assist treatment decisions. Towards this goal, detailed strongly coupled electromechanics simulations have been performed in a rat left ventricle model. Using this model, new metrics of regional performance can be extracted from the simulation results. In figure 3a, the regional distribution of work rate direction and magnitude is shown. The methods developed to perform these simulations are now being applied to human model development and applications. Figure 3b demonstrates how a patient-specific geometry can be fitted to cardiac magnetic resonance imaging (MRI) data. The resulting geometry can be used to predict how the heart will deform (figure 3c) and the simulation results can be used to predict measures of deformation that cannot be readily determined from standard imaging modalities.
3. Coronary blood flow
The coronary vasculature stems from the aortic sinus of the heart and forms a distributed network of interfaces between the blood and every myocyte. Structurally, the coronary system is tightly integrated with the myocardial architecture. Indeed, the flow of blood in these vessels is intimately related to the motion of the myocardium and energetic demand. Most of the recent efforts to model this coupled system have focused on two main challenges—to obtain a detailed description of the vessel and myocardial structural relationship that spans over multiple scales, and to achieve an appropriate level of model complexity that captures the essential physical mechanisms governing the haemodynamics.
(a) Review of recent work
Recent advances in medical imaging technology have brought major developments in the structural imaging of coronary vasculature. In particular, a number of purpose-developed and non-clinical imaging modalities have served as strong driving forces behind the three-dimensional visualization of the smallest coronary vessels in whole-heart/large tissue blocks, which was previously unachievable. Although no single modality is currently capable of capturing the complete human coronary vasculature, the resolution and acquisition volume of these devices are constantly improving. Notable examples include micro-computed tomography (μCT), which can image the vasculature of a whole rat organ at approximately 1 μm resolution (using a synchrotron X-ray source; Jorgensen et al. 1998; Plouraboue et al. 2004; Heinzer et al. 2006); a custom-built cryomicrotome device that captures serially sectioned images of a whole human heart at approximately 25 μm resolution (Spaan et al. 2005); and the extended volume confocal imaging device, capable of distinguishing multiple tissue types (Sands et al. 2005).
Although these modalities are applied destructively on tissue samples, combining them with functional imaging studies provides a possible means to deal with the problems of model parametrization and in vivo verification. The recent developments in high-resolution magnetic resonance perfusion imaging (Lee et al. 2004; Nagel 2008; Plein et al. 2008), and automated processing of microsphere-injection experiments using the cryomicrotome, hold potential for the validation of multi-scale models describing coronary structure–function relationship in a locally averaged manner.
The key to simulating coronary blood flow on an extracted model anatomy (structure) is the computationally efficient simulation of perfusion (function). The one-dimensional vessel flow model, based on the Navier–Stokes equations, has been available in the literature for many years and forms the foundation of several haemodynamic network modelling studies. In recent years, with the advent of faster computers, the model has been extended using various numerical techniques, including the finite-element method (FEM; Sherwin et al. 2003; Mynard & Nithiarasu 2008), and applied to extensive vascular networks of the heart (Smith et al. 2002). In other studies, attempts have also been made to incorporate the one-dimensional approach within a hierarchical framework coupling three-, one- and zero-dimensional models, in order to model the multi-scale physics of the whole-body circulation (Formaggia et al. 1999, 2001).
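In the steady, lumped (zero-dimensional) limit, such a network flow problem reduces to a linear system for nodal pressures with Poiseuille conductances. The following sketch shows the basic assembly-and-solve procedure that underlies these network models; the four-segment geometry and boundary pressures are illustrative, not taken from the imaged networks described here.

```python
import numpy as np

# Steady flow on a small vessel network in the lumped limit: Poiseuille
# conductances and a linear system enforcing mass conservation at nodes.
# Geometry and boundary pressures are illustrative.
MU = 3.5e-3                         # blood viscosity (Pa s)

def conductance(radius, length):
    """Poiseuille conductance G such that Q = G * (P_in - P_out)."""
    return np.pi * radius**4 / (8.0 * MU * length)

# segments: (node_in, node_out, radius in m, length in m)
segments = [(0, 1, 50e-6, 1e-3), (1, 2, 40e-6, 1e-3),
            (1, 3, 30e-6, 1e-3), (2, 3, 40e-6, 1e-3)]
n_nodes = 4
A = np.zeros((n_nodes, n_nodes))
b = np.zeros(n_nodes)
for i, j, r, length in segments:    # assemble nodal mass conservation
    g = conductance(r, length)
    A[i, i] += g; A[j, j] += g
    A[i, j] -= g; A[j, i] -= g
for node, value in ((0, 3000.0), (3, 1000.0)):  # Dirichlet pressures (Pa)
    A[node, :] = 0.0
    A[node, node] = 1.0
    b[node] = value
pressure = np.linalg.solve(A, b)
flows = [conductance(r, length) * (pressure[i] - pressure[j])
         for i, j, r, length in segments]
print(pressure, flows)
```

The same assembly pattern scales to the tens of thousands of segments extracted from the imaging studies, with the one-dimensional transient model replacing the static conductances where wave propagation matters.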
(b) Challenge and opportunity: image processing
One area of recent progress addresses the need for computational methods capable of analysing the rich datasets acquired by the various imaging modalities. The devices reviewed above routinely yield datasets in the gigabyte range, precluding the possibility of manual processing. Owing to their novel nature and image characteristics, special attention must be paid to the processing and automation of the data analysis, where standard algorithms are insufficient. We have focused our recent efforts on developing customized automatic segmentation and mesh generation algorithms to address such specific requirements (Nordsletten et al. 2006). Our work on cardiac μCT datasets employed an efficient combination of methods based on iterative two-dimensional sampling and vessel boundary extraction using active contours. Its capabilities were demonstrated through application of the method to the coronary image data of a whole rat heart, consisting of approximately 20 000 vessel segments (Lee et al. 2007). These results were validated against the outputs of higher resolution scans of the same tissue sample, which confirmed the sub-voxel accuracy of the technique (figure 4).
The characterization of the vessel–tissue structural relationship is one of the major challenges involved in modelling the coupled haemodynamic system. Recently, we have successfully applied the extended volume confocal imaging system, and developed tailored data analysis methods, to allow for the comprehensive description of the microstructural coupling between the coronary microvasculature and the laminar structure of the myofibres in transmural blocks of the ventricular wall. This was achieved by combining previous tissue preparation techniques (Young et al. 1998) with vascular casting (Spaan et al. 2005) to allow dual-channel acquisition. The development of custom segmentation algorithms then enabled a quantitative analysis of the degree of vessel–tissue coupling across the coronary vasculature (figure 5).
(c) Potential to elucidate structure–function relationships
Although modelling of flow in vessel networks has become a common practice with the increasing availability of three-dimensional angiograms, limited attention has been paid in the literature to flow in microvascular networks. As direct experimental evidence and modelling studies (Fibich et al. 1993; Westerhof et al. 2006) have shown a significant functional flow–contraction coupling in the small vessels, our recent efforts have focused on modelling the time-dependent flow in the microcirculation. The model was built on the Navier–Stokes FEM framework and extended with red blood cell transport (Lee & Smith 2008) and a rheological model tuned for the complete physiological range of vessel diameters (Pries & Secomb 2005).
Building upon these foundations, we have extended the computational algorithm to incorporate heart contraction with the coronary flow, via projection of myocardial stress and deformation solutions outlined in §2c onto the coronary vascular fluid domain.
This coupled model now enables the quantification of two separate effects on coronary blood flow produced by contraction of the surrounding myocardial tissue. The modified equations account for (i) the external pressure gradient along the vessel from the change in intra-myocardial stress and (ii) the momentum transfer and change in geometry from the motion of the vessel wall, assumed to be tethered to tissue material coordinates. The latter assumption allows for the application of a straightforward arbitrary Lagrangian–Eulerian (ALE) formulation that addresses the fluid domain deformation, where the fluid grid movement is calculated from the tissue deformation. Further assumptions embedded in the coronary vascular flow model are that the velocity profile is axisymmetric (and thus the governing equations can be reduced from three dimensions to one), that flow is fully developed and that the vessel wall is elastic. The boundary conditions were chosen to approximate (in the steady state) the pressure drops reported across the arteriolar, capillary and venous coronary vascular beds (Chilian & DeFily 1991).
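The two effects can be illustrated in the lumped Poiseuille limit of a single segment: effect (i) enters as an additional pressure drop from the intramyocardial stress, and effect (ii) through the strain-modified radius, which rescales the conductance as radius to the fourth power. The full model solves the ALE Navier–Stokes equations; the values below are illustrative only.

```python
import numpy as np

# Sketch of the two contraction effects on a single vessel segment in the
# lumped Poiseuille limit: (i) an extravascular pressure gradient adds to
# the intravascular driving drop; (ii) tissue strain changes the vessel
# radius (wall tethered to the tissue), rescaling conductance ~ r**4.
MU = 3.5e-3                                   # blood viscosity (Pa s)

def flow(p_in, p_out, radius, length, dp_ext=0.0, radial_strain=0.0):
    r = radius * (1.0 + radial_strain)        # wall follows the tissue
    g = np.pi * r**4 / (8.0 * MU * length)
    # external (intramyocardial) pressure gradient reduces the drive
    return g * (p_in - p_out - dp_ext)

q0 = flow(3000.0, 1000.0, 40e-6, 1e-3)                        # baseline
q_p = flow(3000.0, 1000.0, 40e-6, 1e-3, dp_ext=500.0)         # effect (i)
q_s = flow(3000.0, 1000.0, 40e-6, 1e-3, radial_strain=-0.05)  # effect (ii)
# both effects reduce flow; their relative size depends on the geometry
print(q_p / q0, q_s / q0)
```

With these illustrative numbers, a 5% radial compression reduces flow by an amount of the same order as a 500 Pa extravascular pressure drop, consistent with the observation below that the two effects are comparable.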
The model is demonstrated on a network of coronary microvessels segmented from the mid-myocardial wall (figure 6a). Boundary conditions were applied on the top face of the block, increasing the pressure from 1 to 3 kPa over 500 ms; the initial conditions and all other boundary conditions were set at 1 kPa. While the spatial application of pressures and thus the driving gradients are likely to be a simplification in terms of predicting perfusion in the context of the structure–function relationship shown in figure 6, the spatial effect of uniform pressure application is likely to be small. The mechanical conditions external to the vessels were defined by solving the electromechanical model outlined in §2c with fibre angle distribution matching that of the image data, yielding extravascular pressure (figure 6b) and vessel strain (figure 6c) distributions. Figure 6d–f shows the flow rates in vessels with applied external pressure, vessel deformation, and both pressure and deformation, respectively, normalized with respect to the flow rates calculated in the vessel model. This approach uses the detailed structural information, obtained from the imaging work, to ensure that the coupling is achieved in a biophysically accurate manner. These preliminary results indicate that the effect of vessel strain on coronary haemodynamics is comparable with that of extravascular pressure. As indicated in figure 6d–f, the effects on individual vessels are highly dependent on orientation and scale.
The modelling framework described here illustrates a snapshot of the powerful ways in which modelling can contribute towards our understanding of a complex organ system, while offering a means to exploit the rapid advances enjoyed by imaging and visualization technologies. With careful construction and application, such an approach offers the potential to reveal the structure–function relationships underlying cardiac function, and enables the quantitative analyses of the coronary haemodynamics to an extent that would not, otherwise, be possible.
4. Fluid mechanical coupling
The effect of myocardial contraction and relaxation on overall heart function is closely coupled to the dynamic behaviour of blood flow through its chambers. Through the physical coupling of the kinematics and transfer of momentum between the ventricular chambers and myocardial wall of the heart, mechanical energy is transferred to pump blood and supply the body.
While the underlying coupling mechanisms are straightforward, translating this energy exchange to cardiac output, a source of major physiological and clinical interest, is much more ambiguous. Answering questions on the functional behaviour of the heart, such as ‘what role does regional myocardial stiffness play in diastolic filling, pre-stretch and its eventual impact on the myocardial contractile state?’, requires a detailed knowledge of cardiac mechanics. In this sense, coupled mathematical models of fluid/solid mechanics provide an excellent foundation for in silico investigation.
(a) Review of recent work
A number of decoupled models have independently analysed the patterns of blood flow through the heart chambers (Long et al. 2003; Domenichini et al. 2005; Pedrizzetti & Domenichini 2005) or the mechanical action of its tissues (Nielsen et al. 1991; Nash & Hunter 2000; Stevens et al. 2003). Qualitative comparisons of heart flow simulations with velocity-encoded MRI (Saber et al. 2001, 2003; Oertel 2004, 2005) and myocardial strains with modelled tissue contraction and relaxation show the promise of these modelling modalities (Nash 1998; Stevens 2002). However, these separated models of blood and tissue dynamics are limited. Both types of decoupled models are hindered by the need for detailed a priori knowledge of the forces or dynamics of the fluid/solid interface, effectively requiring that the dynamic interplay between fluid and solid be prescribed.
The need to elucidate structure–function relationships between the tissue and blood gave rise to a range of models for studying these coupled systems. The first coupled fluid/solid model of the heart was introduced by McQueen and Peskin, who included the two ventricular chambers along with valve structures (McQueen & Peskin 1989a,b, 2000; Kovacs et al. 2001). Using the immersed boundary technique of Peskin and McQueen, Lemmon & Yoganathan (2000) then assessed the flow behaviour of the left atrium and ventricle. This work was followed by Cheng et al. (2005), who, focusing solely on the left ventricle, incorporated full fluid complexity (through selection of an appropriate Reynolds number) with a more traditionally accepted solid model. Watanabe et al. (2002, 2004), through a series of papers, detailed a model that includes anatomically derived tissue behaviour along with a simple model of electrical activation. However, despite this significant progress, these coupled models have yet to fully incorporate the anatomical detail and solution accuracy of their decoupled tissue and fluid model counterparts.
(b) Challenge and opportunity: computational techniques
While the physical principles underpinning models of blood and myocardial mechanics are equivalent, their dynamics are markedly different. For example, while a myocyte in the heart may move a few centimetres during the cardiac cycle, erythrocytes within the ventricular chambers track complex paths into, and back out of, the heart.
Understanding how these disparate systems interact is not only the major source of interest, but also one of the major computational challenges. The difference in behaviour is reflected in the numerical schemes tailored for each problem. Biological tissue is traditionally simulated using coarse-grid, high-order, curvilinear finite elements (Nash & Hunter 2000; McCulloch 2004), while blood flow is simulated using fine-grid, low-order, linear finite elements (Watanabe et al. 2004), finite differences (McQueen & Peskin 1989a,b; Domenichini et al. 2005) or finite volumes (Cheng et al. 2005; Greenshields & Weller 2005).
Furthermore, owing to the comparable significance of tissue forces, blood pressure and momentum, the interplay between tissue and blood yields a sensitive coupled system. Much of the work in coupled fluid/solid mechanics stems from partitioned schemes that, using classic methods tailored for each system, iterate between independent fluid and solid solves (Farhat et al. 1994, 1998). However, the strong influence of both blood and tissue means that these methods often become unstable. As a result, this strong interaction must instead be addressed with a monolithic approach (Bathe & Zhang 2004; Greenshields & Weller 2005; Dettmer & Perić 2006), whereby the solutions for tissue and blood are calculated together.
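The instability of naive partitioned iteration, and its stabilization by under-relaxation, can be seen already in a scalar toy problem in which the linearized fluid and solid influences are comparable. The coefficients below are illustrative stand-ins for the sub-problem operators, not derived from any cardiac model.

```python
# Toy partitioned fluid/solid iteration on a single interface unknown.
# The fluid "solve" returns an interface pressure for a given wall
# displacement; the solid "solve" returns the displacement under that
# pressure. With comparable fluid and solid influence the naive iteration
# diverges; under-relaxation recovers convergence. Illustrative values.
def fluid(d):            # interface pressure given wall displacement
    return 100.0 - 30.0 * d

def solid(p):            # wall displacement given interface pressure
    return 0.05 * p

def partitioned(relax, iters=200, tol=1e-10):
    d = 0.0
    for k in range(iters):
        d_new = solid(fluid(d))
        if abs(d_new - d) < tol:
            return d_new, k
        d = d + relax * (d_new - d)   # under-relaxed interface update
    return None, iters               # failed to converge

d_naive, _ = partitioned(relax=1.0)   # composite gain -1.5: diverges
d_relaxed, its = partitioned(relax=0.5)
# exact fixed point: d = 0.05 * (100 - 30 d)  =>  d = 2
print(d_naive, d_relaxed, its)
```

The composite map here has gain 1.5 in magnitude, so the plain iteration diverges exactly as strongly interacting blood and tissue sub-solves do; halving the update restores a contraction. More sophisticated schemes adapt the relaxation factor automatically, but the underlying stability issue is the one shown.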
The classic monolithic coupling of fluid/solid bodies, however, leads to an imbalance between accuracy and computational tractability. This approach employs equivalent computational techniques to both blood and tissue, allowing the straightforward integration of the bodies into a single system. However, while integration is necessary for stability, we know these systems do not exhibit similar behaviour. Consequently, the coupled model loses sensitivity to the underlying dynamics of either the blood or tissue behaviour.
To address these concerns, we proposed a novel computational approach that combines the computational stability and flexibility of the monolithic and partitioned approaches (Nordsletten et al. submitted a,b). Specifically, Cauchy's first law and the equations of continuity, which preserve momentum and mass in an arbitrary time-varying domain, such as the chamber blood volume or myocardial wall, have been combined with the arbitrary Lagrangian–Eulerian (ALE) form of the Navier–Stokes equations. The interfaces between fluid and solid are projected into a planar space over which the constraints of continuity and momentum conservation are upheld. This has the benefit of maintaining the energy estimate of each individual domain, and thus preserving classic results on the asymptotic behaviour of solutions with spatio-temporal refinement. Extensive work on the theoretical and applied aspects of this approach has verified the applicability of this method to complex fluid/solid problems such as those in the heart (Nordsletten et al. submitted a,b).
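The monolithic idea of computing both solutions together can be sketched as a single saddle-point system: the fluid and solid subproblems are assembled into one matrix, with a Lagrange multiplier row enforcing continuity across the interface. The block operators below are illustrative scalar stand-ins, not the discretization of the method cited above; the sketch only shows the structure of a single coupled solve.

```python
import numpy as np

# One fluid unknown u_f, one solid unknown u_s, and a Lagrange
# multiplier lam enforcing u_f = u_s at the interface.
# All coefficients are illustrative stand-ins.
A_f = np.array([[4.0]])   # fluid subproblem operator
A_s = np.array([[2.0]])   # solid subproblem operator
f_f = np.array([1.0])     # fluid forcing
f_s = np.array([0.0])     # solid forcing

# Constraint B_f u_f + B_s u_s = 0 with B_f = 1, B_s = -1, i.e. u_f = u_s.
B_f = np.array([[1.0]])
B_s = np.array([[-1.0]])

# Assemble the single saddle-point system and solve everything at once.
K = np.block([
    [A_f,              np.zeros((1, 1)), B_f.T],
    [np.zeros((1, 1)), A_s,              B_s.T],
    [B_f,              B_s,              np.zeros((1, 1))],
])
rhs = np.concatenate([f_f, f_s, np.zeros(1)])
u_f, u_s, lam = np.linalg.solve(K, rhs)
```

Because the constraint is built into the system rather than iterated towards, the interface condition is satisfied exactly at every solve, which is the source of the stability of monolithic schemes; the price is a single larger, indefinite system.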
The solution properties of this scheme are demonstrated using a human left ventricular model on which we have simulated diastolic filling through the application of an inflow velocity boundary condition at the mitral valve opening. Figure 7a–c plots fluid pressure contours and figure 7d,e the resultant displacement of the myocardium (calculated as the Euclidean distance between material points in the rest and deformed configurations) at end diastole. Building a computational model based on the behaviour of the solid produces a computationally tractable model, but the accuracy of the fluid behaviour is severely compromised (figure 7a,b). Alternatively, a model based on the fluid provides accuracy, but tractability is lost owing to the excessive refinement of the solid (figure 7c,d).
While still retaining the unified solution strategy of the monolithic approach, this computational technique allows numerical schemes surrounding both fluid and solid models to be built based on their behaviour (figure 7e,f), providing both accuracy and tractability.
(c) Potential to elucidate structure–function relationships
With accurate, robust and computationally efficient schemes for solving coupled fluid/solid problems comes the potential for further insight into the energetic behaviour of the heart. Even under normal function, the conduction of energy and its dissipation due to blood viscosity, $E_v$, may be followed throughout the cycle (figure 8). Here, $E_v$ is given as the rate of viscous energy loss,

$$E_v = \frac{\mu}{2} \int_{\Omega} \left( \nabla \mathbf{v} + \nabla \mathbf{v}^{\mathrm{T}} \right) : \left( \nabla \mathbf{v} + \nabla \mathbf{v}^{\mathrm{T}} \right) \, \mathrm{d}\Omega,$$

where $\mathbf{v}$ is the blood velocity; $\mu$ is the blood viscosity; and $\Omega$ is the left ventricular geometry. These results are plotted for the idealized sample case of diastolic inflow given in figure 8.
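The viscous dissipation rate can be evaluated by straightforward quadrature once a velocity field is available. The sketch below, which assumes the symmetric-gradient form of the dissipation integral stated above, estimates $E_v$ on a 2-D Cartesian grid for a simple shear flow chosen purely for illustration (the analytic value there is $\mu \gamma^2 |\Omega|$); the grid, viscosity and shear rate are illustrative, not drawn from the simulations in this study.

```python
import numpy as np

# Estimate E_v = (mu/2) * integral over Omega of
#   (grad v + grad v^T) : (grad v + grad v^T)
# on a 2-D grid, for the shear field v = (gamma * y, 0).
mu = 4.0e-3     # blood viscosity, Pa s (typical bulk value)
gamma = 1.0     # shear rate, 1/s
n = 101
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")
vx, vy = gamma * Y, np.zeros_like(X)

# Velocity gradients by central differences.
dvx_dx, dvx_dy = np.gradient(vx, x, y, edge_order=2)
dvy_dx, dvy_dy = np.gradient(vy, x, y, edge_order=2)

# Components of (grad v + grad v^T) and the pointwise dissipation density.
s_xx = 2.0 * dvx_dx
s_yy = 2.0 * dvy_dy
s_xy = dvx_dy + dvy_dx
integrand = 0.5 * mu * (s_xx**2 + s_yy**2 + 2.0 * s_xy**2)

# Composite trapezoidal quadrature over the unit-square domain.
dx, dy = x[1] - x[0], y[1] - y[0]
w = np.ones(n)
w[0] = w[-1] = 0.5
W = np.outer(w, w) * dx * dy
E_v = float(np.sum(W * integrand))   # analytic value: mu * gamma**2
```

In a coupled simulation the same integrand would be accumulated over the deforming chamber mesh at each time step, yielding the dissipation history plotted over the cycle in figure 8.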
With the recent advancements in imaging and image segmentation techniques, the physiological and anatomical detail necessary for accurately modelling the coupling of blood and tissue is rapidly becoming available. Incorporating this information within a robust framework provides a solid platform for advancing these developmental models for physiological and clinical application.
In this paper, our aim has been to outline some of the general issues associated with coupling, using tangible examples from the cardiac modelling field. In the face of the tremendously complex, integrated physiological behaviour of the heart, we make no claim that our presentation of coupling methodologies or associated issues is comprehensive. We believe that the issues of parameter estimation, extraction and computational processing are general, and thus relevant to the development of integrated models of organ systems beyond the heart. However, inevitably, the physiological system will influence the development of the modelling approach and the specific implementation and testing of model coupling techniques.
For example, in this study, our analysis of the degree to which Na+/Ca2+ exchange plays a different role in three separate models of the human cardiac myocyte is particularly relevant to electrophysiology models. This is because both the models and the physiological system itself can be readily separated into distinct functional components (ion channels and exchangers) whose function is driven by common, systems-level variables such as voltage and ionic concentrations. This structure allows equivalence between model components to be defined and comparisons made. This type of analysis is clearly less relevant for comparing classes of models whose equation structure does not explicitly represent an established or hypothesized biophysical mechanism. We would also note that, without a gold standard model, this type of comparison merely provides a relative measure. As such, the results presented here indicate only that the functions of individual components, and indeed parameters, are very different, despite the fact that the goal of each model is to simulate the same system. This suggests that, in the cardiac electrophysiology field, the development of new models by coupling components from existing models must be done with caution.
The lack of a known reference, or gold standard, also affects the degree of confidence one can have in models extracted through image segmentation, such as those demonstrated in §3. However, the application of algorithms to image sets collected at different resolutions, ideally from the same sample, again provides a relative measure of performance and some quantification of error. These results indicate the resolution below which extracted structures can no longer be relied upon to form the anatomical basis of a model. When this structural representation is used as the geometric domain over which a functional model is solved, a sensitivity analysis of the variation in key results with anatomical resolution can provide another measure of the required detail.
The required level of model detail is particularly relevant for coronary flow modelling: from a computational perspective, it is not tractable to extract a full vascular tree and solve the model of blood flow applied in this study for the whole heart. Furthermore, even with unlimited computational resource, the destructive imaging modality on which the model in figure 6 is based means such an approach is not possible in a human clinical context. A future step towards incorporating microcirculatory behaviour into a model at the whole-heart scale could be to homogenize the explicit vascular structure of the microcirculation and apply a continuum approach to develop a poroelastic model (Huyghe et al. 1992). Such a framework would provide the potential to capture the regional shift in perfusion during contraction in a computationally efficient manner, and could be compared directly with clinical positron emission tomography and MRI data.
While we have addressed a selection of methodological and computational difficulties, further challenges remain in the development of cardiac models. The coupling between cellular models and flow, for example, will face ongoing challenges both in validation and in reconciling the differences in the required mesh discretization levels. Additional functional couplings are also required to further develop a truly integrative understanding of cardiac physiology. Indicative of this need is the coronary coupled model in this study, within which capturing the strong, bidirectional coupling between coronary flow and contraction of the myocardium will ultimately require the development of a new tissue constitutive model. In addition to further improvement of the electromechanics, coronary perfusion and ventricular flow models presented above, other functional elements also necessitate integration. These include the coupling of comprehensive models of metabolism and neural regulation, which will be essential for simulating disease processes in the heart.
Coupled models such as those demonstrated in this study and developed by others (Vigmond et al. 2008) provide new elements for underpinning a number of novel physiological investigations. To analyse the effects of, for example, ventricular or coronary fluid dynamics on myocardial mechanics under steady-state conditions, coupling can be represented by time-varying boundary conditions that are unchanged from one beat to the next. However, in more dynamic contexts, and in particular under pathological perturbations, the explicit inclusion of coupling mechanisms will be important for predicting complex functions and interactions. Examples of specific hypotheses requiring such a coupling include analysis of the efficiency of the transduction of mechanical work into pump function and the implications of increased heart rate for chamber and coronary blood flow.
In terms of translating coupled models into the clinical human disease context, there is also a significant challenge in parametrizing and personalizing models using only minimally invasive measurements. This is in part because a number of currently available cardiac models have been constructed using inhomogeneous data collected across a range of species. A future challenge for the VPH community will be to develop species-consistent databases that transparently link experimental measurements with model parameters using computational tools. The application of these tools will be critical to supporting the iterative model development process and guiding a productive cyclic collaboration between experimentalists and modellers. Our hope is that these types of community-wide resources will reduce the cases in which a model's failure to represent experimental data reflects an inappropriate structure or parametrization, and increase the probability that a failure identifies a gap in understanding, which can in turn lead to new insights being gained.
The production of models appropriate for human translation, which are capable of simulating the functionality necessary for understanding and tailoring treatments for specific individuals, requires an array of robust data and novel parametrization strategies. Measurements of cardiac wall motion (Uribe et al. 2007), electrical propagation (Chinchapatnam et al. 2007), chamber flow patterns (Kilner et al. 2000) and coronary perfusion (Gebker et al. 2007) currently provide high-resolution datasets for characterizing patients; however, exploiting the full value of imaging technologies, and the combined information content they produce, will require the capacity to integrate these multiple types of functional data into a consistent modelling framework. Furthermore, the derivation of functional parameters such as conductivity and active and passive mechanical properties represents a more complex problem. In many cases, the determination of these types of parameters may represent an ill-posed problem and a significant challenge for customizing a model to analyse a specific dataset. Our fundamental motivation in developing coupling techniques and integrated models is thus to contribute meaningfully to the research and development required to move towards this goal. Our hope and expectation is that this process will support a paradigm shift away from treatment options determined by predefined clinical indices, and towards true personalization of care based on an individual's specific physiology.
This work has been supported by the United Kingdom Engineering and Physical Sciences Research Council (FP/F059361/1) and by the European Commission (FP7-ICT-2007-224495: euHeart). The authors also extend thanks to A. E. Keller for her comments on the manuscript.
† These authors contributed equally to the study.
One contribution of 15 to a Theme Issue ‘The virtual physiological human: tools and applications II’.
© 2009 The Royal Society