The ability to perform fast, accurate, high-resolution visualization is fundamental to improving our understanding of anatomical data. As the volumes of data increase with improvements in scanning technology, the methods applied to visualization must evolve. In this paper, we address the interactive display of data from high-resolution magnetic resonance imaging of a rabbit heart and subsequent histological imaging. We describe a visualization environment comprising a display wall built from tiled liquid crystal display panels and associated software, which provides an interactive and intuitive user interface. The oView software is an OpenGL application written for the VR Juggler environment. This environment abstracts displays and devices away from the application itself, aiding portability between different systems, from desktop PCs to multi-tiled display walls. Portability between display walls has been demonstrated through its use on walls at the universities of both Leeds and Oxford. We discuss important factors to be considered for interactive two-dimensional display of large three-dimensional datasets, including the use of intuitive input devices and level-of-detail techniques.
1. Introduction

Many of the problems investigated in the life sciences involve datasets with increasingly large volumes of data that span multiple spatio-temporal scales and modalities. This trend is driven by the increasing resolution of modern imaging technologies. The resulting two-dimensional slices and three-dimensional volumes are significantly larger than those that have typically been handled on desktop computers for viewing and analysis. The methods of data extraction, rendering and display must therefore adapt, ideally in a way that scales to accommodate future increases in resolution.
The long-term context is the drive towards personalized medicine, which builds on the appreciation that medical interventions need to be tailored for the patient, not ‘just’ the disease. Increasingly detailed medical imaging data require new tools for visualization, exploration, annotation, evaluation and training. These tools need to support medical decision-making in real time, or at least within a time frame that is comparable with modern laboratory parameter assessment (hours, not weeks).
In this paper, we illustrate our contribution to data visualization, exploration and annotation, focusing on two datasets from a project addressing the individual histo-anatomy of a rabbit heart. The heart was first scanned non-invasively, using high-resolution magnetic resonance imaging (MRI), providing a uniquely detailed three-dimensional dataset of cardiac anatomy. Subsequently, the whole organ was serially sectioned for histological staining and light microscopy, providing a stack of two-dimensional extended histological sections. The scale of the data (1.5 GB for the MR data, 1.4 TB for the histology stack) prohibits their viewing in full resolution on conventional display equipment (Plank et al. in press).
Our aim was, therefore, to develop a visualization environment, both hardware and software, that would allow scientists to examine as much of the image data in high resolution as possible, while maintaining context for the entire scene. This has led us to exploit high-resolution display technology capable of rendering over 50 million pixels. Equally important is the development of software that allows the use of this technology in a way that is both interactive and intuitive.
2. High-resolution displays
Standard computer display devices are designed for use ‘at arm's length’, for which a density of approximately 100 pixels per inch is generally regarded as optimal for viewing. This corresponds, for example, to a 17 in. monitor with a resolution of 1280×1024 (SXGA): just over 1 million pixels, or a megapixel. The problem for cardiac images with a resolution of 32 000×32 000, i.e. just over 1 Gpixel (10⁹ pixels), is obvious: they contain three orders of magnitude greater detail than can be displayed on standard equipment.
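The mismatch can be made concrete with a back-of-the-envelope calculation (an illustrative Python sketch using the figures quoted above):

```python
# Image size versus desktop display capacity, using the figures above.
monitor_px = 1280 * 1024        # SXGA desktop monitor (~1 Mpixel)
image_px = 32_000 * 32_000      # high-resolution cardiac image (~1 Gpixel)

ratio = image_px / monitor_px
print(f"The image holds {ratio:.0f} times more pixels than the monitor")
```

Even a display two orders of magnitude larger than a desktop monitor would therefore still show only a fraction of such an image at full resolution.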
The only option available today is to increase the physical size of the display. One possibility is to use projection; a single projector does not offer sufficient resolution (currently limited to 8 Mpixels), but arrays of projectors can be used to provide a tiled display. For example, the GigaPixel laboratory at Virginia Tech uses an array of VisBlocks (http://www.visbox.com) to provide a large screen, high-resolution facility: each of the 18 VisBlocks has resolution of 1280×720 pixels, totalling over 16 Mpixels. However, there remain limitations: despite successful research into blending, the alignment of projectors still poses a challenge; projectors are expensive; and considerable space is required between the projector and the viewing surface. Moreover, a person standing in front of the screen will cast a shadow unless back projection is used, which further increases demands on space and expense.
A cost-effective solution is to build an array of liquid crystal display (LCD) screens, arranged so as to provide a single large display surface. These tiled LCD panel displays are becoming increasingly popular: they allow a pixel density equivalent to that of a desktop monitor, much higher than can be achieved for equivalent expense using projectors, while requiring considerably less space. This makes them attractive for applications such as biomedical image inspection. Tiled LCD panels are also brighter than projection solutions, allowing operation under normal room lighting.
A broad overview of high-resolution display technologies is given by Ni et al. (2006). In this paper, we focus on our own experience of using a tiled LCD panel display, the LeedsWall. This tiled display, shown in figure 1, comprises 28 flat panels, each of resolution 1600×1200, arranged in four rows of seven. This constitutes a 53.7 Mpixel display. The LCD panels are connected to seven computers, all equipped with two nVidia 7800 GTX graphics cards running two panels each. The computers are connected via gigabit (Gbit) Ethernet to each other and to a central filestore. The total hardware cost, including the custom-made stand, was under £30 000, which compares favourably with multi-projector solutions (Johnson et al. 2006). For full details of the construction of the wall, see Hodrien et al. (2007).
One disadvantage of tiled LCD panel displays is the potential for distraction by the monitor frames, or bezels. We have experimented with two approaches to rendering. The first is simply to ignore the gaps and render every pixel in the image, leaving the user to disregard the borders. In our experience, users can do this easily, especially when immersed in a scene. The second option is to set up the display software to adjust automatically for the gaps, as though the user were looking through a window with bars across it. The disadvantage of this method is that additional processing power is required to set up the multiple viewports, and parts of the data are obscured from view. The latter may be an important consideration if the display wall is used to search for detailed targets that may be obscured. Mackinlay & Heer (2004) considered this problem in depth, arguing for a solution where the window metaphor is used so that geometry looks natural (diagonal lines cross boundaries as straight lines with a gap, which the user finds easy to ‘fill in’), but where any labels are always displayed in full.
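The ‘window with bars’ mapping can be sketched as follows. This is a hypothetical illustration, not the wall's actual configuration: panel and bezel sizes are expressed in scene pixels, and the 80-pixel bezel width is an assumed figure.

```python
def panel_region(col, row, panel_w, panel_h, bezel_x, bezel_y):
    """Return the (x, y) origin, in scene pixels, of the region a given
    panel should display when bezels are treated as occluding bars.
    Adjacent panels skip bezel_x (or bezel_y) pixels of the scene, so
    straight lines continue naturally across the physical gap."""
    x = col * (panel_w + bezel_x)
    y = row * (panel_h + bezel_y)
    return x, y

# The panel in column 1 starts 1600 + 80 scene pixels across:
print(panel_region(1, 0, 1600, 1200, 80, 80))  # (1680, 0)
```

With bezel widths set to zero, the same function reproduces the first approach, in which the gaps are simply ignored.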
The successful use of a large display wall in biomedical applications depends not only on the screen hardware, but also on the input devices used to control applications. A standard keyboard and mouse would not be appropriate for users standing in front of the display, possibly walking along its length. We have therefore experimented with a GyroMouse, a FrogPad keyboard, Flock of Birds controllers and a wireless games controller. For our cardiac application, we have primarily used the games controller joypad. This provides both analogue and digital control options, including a variety of configurable buttons, and has proved to be a suitably intuitive device for new users, regardless of whether they have experience of similar input tools on computer games consoles.
3. Software for tiled displays
Several different packages support synchronized visualization across multiple displays, based on a range of computational resources. We have evaluated Chromium, Scalable Adaptive Graphics Environment (SAGE), Distributed Multihead X (DMX), Virtual Network Computer (VNC) and VR Juggler. The last package proved to be most suitable for our cardiac application.
Chromium (Humphreys et al. 2002) is a tool for cluster-based rendering. It wraps existing programs, intercepting their OpenGL calls and distributing them to the machines of the cluster for rendering. This minimizes the effort required to adapt existing software for a tiled display, but it also imposes significant limitations. Most notably, Chromium can be used only with OpenGL applications. In addition, we found that Chromium is very demanding on the network, especially at the head node, unless display lists are used. When the network is heavily loaded, synchronization between the screens can be compromised.
SAGE (Jeong et al. 2006) was designed to provide a flexible environment for running multiple applications on high-resolution tiled displays. It allows for separation between the back-end systems that create visualizations and the front-end systems that render them. This separation, as with Chromium, means that high-bandwidth interconnects are required for good performance. Even with 1 Gbit networking, we found that SAGE provided low frame rates and delayed interaction, and the performance appeared highly dependent on the resolution of the displayed windows, a dependence that geometry-based (rather than raster-based) systems avoid.
DMX (http://dmx.sourceforge.net) allows a logical X-server to be created across multiple X-servers. The entire wall can thus be treated as a desktop allowing windows to be dragged around freely. Our attempts to use DMX were not particularly successful: testing on four screens suggested potential, but with 28 screens the latency of interactions outweighed the benefits of this solution.
VNC (http://www.realvnc.com) provides a very simple method for rendering onto high-resolution displays. A single VNC server has an off-screen X-server, set to the size of the wall. Each cluster machine then runs a VNC viewer and displays the relevant part of the image. Any X-application can thus be run on a display wall, as with DMX. However, this approach places a heavy load on the off-screen X-server, as it is forced to handle the entire wall area on a single machine, which affects scalability. Slave machines also receive a large volume of data across the network, a problem common to all image-based systems. A more promising approach is to embed a VNC viewer within a VR Juggler application, since this reduces the resolution of individual VNC servers to the size of the displayed window. Multiple applications may also then be rendered on the wall.
VR Juggler (Bierbaum et al. 2001) is an application framework that, within the traditional OpenGL software stack, replaces the OpenGL Utility Toolkit (GLUT) as the portability layer shielding programmers from operating-system and window-system dependencies. It is necessary, therefore, to rewrite the application, although the work involved is fairly minimal: window creation, viewport/camera management and input handling all differ from GLUT. The benefit is that one can write an application that works on a desktop or a display wall with no modification to the code. A configuration file supplied at run-time describes the system in use, and input and shared variables are distributed among the cluster nodes. Our experience with VR Juggler has been very promising. The effort required to convert a program is minimal, and the process is well documented. Since the geometry and resulting rasters are generated separately on each node, VR Juggler greatly reduces network traffic compared with Chromium. In fact, the minimum traffic required with VR Juggler is of the order of hundreds of bytes per frame, which is unproblematic.
4. Application: histology image viewer
An OpenGL image viewer was created, which can load an image and render it on the wall using VR Juggler. Application of compressed textures was also explored as a way of reducing memory usage of the graphics cards, although this either places a burden on the graphics cards at run-time (causing stuttering) or requires a pre-processing step. The largest image tested thus far was an example with 170 000×100 000 pixels (or 17 Gpixels), although there are no architectural limits other than those imposed by the use of the TIFF format. Images are first converted into pyramid tiled TIFF files, and then rendered on the display wall using VR Juggler and a pixel-perfect scaling system, i.e. one pixel on the display wall is equal to one pixel in the image. Images are loaded dynamically, streamed across the network from a networked file server. A wireless joypad allows the user to control image viewing without restrictions to movement in front of the screen. This provides the user with the ability to navigate an image with analogue controls for panning and zooming.
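The pixel-perfect scaling scheme implies a simple rule for choosing which pyramid level to draw from: the level whose pixels most closely match the screen's. A minimal sketch of that selection logic follows; the function name and level convention are our own (level 0 is full resolution, and each level above it halves both dimensions), not the oView implementation.

```python
import math

def pyramid_level(zoom, n_levels):
    """Pick the pyramid level best matching the current zoom factor.
    zoom >= 1 means the image is magnified on screen, so only the
    full-resolution level (0) gives one-to-one pixels; when zoomed out,
    each halving of the on-screen size allows one coarser level."""
    if zoom >= 1.0:
        return 0
    level = int(math.floor(math.log2(1.0 / zoom)))
    return min(level, n_levels - 1)  # clamp to the coarsest level stored

print(pyramid_level(0.25, 6))  # quarter-size view -> level 2
```

Because each level stores a quarter of the pixels of the one below it, the whole pyramid costs only about a third more storage than the base image.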
Figure 1 illustrates the appearance of a trichrome-stained section of cardiac tissue, as described by Burton et al. (2006). The native tissue sections have dimensions of up to 3×5 cm; their two-dimensional digital images are therefore composed of up to 2000 individual microscopic projections (each with 3.3 Mpixels), tiled together to generate one extended high-resolution image of the section, with a pixel size of 0.546 μm×0.546 μm (Plank et al. in press). As highlighted above, images of this size cannot be viewed efficiently on normal desktop hardware. User inspection is vital, however, because the tiled sections are acquired in semi-automated runs without qualified user intervention or observation. The development of tools to generate large-scale high-resolution histological data has therefore not yet been complemented by approaches to the interactive assessment of the results. The solution presented here offers a powerful new approach for biomedical staff to visually inspect their data at native resolution.
The image tiles are loaded as required, rather than at start-up, both to avoid the time cost of loading the full dataset and to allow handling of images larger than the available RAM of the machines. This then necessitates the loading of image data during run-time, while maintaining the performance of the interface for the user. A separate texture manager thread loads the required images, so that they are available to the draw thread as necessary. Rendering is performed using lower resolution textures if this texture thread falls behind the user's interaction, so the user is left with a lower resolution view, but one that still pans and zooms freely. This enables the application to maintain a high frame rate (higher than 30 frames per second) at all times.
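The loader-thread pattern can be sketched as follows. This is an illustrative Python reduction of the design, not the oView code: a background thread services tile requests while the draw path returns, without blocking, the best texture currently cached.

```python
import queue
import threading

class TextureManager:
    """Sketch of the texture-manager thread described above: the draw
    path never blocks on I/O; it falls back to an always-resident coarse
    texture until the background thread has loaded the requested one."""

    def __init__(self, low_res_tiles):
        self.cache = dict(low_res_tiles)   # coarse tiles, always present
        self.requests = queue.Queue()
        self.lock = threading.Lock()
        threading.Thread(target=self._loader, daemon=True).start()

    def _loader(self):
        while True:
            key = self.requests.get()
            tile = f"full-res:{key}"       # stand-in for a disk/network load
            with self.lock:
                self.cache[key] = tile

    def get(self, key, coarse_key):
        """Return the best tile available right now (non-blocking)."""
        with self.lock:
            if key in self.cache:
                return self.cache[key]
        self.requests.put(key)             # ask the loader to fetch it
        with self.lock:
            return self.cache[coarse_key]  # degrade gracefully meanwhile
```

The first request for a tile returns the coarse fallback immediately; once the loader catches up, subsequent frames draw the full-resolution texture, which is why panning and zooming remain fluid even when the loader falls behind.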
Additional facilities are provided to assist with navigating around such a large image. A thumbnail view in the top-left corner of the display wall provides the user with an overview of the current display relative to the overall image. The user can place annotation markers on the image, which appear in both the full and thumbnail views. These markers provide a visual cue to points of interest in the image, and serve as direct navigational aids as the user can jump between markers.
The system records sessions to disk, so that they can be replayed later. Every interaction is recorded, so the replayed session appears identical to the original run. This has applications for training, and provides information that may help to quantify the efficacy of the system.
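A minimal sketch of the record/replay mechanism is given below. The event format is hypothetical; the real system logs every interaction with its timing so that a replayed session appears identical to the original run, whereas this sketch replays events back-to-back.

```python
import json
import time

class SessionRecorder:
    """Append each interaction event with a timestamp relative to the
    start of the session, and persist the log for later replay."""

    def __init__(self):
        self.events = []
        self.t0 = time.monotonic()

    def record(self, event):
        self.events.append((time.monotonic() - self.t0, event))

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.events, f)

def replay(path, handler):
    """Feed recorded events back to a handler in their original order
    (inter-event timing is omitted in this sketch)."""
    with open(path) as f:
        events = json.load(f)
    for _, event in events:
        handler(event)
```

Replaying through the same event handlers as live input is what makes the replay indistinguishable from the original run, and also what makes the logs usable for quantifying how users actually navigate the data.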
5. Application: three-dimensional MRI viewer
Full three-dimensional reconstruction of rabbit cardiac histo-anatomy has become possible, thanks to high-quality MRI data, provided by the team of Jürgen Schneider at the University of Oxford. The MRI data voxels have an in-plane dimension of 26.4 μm×26.4 μm, and an out-of-plane size of 24.4 μm. In total, 1440 TIFF images (16 bit) with a resolution of 1024×1024 pixels were used for post-processing. The segmentation of this near-isotropic three-dimensional dataset from the original scans, as described by Goodyer et al. (2007), removed minor imaging-related artefacts, so that clear boundaries between tissue and non-tissue effectively define a binary volume. From this volume, we used isosurfacing techniques to interconnect the boundaries of the tissue, employing the freely available Visualization Toolkit (VTK) libraries (Schroeder et al. 2006). This approach is described in more detail below, and follows the method described earlier by Young et al. (2008) for computed tomography data of bone structure.
To perform the isosurfacing, we employed the ContourFilter routine in VTK. Surfaces produced by the above procedure tend to be notably jagged. This is caused by the rectangular acquisition grid, and will tend to generate sharp joints at ‘corners’. We know a priori that such boundaries are smooth in biological samples. Thus, by applying a smoothing algorithm, we generate surfaces that mirror the overall shape of the data without being constrained by the ‘false precision’ of the boundaries. This smoothing operation is important in terms of not only providing visual realism, but also for subsequent use of data in simulations of cardiac behaviour or external interventions such as defibrillation (which otherwise would cause spurious current peaks at sharp changes in surface geometry). It is, however, imperative not to smooth too much, as real features in the data, such as fine processes of the Purkinje network in the heart, could be lost.
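The trade-off can be illustrated with the simplest member of this family of algorithms, Laplacian smoothing, shown here in a one-dimensional (closed-polyline) form rather than on a full VTK surface. The iteration count and relaxation factor control how aggressively the false ‘staircase’ corners, and eventually real features, are flattened.

```python
def laplacian_smooth(points, iterations=10, alpha=0.5):
    """Relax each vertex of a closed polyline towards the average of
    its two neighbours.  Few iterations remove acquisition-grid jaggies;
    too many would also erase genuine fine structure (the reason the
    amount of smoothing must be limited, as discussed above)."""
    pts = [list(p) for p in points]
    n = len(pts)
    for _ in range(iterations):
        new = []
        for i in range(n):
            prev, nxt = pts[i - 1], pts[(i + 1) % n]
            new.append([pts[i][d] + alpha * (0.5 * (prev[d] + nxt[d]) - pts[i][d])
                        for d in range(len(pts[i]))])
        pts = new
    return pts
```

On a real surface mesh the same update runs over each vertex's ring of neighbours; VTK provides production implementations with extra safeguards against the volume shrinkage visible even in this tiny example.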
The final step of data pre-processing is a reduction in the number of triangles generated. In smooth sections of geometry, it is possible to map a surface using significantly fewer points than originally generated. Again, we used VTK for this operation, to produce good quality surface reconstructions with a reduced number of triangles. For the whole heart used in our example, a high-resolution geometry can be based on 45 million triangles.
In order to speed up the rendering routines, we have further post-processed the generated output files. By reordering the triangle strips into smaller sub-volumes within the entire set, it is possible to write the visualization application in such a way that any section that is not visible in the viewing window will not be processed.
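A sketch of this idea follows, assuming each sub-volume stores the axis-aligned bounding box of its triangle strips (the names and data layout are illustrative, not those of the actual file format):

```python
def visible_chunks(chunks, view_min, view_max):
    """Return the names of sub-volumes whose bounding boxes intersect
    the current axis-aligned view volume.  Chunks that fail the test
    are skipped entirely, so off-screen triangles cost nothing."""
    visible = []
    for name, (bb_min, bb_max) in chunks.items():
        if all(bb_min[d] <= view_max[d] and bb_max[d] >= view_min[d]
               for d in range(3)):
            visible.append(name)
    return visible
```

Because the test is per chunk rather than per triangle, its cost is negligible next to the millions of triangles it allows the renderer to skip.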
The software package that we have developed, called oView, loads previously defined surfaces, generated as described above. Some aspects have been explained previously by Goodyer et al. (2007). The advantage of isosurfacing and smoothing during the pre-processing stages is that computationally expensive operations on large datasets will be performed only once, and then loaded as many times as needed for different visualization purposes. This reduces loading times to a few seconds, compared with isosurfacing and smoothing, which can take from minutes to an hour, depending on the size of dataset and the quantity of smoothing operations performed.
The visualization program employs standard OpenGL features to apply realistic lighting effects for enhanced realism, as illustrated in figure 2. This shows a detail view inside the left ventricular chamber. It is possible to include other data in the same space, such as text markers, or the segmentation of the vasculature. This provides an excellent opportunity for the development of anatomical teaching and assessment tools, with potential extension towards functional representations based on mathematical modelling-derived illustration of electrical potential gradients during the spread of normal or disturbed cardiac excitation, or the consequences of external electrical shock application for defibrillation of the heart.
The isosurfaces generated correspond to the sum of all interior and exterior surfaces of the heart. These include small voids within the myocardium (such as interstitial clefts and vessels; for example, inside the ‘open’ papillary muscle (PM) representations in figure 3a), and also fine structures within the cavities (such as free-running Purkinje fibres (PFs) between PM and left ventricular free wall in figure 3a). In the absence of an indication of tissue versus non-tissue volumes, visual distinction between muscle and cavity can be difficult. To aid visual perception, it is helpful to superimpose MRI images onto the tissue surface rendering. This provides good visual cues for tissue identification (figure 3).
Important features implemented for navigation of the datasets include translocation along any path in the three-dimensional coordinate system, panning, rotation and zooming. Another important navigation technique is the use of an additional cutting plane. As opposed to zooming, where the magnification (and hence projection of the data onto the display) changes, a cutting plane allows a steady projection area on the screen, while revealing successive sections that would otherwise be hidden from view. This is helpful when scanning for regions of interest, such as insertion points of free-running PFs into the solid cardiac muscle, or tissue abnormalities arising from remodelling, for example related to myocardial infarction. Approaching the area first through successive cutting planes, and then zooming in for a more detailed presentation, allows unprecedented ease of exploration of complex three-dimensional datasets.
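At its core, a cutting plane is a signed-distance test: geometry on one side of the plane is drawn, the rest clipped away, without changing the magnification. A minimal sketch (plane given by a point and a normal; the names are our own):

```python
def in_front(point, plane_point, plane_normal):
    """True if `point` lies on the positive side of the cutting plane,
    i.e. the side that remains visible.  The sign of the dot product of
    (point - plane_point) with the plane normal decides the side."""
    d = sum((point[i] - plane_point[i]) * plane_normal[i] for i in range(3))
    return d >= 0.0
```

Stepping the plane point along its normal then reveals successive interior sections, which is exactly the ‘progression through cutting planes’ used to home in on regions of interest.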
Additional benefits arise from the inclusion of extra data modalities, such as text labels and navigational reference points. Labels are an important guide to help users in identifying structures or locations, and aid independent multi-user evaluation of data as well as training and assessment. For text labelling, key features are initially identified manually, assigned to a coordinate, and then displayed on screen whenever the relevant coordinate is in the field of view. Navigational reference points are stored as ‘way points’, containing viewing position, angle and magnification, to allow one to retrace a trajectory, or to map out features of interest. Automated transition between these positions allows one to create a continuous guided fly through. In addition, navigational points can be exploited to identify regions of interest, which may then be revisited off-site, using more generally available equipment. This extends the usage of display wall applications from a few reference sites to more general and distributed access by biomedical personnel.
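Automated transition between way points reduces to interpolating the stored camera state. The sketch below interpolates position only; angle and magnification would be interpolated in the same way. The function is illustrative, not the oView code.

```python
def fly_through(waypoints, steps_per_leg):
    """Linearly interpolate between successive 3-D way points to produce
    a continuous sequence of camera positions for a guided fly-through."""
    frames = []
    for a, b in zip(waypoints, waypoints[1:]):
        for s in range(steps_per_leg):
            t = s / steps_per_leg
            frames.append(tuple(a[i] + t * (b[i] - a[i]) for i in range(3)))
    frames.append(tuple(waypoints[-1]))    # end exactly on the last way point
    return frames
```

Because a way point is just a small tuple of numbers, a recorded trajectory can be exported and replayed off-site on more modest equipment, as described above.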
The major advantage of using a high-resolution display is that very fine detail can be seen without losing the contextual information about where in the dataset one is. Orientation is further aided by a ‘thumbnail’ image of the whole three-dimensional structure (figure 4) showing where the user's view is located, relative to the whole organ. When the view ‘enters’ the tissue, a cutting plane is applied to the thumbnail, in order to provide positional information, while projection of reduced-resolution MRI slices ‘behind’ the cutting plane aids histological substrate identification.
In order to increase the performance of the software, we have also generated an additional highly decimated volume. The relatively small amount of extra memory used for this is more than outweighed by the advantage provided by visualization of this dataset (rather than the higher quality one) whenever the user is moving the scene. This means that the overall user experience is dominated by ‘movie-like’ quality of translocation, achieved by a frame rate of 25 frames per second, even when the maximum number of triangles is viewed.
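The switching policy amounts to a one-line decision per frame. In this illustrative sketch, the 40 ms budget corresponds to the 25 frames per second quoted above; the function and its inputs are our own simplification.

```python
def choose_model(is_moving, last_frame_ms, budget_ms=40.0):
    """While the user is moving the scene, or the previous frame overran
    the 40 ms budget (25 frames per second), render the heavily decimated
    geometry; once the view is at rest, swap back to full resolution."""
    if is_moving or last_frame_ms > budget_ms:
        return "decimated"
    return "full"
```

The decimated copy never leaves memory, so the swap itself is free, and the user perceives only a brief softening of detail during motion.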
6. Conclusions and future work
In this paper, we have shown how high-resolution displays can be applied to the large datasets that emerge from modern life science applications. It has been shown that two-dimensional images of arbitrary size can be interactively controlled across multiple computational resources. The use of the pyramid tiled TIFF image format supports instant access to the appropriate sections for display at an optimum resolution. For three-dimensional geometries, we have demonstrated how high-resolution geometries can be displayed in a visually helpful format. Interactivity, when moving through the volume, is enabled by background use of a lower resolution dataset. Both for two- and three-dimensional applications, thumbnails, labels and navigational markers have been implemented in order to offer additional functionality for the user.
This technology is not limited to one site. A similar display wall has recently been commissioned at the Oxford e-Research Centre, and is now used to display way-point-based reconstructions of explorations conducted at Leeds.
Future work aims to combine the two sets of source data (histology and MRI) in a more accurate and efficient manner. Full three-dimensional registration of each histological slice to the MRI volume is a challenging task, which is at the heart of an ongoing BBSRC-funded research initiative (Plank et al. in press). Once this has been accomplished, we intend to apply textured sheets of histology onto the volume dataset, to provide high-resolution cut planes at any angle, independent of the original alignment of native sectioning planes.
In anticipation of the increase in three-dimensional data size, required to address clinically relevant scenarios, we will assess the use of off-screen rendering. At present, the full three-dimensional cardiac data volume can fit into RAM on each machine, and the generated geometry can be stored in the graphics cards' memory. However, it is not difficult to see that larger datasets would exceed these limits, as data requirements increase faster than computational memory. By splitting the data up onto a remote render farm, it would be possible to add another level of detail, or expand visualized tissue dimensions. Other potential developments will come from better integration with emerging technologies that may include the use of three-dimensional displays, haptic feedback controllers, touch screens and tracking of user position and observation target.
The data used were kindly provided by the Oxford ‘3D Histo-Anatomical Heart’ project (BBSRC E003443; PI's P. Kohl, D. Gavaghan, J. Schneider). P.K. is a British Heart Foundation Senior Research Fellow. We also acknowledge support from the JISC-funded VizNET project and from EPSRC (Integrative Biology e-Science project). Finally, the authors thank Roy Ruddle and Rebecca Burton for many useful discussions.
One contribution of 16 to a Theme Issue ‘Crossing boundaries: computational science, e-Science and global e-Infrastructure II. Selected papers from the UK e-Science All Hands Meeting 2008’.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright © 2009 The Royal Society