Parallel Hierarchical Molecular Structure Estimation

Cheng Che Chen
Electrical Engineering Dept.
Stanford University
cchen@leland.stanford.edu

Jaswinder Pal Singh
Computer Science Dept.
Princeton University
jps@cs.princeton.edu
http://www.princeton.edu/~jps/

Russ B. Altman
Section on Medical Informatics
Stanford University
altman@smi.stanford.edu
http://www-smi.stanford.edu/people/altman/

Abstract
Determining the three-dimensional structure of biological molecules such as proteins and nucleic acids is an important element of molecular biology because of the intimate relation between form and function of these molecules. Individual sources of data about molecular structure are subject to varying degrees of uncertainty. We have previously examined the parallelization of a probabilistic algorithm for combining multiple sources of uncertain data to estimate the structure of molecules and predict a measure of the uncertainty in the estimated structure. In this paper we extend our work on two fronts. First we present a hierarchical decomposition of the original algorithm which reduces the sequential computational complexity tremendously. The hierarchical decomposition in turn reveals a new axis of parallelism not present in the "flat" organization of the problems, as well as new parallelization issues. We demonstrate good speedups on two cache-coherent shared-memory multiprocessors, the Stanford DASH and the SGI Challenge, with distributed and centralized memory organization, respectively. Our results point to several areas of further study to make both the hierarchical and the parallel aspects more flexible for general problems: automatic structure decomposition, processor load balancing across the hierarchy, and data locality management in conjunction with load balancing. We outline the directions we are investigating to incorporate these extensions.

Keywords
molecular structures, hierarchical computation, shared-memory parallel processing, computing with uncertainty.

 

1. Introduction

The determination of molecular structure is a key element of molecular biology. Proteins and nucleic acids are complex three-dimensional structures with thousands of atoms. An understanding of their structure can facilitate the study of how these molecules function and aid in the design of drugs which augment, interfere with, or otherwise affect their functions.

The problem of determining molecular structure is usually approached experimentally, using X-ray crystallography, nuclear magnetic resonance (NMR), or a variety of biochemical experiments that yield distances, angles, and other structural information [11]. In addition to these experimental data, there is a large body of knowledge from general chemistry: bond lengths, bond angles, and torsion angles between atoms. The quality and abundance of data from these sources can vary significantly. Thus, it is important to combine these sources of information effectively to produce not only a structure consistent with the data, but also a measure of the variability in the estimated structure.

A probabilistic algorithm for integrating multiple sources of uncertain data to compute an estimated structure was reported in [1]. The parallelization of this algorithm was presented in [6]. An extension to the algorithm to handle more general probability distributions was published in [2]. In this paper, we first briefly describe the formulation of the molecular structure determination problem and its algorithm. Then we show that by taking advantage of the problem structure of our application domain, we are able to organize the computation hierarchically. The hierarchical decomposition speeds up the computation tremendously by reducing the sequential computational complexity. In addition, the use of hierarchy exposes a new axis of parallelism that is both important and challenging to exploit. The results we have obtained for two problems reveal several issues in the automation and efficient parallelization of the hierarchical approach. Finally we outline the directions we are investigating to address these issues.

In the following discussion, we use measurements, observations, constraints, and data interchangeably to denote the available information about the structure of a molecule.

 

2. The Original Sequential Algorithm

We represent the unknown rectangular coordinates of the atoms of the molecule under examination by an n-dimensional state vector

x = (x1, y1, z1, …, xp, yp, zp)^t,

where n = 3p is three times the number of atoms. An idealized measurement of the molecule is modeled by a vector-valued function y of the state vector,

y = h(x).

For example, NMR measurements yield short-range distances between neighboring atoms. A distance function between two atoms of the molecule has the form

h(x) = sqrt( (xi - xj)^2 + (yi - yj)^2 + (zi - zj)^2 ),

where x is the state vector of atomic coordinates, and (xi, yi, zi) and (xj, yj, zj) are the positions of the two atoms i and j, respectively.
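
As a concrete illustration (a sketch added here, not code from the paper), a distance observation and its gradient, which forms one sparse row of the linearized observation matrix used by the update procedure described below, can be written as follows; 0-based atom indices are an implementation choice:

    # Distance measurement h(x) between atoms i and j and its gradient.
    import numpy as np

    def distance(x, i, j):
        # x is the n = 3p dimensional state vector (x1, y1, z1, ..., xp, yp, zp).
        pi, pj = x[3 * i:3 * i + 3], x[3 * j:3 * j + 3]
        return np.linalg.norm(pi - pj)

    def distance_row(x, i, j):
        # Only the 6 coordinates of atoms i and j have non-zero derivatives,
        # so each distance contributes a sparse row to the linearized model.
        row = np.zeros_like(x)
        pi, pj = x[3 * i:3 * i + 3], x[3 * j:3 * j + 3]
        d = np.linalg.norm(pi - pj)
        row[3 * i:3 * i + 3] = (pi - pj) / d
        row[3 * j:3 * j + 3] = (pj - pi) / d
        return row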

Random errors and noise in the measurement process are modeled as additive terms, so that an actual observation assumes the form

z = y + v = h(x) + v,

where the noise v is modeled as a Gaussian random vector. If the measurement process is unbiased, then repeated independent measurements will cluster around the "true" value, and the random noise will have zero mean. The covariance matrix R of the Gaussian noise term is related to the precision of the measurement technology. High-precision measurements have tight variances. In the notation of probability, we assume v ~ N(0, R).

In addition to the state vector x, we also maintain a covariance matrix C. The diagonal entries of C are the variances of the coordinates of the state vector and provide a measure of the uncertainty of the estimated numbers. The off-diagonal entries are the covariances between the coordinates; they indicate linear correlations between the components of x. The information contained in the covariance matrix is useful in assessing, for example, which parts of the molecule are better defined by the data. Hence the pair (x, C) represents our best estimate of the molecular structure along with an indication of the variability of the estimated numbers, given the available data.

We may incrementally update our structure estimate as new observations become available. Figure 1 shows the procedure for the application of one constraint [8].


Figure 1: Sequential Update Algorithm

The algorithm is able to process observations in sequence rather than all at once because the effects of previous updates are summarized as linear correlations in the covariance matrix. Given an initial structure estimate, we apply this procedure over the entire set of constraints to produce an updated structure. Because of the nonlinearity in the measurement functions, we typically have to re-initialize the covariance matrix and repeat the cycle of updates until the estimate converges to an equilibrium point. This algorithm can be extended to process non-Gaussian constraints [2].
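
Figure 1 itself is not reproduced in this text. The following sketch reconstructs one application of the procedure from the steps itemized in the complexity analysis below (linearize the measurement, form the gain through a Cholesky factorization and triangular solves, then update the state and covariance); the variable names and helper signature are illustrative assumptions, not the authors' code.

    # One application of the update procedure for a single observation vector z
    # with measurement function h, Jacobian jac_h, and noise covariance R;
    # (x, C) is the current structure estimate.
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def apply_constraint(x, C, z, h, jac_h, R):
        Hj = jac_h(x)                      # (m x n) linearization at the current estimate
        CHt = C @ Hj.T                     # (n x m) product C^- Hj^t
        S = Hj @ CHt + R                   # (m x m) matrix Hj C^- Hj^t + R
        chol = cho_factor(S)               # Cholesky factorization
        Kj = cho_solve(chol, CHt.T).T      # filter gain Kj via two triangular solves
        x_new = x + Kj @ (z - h(x))        # state update (matrix-vector work)
        C_new = C - Kj @ CHt.T             # covariance update (dense matrix multiplication)
        return x_new, C_new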

Let's briefly analyze the computational complexity of this algorithm. Suppose the dimension of the state vector is n. Furthermore, assume that we have M scalar observations organized into vectors of uniform dimension m, so that the sequential update algorithm is applied M/m times.

Application of the update procedure using one m-dimensional observation vector entails the following computation:

  1. The (m x n) matrix Hj is sparse for the kinds of observation data we typically encounter. Its formation involves evaluation and differentiation of the measurement functions and usually runs in O(m) time.
  2. Dense-sparse matrix multiplications (C^-Hj^t and Hj C^-Hj^t) require O(mn) operations.
  3. The Cholesky factorization of the (m x m) matrix (Hj C^-Hj^t + R) takes O(m^3) time.
  4. Computation of the filter gain matrix Kj involves two triangular system solves with the dense (n x m) matrix C^-Hj^t and the Cholesky factor obtained in the previous step. This computation requires O(m^2 n) time.
  5. Update to the state vector estimate consists mainly of a dense matrix-vector multiplication and runs in O(mn) time.
  6. Update of the state covariance estimate requires a dense matrix multiplication and takes O(mn^2) operations.
  7. Miscellaneous vector operations take O(n) time.

The total amount of computation for one pass over the entire set of constraints is M/m times the above steps. To assess the effect of the batch factor m on the total computational time, we consider the average work per scalar constraint by dividing the total computational time by the total dimension of constraints. We see from the above categories that the average work per scalar constraint for vector operations decreases as m increases; for Cholesky factorization it grows as O(m^2) with m; and for the filter gain matrix computation it increases as O(mn) per scalar observation.
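
Collecting the per-batch costs itemized above and dividing by the batch dimension m gives, as a summary with unspecified constant factors, the average work per scalar constraint

O(1) + O(n) + O(m^2) + O(mn) + O(n) + O(n^2) + O(n/m),

where the terms correspond to steps 1-7 in order: the O(n^2) covariance update is independent of m, the Cholesky term grows as O(m^2), the filter gain term grows as O(mn), and the vector-operation term O(n/m) shrinks as m grows.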

Therefore we expect the average work per scalar observation to increase linearly at the beginning as we process larger observation vectors at a time. However, when m is increased to a magnitude comparable with n, the O(m^2) growth in Cholesky factorization will start to dominate.

This simple analysis suggests that we process observations in small chunks, although it does not indicate exactly what size of chunks to use since asymptotic analysis does not capture the behavior of the algorithm for small values of its parameters. Also, it does not take into account the number of cycles of constraint application to convergence. In practice, convergence seems to be slightly faster with larger constraint batches.

 

3. Hierarchical Decomposition

Biological molecules have a natural hierarchy in their structure. Observations relating to the structure of the molecules tend to be largely localized following this hierarchy as well. For example, the amino acid residues which make up a protein consist of a common backbone and distinguishing sidechains. Residues which are nearby in the linear sequence of the protein's primary structure may arrange themselves in space to form secondary structures such as helices and sheets. These hierarchical subunits may interact to form further aggregate features [10].

In this section, we present a hierarchical decomposition of the state estimate update process. It derives from the problem structure of the application domain, namely locality of most observations. Mathematically the key observation is that updating the structure estimate with localized information involves computation with many zeros and leaves other currently uncorrelated (zero covariance) parts of the estimate unmodified. This fact can be exploited to reduce the computational complexity of the update procedure by recursively dividing the molecule into smaller and smaller parts and applying every constraint to the smallest unit which wholly contains it.

Let our current estimate of the molecular structure be (x^-, C^-). Suppose the state vector estimate consists of two uncorrelated parts (whose dimensions are n1 and n2, respectively, with n1 + n2 = n); that is,

x^- = (x^-(1), x^-(2))^t,    C^- = ( C^-(11)     0
                                        0     C^-(22) ).

Let's examine what happens when we update our estimate with an observation which pertains to only one part (say, x^(1)) of the structure. The function (say, of dimension m) which relates this observation to the underlying state of the structure has the form:

y = h(x) = h(x^(1)).

If we trace the execution of the update process (Figure 1), inside the loop we first differentiate the function h with respect to the n state variables of x to produce a linearized relation Hj (dimension (m x n)), evaluated at the current estimate x^(+,j-1). However, since h depends only on the first n1 state variables, the last n2 elements of each row of Hj are zeros,

Hj = (Hj^(1)   0).

This in turn leads to a localized non-zero structure for the (n x m) matrix C^-Hj^t. The (m x m) matrix (Hj C^-Hj^t + R) is the covariance of the locally linearized model of the observation and in principle could be fully dense. However, the (n x m) gain matrix Kj has the same non-zero structure as C^-Hj^t.

The zeros in the lower half of the filter gain matrix prevent change to the lower n2 coordinates of the state vector estimate. Furthermore, since

Kj Hj = ( Kj^(1) Hj^(1)   0
                0         0 ),

the update to the state covariance estimate will be restricted to the upper left block:

C^+ = (I - Kj Hj) C^- = ( (I - Kj^(1) Hj^(1)) C^-(11)        0
                                     0                   C^-(22) ).
Thus this observation of a local part of the molecule produces only a corresponding local update to our structure estimate.

We should note that if we now introduce a constraint between the two parts of the molecule, then the off-diagonal zero blocks of the state covariance estimate will start to be filled in, and both halves of the state estimate will begin to change in response to new data.
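
A small numerical check (added here for illustration, not from the paper) makes the block structure concrete: a linearized constraint touching only the first block leaves the second block of the state and covariance, and their zero cross-covariances, exactly unchanged.

    # Verify that a constraint on the first block leaves the uncorrelated
    # second block unmodified (linear measurement used for simplicity).
    import numpy as np

    n1, n2, m = 4, 3, 2
    rng = np.random.default_rng(0)
    C = np.zeros((n1 + n2, n1 + n2))
    A1 = rng.standard_normal((n1, n1)); C[:n1, :n1] = A1 @ A1.T      # C^-(11)
    A2 = rng.standard_normal((n2, n2)); C[n1:, n1:] = A2 @ A2.T      # C^-(22)
    x = rng.standard_normal(n1 + n2)

    H = np.zeros((m, n1 + n2)); H[:, :n1] = rng.standard_normal((m, n1))  # Hj = (Hj^(1) 0)
    R = 0.1 * np.eye(m)
    z = rng.standard_normal(m)

    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)    # gain: lower n2 rows are zero
    x_new = x + K @ (z - H @ x)
    C_new = (np.eye(n1 + n2) - K @ H) @ C

    assert np.allclose(x_new[n1:], x[n1:])             # lower coordinates unchanged
    assert np.allclose(C_new[n1:, n1:], C[n1:, n1:])   # lower-right block unchanged
    assert np.allclose(C_new[:n1, n1:], 0.0)           # cross blocks remain zero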

The above derivation suggests a decomposition principle to reduce useless computation with zeros. Given an initial structure estimate (x^-, C^-):

  1. Identify and partition the initial estimate into a number of uncorrelated parts:
     x^- = (x^-(1), ..., x^-(q))^t,    C^- = blockdiag(C^-(11), ..., C^-(qq)).
  2. Peel off (x^-(i), C^-(ii)), 1 <= i <= q, and those constraints which functionally depend only on it, as a separate subproblem. We are left with constraints whose functional dependencies span partition boundaries.
  3. Recursively apply steps 1 and 2 to decompose each subproblem.
  4. In post-order, compute each subproblem with the constraints assigned to it and to no smaller subunit.

We should emphasize that this hierarchical organization achieves the same computation as the original flat problem. The difference is in the elimination of useless operations with zeros in the application of localized observations.

We can visualize a hierarchical organization of the molecular structure such as shown in Figure 2. Our original molecule of interest is positioned at the root. A node may give rise to some number of children, corresponding to initially uncorrelated sub-structures of the node. Associated with each node are observations local to itself; that is, we try to apply constraints at the lowest level of the tree possible. The update process starts at the leaf level. We update a leaf node with its associated constraints by applying the procedure of Figure 1. When all the children of a node have been updated, we may proceed to update the node itself.
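
The tree traversal just described can be sketched as follows. This is a simplified rendering: the Node layout is an assumption, and apply_constraint stands for the flat update procedure of Figure 1 sketched in Section 2.

    # Post-order update of the structure hierarchy.  Children are independent
    # subproblems; a parent is assembled from its (still uncorrelated) children
    # and then refined with the constraints that span its partition boundaries.
    import numpy as np

    class Node:
        def __init__(self, x, C, constraints, children=()):
            self.x, self.C = x, C              # state estimate and covariance
            self.constraints = constraints     # observations assigned to this level
            self.children = list(children)

    def update_tree(node):
        for child in node.children:            # may run in parallel (Section 4)
            update_tree(child)
        if node.children:
            # Concatenated state vector and block-diagonal covariance of the children.
            node.x = np.concatenate([c.x for c in node.children])
            sizes = [len(c.x) for c in node.children]
            node.C = np.zeros((sum(sizes), sum(sizes)))
            offset = 0
            for c, s in zip(node.children, sizes):
                node.C[offset:offset + s, offset:offset + s] = c.C
                offset += s
        for (z, h, jac_h, R) in node.constraints:
            node.x, node.C = apply_constraint(node.x, node.C, z, h, jac_h, R)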

 

3.1 Hierarchical vs. Flat Organization

In this section we use a set of simple problems to compare the computational efficiency of the hierarchical approach with that of the flat organization.

An RNA [10] molecule consists of a linear sequence of bases folded into a unique shape in space. There are four types of bases, A, C, G, and U, each consisting of a common backbone and a distinguishing sidechain. Besides the covalent bonds which chain the bases together, complementary bases may pair up out of the linear order. We focus on the simplest motif of RNA structure, a double helix. Structurally, an RNA double helix is made up of a series of base pairs twisted into a spiral pattern.


Figure 2: Hierarchical Decomposition of RNA Double Helix

In the flat organization, we simply treat the helix as a long vector of atoms. For hierarchical decomposition, we recursively divide the helical structure into two sub-helices of equal length, down to individual base pairs. We further decompose a base pair into two bases, each of which is divided into the leaf level of a backbone and a sidechain. This decomposition is illustrated in Figure 2.

The constraints we use to define the shape of the helix fall into five categories:

  1. Distances between atoms in the backbones.
  2. Distances between atoms in the sidechains.
  3. Distances between an atom in the backbone and one in the sidechain of a base.
  4. Distances between atoms on opposite sides of a base pair.
  5. Distances between atoms on opposite sides of two adjacent base pairs.

For hierarchical computation, the first two types of distances may be applied to the leaf nodes of backbones and sidechains. One level up in the hierarchy, the individual bases may be assigned the third type of distances to orient the backbone and the sidechain. The fourth type of constraints position two bases into a base pair. Above the level of base pairs, we apply the last category of distances whenever we join two sub-helices to form a parent helix.

Table 1 and the accompanying Figure 5 show the execution times, for both flat and hierarchical organization, for helices of various lengths (in number of base pairs). We see that the average execution time per scalar constraint with hierarchical decomposition grows much more slowly than with flat organization of the same problems.

                                         Flat Organization        Hierarchical Decomposition
Helix    Number of   Number of        Total       Time per       Total       Time per       Speedup
Length   Atoms       Constraints      Time (sec)  Constraint     Time (sec)  Constraint
  1          43           675             1.16     0.00172          0.65      0.00096         1.78
  2          86          1574             7.78     0.00494          2.42      0.00154         3.21
  4         170          3294            54.09     0.01642          8.45      0.00257         6.40
  8         340          6810           427.23     0.06274         30.98      0.00455        13.79
 16         680         13824          3436.18     0.24857        114.20      0.00826        30.09

Table 1: Helix Run Times and Speedup of Hierarchical vs. Flat Organization




Figure 5: Helix Computational Efficiency of Hierarchical vs. Flat Organization


We can qualitatively understand the phenomenon as follows. Suppose the size of the molecule (dimension of the state vector) is n. The computational complexity of the update procedure of Figure 1 grows as O(n^2) for a single scalar constraint. Therefore for flat organization the quadratic growth of per-constraint execution time is expected.

Next consider a hierarchical decomposition of a molecular structure in which a parent node gives rise to two children of the same size, half as large as the parent. If the depth of the binary tree is d, then at level i (0 <= i <= d), there will be 2^i nodes, each of size O(2^(d-i)). The average execution time per constraint depends on how much of the computation we can push down the tree. Suppose a node at level i has mi scalar constraints; then the average execution time of a constraint is

t = ( Σ_{i=0}^{d} 2^i mi O(2^(2(d-i))) ) / ( Σ_{i=0}^{d} 2^i mi ).
In an optimistic scenario (such as the present examples of helices), most observations are localized, and we are able to assign most of them to nodes down toward the leaf level so that a node gets roughly some constant number of constraints. Then the average run time per constraint grows linearly with the size of the molecule:

t = O(2^d) = O(n).

On the other hand, if a node has as many constraints as those of its children combined, so that every level of the tree carries the same total number of scalar constraints, then the computational advantage of a hierarchical scheme over a flat organization is more modest:

t = O(2^(2d) / d) = O(n^2 / log n).
We believe that most realistic problems lie between these two bounds and that the advantages of hierarchical computation are substantial in practice.
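
For reference, both bounds follow from the average-time expression above (a short arithmetic sketch under the stated assumptions). With mi = O(1) at every node, the numerator is Σ_{i=0}^{d} 2^i O(2^(2(d-i))) = O(2^(2d)) while the denominator is O(2^d), giving t = O(2^d) = O(n). If instead every level carries the same total number of scalar constraints (2^i mi constant), the numerator is still O(2^(2d)) times that constant while the denominator grows only as d + 1, giving t = O(2^(2d)/d) = O(n^2/log n), against O(n^2) per constraint for the flat organization.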

 

4. Parallelization

In this section, we discuss the potential parallelism in our hierarchical algorithm. We first focus on the computation within one node of the structure hierarchy. The computation required for each node of the hierarchy is an instance of the flat problem and involves repeated application of the update procedure of Figure 1 over all the constraints of the node.

 

4.1 Intra-Node Computation

A coarse-grained way to parallelize computation for a node is across the application of constraints. We may divide the set of constraints assigned to the node into two disjoint subsets. In parallel, these two groups of constraints can produce two different updated structure estimates, (x1, C1) and (x2, C2). These two estimates may then be combined to produce a final estimate using the following procedure:


Figure 3: Combination of Independent Updates of a Node

If we partition the collection of constraints into more than two subsets, then the separately produced results have to be combined pairwise in a "tournament" fashion.
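
Figure 3 is not reproduced in this text. One standard way to combine two Gaussian estimates that were produced from a common prior with independent data uses the information (inverse-covariance) form, sketched below; whether this matches the procedure of Figure 3 exactly is an assumption, but its cost is of the same order as the combination cost discussed next.

    # Combine (x1, C1) and (x2, C2), each obtained by updating the common prior
    # (x_prior, C_prior) with a disjoint, independent subset of constraints.
    # In the linearized Gaussian approximation the data information adds:
    #   Cf^{-1} = C1^{-1} + C2^{-1} - C_prior^{-1}
    #   Cf^{-1} xf = C1^{-1} x1 + C2^{-1} x2 - C_prior^{-1} x_prior
    import numpy as np

    def combine(x1, C1, x2, C2, x_prior, C_prior):
        L1, L2, Lp = (np.linalg.inv(C) for C in (C1, C2, C_prior))
        Lf = L1 + L2 - Lp
        Cf = np.linalg.inv(Lf)
        xf = Cf @ (L1 @ x1 + L2 @ x2 - Lp @ x_prior)
        return xf, Cf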

This coarse-grained approach has the advantage of low inter-processor communication requirements in the processing of the constraints: the processing of disjoint subsets is independent. However, there are two drawbacks. First, the combination process is an expensive computational overhead which does not exist in sequential application of constraints. If the node state vector has dimension n, the combination process requires essentially the same amount of work as applying a constraint vector of the same dimension. If the total dimension of the entire set of constraints is M, then M needs to be much larger than n for this scheme to be worthwhile. Second, this parallel approach requires duplication of the state data structures (x^-, C^-), which take up the lion's share of the space required by a node.

In other words, this approach is more suitable for processing massive, redundant observations on structures whose state information is modest in size relative to the available storage. In contrast, data on biological structures are usually expensive to obtain and not in great abundance. Thus from the point of view of both computational overhead and scalability of memory requirement, parallelizing intra-node computation across constraint application is not attractive. See [6] for a numerical example.

We therefore rely on parallelism within the constraint application procedure (Figure 1) for each set of constraints. As shown in [6], this involves parallel matrix computation and yields good performance for nodes of realistic sizes.

 

4.2 Hierarchical Computation

Because the nodes in the structure hierarchy become progressively smaller toward the leaf level, sole reliance on intra-node parallelism is inadequate. Hierarchical decomposition exposes a new axis of parallelism beyond parallel matrix computation within a node. Although the children of a node in the structure hierarchy must be updated before the node, the computations for the subtrees rooted at those children are independent (more generally, nodes that do not have an ancestor-descendant relationship can be processed in parallel). Therefore given a number of processors assigned to a node, we may distribute these processors over the child branches. Moreover, if a node is assigned only a subset of the available processors, we may migrate the associated data closer to those processors in a physically distributed memory environment.

This presents a new challenge for efficient parallelization. How do we assign processors to nodes to achieve load balance? We would like to compute statically as much of the assignment as possible to have better control over data locality.

 

4.3 Processor Assignment

Our strategy is to estimate the amount of work at every node and divide processors among nodes accordingly. To determine how to do this, we constructed a set of nodes of various sizes. For each node size, we also varied the size of the constraint vector (number of scalar constraints) input into the update procedure at a time (Figure 1). We computed the average execution time of a scalar constraint for each experimental setting. The results are shown in Table 2 and the accompanying Figure 6.

                              Node Size (Number of Atoms)
Constraint Batch Dimension        43        86       170       340       680
   1                         0.00535   0.02008   0.07784   0.34601   1.41522
   2                         0.00324   0.01181   0.04571   0.19945   0.80863
   4                         0.00204   0.00712   0.02670   0.11354   0.45738
   8                         0.00154   0.00507   0.01868   0.07613   0.30157
  16                         0.00141   0.00435   0.01537   0.06001   0.23427
  32                         0.00176   0.00514   0.01689   0.06301   0.23850
  64                         0.00246   0.00628   0.01916   0.06657   0.25133
 128                         0.00387   0.00899   0.02429   0.07583   0.27472
 256                         0.00747   0.01533   0.03788   0.11143   0.38431
 512                         0.01630   0.02915   0.06277   0.15257   0.46112

Table 2: Average Execution Time (in sec) Per Scalar Constraint



Figure 6: Projected Views of Per-Constraint Execution Time


We see that over a wide range of node sizes, the update procedure runs fastest if we divide the constraints into batches of size 16, although from the standpoint of FLOP count alone the run time should increase monotonically with the constraint batch size. As shown earlier in [6], when we segment the set of constraints into very small vectors and thus have to apply the update procedure (Figure 1) many times, the algorithm degenerates to vector operations. It repeatedly streams through one large data structure (the state covariance matrix) without much reuse in the cache and suffers a large number of misses. Processing constraints in moderate-sized batches allows tiling of matrix computations, which exploits temporal locality well.

Also evident from Figure 6 is the quadratic growth of per-constraint execution time with the node size, which can be inferred from the slope of the lines on the log plot.

We applied a constrained least-squares polynomial fit to the data points of Table 2, excluding those for very small constraint batch sizes. We imposed two simple checks on the regression:

  1. The leading coefficient should be positive, in order for the regression model to be a growth function.
  2. The sum of coefficients and separately the constant term should be non-negative, so that the fitted polynomial does not predict a negative execution time for very small values of its parameters.

Although the constrained regression may not fit the data points as well as an unconstrained one, it ensures against catastrophic failure of the fitted model away from the data points.
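
A sketch of such a constrained fit is given below. The model form t(n, m) = a*n^2 + b*n*m + c*n + d*m + e is an assumption (the paper reports quadratic growth in n and only linear dependence on m, but Equation 1 itself is not reproduced here); the routine simply enforces the two checks above as inequality constraints.

    # Constrained least-squares fit of a work-estimation model (illustrative sketch).
    # samples: list of (node_size, batch_dim, time_per_scalar_constraint) triples,
    # e.g. taken from Table 2.
    import numpy as np
    from scipy.optimize import minimize

    def design(n, m):
        return np.array([n * n, n * m, n, m, 1.0])   # assumed model terms

    def fit_work_model(samples):
        X = np.array([design(n, m) for n, m, _ in samples])
        t = np.array([t for _, _, t in samples])
        sse = lambda c: np.sum((X @ c - t) ** 2)      # least-squares objective
        constraints = [
            {"type": "ineq", "fun": lambda c: c[0]},        # leading (n^2) coefficient >= 0
            {"type": "ineq", "fun": lambda c: np.sum(c)},   # sum of coefficients >= 0
            {"type": "ineq", "fun": lambda c: c[-1]},       # constant term >= 0
        ]
        res = minimize(sse, np.zeros(X.shape[1]), constraints=constraints, method="SLSQP")
        return res.x                                   # fitted coefficients (a, b, c, d, e)

    def estimated_work(coeff, n, m):
        # Plays the role of Equation 1: estimated time per scalar constraint.
        return float(design(n, m) @ coeff)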

The result of the regression is


Equation 1: Work Estimation

where t is the estimated execution time of an equivalent scalar constraint, n the node size (dimension of its state vector), and m the constraint batch dimension. We only included linear dependency on m because its higher-order effects are negligible over the range of values that we typically use and because the higher-order fits were not very stable and failed to satisfy the checks.

Although the above experiments used only distance constraints (the most prevalent form of data for molecular structures), we could in principle obtain a work estimation formula for each type of observation in which we are interested. This experiment need be done only once for each type of constraint (and not for every new problem) because the computational steps are the same except for the evaluation of the constraint functions.

Given a hierarchical decomposition of a structure, we can now distribute the computation to processors using the following heuristic steps (a code sketch follows the list):

  1. Estimate the work at each node using Equation 1. Accumulate the numbers upward so that the estimated work in any subtree is available.
  2. Assign all processors to work on the root of the tree.
  3. Given a node which has already been assigned processors, order the subtrees anchored at its child nodes in increasing amount of work.
  4. For every bipartition of the processors assigned to the node, find a partition point among the child subtrees to divide the work in a ratio as close to the processor partition as possible. Select the best match.
  5. According to the best match, separate the processors and the child nodes into two respective groups. Recursively perform Step 4 on the two groups until every child node is assigned some number of processors.
  6. Repeat Steps 3-5 for the child nodes.
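
The heuristic above can be sketched as follows. The Node fields and the tree layout are simplified assumptions, and estimate_work stands in for Equation 1.

    # Static processor assignment over the structure hierarchy (Steps 1-6).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        size: int                          # state-vector dimension of the node
        n_constraints: int                 # scalar constraints applied at this node
        children: List["Node"] = field(default_factory=list)
        subtree_work: float = 0.0
        processors: int = 0

    def estimate_work(node):
        # Placeholder for Equation 1: per-constraint cost grows roughly as size^2.
        return node.n_constraints * node.size ** 2

    def accumulate_work(node):
        # Step 1: estimated work of the subtree rooted at each node.
        node.subtree_work = estimate_work(node) + sum(accumulate_work(c) for c in node.children)
        return node.subtree_work

    def assign(node, procs):
        # Steps 2-3 and 6: give the node its processors, then divide them among
        # the child subtrees, ordered by increasing work.
        node.processors = procs
        split(sorted(node.children, key=lambda c: c.subtree_work), procs)

    def split(children, procs):
        # Steps 4-5: recursively bipartition the processors and the child subtrees
        # so that the work ratio matches the processor ratio as closely as possible.
        if not children:
            return
        if len(children) == 1 or procs <= 1:
            for c in children:
                assign(c, procs)
            return
        total = sum(c.subtree_work for c in children) or 1.0
        best = None
        for p1 in range(1, procs):                 # bipartitions of the processors
            prefix = 0.0
            for cut in range(1, len(children)):    # partition points among the subtrees
                prefix += children[cut - 1].subtree_work
                mismatch = abs(prefix / total - p1 / procs)
                if best is None or mismatch < best[0]:
                    best = (mismatch, p1, cut)
        _, p1, cut = best
        split(children[:cut], p1)
        split(children[cut:], procs - p1)

    # Usage: accumulate_work(root); assign(root, 32)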

A potential problem with this static assignment is that load imbalance may still occur if the mismatch between processor partition and subtree work ratio is significant.

 

4.4 Results of Parallel Computation

We now present the results of parallel computation for two problems. One is the RNA double helix of length 16 base pairs (Figure 2) described in Section 3.1. The other is the prokaryotic 30S ribosomal subunit. This molecular complex consists of 21 proteins and the 16S rRNA molecule, which contains about 65 double helices plus roughly the same number of interconnecting coils. The positions of the proteins have been determined by neutron diffraction mapping [4][5] and serve as reference points for placing other parts of the macromolecule. The helices and coils consist of RNA bases with geometrical constraints between them. They are positioned using experimental data which provide information about distances between helices and from helices to the proteins. The modeled problem has about 900 pseudo-atoms and 6500 constraints. A low-resolution discrete conformational space search [3] was first performed to generate initial structure estimates. This preprocessing step serves to combat low-quality locally optimal solutions which may be encountered by analytical procedures such as the current algorithm. See [7] for details of the problem and its computation. The decomposition of the 30S ribosome problem is indicated in Figure 4.


Figure 4: Hierarchical Decomposition of ribo30S

The execution platforms on which we performed our experiments are the Stanford DASH multiprocessor [9] and a Silicon Graphics Challenge. The DASH is an experimental research multiprocessor built at Stanford University. It supports an implicit shared-memory communication abstraction with hardware-supported cache coherence. The machine we used has 32 processors organized into 8 clusters. A cluster comprises four 33-MHz MIPS R3000 processors connected by a shared bus, and clusters are connected together in a mesh network. Every processor has separate 64KB first-level instruction and data caches and a 256KB second-level cache. Although the address space is logically shared, the main memory is physically distributed. Every cluster has an equal fraction of the physical memory of the machine. All caches in the system are kept coherent in hardware using a distributed directory-based protocol.

The SGI Challenge is also a shared-memory multiprocessor. It has sixteen 100-MHz MIPS R4400 processors with R4010 FPUs, connected by a 1.2 GB/sec bus. Each CPU has separate 16KB first-level instruction and data caches and a 1MB unified second-level cache. Unlike the DASH, the Challenge has a centralized memory organization.

For both problems, we measured the execution time of the program over one complete cycle of application of all the constraints. It typically takes between 20 and 200 cycles over the constraint set for the solution to converge, but the pattern of computation is the same across cycles. We have excluded time for input, output, and initialization to discount cold-start effects. On DASH, to improve locality in main memory, we distributed the larger, contiguous pieces of each node's data among only the processor clusters assigned to that node, placing them in a round-robin fashion to avoid hot spots. We also recorded the time spent in the different types of array operations, which make up almost all the execution time of the application in a typical run.

The results are shown in Table 3 through Table 6 and their accompanying figures. One confirmation of the effectiveness of the parallel hierarchical approach is that, compared to the largest problem we computed in [6], the current problems are much larger and yet executed in a fraction of the time. We see good speedups for both problems, around 24 for 32 processors on DASH and 14 for 16 processors on the Challenge. From the tables it is clear that most of the performance comes from efficient system solves, dense matrix multiplications, and dense matrix-vector multiplications. These operations have low communication requirement, and their computation can be effectively tiled for cache locality. On the other hand, the dense-sparse matrix multiplication routines achieve only about 55-75% of the ideal speedup on DASH. This can again be understood in terms of locality, since the sparse matrix to some degree randomly accesses its dense counterpart, and remote cache misses (the proportion of which increases with more processors) are more expensive than local cache misses on DASH. The two types of operations which do not parallelize well on both execution platforms are Cholesky factorization, because of the small size of the matrix to be factored and because of dependency in the computation of the matrix sub-blocks; and vector operations, which suffer from being interleaved with other array operations in the code such that the chances of the needed vectors being in the cache the next time around are small.

 NP    time   spdup    d-s   chol    sys     m-m    m-v    vec
  1  483.22    1.00  22.33   1.95  55.07  384.97   3.14   0.99
  2  246.56    1.96  11.48   1.07  27.53  193.48   1.37   0.69
  4  122.09    3.96   5.34   0.58  13.38   95.13   0.54   0.34
  6   93.00    5.20   3.59   0.53   9.28   59.87   0.47   0.27
  8   57.54    8.40   2.49   0.38   6.32   43.81   0.20   0.19
 10   52.93    9.13   2.28   0.36   5.39   36.81   0.17   0.18
 12   44.37   10.80   2.00   0.33   4.54   30.46   0.13   0.16
 14   42.01   11.50   1.83   0.30   3.89   27.08   0.11   0.17
 16   33.20   14.55   1.91   0.28   3.70   24.11   0.11   0.17
 20   31.14   15.52   1.57   0.31   3.41   20.12   0.10   0.15
 24   25.07   19.27   1.40   0.27   2.56   17.25   0.09   0.15
 28   24.58   19.66   1.28   0.30   2.38   15.52   0.08   0.14
 32   20.00   24.16   1.35   0.28   2.12   13.31   0.07   0.15

Table 3: Helix Work Time and Distribution on DASH


NP:    number of processors
time:  work time (total execution time minus initialization, input, and output time)
spdup: speedup factor
d-s:   dense-sparse matrix multiplications
chol:  Cholesky factorization
sys:   system solves
m-m:   dense matrix multiplications
m-v:   dense matrix-vector multiplications
vec:   vector operations


Figure 7: Helix Speedup and Time Distribution on DASH



 NP    time   spdup    d-s   chol    sys     m-m    m-v    vec
  1  924.57    1.00  17.33   0.83  33.18  861.05   3.01   0.61
  2  446.42    2.07   9.09   0.50  16.90  411.72   1.26   0.33
  4  215.95    4.28   4.67   0.29   8.35  197.34   0.29   0.17
  6  137.95    6.70   2.58   0.22   5.09  120.30   0.21   0.12
  8  110.48    8.37   2.29   0.34   4.73   92.14   0.16   0.10
 10   87.98   10.51   1.90   0.17   3.13   75.98   0.09   0.10
 12   72.60   12.74   1.71   0.17   3.01   62.32   0.12   0.08
 14   67.83   13.63   1.70   0.16   2.62   56.28   0.07   0.08
 16   60.02   15.40   1.53   0.18   2.31   51.07   0.07   0.08
 20   49.09   18.83   1.42   0.16   1.93   41.57   0.06   0.08
 24   43.93   21.05   1.43   0.33   1.62   37.10   0.05   0.08
 32   38.14   24.24   1.17   0.16   1.37   32.22   0.04   0.08

Table 4: ribo30S Work Time and Distribution on DASH



Figure 8: ribo30S Speedup and Time Distribution on DASH



 NP    time   spdup    d-s   chol    sys     m-m    m-v    vec
  1  159.99    1.00   6.96   0.69  19.48  128.86   0.49   0.33
  2   82.65    1.94   3.42   0.35   9.76   66.38   0.25   0.16
  4   42.20    3.79   1.65   0.19   4.93   33.77   0.13   0.09
  6   32.30    4.95   1.13   0.15   3.28   22.53   0.09   0.06
  8   21.79    7.34   0.84   0.12   2.46   17.21   0.06   0.05
 10   18.83    8.50   0.69   0.11   1.97   13.98   0.05   0.04
 12   15.98   10.01   0.59   0.10   1.67   11.55   0.04   0.05
 14   14.49   11.04   0.50   0.10   1.43   10.05   0.04   0.04
 16   11.59   13.80   0.47   0.10   1.26    8.87   0.03   0.04

Table 5: Helix Work Time and Distribution on Challenge



Figure 9: Helix Speedup and Time Distribution on Challenge



 NP    time   spdup    d-s   chol    sys     m-m    m-v    vec
  1  272.53    1.00   5.37   0.32  11.55  253.52   0.29   0.15
  2  145.41    1.87   2.68   0.17   5.73  134.46   0.15   0.08
  4   72.56    3.76   1.33   0.10   2.88   66.68   0.08   0.05
  6   50.35    5.41   0.91   0.08   2.06   45.19   0.05   0.03
  8   37.26    7.31   0.69   0.06   1.45   33.98   0.04   0.03
 10   29.44    9.26   0.56   0.06   1.17   26.77   0.03   0.03
 12   24.96   10.92   0.48   0.05   0.96   22.44   0.03   0.03
 14   21.91   12.44   0.43   0.05   0.84   19.69   0.03   0.03
 16   18.86   14.45   0.40   0.06   0.74   16.85   0.02   0.03

Table 6: ribo30S Work Time and Distribution on Challenge



Figure 10: ribo30S Speedup and Time Distribution on Challenge


For the Helix problem, we see a noticeable dip in the speedup whenever the number of processors is not a power of 2. This is a shortcoming of the static scheduling algorithm. Whenever the processors at a node cannot be evenly divided into two groups for the two subtrees, the computation effectively proceeds at the speed of the smaller group because the two groups must synchronize at the node after update of the subtrees. The ribo30S structure does not suffer this problem because its branching factor is larger, allowing for more even division of work.

In comparison with our previous, non-hierarchical version [6], the current application incurs noticeably higher memory overhead. The hierarchical code dynamically allocates nodes since their number and sizes can vary widely. It also handles more general types of constraints which use pointers to link together variable portions of the data structure. Hence data are much more scattered about in the address space. Unrelated data for different nodes may interleave depending on the placement decision of malloc(), and many small data structures, especially the constraints, cannot be properly placed and migrated at page granularity without massive data reorganization. This problem can be ameliorated by better data structure design to reduce fragmentation of logically contiguous data.

 

5. Further Work

We have demonstrated the efficiency of a hierarchical approach in molecular structure estimation and successfully applied parallel computation to its solution. The results of our experiments suggest several areas of further study to extend the effectiveness and functionality of our parallel algorithm.

Currently the application requires the user to specify the entire structure hierarchy (although a simple and non-optimal recursive bisection method is in place to decompose a flat problem specification). A computationally efficient hierarchical decomposition allows most of the constraints to be ascribed to nodes toward the leaf level of the tree. Effective automatic decomposition of a flat structure specification seems to us to require a good solution to the graph partitioning problem. We can think of the atoms of the molecule as nodes in a graph and constraints between atoms as edges between the nodes. A heuristic to partition the graph into a small number of loosely coupled subgraphs will lead to an efficient decomposition of the molecular structure because most of the observations will be localized in the sub-structures, which can then be recursively subdivided.

An alternative, bottom-up approach may be to have the user specify the leaf nodes of the structure hierarchy. The leaves usually correspond to the building blocks of the molecular structure such as nucleotides or amino acid residues and already encapsulate a lot of information about locality of interaction. The program can then look for a good hierarchical grouping of the nodes to minimize the number of constraints that need to be applied at the higher levels of the hierarchy.

The results of the Helix problem show that static scheduling of processors leads to some loss of efficiency in a hierarchy with low branching factors when processors cannot be evenly divided among the subtrees of a node. This is a more difficult problem to solve. Task (constraint) stealing between sibling tree nodes is made difficult by the fact that the computation within a node is an instance of the flat problem, which consists of many phases separated by synchronization among the processors assigned to that node. The stolen task results must be combined with those of the originating node, further increasing computational cost. However, dynamic reassignment of processors to nodes by periodic global synchronization does merit further investigation. This approach will make use of work estimation so that every processor (or processor group) can take roughly the same amount of time to process some number of constraints at the corresponding nodes between global synchronization and processor re-grouping.

From the results on DASH it is evident that the bend in the speedup curve correlates strongly with the increase in the overhead of memory operations. This increase comes from using pointers to link variable portions of data structures and the consequent fragmentation of logically contiguous data. For our application, this situation is not difficult to improve because, although many data structures have variable parts, they have the property that, once instantiated, they remain fixed throughout the execution of the program. More careful memory management to group related data together will improve data locality and should not be difficult to implement.

Finally, we have not examined the impact of hierarchy on convergence. All the computations and comparisons have been made for one complete cycle of application of all the constraints. Multiple iterations are usually required for the solution to settle to a stable value. The difference between the hierarchical organization and the flat computation is in the order of constraint application. Hierarchical computation processes constraints in order of locality of interaction, whereas flat computation is not guided by this property of the application domain. A comparison of three different ways of ordering constraints for flat computation can be found in [1], which as expected shows that incorporating domain knowledge into the mathematical procedure helps convergence. We believe hierarchical organization of constraints should further speed convergence in addition to reducing the computational complexity within an iteration.

 

6. Related Work

In the area of molecular structures there are two lines of research related to the work described here. One is distance geometry [12][13]. It takes as input a set of bounds on the interatomic distances of the molecule. Within these bounds, trial distances are generated and refined until a valid distance matrix results. The first three eigenvectors of a closely related matrix then give the coordinates of a candidate structure which satisfies the distance bounds. The other line is energy minimization and molecular dynamics [14][16], which represent interactions between atoms by energy terms. Optimization techniques are applied to find a conformation with the lowest energy. A systematic comparison of these methods with respect to speed, accuracy, and sensitivity can be found in [15].

 

7. Conclusion

Biocomputing applications are an emerging arena in which high-performance computing has increasing relevance. One important component of computational biology is molecular structure determination. We have presented a parallel hierarchical method for estimating molecular structure from multiple sources of uncertain data and demonstrated the application on two shared-address-space multiprocessors using data sets from real biological problems. While we have obtained good speedups on the problems we have examined, the results of our experiments point to three areas of further improvement: automatic structure decomposition, processor load balancing, and data locality. These issues have relevance not only to our own application, but also to parallel hierarchical computation in general. We are currently investigating ways to address them.

 

8. Acknowledgment

We would like to thank John Hennessy for his support. We also thank Radhika Thekkath for helpful comments and Richard Chen for providing us with the 30S ribosome problem. This work is supported in part by ARPA contract DABT63-94-C-0054.

RBA is a Culpeper Medical Scholar and is supported by NIH LM-05652. Computing support has been provided in part by CAMIS resource NIH LM-05305.

 

9. References

 [1] Altman, R. B. A Probabilistic Approach to Determining Biological Structure: Integrating Uncertain Data Sources. Int. J. Human Comp. Studies, 42, pp.593-616, 1995.

 [2] Altman, R. B., C. C. Chen, W. B. Poland, J. P. Singh. Probabilistic Constraint Satisfaction with Non-Gaussian Noise. Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pp. 15-22, San Francisco, CA, 1994.

 [3] Altman, R. B., B. Weiser, H. F. Noller. Constraint Satisfaction Techniques for Modeling Large Complexes: Application to Central Domain of the 16S Ribosomal Subunit. Proceedings of Second International Conference on Intelligent Systems for Molecular Biology, pp.10-18, Stanford, August 1994, AAAI Press.

 [4] Capel, M. S., D. M. Engelman, B. R. Freeborn, M. Kjeldgaard, J. A. Langer, V. Ramakrishnan, D. G. Schindler, D. K. Schneider, B. P. Schoenborn, I. Y. Sillers, et al. A Complete Mapping of the Proteins in the Small Ribosomal Subunit of Escherichia coli. Science, 238(4832): 1403-6, 1987.

 [5] Capel, M. S., M. Kjeldgaard, D. M. Engelman, P. B. Moore. Positions of S2, S13, S16, S17, S19 and S21 in the 30S Ribosomal Subunit of Escherichia coli. J. Mol. Bio., 200(1): 65-87, 1988.

 [6] Chen, C. C., J. P. Singh, W. B. Poland, R. B. Altman. Parallel Protein Structure Prediction from Uncertain Data. Proceedings of the Supercomputing '94 Conference, pp.570-579, Washington, D. C., 1994.

 [7] Chen, R., D. Fink, R. B. Altman. Computing the Structure of Large Complexes: Applying Constraint Satisfaction Techniques to Modeling the 16S Ribosomal RNA. Biomolecular NMR Spectroscopy (Markley, J. L. and S. J. Opella, eds.), pp.279-299, Oxford University Press, 1995.

 [8] Gelb, A., ed. Applied Optimal Estimation. The MIT Press, Cambridge, MA, 1984.

 [9] Lenoski, D., J. Laudon, K. Gharachorloo, A. Gupta, and J. Hennessy. The Directory-Based Cache Coherence Protocol for the DASH Multiprocessor. Proceedings of the 17th Annual International Symposium on Computer Architecture, pp.148-159, May 1990.

 [10] Stryer, L. Biochemistry. W. H. Freeman, New York, 1991.

 [11] Wuthrich, K. NMR of Proteins and Nucleic Acids. John Wiley and Sons, 1986.

 [12] Crippen, G. M. Distance Geometry and Conformational Calculations. John Wiley and Sons, 1981.

 [13] Havel, T. F., I. D. Kuntz, G. M. Crippen. The Theory and Practice of Distance Geometry. Bulletin of Mathematical Biology, 45(5), pp.665-720, 1983.

 [14] Levitt, M., R. Sharon. Accurate Simulation of Protein Dynamics in Solution. Proceedings of the National Academy of Science, 85, pp.7557-7561, 1988.

 [15] Liu, Y., D. Zhao, R. Altman, O. Jardetzky. A Systematic Comparison of Three Structure Determination Methods from NMR Data: Dependence upon Quality and Quantity of Data. Journal of Biomolecular NMR, 2, pp.373-388, 1992.

 [16] Nemethy, G., H. A. Scheraga. Theoretical Studies of Protein Conformation by Means of Energy Computations. FASEB Journal, 4, pp.3189-3197, November 1990.