Tel: 0151 794 4751 Fax: 0151 794 4754 [Updated on 23-Nov-1995]
-
10:00 - 10:30
Registration and morning coffee [Room : G4]
-
-
10:30 - 11:25
Prof Christopher Paige (McGill University, Canada)
The sensitivity of the Cholesky factor
We examine how the Cholesky factor R in A = R'R changes when a
symmetric change is made to A.
We find R is often less sensitive than we previously thought.
This is particularly true if R is obtained from the permuted matrix, P'AP = R'R,
using standard symmetric pivoting.
We show this by introducing an optimal condition number k(A) for the
problem, and
comparing it to earlier condition estimators for various symmetric
positive definite A.
But k(A) is neither easy to understand nor to compute.
Fortunately there is a nice
simple analysis which provides a more understandable, though weaker,
condition number.
This not only allows efficient computation of estimates of k(A), but also
gives a good understanding of the above observations.
(*)
Joint research with Xiao-Wen Chang, Computer Science, McGill University, &
G. W. (Pete) Stewart, Computer Science, University of Maryland.
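
The effect the abstract describes can be observed with a small numerical experiment of the following kind (an illustrative Python sketch of ours, not the authors' analysis): a symmetric perturbation of an ill-conditioned positive definite matrix often changes R by much less than the pessimistic kappa(A)*eps scale, which is what the talk quantifies.

    import numpy as np
    from scipy.linalg import cholesky

    rng = np.random.default_rng(0)

    # An ill-conditioned symmetric positive definite matrix with a
    # prescribed spectrum (condition number about 1e8).
    n = 50
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag(np.logspace(0, -8, n)) @ Q.T
    A = (A + A.T) / 2

    # A small random symmetric perturbation E with ||E|| = 1e-10 ||A||.
    E = rng.standard_normal((n, n))
    E = (E + E.T) / 2
    E *= 1e-10 * np.linalg.norm(A) / np.linalg.norm(E)

    R = cholesky(A)                  # upper triangular, A = R'R
    R_pert = cholesky(A + E)
    eps = np.linalg.norm(E) / np.linalg.norm(A)

    print("relative change in R:", np.linalg.norm(R_pert - R) / np.linalg.norm(R))
    print("kappa(A) * eps      :", np.linalg.cond(A) * eps)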
-
11:30 - 11:55
Mr Jason A Roberts (Chester College)
A comparison of Adomian's decomposition method and a conventional
numerical method for the solution of certain predator-prey model
equations
Olek (1994, SIAM Review, 36(3), 480-488)
describes the application of Adomian's decomposition method
to derive an accurate series solution for multispecies Lotka-Volterra
equations. In this paper we consider the application of the method to a
more realistic model taken from the work
of Murray (1989, Mathematical Biology, Springer-Verlag)
and compare our results with those obtained by the
Runge-Kutta-Fehlberg 4/5 method using the Matlab ODE solver.
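
For orientation, a conventional numerical solution of a model of this kind can be sketched in a few lines of Python (the model form and parameters below are illustrative assumptions, not those used in the talk), using an adaptive embedded Runge-Kutta pair in place of the RKF 4/5 Matlab solver:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Predator-prey model with logistic prey growth (illustrative parameters).
    a, K, b, c, d = 1.0, 10.0, 0.5, 0.2, 0.6

    def rhs(t, y):
        u, v = y                     # u: prey, v: predator
        du = a * u * (1 - u / K) - b * u * v
        dv = c * u * v - d * v
        return [du, dv]

    # Adaptive embedded Runge-Kutta 4(5) pair (Dormand-Prince in SciPy),
    # standing in for the RKF 4/5 solver mentioned in the abstract.
    sol = solve_ivp(rhs, (0.0, 50.0), [2.0, 1.0], method="RK45",
                    rtol=1e-8, atol=1e-10, dense_output=True)

    print("prey, predator at t = 50:", sol.y[:, -1])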
-
-
12:00 - 12:25
Mr Denis Hutchinson (Leeds University)
New parallel conjugate direction methods for unconstrained optimization
In this paper we consider parallel non-gradient methods based
on the construction of conjugate directions. The methods of Chazan and
Miranker, Sloboda, and Sutti are
all based on the parallel subspace theorem which was exploited in the
classical serial method of Smith (1962),
as is the new Bassiri-Hutchinson algorithm presented in this paper.
However, in extensive testing the new algorithm proved superior to
the Chazan-Miranker method, several improved versions of Sloboda's method
and the standard Sutti method. In particular, if the parallel exploration
phase of the Sutti method is
unsuccessful, the algorithm becomes essentially serial. Finally, the
importance of the downhill property
in the context of these methods is discussed.
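
The parallel subspace theorem on which all of these methods rest is easy to illustrate (a small numerical check of ours, not any of the algorithms discussed): for a quadratic, exact line minimisations along the same direction from two different starting points produce minimisers whose difference is conjugate to that direction.

    import numpy as np

    rng = np.random.default_rng(1)

    # A random SPD quadratic f(x) = 0.5 x'Ax - b'x.
    n = 6
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)
    b = rng.standard_normal(n)

    def line_min(x, d):
        """Exact minimiser of the quadratic along x + t*d (in the
        derivative-free methods this point would be located by a
        line search rather than this closed form)."""
        t = (b - A @ x) @ d / (d @ A @ d)
        return x + t * d

    d = rng.standard_normal(n)
    y1 = line_min(rng.standard_normal(n), d)
    y2 = line_min(rng.standard_normal(n), d)

    # Parallel subspace theorem: (y2 - y1) is A-conjugate to d.
    print("conjugacy check (should be ~0):", (y2 - y1) @ A @ d)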
-
-

12:30 - 12:55
Dr K Vince Fernando (NAG Ltd, Oxford)
Computing eigenvectors and eigenvalues of tridiagonal matrices
We consider the solution of the homogeneous equation (J-aI)x = 0
where J is a tridiagonal matrix, "a" is a known eigenvalue, and x is the
unknown eigenvector corresponding to "a". Since the system is under-determined,
x could be obtained by setting x_k=1 and solving for the rest of the elements
of x. This method is not entirely new; it can be traced back to
Cauchy (1829). In 1958, Wilkinson demonstrated that the computed x is
highly sensitive to the choice of k; the traditional practice of setting
k = 1 or k = n can lead to disastrous results.
We develop algorithms to find the optimal k, which require an LDU and a UDL
factorisation of J-aI, based on the theory developed by Fernando (1995)
for general dense matrices. We have also discovered new formulae for the
determinant of H-tI, where H is a more general Hessenberg matrix,
which give better numerical results when the shifted matrix is nearly singular.
These formulae could be developed to obtain efficient eigenvalue solvers
based on standard zero finders such as Newton and Laguerre methods.
These algorithms also give orthogonal eigenvectors even if the matrix has
eigenvalue clusters.
The routines for solving eigenproblems are embarrassingly parallel
and hence suitable for modern architectures.
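
The construction outlined above can be sketched roughly as follows (a simplified reconstruction for the symmetric case, in our own notation; it is not the author's code and omits the refinements for clusters and the Hessenberg determinant formulae):

    import numpy as np

    def tridiag_eigvec(d, e, lam):
        """Eigenvector of the symmetric tridiagonal matrix with diagonal d
        and off-diagonal e, for the known eigenvalue lam, using forward
        (LDL') and backward (UDU') factorisations of J - lam*I and an
        optimal choice of the index k at which x_k is set to 1."""
        n = len(d)
        dp = np.empty(n)                 # pivots of the forward factorisation
        dm = np.empty(n)                 # pivots of the backward factorisation
        dp[0] = d[0] - lam
        for i in range(1, n):
            dp[i] = (d[i] - lam) - e[i - 1] ** 2 / dp[i - 1]
        dm[n - 1] = d[n - 1] - lam
        for i in range(n - 2, -1, -1):
            dm[i] = (d[i] - lam) - e[i] ** 2 / dm[i + 1]

        # gamma_k is the single pivot of the "twisted" factorisation at k;
        # the smallest |gamma_k| marks the best position for x_k = 1.
        gamma = dp + dm - (d - lam)
        k = int(np.argmin(np.abs(gamma)))

        x = np.empty(n)
        x[k] = 1.0
        for i in range(k - 1, -1, -1):   # recur upwards from k
            x[i] = -e[i] * x[i + 1] / dp[i]
        for i in range(k + 1, n):        # recur downwards from k
            x[i] = -e[i - 1] * x[i - 1] / dm[i]
        return x / np.linalg.norm(x)

    # Example: a small symmetric tridiagonal matrix and a computed eigenvalue.
    n = 8
    d = np.full(n, 2.0)
    e = np.full(n - 1, -1.0)
    J = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
    lam = np.linalg.eigvalsh(J)[0]
    x = tridiag_eigvec(d, e, lam)
    print("residual norm:", np.linalg.norm(J @ x - lam * x))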
-
-
13:00 - 14:00
Lunch [Common Room : 1st floor]
-
-
14:00 - 14:55
Prof Gene Golub (Stanford University, USA)
Componentwise estimates of errors
In many situations, it is desirable to estimate elements of a
solution
vector for a linear system of equations. In addition, one seeks to
estimate some linear combination of the solution components. In this
talk, we shall show how these estimates can be obtained. Our basic
tools are numerical quadrature and the Lanczos algorithm.
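
The kind of machinery involved can be illustrated with a short sketch (a generic Lanczos/Gauss-quadrature estimator of a quadratic form, not necessarily the estimator presented in the talk): m Lanczos steps started from u yield a small tridiagonal T, and ||u||^2 (T^{-1})_{11} approximates u'A^{-1}u; individual solution components, or linear combinations of them, then follow from the polarisation identity.

    import numpy as np

    def lanczos_T(A, u, m):
        """m steps of the symmetric Lanczos process started from u;
        returns the m-by-m tridiagonal matrix T_m."""
        alpha = np.zeros(m)
        beta = np.zeros(m - 1)
        q_prev = np.zeros(len(u))
        q = u / np.linalg.norm(u)
        b_prev = 0.0
        for j in range(m):
            w = A @ q - b_prev * q_prev
            alpha[j] = q @ w
            w = w - alpha[j] * q
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                q_prev, q, b_prev = q, w / beta[j], beta[j]
        return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

    def quad_form(A, u, m=12):
        """Gauss-quadrature (Lanczos) estimate of u' A^{-1} u for SPD A."""
        T = lanczos_T(A, u, m)
        e1 = np.zeros(m); e1[0] = 1.0
        return (u @ u) * np.linalg.solve(T, e1)[0]

    # Estimate the i-th component of x = A^{-1} b via the polarisation
    # identity e_i'A^{-1}b = ((e_i+b)'A^{-1}(e_i+b) - (e_i-b)'A^{-1}(e_i-b))/4.
    rng = np.random.default_rng(2)
    n, i = 200, 7
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)
    b = rng.standard_normal(n)
    ei = np.zeros(n); ei[i] = 1.0
    estimate = (quad_form(A, ei + b) - quad_form(A, ei - b)) / 4.0
    print("estimate :", estimate)
    print("exact    :", np.linalg.solve(A, b)[i])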
-
-
15:00 - 15:20
Afternoon tea [Room : G4]
-
-
15:20 - 15:45
Prof Gwynne Evans (De Montfort University, Leicester)
Semi-stable methods for the high order evaluation
of irregular oscillatory integrals
Many recent developments have occurred in the area of quadrature
for oscillatory integrands. The case of regular oscillations
of the form sin(wx) or cos(wx) is well understood, but when w is no
longer a large constant but itself a function of x the classical
methods for the regular case break down, and the problem needs
to be approached in a new light. The initial methods for the irregular
problem were all limited to low point numbers by
stability considerations. This talk describes a new method
which is applicable to high point numbers despite the resulting weights
being subject to instability. The crucial point is that the
residues involved are small, which allows the quadrature rule
to operate.
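
The breakdown of classical rules that motivates this work is easy to reproduce (an illustrative experiment of ours; the semi-stable method itself is not sketched here): a fixed-order Gauss-Legendre rule applied to an irregularly oscillating integrand such as cos(w x^2) loses accuracy once the phase varies faster than the rule can resolve.

    import numpy as np
    from scipy.special import fresnel

    # Irregular oscillation: I(w) = int_0^1 cos(w x^2) dx, whose exact
    # value follows from the Fresnel cosine integral C.
    def exact(w):
        S, C = fresnel(np.sqrt(2 * w / np.pi))
        return np.sqrt(np.pi / (2 * w)) * C

    def gauss_legendre(w, npts):
        t, wts = np.polynomial.legendre.leggauss(npts)
        x = 0.5 * (t + 1.0)              # map [-1, 1] -> [0, 1]
        return 0.5 * np.sum(wts * np.cos(w * x ** 2))

    w = 500.0
    for npts in (8, 16, 32, 64, 128):
        err = abs(gauss_legendre(w, npts) - exact(w))
        print(f"{npts:4d} points : error = {err:.2e}")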
-
-
15:50 - 16:15
Mr Abdul Yaakub (Loughborough University of Technology)
A new 4th order Runge-Kutta method based on the centroidal
mean (CeM) formula
In this paper new Runge-Kutta formulae of orders 3 and
4 based on the centroidal mean are derived. Numerical evidence
confirming the accuracy of the new method and comparisons with
alternative Runge-Kutta formulae based on the arithmetic mean,
contra-harmonic mean and harmonic mean are also presented.
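
To fix the idea, the sketch below takes the classical fourth-order stages and replaces each pairwise arithmetic mean of adjacent slopes by the centroidal mean 2(a^2 + ab + b^2)/(3(a + b)); it only illustrates the mean-substitution principle and is not claimed to reproduce the authors' exact order-4 coefficients.

    import numpy as np

    def centroidal_mean(a, b):
        """Centroidal mean CeM(a, b) = 2(a^2 + a*b + b^2) / (3(a + b)).
        (Undefined when a + b = 0, a known caveat of mean-based formulae.)"""
        return 2.0 * (a * a + a * b + b * b) / (3.0 * (a + b))

    def rk_cem_step(f, t, y, h):
        """One step of a centroidal-mean Runge-Kutta scheme: classical RK4
        stages, with each pairwise arithmetic mean of adjacent slopes
        replaced by the centroidal mean (an illustrative reconstruction)."""
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 3 * (centroidal_mean(k1, k2)
                            + centroidal_mean(k2, k3)
                            + centroidal_mean(k3, k4))

    # Test problem y' = -y, y(0) = 1, whose exact solution is exp(-t).
    f = lambda t, y: -y
    t, y, h = 0.0, 1.0, 0.1
    while t < 1.0 - 1e-12:
        y = rk_cem_step(f, t, y, h)
        t += h
    print("error at t = 1:", abs(y - np.exp(-1.0)))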
-
-
16:20 - 16:45
Dr Ian Smith (Liverpool University)
The effectiveness of drop-tolerance
based incomplete Cholesky preconditioners
Incomplete Cholesky factorisation, or IC(0), preconditioners have been widely
used in association with the Conjugate Gradient (CG) Method as a means of
improving the convergence rate of the solver. Such methods are effective but
may not be optimal if overall cost minimisation is the goal. The introduction
of fill, based on either position or magnitude (drop tolerance), allows
a trade-off between preconditioner construction cost and solver cost such that
the sum is minimised. This report examines drop tolerance based incomplete
Cholesky preconditioners in detail and aims to provide useful heuristics for
their application to a broad class of problems. The effect of fill on CG
convergence is first examined. Comparisons are then drawn between drop
tolerance and IC(0) preconditioners for a range of test systems, using both
natural and
artificial reorderings. The minimum-cost goal is then pursued further by
combining position- and magnitude-based fill. Finally, the effect of fill on
parallel implementations is briefly discussed.
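
The trade-off discussed above can be made concrete with a small sketch (dense arithmetic for clarity; the dropping criterion and the model Poisson problem are our illustrative choices, not those of the report): an incomplete Cholesky factor is built with entries below a drop tolerance discarded, and used as a preconditioner for conjugate gradients.

    import numpy as np
    from scipy.linalg import solve_triangular
    from scipy.sparse.linalg import cg, LinearOperator

    def poisson2d(m):
        """Standard 5-point Laplacian on an m-by-m grid (dense, for clarity)."""
        n = m * m
        A = np.zeros((n, n))
        for i in range(m):
            for j in range(m):
                k = i * m + j
                A[k, k] = 4.0
                if i > 0: A[k, k - m] = -1.0
                if i < m - 1: A[k, k + m] = -1.0
                if j > 0: A[k, k - 1] = -1.0
                if j < m - 1: A[k, k + 1] = -1.0
        return A

    def ichol_drop(A, droptol):
        """Incomplete Cholesky A ~ L L': column entries of L whose magnitude
        falls below droptol * sqrt(A_ii * A_jj) are dropped; droptol = 0
        recovers the complete factorisation."""
        A = A.copy()
        n = A.shape[0]
        L = np.zeros_like(A)
        scale = np.sqrt(np.diag(A))
        for j in range(n):
            L[j, j] = np.sqrt(A[j, j])
            col = A[j + 1:, j] / L[j, j]
            col[np.abs(col) < droptol * scale[j] * scale[j + 1:]] = 0.0
            L[j + 1:, j] = col
            A[j + 1:, j + 1:] -= np.outer(col, col)   # update with kept entries only
        return L

    A = poisson2d(20)
    b = np.ones(A.shape[0])

    for droptol in (1e-1, 1e-2, 1e-3):
        L = ichol_drop(A, droptol)
        apply_M = lambda r: solve_triangular(
            L.T, solve_triangular(L, r, lower=True), lower=False)
        M = LinearOperator(A.shape, matvec=apply_M)
        iters = [0]
        x, info = cg(A, b, M=M,
                     callback=lambda xk: iters.__setitem__(0, iters[0] + 1))
        print(f"droptol {droptol:g}: fill = {np.count_nonzero(L)}, "
              f"CG iterations = {iters[0]}")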
-