Date: Monday 13 September 2010
Location: University of Birmingham
The IMA and the University of Birmingham are pleased to announce the 2nd IMA Conference on Numerical Linear Algebra and Optimization. Future meetings will be held biennially. The meeting is co-sponsored by
whose members will receive the IMA members’ registration rate.
The success of modern codes for large-scale optimisation is heavily dependent on the use of effective tools of numerical linear algebra. On the other hand, many problems in numerical linear algebra lead to linear, nonlinear or semidefinite optimisation problems. The purpose of the conference is to bring together researchers from both communities and to identify and discuss topics of common interest.
Conference topics include any subject that could be of interest to both communities, such as:
- Direct and iterative methods for large sparse linear systems.
- Eigenvalue computation and optimisation.
- Large-scale nonlinear and semidefinite programming.
- Effect of round-off errors, stopping criteria, embedded iterative procedures.
- Optimisation issues for matrix polynomials.
- Fast matrix computations.
- Compressed/sparse sensing.
- PDE-constrained optimisation.
- Applications and real time optimisation.
General enquiries concerning conference arrangements should be sent to: Lizzi Lake (email: email@example.com)
Institute of Mathematics and its Applications, Catherine Richards House, 16 Nelson Street, Southend-on-Sea, Essex, England SS1 1EF. Tel: +44 (0)1702 354020.
Roy Mathias, University of Birmingham (co-chair)
Michal Kočvara, University of Birmingham (co-chair)
Iain Duff, Rutherford Appleton Laboratory
Nick Gould, Rutherford Appleton Laboratory
Daniel Loghin, University of Birmingham
Alastair Spence, University of Bath
Zdeněk Strakoš, Charles University, Prague
Philippe Toint, University of Namur
Larry Biegler, Carnegie Mellon University
Nonlinear Programming Strategies for Distillation Optimization
Abstract: Distillation remains one of the most widely used methods for separation of chemical components; it has been estimated that distillation processes alone require about a quarter of the energy consumed by the US manufacturing sector. Hence, it is not surprising that distillation optimization represents an important activity in process engineering. On the other hand, the corresponding optimization models also present a number of challenges for numerical solution. This talk explores the distillation application as a rich source of features relevant to current topics in nonlinear programming. First, optimization models for distillation form scalable NLPs in a number of ways: through the number of stages or trays (blocks), the number of chemical species (size of blocks), and phase and chemical equilibrium (degree of nonlinearity/rank deficiency). Moreover, there are interesting ways to combine distillation sections, so the problem structure evolves from block tridiagonal to a blocked structure with arbitrary off-diagonal interconnections. Second, the need to model disappearing phases on distillation trays leads to interesting MPEC formulations. Finally, when the dynamic behavior of distillation models is considered, quite innocent optimization formulations can lead to high-index formulations and singular control. The development of efficient, large-scale NLP algorithms allows the formulation of optimization models that begin to tackle these challenges with fast and reliable solution strategies. These will be demonstrated on distillation models that incorporate all of the features discussed above.
Biography: Lorenz T. (Larry) Biegler is currently the Bayer Professor of Chemical Engineering at Carnegie Mellon University, which he joined after receiving his PhD from the University of Wisconsin in 1981. His research interests lie in computer-aided process engineering and include flowsheet optimization, optimization of systems of differential and algebraic equations, and optimization algorithms for nonlinear estimation and control. Prof. Biegler has held visiting positions at Argonne National Laboratory, Sandia National Laboratories, Zhejiang University, the University of Dortmund, the University of Heidelberg, and the University of Wisconsin. He has authored or co-authored over 250 archival publications, and, with Ignacio Grossmann and Art Westerberg, coauthored the textbook Systematic Methods of Chemical Process Design.
Nick Higham, University of Manchester
Adrian Lewis, Cornell University
Volker Mehrmann, Technische Universität Berlin
Mike Saunders, Stanford University (joint work with David Fong, iCME, Stanford University)
LSMR: An iterative algorithm for sparse least-squares problems
Abstract: For nearly 30 years, LSQR has been the standard iterative solver for large rectangular systems Ax ≈ b. It is analytically equivalent to symmetric CG applied to the normal equations, and it reduces norm(rk) monotonically, where rk = b - A xk is the k-th residual vector.
LSMR is equivalent to applying MINRES to the normal equations, so that norm(A'rk) decreases monotonically. In practice we observe that norm(rk) and the Stewart backward error estimate norm(A'rk)/norm(rk) are also monotonic, and the backward error is usually very close to optimal. Thus if iterations must be terminated early, it is safer to use LSMR.
Both methods are based on the Golub-Kahan bidiagonalization process (a short-term recurrence for generating vectors uk and vk). Experiments show that if the vectors vk are reorthogonalized, then the vectors uk remain orthogonal, and (almost) vice versa.
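As a hedged illustration of the short-term recurrence described above, here is a minimal NumPy sketch of Golub-Kahan bidiagonalization (the function name and interface are our own, not those of the codes discussed in the talk):

```python
import numpy as np

def golub_kahan(A, b, k):
    """Run k steps of Golub-Kahan bidiagonalization of A, started from b.

    Uses the short-term recurrences
        beta_{j+1} u_{j+1} = A v_j   - alpha_j    u_j
        alpha_{j+1} v_{j+1} = A' u_{j+1} - beta_{j+1} v_j
    and returns U (m x k) and V (n x k) with (in exact arithmetic)
    orthonormal columns.
    """
    us, vs = [], []
    u = b / np.linalg.norm(b)            # beta_1 u_1 = b
    us.append(u)
    v = A.T @ u
    alpha = np.linalg.norm(v)            # alpha_1 v_1 = A' u_1
    v = v / alpha
    vs.append(v)
    for _ in range(k - 1):
        u = A @ v - alpha * u
        beta = np.linalg.norm(u)
        u = u / beta
        us.append(u)
        v = A.T @ u - beta * v
        alpha = np.linalg.norm(v)
        v = v / alpha
        vs.append(v)
    return np.column_stack(us), np.column_stack(vs)

# Small demonstration on a random tall matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
U, V = golub_kahan(A, b, 5)
```

For a well-conditioned random A and a few steps, both U and V stay numerically orthonormal; the gradual loss of orthogonality the abstract refers to becomes visible only over many more iterations.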
Matlab and Fortran 90 implementations of LSMR are available (http://www.stanford.edu/group/SOL/software.html), with local reorthogonalization of vk as an option. Plots of various quantities on a range of large test problems illustrate the desirable properties of LSMR.
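The Matlab and Fortran 90 codes above are the reference implementations. For readers working in Python, SciPy ships analogous solvers (scipy.sparse.linalg.lsqr and scipy.sparse.linalg.lsmr); this minimal sketch, on a small random problem of our own construction, cross-checks the two methods against a direct least-squares solve:

```python
import numpy as np
from scipy.sparse.linalg import lsmr, lsqr

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))   # tall rectangular system Ax ≈ b
b = rng.standard_normal(100)

x_lsmr = lsmr(A, b)[0]               # MINRES applied to the normal equations
x_lsqr = lsqr(A, b)[0]               # CG applied to the normal equations
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]   # direct reference solution
```

On a well-conditioned problem like this both solvers converge to the same least-squares solution; the difference between them shows up when iterations are truncated early, which is the point of the abstract.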
Biography: Michael Saunders specializes in numerical optimization and scientific computation. He is a core faculty member of SOL and iCME at Stanford University. He teaches a graduate class on Large-scale Numerical Optimization and is widely known for his contributions to mathematical algorithms and software, including the linear equation solvers SYMMLQ, MINRES, LSQR, LUSOL, LUMOD and the constrained optimization packages MINOS, LSSOL, NPSOL, QPOPT, SNOPT, SQOPT, PDCO.
Valeria Simoncini, Università di Bologna
Jared Tanner, University of Edinburgh
Andy Wathen, University of Oxford
Iterative linear solvers for PDE-constrained Optimization problems involving fluid flow
Abstract: The numerical approximation of partial differential equation (PDE) problems typically leads to large linear or linearised systems of equations. For problems in which such PDEs arise as constraints on an optimization problem (so-called PDE-constrained optimization problems), the systems are many times larger still.
We will discuss the solution of such problems by preconditioned iterative techniques, in particular where the PDEs in question are the Stokes and Navier-Stokes equations describing incompressible fluid flow.
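The systems arising here are symmetric indefinite saddle-point systems. As a hedged, toy-scale sketch of the preconditioned-iterative idea (using SciPy, not the solvers discussed in the talk; the diagonal (1,1) block merely stands in for a discretised PDE operator), one can solve such a system with MINRES and an "ideal" block-diagonal preconditioner built from the Schur complement:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(1)
n, m = 50, 20

# SPD (1,1) block: a diagonal stand-in for a discretised PDE operator
A = sp.diags(rng.uniform(1.0, 2.0, n))
# Constraint (divergence-like) block
B = sp.csr_matrix(rng.standard_normal((m, n)))

# Symmetric indefinite saddle-point matrix [[A, B^T], [B, 0]]
K = sp.bmat([[A, B.T], [B, None]]).tocsc()
rhs = rng.standard_normal(n + m)

# "Ideal" block-diagonal preconditioner diag(A, S),
# with S = B A^{-1} B^T the (negative) Schur complement
S = (B @ sp.diags(1.0 / A.diagonal()) @ B.T).toarray()

def apply_prec(x):
    x = np.ravel(x)
    y = np.empty_like(x)
    y[:n] = x[:n] / A.diagonal()         # solve with the A block
    y[n:] = np.linalg.solve(S, x[n:])    # solve with the Schur complement
    return y

M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, rhs, M=M)            # info == 0 on convergence
```

With this preconditioner the preconditioned matrix has only three distinct eigenvalues, so MINRES converges in a handful of iterations; practical preconditioners for Stokes and Navier-Stokes replace the exact Schur complement with cheap approximations.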
Biography: Andy Wathen is Reader in Numerical Analysis at Oxford University, UK, and Tutor in Applied Mathematics at New College. His bachelor's degree in Mathematics from Oxford in 1980 was followed by a PhD from Reading University in 1984. After a short period in the Computer Science Department at Stanford University and a postdoctoral position at Reading, he became a lecturer at Bristol University in 1986 and subsequently moved to Oxford in 1996. His research interests lie at the interface of numerical linear algebra and discretization methods for partial differential equations, in particular the finite element method, as exemplified by his book 'Finite Elements and Fast Iterative Solvers', written jointly with Howard Elman and David Silvester. For the past several years his research has focused on preconditioning for iterative solution methods in the context of large-scale scientific computing involving PDEs. A particular interest has been saddle-point systems because of their wide applicability.