Let f = (f_{1},...,f_{n}) be a system of n equations in n complex unknowns. An approximate zero of f is an affine point from which the successive iterates of Newton's method converge quadratically to an exact zero of the system.
I will present an average Las Vegas numerical procedure, joint work with L. M. Pardo, guaranteed to produce approximate zeros of systems of equations in polynomial time, on average.
I will also introduce some very recent results, joint work with M. Shub, that suggest the existence of much faster algorithms, with average running time almost linear in the size of the input. The search for such an algorithm is linked to the study of a geometrical problem: the geodesics in the "condition number metric". Open questions, both from complexity and geometrical perspectives, will be discussed.
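A minimal illustration of the notion of approximate zero (not the average-polynomial-time procedure of the talk, and reduced to a single equation rather than a system): starting Newton's method at a point close enough to a root, the error is roughly squared at each step. The function f(z) = z² − 2 here is an arbitrary example chosen for this sketch.

```python
# Illustrative sketch: Newton's method on the single complex equation
# f(z) = z^2 - 2, started from an "approximate zero" near sqrt(2),
# exhibits quadratic convergence.

def newton_step(z):
    # f(z) = z^2 - 2, f'(z) = 2z
    return z - (z * z - 2) / (2 * z)

z = 1.5 + 0.0j          # close enough to sqrt(2) to be an approximate zero
errors = []
for _ in range(5):
    errors.append(abs(z - 2 ** 0.5))
    z = newton_step(z)

# Quadratic convergence: each error is roughly the square of the previous one.
print(errors)
```

The printed errors drop from about 9e-2 to machine precision in a handful of steps, the hallmark of quadratic convergence that the definition above demands.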
This lecture will describe older and very recent work [2], [4] in which Bailey, Bradley and I hunted for various desired generating functions for zeta functions and then were able to methodically prove our results.
 (1) 
As I hope to show, discovering (1) and then proving
(2) formed one of the most satisfying experimental mathematics experiences I have had. I will also describe more recent work to appear in [3,2008] regarding
 (2) 
Program families are well-known in software engineering, and generic algorithms are pervasive in computer algebra. Nevertheless, in computer algebra libraries, one finds an inordinate amount of code duplication, even at the level of named algorithms. Some techniques are already in use to deal with this issue, with C++ template programming in common use in Linbox and post-facto extensions in Aldor.
By combining ideas from program families and generic algorithms, a careful study of the variations of well-known algorithms allows us to reunite these offshoots into a coherent family, where all members find their place. This talk will mostly concentrate on showing how many dimensions exist in simple families of algorithms, with LU decomposition being the guiding example. While previous work has clearly documented between 4 and 6 dimensions, we have discovered at least 19 separate "dimensions of variation" in that same family. As time permits, we will show that our chosen implementation method (higher-order modules of code generators in MetaOCaml) possesses some technical advantages over other solutions.
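A toy Python illustration (not the MetaOCaml framework of the talk) of a single "dimension of variation" in the LU family: the same elimination code, parameterized by the arithmetic of the coefficient field, serves both floating-point matrices and matrices over a prime field.

```python
# One "dimension of variation" in the LU family: the coefficient domain.
# The same elimination routine is instantiated twice with different
# field operations passed in as parameters.

def lu_in_place(a, sub, mul, div):
    """Overwrite square matrix a with its combined L\\U factors (Doolittle, no pivoting)."""
    n = len(a)
    for k in range(n):
        for i in range(k + 1, n):
            a[i][k] = div(a[i][k], a[k][k])      # multiplier, stored in the L part
            for j in range(k + 1, n):
                a[i][j] = sub(a[i][j], mul(a[i][k], a[k][j]))
    return a

# Instance 1: floating point.
m = [[4.0, 3.0], [6.0, 3.0]]
lu_in_place(m, lambda x, y: x - y, lambda x, y: x * y, lambda x, y: x / y)

# Instance 2: the prime field GF(7) -- same code, different arithmetic
# (division via the Fermat inverse y^(p-2) mod p).
p = 7
mp = [[4, 3], [6, 3]]
lu_in_place(mp,
            lambda x, y: (x - y) % p,
            lambda x, y: (x * y) % p,
            lambda x, y: (x * pow(y, p - 2, p)) % p)
```

Other dimensions (pivoting strategy, in-place versus copying, block structure, and so on) can be layered on in the same parametric style; the talk's point is how many such independent choices there are.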
This is joint work with Oleg Kiselyov.
Block Wiedemann and Lanczos algorithms are now available as part of the Linbox package. These have been proved to be reliable in limited cases, or more generally if matrix preconditioners are used to avoid pathological inputs. On the other hand, considerably simpler heuristics (based on the same ideas) have been used with considerable success for various number-theoretic computations. This motivates work to generalize the proofs of reliability that we now have by weakening or eliminating the assumptions about the input on which these proofs currently depend.
Various claims about the reliability of such algorithms depend on bounds for the expected nullity of various "block Hankel" matrices. These are matrices that are generated using a matrix A ∈ F^{N×N} with entries in a field F (generally part of the input for the given problem) as well as blocks of vectors u, v ∈ F^{N×k} that are randomly generated. Given A, u, and v, the (i,j)-th block of the matrix that is of interest is the k×k matrix u^{T} A^{i+j} v.
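The block Hankel construction above can be sketched in a few lines of Python; this is a hand-sized example over GF(5) with N = 3 and blocking factor k = 1 (so each block u^{T} A^{i+j} v is a scalar), using plain list-of-lists matrices. The particular A, u, v are arbitrary choices for the sketch, not from the talk.

```python
# Build a small block Hankel matrix H with entry (i, j) = u^T A^(i+j) v
# over the field GF(5).

p = 5

def mat_mul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b))) % p
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
u = [[1], [0], [2]]          # N x k with k = 1
v = [[2], [1], [1]]

def block(i, j):
    # u^T A^(i+j) v, recomputing the power each time (fine for a sketch)
    w = v
    for _ in range(i + j):
        w = mat_mul(A, w)
    uT = [[u[t][0] for t in range(len(u))]]
    return mat_mul(uT, w)[0][0]

H = [[block(i, j) for j in range(2)] for i in range(2)]
print(H)   # → [[4, 2], [2, 2]]
```

Note the Hankel structure: the (i, j) entry depends only on i + j, so H is constant along antidiagonals.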
Bounds on the nullity of such matrices are now available in a reasonably general case, namely, that the number of nontrivial invariant factors of the matrix A is less than the "blocking factor" k. In this talk I will sketch a proof that this assumption about invariant factors is not, in fact, necessary: useful bounds on the nullity of the above block Hankel matrices can be obtained for arbitrary matrices A ∈ F^{N×N}.
The set of vocabulary, background assumptions, and rules that govern an application of mathematics is called its context. The context of an application in deductive mathematics is traditionally formalized as an axiomatic theory in which mathematical knowledge is represented declaratively as a set of axioms. The context of an application in computational mathematics is often formalized as an algorithmic theory in which mathematical knowledge is represented procedurally as a set of algorithms. The background assumptions of an algorithmic theory and the specifications of the algorithms are usually not an explicit part of an algorithmic theory.
As a result, an algorithmic theory can be used to perform computations, but it cannot be used to understand what the results of the computations mean. A biform theory can represent mathematical knowledge both declaratively and procedurally. In particular, it can include algorithms equipped with precise specifications of their input-output relationships. In this talk, we will explain what a biform theory is and how a biform theory can be used to formalize the context in computational mathematics. We will also briefly introduce Chiron, a general-purpose logic based on set theory that is especially well-suited for expressing biform theories.
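A hypothetical miniature of the biform idea (this example is not from the talk, and a real biform theory lives in a logic such as Chiron, not in Python): an algorithm is paired with a declarative specification of its input-output relation, so the context records what the computation means, not only how to perform it.

```python
# A procedure (procedural knowledge) paired with a specification of its
# input-output relation (declarative knowledge).

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def gcd_spec(a, b, g):
    """g divides both inputs, and every common divisor of the inputs divides g."""
    return (a % g == 0 and b % g == 0 and
            all(not (a % d == 0 and b % d == 0) or g % d == 0
                for d in range(1, min(a, b) + 1)))

# The "biform" pairing: algorithm together with its meaning.
biform_gcd = {"algorithm": gcd, "specification": gcd_spec}

g = biform_gcd["algorithm"](12, 18)
assert biform_gcd["specification"](12, 18, g)   # the procedure meets its spec
print(g)   # → 6
```

With only the algorithm, a system can compute; with the specification attached, it can also reason about what the computed value is.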
Some of the most compelling theoretical work in computer algebra of the past decade has been on computations with lacunary or supersparse polynomials. Computer algebra systems (and mathematicians!) are good at representing multivariate polynomials of very high degree but only a few terms in the form of a linked list of coefficient/exponent tuples. However, the repertoire of algorithms for computing with polynomials in this representation is limited, and the area is fraught with intractable problems. Recent work of Lenstra, Kaltofen and Koiran has extended the reach of fast algorithms. We seek here to further augment the available toolkit.
We give new algorithms for two important problems. First, we give an algorithm to interpolate an unknown integer polynomial in the sparsest shifted power basis. We assume that we are given a function or "black box" for evaluating that polynomial at a point modulo a prime or at a (complex) root of unity. Second, the question of functionally decomposing a univariate polynomial is investigated when the input is lacunary. A new algorithm is proposed to compute sparse decompositions. Some interesting connections are noted to longstanding open problems studied by Erdős, Schinzel and others.
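To make the setting concrete, here is a sketch (not from the talk) of the lacunary representation and of the kind of black box mentioned above: a polynomial stored as (coefficient, exponent) pairs and evaluated at a point modulo a prime. Modular exponentiation keeps the cost polynomial in the bit length of the huge exponents, which is exactly what makes this representation usable.

```python
# Supersparse (lacunary) representation: a few (coefficient, exponent)
# pairs, even though the degree is enormous.
# f(x) = 3*x^1000000 + 5*x^17 - 2   (three terms, degree 10^6)
f = [(3, 1000000), (5, 17), (-2, 0)]

def eval_mod(poly, a, p):
    """Black-box evaluation of a sparse polynomial at a, modulo p.

    pow(a, e, p) runs in time polynomial in the bit length of e,
    so the degree never appears as a unary cost."""
    return sum(c * pow(a, e, p) for c, e in poly) % p

print(eval_mod(f, 2, 101))   # → 73
```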
This is work with Daniel Roche (Waterloo).
The regulator is an important invariant of a real quadratic number field, due to its close connection to fundamental solutions of the Pell equation and to cryptographic applications using such fields. We will describe state-of-the-art methods for computing the regulator that are effective for fields with discriminants as large as 100 decimal digits. These methods use the index-calculus strategy, and as such require time-consuming linear algebra calculations over the integers, which will be highlighted in this talk.
Combinatorial Design Theory is a rich source of challenging systems of polynomial equations, mainly of dimension zero. We will describe some classes of such systems as well as some of the different methods that are used to tackle them. These methods come from different disciplines, such as Computer Algebra, Artificial Intelligence, Combinatorics and Discrete Mathematics, while Supercomputing is often a useful aid.
Problems such as the construction of a sparse black box polynomial and finding poles of Padé approximants are examples of a generalized eigenvalue problem for Hankel matrices. In this talk we discuss the numerical issues that arise when such problems are posed in fixed-precision arithmetic. A basic tool in this study is information on the condition number of Vandermonde matrices with complex entries.
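A hand-sized illustration of the generalized eigenvalue problem in question (worked in exact arithmetic, deliberately side-stepping the fixed-precision issues that are the talk's focus): for moments h_i = z1^i + z2^i, the generalized eigenvalues of the Hankel pencil (H1, H0) recover the nodes z1, z2, the discrete analogue of recovering poles. The specific nodes 2 and 3 are an arbitrary choice for this sketch.

```python
# Moments of two nodes; the 2x2 Hankel pencil (H1, H0) has generalized
# eigenvalues exactly at the nodes.
z1, z2 = 2.0, 3.0
h = [z1 ** i + z2 ** i for i in range(4)]        # h = [2, 5, 13, 35]

H0 = [[h[0], h[1]], [h[1], h[2]]]
H1 = [[h[1], h[2]], [h[2], h[3]]]

# det(H1 - lam * H0) = 0 is the quadratic a*lam^2 + b*lam + c = 0 with
a = H0[0][0] * H0[1][1] - H0[0][1] * H0[1][0]          # det(H0)
b = -(H0[0][0] * H1[1][1] + H1[0][0] * H0[1][1]
      - H0[0][1] * H1[1][0] - H1[0][1] * H0[1][0])     # mixed term
c = H1[0][0] * H1[1][1] - H1[0][1] * H1[1][0]          # det(H1)

disc = (b * b - 4 * a * c) ** 0.5
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
print(roots)   # → [2.0, 3.0]
```

In floating point, nearly coalescing nodes make H0 nearly singular, which is where the conditioning of the associated Vandermonde matrices enters.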
This is joint work with B. Beckermann and G. Golub.
A new Maple package RootFinding[Parametric] for solving parametric systems of polynomial equations and inequalities is described. The main idea for solving such a system is as follows. The parameter space R^{d} is divided into two parts: the discriminant variety W and its complement R^{d} \ W. The discriminant variety is a generalization of the well-known discriminant of a univariate polynomial and contains all those parameter values leading to non-generic solutions of the system. The complement R^{d} \ W can be expressed as a finite union of open cells such that the number of real solutions of the input system is constant on each cell. In this way, all parameter values leading to generic solutions of the system can be described systematically. The underlying techniques used are Gröbner bases, polynomial real root finding, and Cylindrical Algebraic Decomposition. This package offers a friendly interface for scientists and engineers to solve parametric problems, as illustrated by an example from control theory.
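A toy version of the idea (a sketch, not the Maple package): for the parametric equation x² + bx + c = 0 with parameters (b, c), the discriminant variety W in R² is the parabola b² − 4c = 0, and the number of real solutions is constant (2 or 0) on each of the two open cells of R² \ W.

```python
# Count real roots of x^2 + b*x + c as the parameters (b, c) move through
# the two open cells of R^2 \ W and across W itself.

def num_real_roots(b, c):
    disc = b * b - 4 * c
    if disc > 0:
        return 2          # generic cell below the parabola
    if disc == 0:
        return 1          # non-generic: (b, c) lies on W
    return 0              # generic cell above the parabola

print(num_real_roots(3.0, 1.0))   # disc = 5  > 0 → 2
print(num_real_roots(1.0, 1.0))   # disc = -3 < 0 → 0
print(num_real_roots(2.0, 1.0))   # disc = 0, on W → 1
```

The package automates exactly this cell-by-cell description, but for general systems, via Gröbner bases and Cylindrical Algebraic Decomposition.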
Consider a rational matrix, particularly one whose entries have large numerators and denominators but which is presented as a product of very sparse matrices with relatively small entries. We will discuss symbolic-numeric hybrid strategies to compute the inertia (the number of positive, negative, and zero eigenvalues) of such a matrix in the nonsingular case, effectively exploiting the product structure. The method works by computing the numerical LU decomposition of the product with a succession of floating point QR decompositions of the small, sparse factors in the product representation.
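This is not the hybrid product-structure method of the talk, but a reminder of the classical fact such methods build on: for a nonsingular symmetric matrix whose leading principal minors are nonzero, the signs of the pivots produced by symmetric Gaussian elimination (the diagonal of D in A = LDLᵀ) give the inertia, by Sylvester's law of inertia.

```python
# Read the inertia of a symmetric matrix off the pivot signs of an
# unpivoted elimination (valid when all leading principal minors are nonzero).

def inertia(a):
    """Return (#positive, #negative, #zero) eigenvalue counts from pivot signs."""
    n = len(a)
    a = [row[:] for row in a]            # work on a copy
    pos = neg = 0
    for k in range(n):
        piv = a[k][k]
        pos, neg = pos + (piv > 0), neg + (piv < 0)
        for i in range(k + 1, n):
            m = a[i][k] / piv
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
    return pos, neg, n - pos - neg

# One positive and one negative eigenvalue (trace = -1, det = -7 < 0):
A = [[2.0, 1.0], [1.0, -3.0]]
print(inertia(A))   # → (1, 1, 0)
```

The talk's contribution is to obtain such pivot information for a huge rational product without ever forming the product exactly, replacing the elimination by floating-point QR steps on the sparse factors.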
This is joint work with B. David Saunders, and David H. Wood in the Department of Computer Science at the University of Delaware.
Triangular decompositions are one of the most studied techniques for solving polynomial systems symbolically. Invented by J. F. Ritt in the early '30s for systems of differential polynomials, they hit their stride in the late '80s with the method of W. T. Wu dedicated to algebraic systems. Different concepts and algorithms extended the work of Wu. At the end of the '90s the notion of a regular chain, introduced by M. Kalkbrener, led to important algorithmic improvements. The era of polynomial system solvers based on triangular decompositions could commence.
Since 2000, several promising complexity results and algorithmic discoveries have stimulated the development of implementation techniques. These have permitted fast polynomial arithmetic and parallelism to speed up key subroutines substantially. Today, new algorithms for computing triangular decompositions are being invented to make the best use of these techniques. I anticipate that triangular decomposition solvers will become invaluable and irreplaceable support for scientific computing.
Families of orthogonal polynomials satisfy recurrence relations which are often the key to the design of fast algorithms. In this talk, we consider conversion problems, from an orthogonal basis to the monomial basis and conversely. Using the recurrence relation on orthogonal polynomials, one easily comes up with efficient algorithms for the direct conversion. We propose a fast algorithm for the converse direction, exploiting classical continued fractions identities and algorithm transposition techniques to reach a quasilinear complexity.
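A direct, quadratic-time version of the "easy direction" mentioned above (a sketch, not the talk's algorithm): converting from the Chebyshev basis T₀, T₁, … to the monomial basis using the recurrence T_{n+1}(x) = 2x·T_n(x) − T_{n-1}(x). The fast quasi-linear algorithm of the talk targets the harder, converse direction.

```python
# Convert coefficients in the Chebyshev basis to monomial coefficients,
# building each T_i from the three-term recurrence as we go.

def cheb_to_mono(coeffs):
    """Given f = sum coeffs[i] * T_i, return the monomial coefficients of f."""
    n = len(coeffs)
    result = [0.0] * n
    t_prev, t_cur = [1.0], [0.0, 1.0]   # monomial coefficients of T_0, T_1
    for i, c in enumerate(coeffs):
        if i == 0:
            t = t_prev
        elif i == 1:
            t = t_cur
        else:
            # T_i = 2x * T_{i-1} - T_{i-2}
            t = [0.0] + [2.0 * a for a in t_cur]
            for j, a in enumerate(t_prev):
                t[j] -= a
            t_prev, t_cur = t_cur, t
        for j, a in enumerate(t):
            result[j] += c * a
    return result

# f = T_0 + 2*T_2; since T_2 = 2x^2 - 1, f = -1 + 4x^2.
print(cheb_to_mono([1.0, 0.0, 2.0]))   # → [-1.0, 0.0, 4.0]
```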
Joint work with Bruno Salvy and Alin Bostan.
Like the better known Jordan form, the Frobenius form of a square matrix A over a field K is a unique representative of the set of all matrices similar to A. The Frobenius form completely captures the geometric structure of a matrix, and clearly reveals invariants such as the minimal and characteristic polynomials.
The problem of computing the form has been very well studied. Let 2 < θ ≤ 3 be such that O(n^{θ}) operations in K are sufficient to multiply together two matrices in K^{n×n}. Over a sufficiently large field, with #K ≥ n^{2}, Giesbrecht's (1993) randomized algorithm computes the Frobenius form F, together with a similarity transformation matrix U such that F = U^{-1}AU, using an expected number of O(n^{θ} log n) operations in K. More recently, Eberly (2000) describes a randomized algorithm, applicable over a field K of any size, that computes both the form and a transformation matrix using an expected number of O(n^{θ} log n) operations in K.
In this talk I describe a new randomized algorithm for computing the Frobenius form. Over a sufficiently large field the new algorithm uses an expected number of O(n^{θ}) field operations, thus improving by a factor of log n on previously known results. Once the Frobenius form itself has been computed, a similarity transformation matrix can be constructed using an additional O(n^{θ} log log n) field operations.
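Not the new algorithm itself, but a sketch of the object it computes: the Frobenius form is block diagonal with companion matrices of the invariant factors f₁ | f₂ | ⋯ | f_k; the largest factor is the minimal polynomial and the product of all factors is the characteristic polynomial. The example factors below are an arbitrary illustrative choice.

```python
# Assemble a Frobenius (rational canonical) form from its invariant factors.

def companion(poly):
    """Companion matrix of a monic polynomial given as [c0, c1, ..., 1]."""
    d = len(poly) - 1
    C = [[0] * d for _ in range(d)]
    for i in range(1, d):
        C[i][i - 1] = 1                 # subdiagonal of ones
    for i in range(d):
        C[i][d - 1] = -poly[i]          # last column: negated coefficients
    return C

def frobenius_form(invariant_factors):
    """Block-diagonal matrix with one companion block per invariant factor."""
    blocks = [companion(f) for f in invariant_factors]
    n = sum(len(b) for b in blocks)
    F = [[0] * n for _ in range(n)]
    off = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, x in enumerate(row):
                F[off + i][off + j] = x
        off += len(b)
    return F

# Invariant factors x - 1 and x^2 - 1 (note the divisibility x - 1 | x^2 - 1):
F = frobenius_form([[-1, 1], [-1, 0, 1]])
print(F)   # → [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```

The hard part, which the talk addresses, is computing the invariant factors (and a transformation U) from an arbitrary A in O(n^{θ}) expected field operations.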
Joint work with Clément Pernet.
We present algorithms and an implementation framework for solving polynomial systems symbolically and in parallel.
Symbolic methods are powerful tools in scientific computing. The implementation of symbolic solvers is, however, highly difficult. Indeed, they are extremely space and time consuming when applied to large examples. The increasing availability of parallel and distributed architectures offers the opportunity to realize high-performance solvers that can largely expand the range of solvable problems.
We have developed a high-level categorical parallel framework, written in the Aldor language, to support high-performance computer algebra on symmetric multiprocessor machines and multicores. A component-level parallel solver for computing triangular decompositions has been realized using this framework. Our solutions address several challenges arising in this setting.
Our work in progress focuses on the study of efficient memory usage in parallel symbolic solving. Memory consumption appears to be one of the bottlenecks in our parallel solver: the concurrent threads or processes incur more space usage than a sequential execution does, due to distribution and/or duplication of mathematical objects. We discuss approaches to address this new challenge.
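The component-level parallelism described above can be pictured schematically (this is a Python stand-in, not the Aldor framework): independent branches of a triangular decomposition become separate tasks handed to a pool of workers, and the duplication of per-task data is exactly where the memory pressure discussed above comes from.

```python
# Schematic component-level parallelism: each branch (e.g. one regular
# chain of a triangular decomposition) is an independent task.

from concurrent.futures import ThreadPoolExecutor

def solve_branch(branch):
    # Placeholder for the real symbolic work on one component.
    return f"solved:{branch}"

branches = ["chain-1", "chain-2", "chain-3", "chain-4"]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(solve_branch, branches))   # order is preserved

print(results)
```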