
A Lévy process is a process with independent increments (a generalization of a random walk). Owing to their rich structure and computational tractability, these processes have been successfully applied to model various phenomena in Mathematical Finance and other areas of Science. In this talk we will discuss the problem of numerically computing the function p(x; t, y), the joint density of the first passage time and the overshoot of a Lévy process. This function can be used to price barrier and lookback options, and it is also the main building block in various structural models in Credit Risk. It is known that p(x; t, y) satisfies a linear partial integro-differential equation (PIDE); moreover, using Wiener-Hopf theory it can be expressed as a five-dimensional integral transform of known quantities. We will discuss the drawbacks associated with each of these approaches, and then introduce a new method based on a combination of the two. The PIDE method is used to obtain local information about p(x; t, y) at t = 0, while the Wiener-Hopf method provides global information in the form of the moments of this function. As a numerical example we will discuss the Normal Inverse Gaussian (NIG) process.
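For background, the joint law of the first passage time and overshoot can also be estimated crudely by Monte Carlo on a discretized NIG path. The sketch below (hypothetical parameters alpha, beta, delta, mu and barrier level; not the PIDE or Wiener-Hopf method of the talk) samples NIG increments as Brownian motion subordinated by an inverse-Gaussian time change; note that a fixed time grid misses intra-step crossings, one drawback that motivates the analytic approaches.

```python
import numpy as np

def nig_first_passage(alpha, beta, delta, mu, barrier, dt, t_max, n_paths, seed=0):
    """Monte Carlo sample of (first passage time, overshoot) for an NIG process.
    Each increment is Brownian motion run on an inverse-Gaussian subordinator,
    sampled with numpy's Wald distribution. Illustrative sketch only; the
    fixed-grid discretization biases the first-passage estimate."""
    rng = np.random.default_rng(seed)
    gamma = np.sqrt(alpha**2 - beta**2)
    n_steps = int(round(t_max / dt))
    times, overshoots = [], []
    for _ in range(n_paths):
        x = 0.0
        for k in range(1, n_steps + 1):
            # IG subordinator increment: mean delta*dt/gamma, shape (delta*dt)^2
            tau = rng.wald(delta * dt / gamma, (delta * dt) ** 2)
            x += mu * dt + beta * tau + np.sqrt(tau) * rng.standard_normal()
            if x >= barrier:
                times.append(k * dt)            # first passage time t
                overshoots.append(x - barrier)  # overshoot y
                break
    return np.array(times), np.array(overshoots)

times, overshoots = nig_first_passage(alpha=15.0, beta=-1.0, delta=0.5, mu=0.05,
                                      barrier=0.3, dt=0.01, t_max=5.0, n_paths=500)
```

A histogram of the sampled pairs gives a rough empirical picture of p(x; t, y) against which more refined methods can be checked.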
Pharmacokinetic and pharmacodynamic (PK/PD) indices are increasingly being used in the microbiological field to assess the efficacy of a dosing regimen. Contrary to methods based on the MIC alone, PK/PD-based methods reflect in vivo conditions and are more predictive of efficacy. Unfortunately, these methods rely on a single static pharmacokinetic value, such as AUC or Cmax, and may thus lead to biased efficacy estimates when inter- or intra-individual variability exists.
In this work I will discuss how the efficacy of a treatment can be evaluated by adjusting classical breakpoint-estimation methods to the situation of a variable PK profile. We propose a natural generalisation of the classical AUC methods by introducing a weighted efficacy function. We formulate these methods for both antibiotic classes: concentration-dependent and time-dependent. Using two drug models, we illustrate how the newly introduced method can be applied to accurately estimate breakpoints.
This is a joint work with D. Gogore Bi and F. Nekka.
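For context, the two classical indices that such a weighted efficacy function would generalize can be computed directly from a concentration-time curve. The sketch below uses a hypothetical one-compartment oral PK model with illustrative parameter values; the weighted efficacy function itself is not reproduced here.

```python
import numpy as np

def pk_profile(t, dose, ka, ke, v):
    """Hypothetical one-compartment oral model:
    C(t) = dose*ka / (v*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))."""
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def _trapz(y, t):
    # trapezoidal rule, written out to avoid version-specific numpy helpers
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def auc_over_mic(t, c, mic):
    """Concentration-dependent index: AUC / MIC."""
    return _trapz(c, t) / mic

def frac_time_above_mic(t, c, mic):
    """Time-dependent index: fraction of the interval with C(t) > MIC."""
    return _trapz((c > mic).astype(float), t) / (t[-1] - t[0])

t = np.linspace(0.0, 24.0, 2001)                        # hours
c = pk_profile(t, dose=500.0, ka=1.2, ke=0.15, v=30.0)  # illustrative values
```

A weighted efficacy function in the abstract's spirit would average such indices over a distribution of individual PK profiles rather than relying on a single static value.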
For a given symmetric positive-definite matrix A ∈ R^{n×n}, we develop a fast and backward-stable algorithm to approximate A by a symmetric positive-definite semiseparable matrix, accurate to any prescribed tolerance. In addition, this algorithm preserves the product AZ for a given matrix Z ∈ R^{n×d}, where d << n. Our algorithm guarantees the positive-definiteness of the semiseparable matrix by embedding an approximation strategy inside a Cholesky factorization procedure, ensuring that the Schur complements arising during the factorization all remain positive definite after approximation. It uses a robust direction-preserving approximation scheme to ensure the preservation of AZ. We present experimental numerical results and discuss potential implications of our work.
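The mechanism of approximating inside the factorization can be illustrated in a few lines: truncating the singular values of the off-diagonal Cholesky block can only shrink the subtracted term in the Loewner order, so the approximate Schur complement dominates the exact one and stays positive definite. This is a toy single-block sketch, not the semiseparable algorithm of the talk.

```python
import numpy as np

def truncated_cholesky_step(A, k, rank=1, tol=1e-10):
    """One block step of Cholesky with a low-rank truncation of the
    off-diagonal block. Since B_hat^T B_hat <= B^T B (Loewner order),
    the approximate Schur complement S stays positive definite."""
    A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
    L11 = np.linalg.cholesky(A11)
    B = np.linalg.solve(L11, A12)                 # exact block: L11^{-1} A12
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    B_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # truncated off-diagonal block
    S = A22 - B_hat.T @ B_hat                     # approximate Schur complement
    return S, bool(np.all(np.linalg.eigvalsh(S) > tol))

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8.0 * np.eye(8)                     # random SPD test matrix
S, is_pd = truncated_cholesky_step(A, k=3)
```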
This talk is devoted to the convergence analysis of the reservoir technique coupled with finite-volume flux schemes approximating nonlinear hyperbolic conservation laws (J. Sci. Comput. 31 (2007), 419-458; Eur. J. Mech. B 27 (2008), 643-664). After a presentation of this method, we prove its long-time convergence, its accuracy, and its TVD property for some general one-dimensional configurations. The proofs are based on a precise study of how the reservoir technique treats shock and rarefaction waves. Numerical simulations will be provided to illustrate the analytical results.
This is a joint work with Prof. S. Labbé (Université Joseph Fourier, Grenoble).
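As background on the finite-volume setting, a minimal conservative scheme for a one-dimensional scalar conservation law looks as follows. This is a plain Lax-Friedrichs scheme for Burgers' equation (the reservoir technique itself is not implemented here), and it exhibits the conservation and TVD properties the analysis is concerned with.

```python
import numpy as np

def lax_friedrichs_burgers(u0, dx, dt, n_steps):
    """Conservative finite-volume update for Burgers' equation
    u_t + (u^2/2)_x = 0 on periodic cells, using the Lax-Friedrichs
    numerical flux."""
    u, lam = u0.copy(), dt / dx
    for _ in range(n_steps):
        f = 0.5 * u**2
        u_r, f_r = np.roll(u, -1), np.roll(f, -1)       # right-neighbour cells
        flux = 0.5 * (f + f_r) - 0.5 / lam * (u_r - u)  # interface i+1/2
        u = u - lam * (flux - np.roll(flux, 1))         # conservative update
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.sin(2 * np.pi * x)
u = lax_friedrichs_burgers(u0, dx=x[1] - x[0], dt=0.002, n_steps=100)  # CFL = 0.4
```

Under the CFL condition the scheme is monotone, hence TVD, and the telescoping flux differences make it exactly conservative on a periodic domain.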
Numerical simulations of path integrals by stochastic differential equations (SDEs) sometimes show stability problems. An example of an ill-posed problem is the case of one-mode Bose-Einstein condensation in a coherent-state representation [1]. In this case, the numerical solution of the resulting SDE becomes unstable after a relatively short time for most numerical methods and can blow up in finite time if uncontrolled.
To improve the results, new numerical methods are being developed, but it appears that the choice of the SDE integrator alone is insufficient to guarantee stability. One reason for this is that the drift depends on a conformal martingale which can have arbitrary phase and amplitude. Furthermore, the related Fokker-Planck equation of the problem turns out to be of mixed type, with hyperbolic regions. It appears that regularization techniques combined with implicit SDE solvers and extrapolation methods can yield significant stability improvements.
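The benefit of treating the drift implicitly can already be seen on the linear test equation dX = aX dt + bX dW with a stiff drift (a toy illustration with hypothetical coefficients, not the coherent-state path-integral SDE of the talk): at a coarse step size the explicit Euler-Maruyama iterate blows up while the drift-implicit variant decays.

```python
import numpy as np

def euler_maruyama(a, b, x0, dt, n_steps, n_paths, rng, implicit=False):
    """Explicit vs drift-implicit Euler-Maruyama for dX = a*X dt + b*X dW."""
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        if implicit:
            # drift treated implicitly: x_new = (x + b*x*dw) / (1 - a*dt)
            x = (x + b * x * dw) / (1.0 - a * dt)
        else:
            x = x + a * x * dt + b * x * dw
    return x

rng = np.random.default_rng(1)
a, b, dt = -50.0, 1.0, 0.1       # stiff stable drift, coarse step: a*dt = -5
x_exp = euler_maruyama(a, b, 1.0, dt, 200, 1000, rng, implicit=False)
x_imp = euler_maruyama(a, b, 1.0, dt, 200, 1000, rng, implicit=True)
```

For this linear problem the implicit treatment of the drift restores stability at any step size; the nonlinear, mixed-type setting of the talk requires the additional regularization discussed above.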
Increasing demands on the complexity of scientific models, coupled with increasing demands for their scalability, are placing programming models on an equal footing in significance with the numerical methods they implement. A recurring theme across several major scientific software development projects involves defining abstract data types (ADTs) that closely mimic mathematical abstractions such as scalar, vector, and tensor fields. In languages that support user-defined operators and/or overloading of intrinsic operators, coupling ADTs with a set of algebraic and/or integro-differential operators results in an ADT calculus. This talk will analyze ADT calculus using three tool sets: object-oriented design metrics, computational complexity theory, and information theory. It will be demonstrated that ADT calculus leads to highly cohesive, loosely coupled abstractions with code-size-invariant data dependencies and minimal information entropy. The talk will also discuss how these results relate to software flexibility and robustness.
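A minimal illustration of ADT calculus, in the spirit described above though not tied to any particular project's API: once a scalar-field ADT overloads `+` and `*` and exposes an operator such as `laplacian()`, client code reads like the continuum equation it discretizes.

```python
import numpy as np

class Field:
    """Minimal scalar-field ADT on a periodic 1-D grid. Overloading '+' and
    scalar '*' yields an ADT calculus: client code mirrors the mathematics."""
    def __init__(self, values, dx):
        self.v, self.dx = np.asarray(values, dtype=float), dx
    def __add__(self, other):
        return Field(self.v + other.v, self.dx)
    def __rmul__(self, scalar):
        return Field(scalar * self.v, self.dx)
    def laplacian(self):
        # periodic second-order central difference for d^2/dx^2
        lap = (np.roll(self.v, -1) - 2.0 * self.v + np.roll(self.v, 1)) / self.dx**2
        return Field(lap, self.dx)

# explicit heat-equation step written as ADT calculus: u <- u + dt*nu*lap(u)
x = np.linspace(0.0, 1.0, 64, endpoint=False)
u = Field(np.sin(2.0 * np.pi * x), dx=x[1] - x[0])
dt, nu = 1e-5, 0.1
for _ in range(100):
    u = u + dt * nu * u.laplacian()
```

The update line carries no index arithmetic at all; the data dependencies live inside the ADT, which is what makes such client code invariant as the underlying representation changes.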
Due to an incomplete picture of the underlying physics, the simulation of dense granular flow remains a difficult computational challenge. Currently, modeling in practical and industrial situations is typically carried out with the Discrete-Element Method (DEM), which simulates individual particles according to Newton's laws. The contact models in these simulations are stiff and require very small timesteps to integrate accurately, meaning that even relatively small problems require days or weeks to run on a parallel computer. These brute-force approaches often provide little insight into the relevant collective physics, and they are infeasible for applications in real-time process control or in optimization, where many different configurations must be run much more rapidly.
Based upon a number of recent theoretical advances, a general multiscale simulation technique for dense granular flow will be presented that couples a macroscopic continuum theory to a discrete microscopic mechanism for particle motion. The technique can be applied to arbitrary slow, dense granular flows, and it reproduces flow fields and microscopic packing-structure estimates similar to those obtained with DEM. Since forces and stresses are coarse-grained, the simulation technique runs two to three orders of magnitude faster than conventional DEM. A particular strength is the ability to capture particle diffusion, allowing for the optimization of granular mixing by running an ensemble of different possible configurations.
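The timestep restriction mentioned above comes from the contact stiffness. A toy one-dimensional spring-dashpot contact between two particles (hypothetical parameter values, not a production DEM) makes it concrete: the contact frequency is sqrt(2k/m), and stable integration needs a timestep well below its inverse.

```python
import numpy as np

def dem_contact_step(x, v, dt, k=1e6, m=1e-3, gamma=0.5, radius=0.01):
    """One semi-implicit Euler step for two 1-D particles with a
    spring-dashpot normal contact (toy DEM). With k = 1e6 N/m and
    m = 1e-3 kg the contact frequency sqrt(2k/m) ~ 4.5e4 rad/s, so
    dt must be of order microseconds."""
    overlap = 2 * radius - (x[1] - x[0])
    f = 0.0
    if overlap > 0:  # particles in contact
        # elastic repulsion plus viscous damping, clamped to avoid cohesion
        f = max(k * overlap + gamma * (v[0] - v[1]), 0.0)
    a = np.array([-f, f]) / m    # equal and opposite accelerations
    v = v + a * dt
    x = x + v * dt
    return x, v

# head-on collision: two particles approaching at 1 m/s each
x, v = np.array([0.0, 0.021]), np.array([1.0, -1.0])
for _ in range(2000):
    x, v = dem_contact_step(x, v, dt=1e-6)
```

Resolving a single millisecond-scale collision already takes hundreds of such steps, which is why simulating industrial-scale assemblies by brute force is so costly.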