Laboratoire de Probabilités, Statistique et Modélisation
Presentation
The Laboratoire de Probabilités, Statistique et Modélisation, in its present form, was created on 1 January 1999 by the merger of the former probability laboratory of Université Paris 6 with the probability and statistics team of Université Paris Diderot.
The laboratory has about 70 permanent faculty members, 50 doctoral students and an administrative team of 6. It also hosts the activities of two second-year master's programmes, representing more than 200 students each year.
The laboratory's work lies within applied mathematics and concerns the modelling, description and estimation of random phenomena. Its research topics are very varied, ranging from fundamental mathematics to applications in fields as diverse as medicine, the social sciences, astrophysics, insurance and finance.
Research themes
1. Ergodic theory and dynamical systems
2. Stochastic modelling
3. Brownian motion and stochastic calculus
4. Statistics
Research teams
The laboratory comprises six teams:
- Ergodic theory and dynamical systems,
- Stochastic modelling,
- Brownian motion and stochastic calculus,
- Statistics,
- Numerical probability and financial mathematics,
- Probability-statistics-biology.
[hal-00536723] Non-Local Methods with Shape-Adaptive Patches (NLM-SAP)
Date: 16 Nov 2010 - 18:12
Desc: We propose in this paper an extension of the Non-Local Means (NL-Means) denoising algorithm. The idea is to replace the usual square patches used to compare pixel neighborhoods with various shapes that can take advantage of the local geometry of the image. We provide a fast algorithm to compute the NL-Means with arbitrary shapes thanks to the fast Fourier transform. We then consider local combinations of the estimators associated with various shapes by using Stein's Unbiased Risk Estimate (SURE). Experimental results show that this algorithm improves on the standard NL-Means performance and is close to state-of-the-art methods, both in terms of visual quality and numerical results. Moreover, common visual artifacts usually observed when denoising with NL-Means are reduced or suppressed thanks to our approach.
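As background, the baseline the paper extends can be sketched in a few lines. This is a naive fixed-square-patch NL-Means, not the shape-adaptive, FFT-accelerated variant of the paper; the function name and parameters are illustrative:

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.1):
    """Naive NL-Means with fixed square patches (illustrative, slow).

    Each pixel is replaced by a weighted average of the pixels in a search
    window, with weights decreasing in the squared distance between the
    square patches surrounding the two pixels."""
    r, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), r + s, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + r + s, j + r + s                    # centre in padded coords
            ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]  # reference patch
            num, den = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    qi, qj = ci + di, cj + dj
                    cand = pad[qi - r:qi + r + 1, qj - r:qj + r + 1]
                    d2 = np.mean((ref - cand) ** 2)          # patch distance
                    w = np.exp(-d2 / h ** 2)                 # exponential kernel weight
                    num += w * pad[qi, qj]
                    den += w
            out[i, j] = num / den
    return out
```

The paper's contribution replaces the square `ref`/`cand` windows with locally adapted shapes and aggregates the resulting estimators via SURE.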
[hal-00768526] Partial Splitting of Longevity and Financial Risks: The Longevity Nominal Choosing Swaptions
Date: 21 Dec 2012 - 17:50
Desc: In this paper, we introduce a new structured financial product: the so-called Life Nominal Chooser Swaption (LNCS). Thanks to such a contract, insurers could keep pure longevity risk and transfer a large part of the interest rate risk underlying annuity portfolios to financial markets. Before the issuance of the contract, the insurer determines a confidence band of survival curves for her portfolio. An interest rate hedge is set up, based on swaption mechanisms. The bank uses this band as well as an interest rate model to price the product. At the end of the first period (e.g. 8 to 10 years), the insurer has the right to enter into an interest rate swap with the bank, where the nominal is adjusted to her (re-forecasted) needs. She chooses (inside the band) the survival curve that best fits her anticipation of the future mortality of her portfolio (over, say, 15 to 20 more years) given the information available at that time. We use a population dynamics longevity model and a classical two-factor interest rate model to price this product. Numerical results show that the option offered to the insurer (in terms of choice of nominal) is not too expensive in many real-world cases. We also discuss the pros and cons of the product and of our methodology. This structure enables insurers and financial institutions to remain in their initial fields of expertise.
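For readers unfamiliar with the underlying, the swap the insurer may enter can be valued with the textbook single-curve formula; this generic sketch is not the paper's two-factor model, and all names and inputs are illustrative:

```python
import numpy as np

def payer_swap_value(nominal, fixed_rate, dfs, tau=1.0):
    """PV of a spot-starting payer swap under single-curve discounting.

    dfs[i] is the discount factor to the i-th (annual) payment date.
    Floating leg: N * (1 - P(0, T_n)); fixed leg: N * K * tau * sum_i P(0, T_i)."""
    dfs = np.asarray(dfs, dtype=float)
    return nominal * (1.0 - dfs[-1]) - nominal * fixed_rate * tau * dfs.sum()

def par_rate(dfs, tau=1.0):
    """Fixed rate that makes the swap worth zero at inception."""
    dfs = np.asarray(dfs, dtype=float)
    return (1.0 - dfs[-1]) / (tau * dfs.sum())

# flat 2% continuously compounded curve, annual payments over 10 years
dfs = np.exp(-0.02 * np.arange(1, 11))
k = par_rate(dfs)
print(abs(payer_swap_value(100.0, k, dfs)) < 1e-9)   # entering at par costs nothing
```

The optionality described in the abstract is, roughly, the right to enter such a swap at a later date with a nominal chosen inside the pre-agreed band.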
[hal-01306340] Neural Random Forests
Date: 2 Apr 2018 - 13:17
Desc: Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) into a neural network setting, and in turn propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have fewer parameters to tune than standard networks, and fewer restrictions on the geometry of the decision boundaries than trees. Consistency results are proved, and substantial numerical evidence is provided on both synthetic and real data sets to assess the excellent performance of our methods in a large variety of prediction problems.
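The tree-to-network translation can be illustrated on a depth-one tree (a stump). This toy encoding with hard-threshold units is only a sketch of the general idea, and every name and constant in it is illustrative:

```python
import numpy as np

def step(z):
    # hard-threshold activation: +1 if z >= 0, else -1
    return 1.0 if z >= 0.0 else -1.0

# hypothetical depth-1 regression tree (a stump), purely illustrative:
# split on x[0] at 0.5; left leaf predicts 1.0, right leaf predicts 3.0
THRESHOLD, LEFT_VAL, RIGHT_VAL = 0.5, 1.0, 3.0

def tree_predict(x):
    return LEFT_VAL if x[0] < THRESHOLD else RIGHT_VAL

def network_predict(x):
    # layer 1: one unit per internal node; its weight vector selects the
    # split coordinate and its bias encodes the split threshold
    h = step(x[0] - THRESHOLD)            # -1 on the left branch, +1 on the right
    # layer 2: one unit per leaf; a leaf fires (+1) exactly when every split
    # on its root-to-leaf path sends the input towards that leaf
    left_leaf = step(-h - 0.5)            # +1 iff h == -1
    right_leaf = step(h - 0.5)            # +1 iff h == +1
    # output layer: leaf values weighted by rescaled leaf indicators
    return LEFT_VAL * (left_leaf + 1) / 2 + RIGHT_VAL * (right_leaf + 1) / 2

print(tree_predict(np.array([0.2])), network_predict(np.array([0.2])))
```

The hybrid procedures in the paper start from such a tree-derived architecture and then relax the hard thresholds so the resulting network can be trained further.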
[hal-00966808] A new method for estimation and model selection: $\rho$-estimation
Date: 27 Mar 2014 - 12:21
Desc: The aim of this paper is to present a new estimation procedure that can be applied in many statistical frameworks, including density estimation and regression, and which leads to both robust and optimal (or nearly optimal) estimators. In density estimation, they asymptotically coincide with the celebrated maximum likelihood estimators, at least when the statistical model is regular enough and contains the true density to estimate. For very general models of densities, including non-compact ones, these estimators are robust with respect to the Hellinger distance and converge at the optimal rate (up to a possible logarithmic factor) in all cases we know. In the regression setting, our approach improves upon classical least squares in many respects. In simple linear regression, for example, it provides estimators of the coefficients that are both robust to outliers and simultaneously rate-optimal (or nearly rate-optimal) for a large class of error distributions, including the Gaussian, Laplace, Cauchy and uniform distributions among others.
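For reference, the Hellinger distance mentioned above is the standard one: for two densities $p$ and $q$ with respect to a dominating measure $\mu$,

```latex
h^2(p, q) \;=\; \frac{1}{2} \int \bigl(\sqrt{p} - \sqrt{q}\,\bigr)^2 \, d\mu
```

With this normalisation, $h$ takes values in $[0, 1]$ and is well defined without any moment assumptions on the densities, which makes it a natural metric for robustness statements on very large models.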
[hal-00629929] Loss-Based Risk Measures
Date: 7 Oct 2011 - 01:57
Desc: Starting from the requirement that risk measures of financial portfolios should be based on their losses, not their gains, we define the notion of loss-based risk measure and study the properties of this class of risk measures. We characterize loss-based risk measures by a representation theorem and give examples of such risk measures. We then discuss the statistical robustness of estimators of loss-based risk measures: we provide a general criterion for qualitative robustness of risk estimators and compare this criterion with sensitivity analysis of estimators based on influence functions. Finally, we provide examples of statistically robust estimators for loss-based risk measures.
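As a concrete toy example (not the paper's representation theorem; the function name and parameters are illustrative), here is a plug-in estimator of an expected-shortfall-style risk measure computed from the loss side of the P&L only:

```python
import numpy as np

def loss_based_es(pnl, alpha=0.95):
    """Illustrative loss-based estimator: an expected-shortfall-style
    statistic of the losses only, with gains floored at zero, in the
    spirit of risk measures that depend on losses and not on gains."""
    losses = np.maximum(-np.asarray(pnl, dtype=float), 0.0)  # positive losses, gains -> 0
    var = np.quantile(losses, alpha)     # empirical loss quantile (VaR-like level)
    tail = losses[losses >= var]
    return tail.mean()                   # average loss beyond that quantile

rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1.0, size=100_000)   # simulated daily P&L
print(round(loss_based_es(pnl, 0.95), 2))
```

Because the statistic depends on the data only through the losses, changing the gains (e.g. replacing every positive P&L value by a larger one) leaves the estimate unchanged, which is the loss-based property the abstract starts from.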
Other contacts
U.F.R. Mathématiques
Sophie-Germain
75013 PARIS