EDITE: Andres ALMANSA
Identity
Andres ALMANSA
Academic status
Thesis defended
Holder of an HDR (or equivalent)
Laboratory: permanent staff
Thesis direction (since 2007)
1.2
Thesis co-supervision (since 2007)
1.2
Neighborhood
Blue ellipse: PhD student; yellow ellipse: PhD holder; green rectangle: permanent staff; yellow rectangle: HDR holder. Green line: thesis co-supervisor; blue line: thesis director; dashed line: mid-term evaluation committee or thesis defense jury.
Scientific output
TGMAM:CIARP2009
Morphological Shape Context: Semi-locality and Robust Matching in Shape Recognition
14th Iberoamerican Congress on Pattern Recognition (CIARP 2009), Guadalajara, Mexico, pp. 129-136, 2009-11
oai:hal.archives-ouvertes.fr:hal-00624757
Accurate Subpixel Point Spread Function Estimation from scaled image pairs
In most digital cameras, and even in high-end digital SLRs, the acquired images are sampled at rates below the Nyquist critical rate, causing aliasing effects. This work introduces an algorithm for the subpixel estimation of the point spread function of a digital camera from aliased photographs. The numerical procedure simply uses two fronto-parallel photographs of any planar textured scene at different distances. The mathematical theory developed herein proves that the camera PSF can be derived from these two images, under reasonable conditions. Mathematical proofs supplemented by experimental evidence show the well-posedness of the problem and the convergence of the proposed algorithm to the camera in-focus PSF. An experimental comparison of the resulting PSF estimates shows that the proposed algorithm reaches the accuracy levels of the best non-blind state-of-the-art methods.
preprint 2012-02-22
oai:hal.archives-ouvertes.fr:hal-00540637
The non-parametric sub-pixel local point spread function estimation is a well posed problem
Most medium to high quality digital cameras (DSLRs) acquire images at a spatial rate which is several times below the ideal Nyquist rate. For this reason only aliased versions of the camera point-spread function (PSF) can be directly observed. Yet, it can be recovered, at a sub-pixel resolution, by a numerical method. Since the acquisition system is only locally stationary, this PSF estimation must be local. This paper presents a theoretical study proving that the sub-pixel PSF estimation problem is well-posed even with a single well-chosen observation. Indeed, theoretical bounds show that a near-optimal accuracy can be achieved with a calibration pattern mimicking Bernoulli(0.5) random noise. The physical realization of this PSF estimation method is demonstrated in many comparative experiments. They use an algorithm that accurately estimates the pattern position and its illumination conditions. Once this accurate registration is obtained, the local PSF can be directly computed by inverting a well-conditioned linear system. The PSF estimates reach stringent accuracy levels, with a relative error on the order of 2-5%. To the best of our knowledge, such a regularization-free and model-free sub-pixel PSF estimation scheme is the first of its kind.
International Journal of Computer Vision, peer-reviewed article, 2011-06-09
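The final step described in the abstract above, inverting a well-conditioned linear system once the calibration pattern is registered, can be sketched in 1-D. Everything below (the pattern length, the 5-tap PSF, the circulant sensing matrix) is an invented toy setup, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, 200).astype(float)   # Bernoulli(0.5) calibration pattern
true_psf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])    # unknown 5-tap blur to recover

# Each observed sample is a weighted sum of neighbouring pattern values,
# so the observations are linear in the PSF taps: obs = A @ psf.
A = np.column_stack([np.roll(pattern, k) for k in range(-2, 3)])
obs = A @ true_psf

# Plain least squares, no regularization: the Bernoulli pattern makes
# the columns of A well conditioned almost surely.
psf_est, *_ = np.linalg.lstsq(A, obs, rcond=None)
```

The absence of any regularization term in the solve is the point the abstract makes: a well-chosen pattern alone makes the inversion stable.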
oai:hal.archives-ouvertes.fr:hal-00647995
Meaningful Matches in Stereovision
This paper introduces a statistical method to decide whether two blocks in a pair of images match reliably. The method ensures that the selected block matches are unlikely to have occurred "just by chance.'' The new approach is based on the definition of a simple but faithful statistical "background model" for image blocks learned from the image itself. A theorem guarantees that under this model not more than a fixed number of wrong matches occurs (on average) for the whole image. This fixed number (the number of false alarms) is the only method parameter. Furthermore, the number of false alarms associated with each match measures its reliability. This "a contrario" block-matching method, however, cannot rule out false matches due to the presence of periodic objects in the images. But it is successfully complemented by a parameterless "self-similarity threshold." Experimental evidence shows that the proposed method also detects occlusions and incoherent motions due to vehicles and pedestrians in non-simultaneous stereo.
IEEE Transactions on Pattern Analysis and Machine Intelligence, peer-reviewed article, 2012-05
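The a-contrario decision rule sketched in the abstract above can be written in a few lines. The function name, the empirical background model, and the threshold below are illustrative assumptions, not the paper's exact statistics: a match is accepted when (number of tests) times (probability of its matching score under the background model) stays below the single parameter epsilon, the allowed number of false alarms.

```python
import numpy as np

def meaningful_matches(scores, null_scores, n_tests, eps=1.0):
    """Accept matches whose number of false alarms (NFA) is below eps.

    scores:      matching distances of candidate matches (lower = better)
    null_scores: distances sampled under the background ("by chance") model
    n_tests:     total number of match hypotheses tested over the image
    """
    null_sorted = np.sort(null_scores)
    # empirical tail probability P(null <= score) for each candidate
    p = np.searchsorted(null_sorted, scores, side="right") / len(null_sorted)
    nfa = n_tests * p      # expected number of chance matches at this level
    return nfa < eps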
oai:hal.archives-ouvertes.fr:hal-00583120
Boruvka Meets Nearest Neighbors
Computing the minimum spanning tree (MST) is a common task in the pattern recognition and computer vision fields. However, little work has been done on efficient general methods for solving the problem on large datasets where graphs are complete and edge weights are given implicitly by a distance between vertex attributes. In this work we propose a generic algorithm that extends the classical Boruvka algorithm with nearest-neighbor search structures to significantly reduce time and memory costs. The algorithm can also compute approximate MSTs in a straightforward way, thus further improving speed. Experiments show that the proposed method outperforms classical algorithms on large low-dimensional datasets by several orders of magnitude. Finally, to illustrate the usefulness of the proposed algorithm, we focus on a classical computer vision problem: image segmentation. We modify a state-of-the-art local graph-based clustering algorithm, thus permitting a global scene analysis.
preprint 2011-04-04
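A minimal sketch of the idea described above, assuming Euclidean points and a k-d tree as the nearest-neighbour search structure (the paper's generic algorithm supports other metrics and search structures; this is not the authors' implementation). Each Boruvka round finds, for every component, its cheapest outgoing edge via k-NN queries instead of scanning all O(n^2) implicit edges:

```python
import numpy as np
from scipy.spatial import cKDTree

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def boruvka_mst(points):
    n = len(points)
    tree = cKDTree(points)
    parent = list(range(n))
    edges = []
    while len(edges) < n - 1:
        best = {}  # component root -> cheapest outgoing edge (dist, i, j)
        for i in range(n):
            ri = find(parent, i)
            k, hit = 2, None
            while True:
                dists, idxs = tree.query(points[i], k=min(k, n))
                for d, j in zip(np.atleast_1d(dists), np.atleast_1d(idxs)):
                    if find(parent, j) != ri:
                        hit = (d, i, j)
                        break
                if hit is not None or k >= n:
                    break
                k *= 2  # widen the search until another component is reached
            if hit is not None and (ri not in best or hit < best[ri]):
                best[ri] = hit
        for d, i, j in best.values():  # merge components along cheapest edges
            ri, rj = find(parent, i), find(parent, j)
            if ri != rj:
                parent[ri] = rj
                edges.append((i, j, d))
    return edges
```

Capping `k` instead of doubling it to `n` yields the approximate MST variant the abstract mentions, trading exactness for speed.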
oai:hal.archives-ouvertes.fr:hal-00671759
How Accurate Can Block Matches Be in Stereo Vision?

This article explores the subpixel accuracy attainable for the disparity computed from a rectified stereo pair of images with small baseline. In this framework we consider translations as the local deformation model between patches in the images. A mathematical study first shows how discrete block-matching can be performed with arbitrary precision under Shannon–Whittaker conditions. This study leads to the specification of a block-matching algorithm which is able to refine disparities with subpixel accuracy. Moreover, a formula for the variance of the disparity error caused by the noise is introduced and proved. Several simulated and real experiments show a decent agreement between this theoretical error variance and the observed root mean squared error in stereo pairs with good signal-to-noise ratio and low baseline. A practical consequence is that under realistic sampling and noise conditions in optical imaging, the disparity map in stereo-rectified images can be computed for the majority of pixels (but only for those pixels with meaningful matches) with a $1/20$ pixel precision.

SIAM Journal on Imaging Sciences, peer-reviewed article, 2011-03
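The subpixel refinement discussed above can be illustrated with a common simpler stand-in: integer block matching by SSD followed by a parabolic fit of the cost around its minimum. The paper's analysis relies on Shannon-Whittaker interpolation of the cost, not on this parabola; the function and its parameters are illustrative assumptions.

```python
import numpy as np

def subpixel_disparity(left, right, x, y, half=4, dmax=10):
    """Estimate the disparity at (x, y) with subpixel refinement.

    Matches the (2*half+1)^2 patch of `left` centred at (x, y) against
    horizontally shifted patches of `right`, then refines the integer
    minimum of the SSD cost with a parabola fit.
    """
    patch = left[y-half:y+half+1, x-half:x+half+1].astype(float)
    costs = []
    for d in range(-dmax, dmax + 1):
        cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(float)
        costs.append(np.sum((patch - cand) ** 2))
    costs = np.asarray(costs)
    k = int(np.argmin(costs))
    d0 = k - dmax
    if 0 < k < len(costs) - 1:
        c_m, c_0, c_p = costs[k-1], costs[k], costs[k+1]
        denom = c_m - 2 * c_0 + c_p
        if denom > 0:  # strictly convex fit: refine within (-0.5, 0.5) pixel
            return d0 + 0.5 * (c_m - c_p) / denom
    return float(d0)
```

The parabola offset is always in (-0.5, 0.5) pixel, so the refinement can only improve on the integer estimate; the 1/20 pixel precision claimed in the abstract requires the full Shannon-based scheme.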
oai:hal.archives-ouvertes.fr:hal-00497000
Deblurring of Irregularly Sampled Images by TV Regularization in a Spline Space
We present here an algorithm for the restoration of irregularly sampled images degraded by blur and noise. The good accuracy of non-quadratic regularizers for this type of problem was shown in recent articles, but their computational cost was prohibitive because the approximation space consisted of trigonometric polynomials. Here we model the image as a cubic spline and prevent the instability phenomena due to irregularity and blur by minimizing the total variation with a quadratic data-fitting term. The algorithm is the well-known forward-backward splitting scheme, which is well adapted to our l1-l2 problem. We compare our method to existing ones, including very efficient non-quadratic methods based on Fourier models. Our results are equivalent in terms of SNR to those of the best existing method, while being 20 to 50 times faster.
preprint 2010-07-02
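The forward-backward iteration mentioned above alternates a gradient step on the smooth quadratic data term with a proximal step on the non-smooth term. As a sketch, here it is on a generic l1-l2 problem with the soft-thresholding prox; the paper's actual prox is total variation in a spline space, which is more involved:

```python
import numpy as np

def forward_backward(A, b, lam, n_iter=500):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by forward-backward splitting."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    t = 1.0 / L                            # step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # forward (gradient) step on the quadratic term
        z = x - t * g
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0)  # backward (prox) step
    return x
```

The same two-step structure carries over to the paper's setting: only the prox operator changes when the l1 term is a TV seminorm instead of a plain l1 norm.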
oai:tel.archives-ouvertes.fr:tel-00011765
Sur quelques problèmes mathématiques en analyse d'images et vision stéréoscopique
Habilitation à diriger des recherches (HDR), 2005-12-01
oai:tel.archives-ouvertes.fr:tel-00665725
Echantillonnage, interpolation et détection. Applications en imagerie satellitaire.
This thesis addresses some of the problems that arise in the design of a complete computer vision system: from sampling to the detection of structures and their interpretation. The main motivation for treating these problems came from CNES and the design of Earth-observation satellites, as well as from photogrammetry and video-surveillance applications at Cognitech, Inc. during the final stages of this work, but the techniques developed are general enough to be of interest in other computer vision systems. In a first part, we undertake a comparative study of the different image sampling systems on a regular grid, either square or hexagonal, using a measure of effective resolution that determines the amount of useful information provided by each pixel of the grid once the effects of noise and aliasing have been separated out. This resolution measure is used in turn to improve zoom and restoration techniques based on total-variation minimization. The comparative study is then pursued by analyzing to what extent each system allows the elimination of perturbations of the sampling grid due to micro-vibrations of the satellite during acquisition. After presenting the theoretical limits of the problem, we compare the performance of existing reconstruction methods with a new algorithm, better adapted to the CNES sampling conditions. In a second part, we turn to the interpolation of digital elevation models, in two particular cases: the interpolation of level lines, and the study of areas where a correlation method applied to stereo pairs does not provide reliable information.
We study the links between the classical methods used in the earth sciences, such as Kriging and geodesic distances, and the AMLE method, and we propose an extension of the axiomatic theory of interpolation that leads to the latter. An experimental evaluation then shows that a new combination of Kriging with the AMLE provides the best interpolations for terrain models. Finally, we consider the detection of alignments and their vanishing points in an image, since they can be used both for building urban elevation models and for solving photogrammetry and camera-calibration problems. Our approach is based on Gestalt theory and on its effective implementation recently proposed by Desolneux-Moisan-Morel using the Helmholtz principle. The result is a parameterless vanishing-point detector that uses no a priori information about the image or the camera.
PhD thesis 2002-12-09
oai:hal.archives-ouvertes.fr:hal-00940192
Robust Multi-image Processing With Optimal Sparse Regularization
Sparse modeling can be used to characterize outlier-type noise. Thanks to sparse recovery theory, it was shown that 1-norm super-resolution is robust to outliers if enough images are captured. Moreover, sparse modeling of signals is a way to overcome the ill-posedness of under-determined problems. This naturally leads to the question: will an added sparsity assumption on the signal improve the robustness to outliers of 1-norm super-resolution, and if so, how strong should this assumption be? In this article, we review and extend results from the literature on the robustness to outliers of overdetermined signal recovery problems under sparse regularization, with a convex variational formulation. We then apply them to general random matrices, and show how the regularization parameter acts on the robustness to outliers. Finally, we show that in the case of multi-image processing, the structure of the supports of signal and noise must be studied precisely. We show that the sparsity assumption improves robustness if outliers do not overlap with signal jumps, and determine how the regularization parameter can be chosen.
preprint 2014-01-31
oai:hal.archives-ouvertes.fr:hal-00937795
Video Inpainting of Complex Scenes
We propose an automatic video inpainting algorithm which relies on the optimisation of a global, patch-based functional. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and moving background. Furthermore, we achieve this in an order of magnitude less execution time than the state of the art. We are also able to achieve good-quality results on high-definition videos. Finally, we provide specific algorithmic details to make the implementation of our algorithm as easy as possible. The resulting algorithm requires no segmentation or manual input other than the definition of the inpainting mask, and can deal with a wider variety of situations than is handled by previous work.
preprint 2014-01-28
oai:hal.archives-ouvertes.fr:hal-00927007
Robust Automatic Line Scratch Detection in Films
Line scratch detection in old films is a particularly challenging problem due to the variable spatio-temporal characteristics of this defect. Some of the main problems include sensitivity to noise and texture, and false detections due to thin vertical structures belonging to the scene. We propose a robust and automatic algorithm for frame-by-frame line scratch detection in old films, as well as a temporal algorithm for the filtering of false detections. In the frame-by-frame algorithm, we relax some of the hypotheses used in previous algorithms in order to detect a wider variety of scratches. This step's robustness and lack of external parameters are ensured by the combined use of an a contrario methodology and local statistical estimation. In this manner, over-detection in textured or cluttered areas is greatly reduced. The temporal filtering algorithm eliminates false detections due to thin vertical structures by exploiting the coherence of their motion with that of the underlying scene. Experiments demonstrate the ability of the resulting detection procedure to deal with difficult situations, in particular in the presence of noise, texture and slanted or partial scratches. Comparisons show significant advantages over previous work.
preprint 2014-01-10
oai:hal.archives-ouvertes.fr:hal-00835739
Quantification de la robustesse de la super-résolution par minimisation L1
This article studies robustness to outlier-type noise in the super-resolution setting. We use the equivalence between sparse reconstruction and robustness to outliers pointed out by Candès and Tao. It allows deriving bounds on the number of acquired images, as a function of the amount of outliers, that guarantee perfect reconstruction by L1 minimization.
23ème Colloque Gretsi (Gretsi 2013), conference communication, 2013-09-03
oai:hal.archives-ouvertes.fr:hal-00838927
Towards fast, generic video inpainting
Achieving globally coherent video inpainting results in reasonable time and in an automated manner is still an open problem. In this paper, we build on the seminal work by Wexler et al. to propose an automatic video inpainting algorithm yielding convincing results in greatly reduced computational times. We extend the PatchMatch algorithm to the spatio-temporal case in order to accelerate the search for approximate nearest neighbours in the patch space. We also provide a simple and fast solution to the well-known over-smoothing problem resulting from the averaging of patches. Furthermore, we show that results similar to those of a supervised state-of-the-art method may be obtained on high-resolution videos without any manual intervention. Our results indicate that globally coherent patch-based algorithms are feasible and an attractive solution to the difficult problem of video inpainting.
preprint 2013-06-26
oai:hal.archives-ouvertes.fr:hal-00803695
Outlier Removal Power of the L1-Norm Super-Resolution
Super-resolution combines several low-resolution images having different samplings into a high-resolution image. L1-norm data-fit minimization has been proposed to solve this problem in a robust way. The outlier rejection capability of this method has been shown experimentally for super-resolution. However, existing approaches add a regularization term to perform the minimization, while it may not be necessary. In this paper, we recall the link between robustness to outliers and the sparse recovery framework. We use a slightly weaker Null Space Property to characterize this capability. Then, we apply these results to super-resolution and show both theoretically and experimentally that we can quantify the robustness to outliers with respect to the number of images.
Lecture Notes in Computer Science, Scale Space and Variational Methods in Computer Vision, 4th International Conference (SSVM 2013), conference proceeding, 2013-06-03
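The simplest instance of the L1 robustness discussed in the abstracts above is the estimation of a single pixel value from repeated measurements: the l1 data-fit minimizer is the median, which discards a gross outlier, while the l2 minimizer is the mean, which does not. The numbers below are an invented toy example:

```python
import numpy as np

# Five registered measurements of one pixel; one frame is corrupted.
samples = np.array([10.0, 10.1, 9.9, 10.05, 250.0])

l2_estimate = samples.mean()       # argmin_x sum (x - s_i)^2: dragged by the outlier
l1_estimate = np.median(samples)   # argmin_x sum |x - s_i|:   ignores the outlier
```

The full super-resolution problem replaces this scalar fit with an l1 fit through the sampling operator, but the mechanism quantified in the papers is the same.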
oai:hal.archives-ouvertes.fr:hal-00803806
On the Role of Contrast and Regularity in Perceptual Boundary Saliency
Mathematical Morphology proposes to extract shapes from images as connected components of level sets. These methods prove very suitable for shape recognition and analysis. We present a method to select the perceptually significant (i.e., contrasted) level lines (boundaries of level sets), using the Helmholtz principle as first proposed by Desolneux et al. Contrary to the classical formulation by Desolneux et al., where level lines must be entirely salient, the proposed method allows detecting partially salient level lines, thus resulting in more robust and more stable detections. We then tackle the problem of combining two gestalts as a measure of saliency and propose a method that reinforces detections. Results on natural images show the good performance of the proposed methods.
Journal of Mathematical Imaging and Vision (ISSN 0924-9907), article in peer-reviewed journal, 2013-01-24
oai:hal.archives-ouvertes.fr:hal-00763984
On the amount of regularization for super-resolution reconstruction
Modern digital cameras are quickly reaching the fundamental physical limit of their native resolution. Super-resolution (SR) aims at overcoming this limit. SR combines several images of the same scene into a high resolution image by using differences in sampling caused by camera motion. The main difficulty encountered when designing SR algorithms is that the general SR problem is ill-posed. Assumptions on the regularity of the image are then needed to perform SR. Thanks to advances in regularization priors for natural images, producing visually plausible images becomes possible. However, regularization may cause a loss of details. Therefore, we argue that regularization should be used as sparingly as possible, especially when the restored image is needed for further precise processing. This paper provides principles guiding the local choice of regularization parameters for SR. With this aim, we give an invertibility condition for affine SR interpolation. When this condition holds, we study the conditioning of the interpolation and affine motion estimation problems. We show that these problems are more likely to be well posed for a large number of images. When conditioning is bad, we propose a local total variation regularization for interpolation and show its application to multi-image demosaicking.
preprint 2012-12-11
oai:hal.archives-ouvertes.fr:hal-00824670
On the Amount of Regularization for Super-Resolution Interpolation
Super-resolution (SR) aims at combining a number of aliased images of the same scene into a higher resolution image by using the difference in sampling caused by camera motion. As the problem of SR is generally ill-posed, techniques developed in the literature often rely on hypotheses on the regularity of the image. In this paper, we try to minimize these assumptions for the interpolation part of super-resolution. We describe situations where super-resolution interpolation is invertible and/or well conditioned. We first study the interpolation problem for large numbers of images when motions are pure translations. Then, we look at the more generic problem of super-resolution interpolation with translations and rotations. We give a simple condition on the number of images and zoom factor for perfect recovery of the high resolution image. We also study the conditioning in the critical case and propose regularization methods which adapt to local sampling variations. Thus, we avoid generating artifacts when the acquired data is noisy.
20th European Signal Processing Conference (EUSIPCO 2012), conference proceeding, 2012-08-27
oai:hal.archives-ouvertes.fr:hal-00631620
Automatically finding clusters in normalized cuts

Normalized Cuts is a state-of-the-art spectral method for clustering. By applying spectral techniques, the data becomes easier to cluster, and k-means is then classically used. Unfortunately, the number of clusters must be set manually, and k-means is very sensitive to initialization. Moreover, k-means tends to split large clusters, to merge small clusters, and to favor convex-shaped clusters. In this work we present a new clustering method which is parameterless, independent of the original data dimensionality and of the shape of the clusters. It only takes into account inter-point distances and it has no random steps. The combination of the proposed method with Normalized Cuts proved successful in our experiments.

Pattern Recognition (ISSN 0031-3203), article in peer-reviewed journal, 2011
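As a minimal numpy illustration of the spectral step described above (not the authors' parameterless method): for two well-separated groups, the sign of the second eigenvector of the normalized Laplacian already separates the data, with no k-means at all. The kernel bandwidth and function name are illustrative assumptions:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Split points into two groups via the normalized-Laplacian Fiedler vector."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                       # Gaussian affinity
    d = W.sum(1)
    Dinv = 1.0 / np.sqrt(d)
    L = np.eye(len(X)) - Dinv[:, None] * W * Dinv[None, :]   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]            # eigenvector of the second-smallest eigenvalue
    return (fiedler > 0).astype(int)
```

With more clusters, several eigenvectors form an embedding that must still be grouped; that grouping step is exactly where the paper replaces k-means with its parameterless method.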