Paper presented at: IEEE International Conference on Image Processing 2016, held from 25 to 28 September 2016 in Phoenix, United States. ; We propose a method for the color stabilization of cinema shots coming from different cameras that use unknown logarithmic encoding curves. The log-encoding curves are approximated by a concatenation of gamma-curves, whose values are accurately computed using image matches. The color stabilization procedure, based on the generic color processing pipeline of a digital camera, can be performed after the estimation of the encoding curves, and it also requires the existence of image matches. Our work can be applied in different scenarios such as multi-camera shoots, native-3D cinema, or color grading in post-production. ; This work was supported by the European Research Council, Starting Grant ref. 306337, by the Spanish government, grant ref. TIN2015-71537-P, and by the Icrea Academia Award.
Gamut mapping transforms the color gamut of an image to that of a target device. Two cases are usually considered: gamut reduction (target gamut smaller than source gamut), and gamut extension (target gamut larger than the source gamut). Less attention is devoted to the more general case, when neither gamut is fully included in the other. In this work we unify and expand two recent methods for gamut extension and reduction, so as to simultaneously perform both forms of gamut mapping in different regions of the same image without introducing color artifacts or halos. We demonstrate the usefulness of this approach for the traditional gamut mapping problem, and also how the proposed method can be used to adapt the color palette of an image so that it is closer to that of a given reference image. Results are compared with the state-of-the-art and validated through user tests and objective metrics. ; This work is supported by the European Research Council, Starting Grant ref. 306337, by the Spanish government FEDER Fund, grant ref. TIN2015-71537-P (MINECO/FEDER,UE), and by the Icrea Academia Award. The work of J. Vazquez-Corral was supported by the Spanish government under Grant IJCI-2014-19516.
Paper presented at the AIC 2015 Tokyo Midterm Meeting, held in Tokyo (Japan) from 19 to 22 May 2015. ; Gamut mapping transforms the colors of an input image to fit within the range of a target device. A huge amount of research has been devoted to two subproblems that arise from this general one: gamut reduction and gamut extension. Gamut reduction algorithms convert the input image to a new gamut that fits inside that of the target device, i.e. the gamuts' intersection is equal to the target gamut, while gamut extension algorithms convert the input image to a gamut that contains the original image gamut, i.e. the gamuts' intersection is equal to the source gamut. In contrast to the two aforementioned cases, very little attention has been paid to the most general problem, where the intersection of the source and target gamuts is not equal to either of the two. In this paper we address this most general problem of gamut mapping between any two gamuts presenting any possible intersection. To deal with it we unify the gamut extension and gamut reduction algorithms of Zamir et al. (Zamir 2014), which are based on the perceptually inspired variational framework of Bertalmío et al. (Bertalmío 2007) comprising three competing terms: an attachment to the original data, a term that prevents modifying the per-channel image mean (i.e. not modifying the white point), and a contrast enhancement term. In particular, we show how, by defining a smooth transition of the contrast enhancement parameter over the chromaticity diagram, we can simultaneously reduce the input gamut in some chromatic areas while extending it in others, without introducing color artifacts or halos. ; This work was supported by the European Research Council, Starting Grant ref. 306337, by the Spanish Government, grant ref. TIN2012-38112, and by the Icrea Academia Award.
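The three competing terms named above are often written as follows (a hedged sketch of the commonly cited form of the 2007 functional, not a verbatim transcription; I denotes a color channel, I_0 the original image, and w(x, y) a spatial weighting kernel):

```latex
E(I) = \frac{\alpha}{2} \sum_{x} \left( I(x) - \tfrac{1}{2} \right)^2
     + \frac{\beta}{2} \sum_{x} \left( I(x) - I_0(x) \right)^2
     - \frac{\gamma}{2} \sum_{x} \sum_{y} w(x, y)\, \left| I(x) - I(y) \right|
```

The first term pulls values toward middle gray (controlling the mean), the second is the attachment to the original data, and the third rewards local contrast; letting the contrast weight vary smoothly over the chromaticity diagram, as the abstract describes, yields reduction in some chromatic regions and extension in others.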
Blind gamma estimation is the problem of estimating the gamma function that is applied to a linear image both for perceptual reasons and for the compensation of the non-linear behavior of displays. Gamma values change both inter- and intra-camera. In the latter case, the change comes from the use of different scene settings. In this letter we propose a new approach that relies on the use of more than a single image from the same scene. We estimate the gammas for all the different images at the same time with a method based on exploiting the structure of the standard in-camera processing pipeline. Our results improve over the state-of-the-art. ; This work was supported by the European Research Council, Starting Grant ref. 306337, by the Spanish government, grant ref. TIN2012-38112, and by the Icrea Academia Award.
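The letter's joint multi-image estimation is not reproduced here; as a toy illustration of the underlying problem, the gamma relating a linear image to its encoded version can be recovered by one-parameter least squares in the log domain (synthetic noiseless data; real encoded images would require robust fitting):

```python
import numpy as np

rng = np.random.default_rng(0)
linear = rng.uniform(0.05, 1.0, size=10_000)   # linear radiance samples
gamma = 1 / 2.2                                # "unknown" encoding gamma
encoded = linear ** gamma                      # gamma-encoded observation

# log(encoded) = gamma * log(linear): one-parameter least squares
x, y = np.log(linear), np.log(encoded)
gamma_hat = (x @ y) / (x @ x)
print(gamma_hat)  # ≈ 0.4545
```

With noiseless synthetic data the estimate is exact; the letter's contribution is handling several images with different, coupled gammas at once.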
Visual attention retargeting attempts to modify an image such that the viewer's attention is directed to specific regions. Goals include highlighting a particular object or hiding possible problems in the image. In this work, we show that we can pose the visual retargeting problem in terms of gamut mapping. In short, visual attention retargeting can be achieved by performing gamut extension in those regions that we want to highlight and gamut reduction in the other regions. ; Authors are supported by the European Research Council, Starting Grant ref. 306337, by the Spanish government FEDER Fund, grant ref. TIN2015-71537-P (MINECO/FEDER,UE), and by the Icrea Academia Award. The work of J. Vazquez-Corral was supported by the Spanish government under Grant IJCI-2014-19516.
Paper presented at: SMPTE 2017 Annual Technical Conference and Exhibition, held from 23 to 26 October 2017 in Los Angeles, United States. ; Gamut mapping transforms colors of the original (image or video) content to the color palette of the display device with the simultaneous goals of (a) reproducing content accurately while preserving the artistic intent of the original content's creator and (b) exploiting the full color rendering potential of the target display device. The rapid advancement in display technologies has created a pressing need to develop automatic and fast gamut mapping algorithms that can deal with imagery intended for both conventional and emerging displays. In this paper, we propose a novel framework based on retinal and color perception models from vision science that offers a functionality to perform both gamut reduction and gamut extension, while preserving hue and taking into account the analysis of the colors of the input image. We evaluate the performance of the proposed framework visually and by using a perceptually-based error metric, according to which the gamut-mapped results of our framework outperform those of the state-of-the-art methods. ; This work was supported by the European Research Council, Starting Grant ref. 306337, by the Spanish government and FEDER Fund, grant ref. TIN2015-71537-P (MINECO/FEDER,UE), and by the Icrea Academia Award. The work of Javier Vazquez-Corral was supported by the Spanish government grant IJCI-2014-19516.
Paper presented at: SMPTE 2017 Annual Technical Conference and Exhibition, held from 23 to 26 October 2017 in Los Angeles, United States. ; In cinema and TV it is quite usual to have to work with footage coming from several cameras, which show noticeable color differences among them even if they are all the same model. In TV broadcasts, technicians work in camera control units so as to ensure color consistency when cutting from one camera to another. In cinema post-production, colorists need to manually color-match images coming from different sources. Aiming to help perform this task automatically, the Academy Color Encoding System (ACES) introduced a color management framework to work within the same color space and be able to use different cameras and displays; however, the ACES pipeline requires the cameras to be characterized in advance, and therefore does not allow working 'in the wild', a very common situation. We present a color stabilization method that, given two images of the same scene taken by two cameras with unknown settings and unknown internal parameter values, and encoded with unknown non-linear curves (logarithmic or gamma), is able to correct the colors of one of the images, making it look as if it were captured with the other camera. Our method is based on treating the in-camera color processing pipeline as a combination of a 3x3 matrix followed by a non-linearity, which allows us to model a color stabilization transformation between two shots as a linear-nonlinear function with several parameters. We find corresponding points between the two images, compute the error (color difference) over them, and determine the transformation parameters that minimize this error, all automatically without any user input. The method is fast and the results have no spurious colors or spatio-temporal artifacts of any kind.
It outperforms the state of the art both visually and according to several metrics, and can handle very challenging real-life examples. ; This work was supported by the European Research Council, Starting Grant ref. 306337, by the Spanish government and FEDER Fund, grant ref. TIN2015-71537-P(MINECO/FEDER,UE), and by the Icrea Academia Award. The work of J. Vazquez-Corral was supported by the Spanish government under Grant IJCI-2014-19516.
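The linear-nonlinear model described above can be illustrated in isolation. Below is a minimal sketch of just the 3x3 matrix step, fitted by least squares over synthetic corresponding points; the matrix values are hypothetical, and the paper itself estimates the matrix and the non-linearity jointly rather than assuming pre-linearized data as here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical matched linear RGB triplets from two cameras:
# camera B's response modeled as a 3x3 matrix applied to camera A's values.
M_true = np.array([[0.90, 0.10, 0.00],
                   [0.05, 0.85, 0.10],
                   [0.00, 0.15, 0.85]])
src = rng.uniform(0, 1, size=(500, 3))   # camera A, linearized matches
dst = src @ M_true.T                     # camera B, linearized matches

# Least-squares fit of the 3x3 color matrix over the correspondences
X, *_ = np.linalg.lstsq(src, dst, rcond=None)
M_hat = X.T

corrected = src @ M_hat.T                # camera A colors mapped to camera B
```

On real footage the correspondences are noisy and the non-linearity unknown, so the full method optimizes all parameters together over the color differences at the matched points.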
Paper presented at: SMPTE 2016 Annual Technical Conference and Exhibition, held in Los Angeles, United States, from 25 to 27 October 2016. ; While wide color gamut (WCG) capabilities are a key element of emerging display and projection technologies, at present most image content is recorded using standards such as DCI-P3 for cinema or BT.709 for TV that have a reduced color gamut. Therefore, there is a need for gamut extension methods that process regular content and allow viewers to appreciate the full color potential of new displays, improving user experience. We present a gamut extension algorithm that is based on visual perception models and explicitly takes into account low chromatic colors such as skin tones. It produces results that look natural, are free of artifacts of any kind, and outperform the state of the art. The method is fast, allowing for operator interaction if needed. ; This work was supported by the European Research Council, Starting Grant ref. 306337, by the Spanish government and FEDER Fund, grant ref. TIN2015-71537-P (MINECO/FEDER,UE), and by the Icrea Academia Award.
Gamut mapping is the problem of transforming the colors of image or video content so as to fully exploit the color palette of the display device where the content will be shown, while preserving the artistic intent of the original content's creator. In particular in the cinema industry, the rapid advancement in display technologies has created a pressing need to develop automatic and fast gamut mapping algorithms. In this paper we propose a novel framework that is based on vision science models, performs both gamut reduction and gamut extension, is of low computational complexity, produces results that are free from artifacts and outperforms state-of-the-art methods according to psychophysical tests. Our experiments also highlight the limitations of existing objective metrics for the gamut mapping problem. ; This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 761544 (project HDR4EU) and under grant agreement number 780470 (project SAUCE), and by the Spanish government and FEDER Fund, grant ref. TIN2015-71537-P (MINECO/FEDER,UE). The work of J. Vazquez-Corral was supported by the Spanish government under Grant IJCI-2014-19516.
We present a color matching method that, given two different views of the same scene taken by two cameras with unknown settings and unknown internal parameter values, and encoded with unknown non-linear curves, is able to correct the colors of one of the images, making it look as if it were captured under the other camera's settings. Our method is based on treating the in-camera color processing pipeline as a matrix multiplication followed by a non-linearity. This allows us to model a color stabilization transformation between the two shots by estimating several parameters. The method is fast and the results have no spurious colors. It outperforms the state-of-the-art both visually and according to several metrics, and can handle very challenging real-life examples. ; This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 761544 (project HDR4EU) and under grant agreement number 780470 (project SAUCE), and by the Spanish government and FEDER Fund, grant ref. TIN2015-71537-P (MINECO/FEDER,UE).
Images captured under hazy conditions (e.g. fog, air pollution) usually present faded colors and loss of contrast. To improve their visibility, a process called image dehazing can be applied. Some of the most successful image dehazing algorithms are based on image processing methods but do not follow any physical image formation model, which limits their performance. In this paper, we propose a post-processing technique to alleviate this handicap by enforcing the original method to be consistent with a popular physical model for image formation under haze. Our results improve upon those of the original methods qualitatively and according to several metrics, and they have also been validated via psychophysical experiments. These results are particularly striking in terms of avoiding over-saturation and reducing color artifacts, which are the most common shortcomings faced by image dehazing methods. ; Horizon 2020 Framework Programme (761544, 780470); Engineering and Physical Sciences Research Council (EP/028730, EP/M001768); Spanish Government MINECO and Feder Fund (PGC2018-099651-B-I00).
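A popular physical model for image formation under haze is the atmospheric-scattering (Koschmieder) model. Below is a toy forward/inverse sketch of that model, with the airlight and transmission assumed known; in practice both must be estimated, which is where dehazing methods differ:

```python
import numpy as np

# Koschmieder model: I(x) = J(x) * t(x) + A * (1 - t(x)),
# where J is the haze-free scene, t the transmission, A the global airlight.
J = np.array([0.2, 0.6, 0.9])    # haze-free RGB pixel (toy values)
A = 0.95                         # airlight (assumed known here)
t = 0.4                          # transmission at this pixel

I = J * t + A * (1 - t)          # synthesize the hazy observation
J_rec = (I - A * (1 - t)) / t    # invert the model to recover the scene
```

The paper's post-processing enforces that a dehazing method's output stays consistent with this kind of model, which is what curbs over-saturation and color artifacts.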
In this paper we present a novel approach to the problem of exposure fusion of a stack of pictures for the generation of high dynamic range (HDR) radiance maps. All exposure fusion approaches, when applied on 8-bit non-RAW pictures, perform photometric calibration by estimating and inverting the camera response function, which is assumed to be a channel-wise independent function that does not change with the exposure. Our experiments show that these assumptions do not always hold and that the camera may automatically introduce changes (in gain, white balance, gamma correction value) from one exposure to the next when performing the non-linear operations involved in recording pictures in non-RAW formats such as JPEG. The net result is that HDR radiance maps obtained from exposure fusion of non-linear data may have substantially more error than if computed directly from the linear, RAW data. Our proposed method overcomes this problem and compensates for the changes introduced by the camera by matching the color correction and gamma correction transforms of all pictures to those of a reference picture in the stack, providing a clear improvement in terms of PSNR with respect to the classical method of Debevec and Malik. ; This work was supported by the European Research Council, Starting Grant ref. 306337, by the Spanish government, grant ref. TIN2012-38112, and by the Icrea Academia Award.
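The gamma-matching idea can be sketched in isolation. If two non-RAW pictures of the same scene differ by an exposure ratio and by per-shot gamma values (all numbers below are hypothetical, and the full method also matches the color-correction matrix), the relative gamma follows from a line fit in the log domain:

```python
import numpy as np

rng = np.random.default_rng(2)
r = rng.uniform(0.05, 0.5, 2000)     # linear radiance at matched pixels
k, g_ref, g2 = 2.0, 0.45, 0.55       # exposure ratio and per-shot gammas (toy)

v_ref = r ** g_ref                   # reference picture in the stack
v2 = (k * r) ** g2                   # second picture, drifted gamma

# log v2 = (g2 / g_ref) * log v_ref + g2 * log k  ->  linear fit
slope, intercept = np.polyfit(np.log(v_ref), np.log(v2), 1)

v2_matched = v2 ** (1 / slope)       # second picture re-encoded with g_ref
```

After this remapping both pictures share one response curve, so a classical radiance-map merge no longer mixes inconsistent non-linearities.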
Multiple-exposure approaches for high dynamic range (HDR) image generation share a set of building assumptions: that color channels are independent and that the camera response function (CRF) remains constant while changing the exposure. The first contribution of this paper is to highlight how these assumptions, which were correct for film photography, do not hold in general for digital cameras. As a consequence, results of multiexposure HDR methods are less accurate, and when tone-mapped they often present problems like hue shifts and color artifacts. The second contribution is to propose a method to stabilize the CRF while coupling all color channels, which can be applied to both static and dynamic scenes, and yields artifact-free results that are more accurate than those obtained with state-of-the-art methods according to several image metrics. ; This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 761544 (project HDR4EU) and under grant agreement number 780470 (project SAUCE), and by the Spanish government and FEDER Fund, grant ref. TIN2015-71537-P (MINECO/FEDER,UE). The work of J. Vazquez-Corral was supported by the Spanish government under Grant IJCI-2014-19516.