Complex-Valued Disparity: Unified Depth Model of Depth from Stereo, Depth from Focus, and Depth from Defocus Based on the Light Field Gradient.

08:00 EDT 8th October 2019 | BioPortfolio

Summary

This paper proposes a unified depth model based on the light field gradient, in which the estimated disparity is represented as a complex number. The complex-valued disparity given by the proposed model can be expressed in both Cartesian and polar coordinates. In the Cartesian representation, the model is described by the real and imaginary parts of the disparity: the real part serves for disparity estimation with respect to the in-focus plane, whereas the imaginary part captures the non-Lambertian-ness. In the polar representation, the model is expressed by the disparity magnitude and disparity angle: the magnitude reveals the relationship among depth from stereo, depth from focus, and depth from defocus, whereas the angle indicates whether or not the bundles of rays are flipped with respect to the in-focus plane. For disparity analysis, we present the real, imaginary, magnitude, and angle responses, each represented as a three-dimensional volume. Experimental results on synthetic and real light field images show that the real and magnitude responses of the proposed model are valid for local disparity estimation.
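The two representations described in the abstract are related by the standard Cartesian-to-polar conversion of a complex number. A minimal sketch of that relationship, with arbitrarily chosen example values (not taken from the paper, which does not publish code here):

```python
import cmath

# Hypothetical complex-valued disparity d = d_real + j * d_imag.
# Per the abstract: the real part relates to disparity with respect to
# the in-focus plane, while the imaginary part reflects non-Lambertian-ness.
d = complex(0.75, 0.10)  # example values only

# Cartesian representation: real and imaginary parts.
d_real, d_imag = d.real, d.imag

# Polar representation: the disparity magnitude links the stereo, focus,
# and defocus cues; the disparity angle indicates whether the bundle of
# rays is flipped about the in-focus plane.
magnitude, angle = cmath.polar(d)

# The conversion is invertible, so the two forms carry the same information.
d_back = cmath.rect(magnitude, angle)
```

This only illustrates the coordinate change itself; how the model derives `d` from the light field gradient is developed in the paper.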


Journal Details

This article was published in the following journal.

Name: IEEE Transactions on Pattern Analysis and Machine Intelligence
ISSN: 1939-3539


PubMed Articles [11833 Associated PubMed Articles listed on BioPortfolio]

Areal differences in depth cue integration between monkey and human.

Electrophysiological evidence suggested primarily the involvement of the middle temporal (MT) area in depth cue integration in macaques, as opposed to human imaging data pinpointing area V3B/kinetic o...

Going From RGB to RGBD Saliency: A Depth-Guided Transformation Model.

Depth information has been demonstrated to be useful for saliency detection. However, the existing methods for RGBD saliency detection mainly focus on designing straightforward and comprehensive model...

A framework for learning depth from a flexible subset of dense and sparse light field views.

In this paper, we propose a learning based depth estimation framework suitable for both densely and sparsely sampled light fields. The proposed framework consists of three processing steps: initial de...

Border-ownership-dependent tilt aftereffect for shape defined by binocular disparity and motion parallax.

Discerning objects from their surrounds (i.e., figure-ground segmentation) in a way that guides adaptive behaviours is a fundamental task of the brain. Neurophysiological work has revealed a class of ...

Speed change discrimination for motion in depth using constant world and retinal speeds.

Motion at constant speed in the world maps into retinal motion very differently for lateral motion and motion in depth. The former is close to linear, for the latter, constant speed objects accelerate...

Clinical Trials [3558 Associated Clinical Trials listed on BioPortfolio]

Estimating Patient Size From a Single Radiograph

A computational model has been created to estimate the abdominal depth of a patient from a single x-ray image. The model has been tested using phantoms and found to be accurate; this study...

Estimation of CPR Chest Compression Depth

Optimal chest compression depth during CPR is 4.56 cm, which is at variance with the current guidelines of 5.0-6.0 cm. A change in guidelines is only worthwhile if healthcare professionals ca...

Modeling and Closed-loop Control of Depth of Anaesthesia

The study evaluates the effect of anaesthetic agents on depth of anaesthesia. An improved PK-PD model will be developed that will provide the basis for understanding the mechanisms, simulat...

fPAM for the in Vivo Depth Measurement of Pigmented Lesions and Melanoma Depth

The investigators propose the use of functional photoacoustic microscopy (fPAM) to evaluate both benign and malignant pigmented lesions for tumor depth. By comparing fPAM analysis to histo...

Ultrasound Determination of Needle Depth in Epidurals in Adult Patients

The introduction of local anesthetics and other medications into the epidural space is a principal technique in provision of anesthesia in many procedures. Typically the anesthetist access...

Medical and Biotech [MESH] Definitions

Devices for examining the interior of the eye, permitting the clear visualization of the structures of the eye at any depth. (UMDNS, 1999)

An extremely stable inhalation anesthetic that allows rapid adjustments of anesthesia depth with little change in pulse or respiratory rate.

Perception of three-dimensionality.

A severe emotional disorder of psychotic depth characteristically marked by a retreat from reality with delusion formation, HALLUCINATIONS, emotional disharmony, and regressive behavior.

Those procedures designed to widen the zone of attached gingiva and deepen the vestibular depth which will facilitate the clearance of the area for natural food passage, and provide access for toothbrushing and interdental stimulation.
