Depth recovery from a single defocused image using a Cauchy-distribution point spread function model

Ming Ying1, Jiang Jingjue2 (1. Teaching and Research Office of Political Work Informatization, Department of Grassroots Political Work, CAPF Political Institute, Shanghai 200435, China; 2. School of Computer Science, Wuhan University, Wuhan 430079, China)

Abstract
Objective Most current methods for recovering the 3D depth of a scene from a single defocused image model the point spread function (PSF) as a Gaussian distribution, obtain a sparse depth map from the correspondence between the defocus blur at image edges and scene depth, and then extend it to a full depth map of the scene by various propagation schemes. Because the depth maps recovered by existing methods are insufficiently accurate and insufficiently robust to noise, we propose a method that computes the defocus blur at object edges using a Cauchy-distribution point spread function model. Method The input defocused image is re-blurred twice with two Cauchy-distribution kernels. From the gradient ratio between the two re-blurred images at edge locations, together with the two scale parameters of the Cauchy distributions, the defocus blur at each image edge can be computed. The edge blur amounts are then propagated to the entire image by matting interpolation, yielding a full depth map of the scene. Result The original Lenna image was rotated and corrupted with Gaussian noise to simulate image noise and edge-location error, and the average error of the Cauchy gradient ratio was compared with that of the Gaussian gradient ratio on the original and noisy images. The proposed method was also compared with several existing single-defocused-image depth recovery methods on a variety of real-scene images. The average error of the Cauchy gradient ratio is smaller than that of the Gaussian gradient ratio. The proposed method recovers scene depth well from a single uncalibrated defocused image and is more robust to image noise, inaccurate edge locations, and interference from neighboring edges. Conclusion The proposed method generates scene depth maps superior to those of existing methods based on the Gaussian model, and it also demonstrates the feasibility and effectiveness of modeling the PSF with a non-Gaussian model.
Keywords
Depth recovery from a single defocused image using a Cauchy-distribution-based point spread function model

Ming Ying1, Jiang Jingjue2(1.Department of Political Work, CAPF Political Institute, Shanghai 200435, China;2.School of Computer Science, Wuhan University, Wuhan 430079, China)

Abstract
Objective This study aims to address the challenging problem of recovering the 3D depth of a scene from a single image. Most current approaches for depth recovery from a single defocused image model the point spread function as a 2D Gaussian function. However, these methods are sensitive to noise, and a high-quality recovery is difficult to achieve. Method Unlike previous depth-from-defocus methods, we propose an approach that estimates the amount of spatially varying defocus blur at image edge locations on the basis of a Cauchy-distribution point spread function model. The input defocused image is re-blurred twice with two Cauchy-distribution kernels. The amount of defocus blur at edge locations can be obtained from the ratio between the gradients of the two re-blurred images and the two scale parameters of the Cauchy distributions. A full depth map is recovered by propagating the blur amount at edge locations to the entire image via matting interpolation. Result The original "Lenna" image and a rotated copy corrupted with Gaussian noise are used to simulate image noise and edge-position error, and the average error of the Cauchy gradient ratio is compared with that of the Gaussian gradient ratio. Various real-scene image data are also used to compare our depth recovery results with those of existing methods. Experimental results show that the average error of the Cauchy gradient ratio is smaller than that of the Gaussian gradient ratio, and results on several real images demonstrate the effectiveness of our method in estimating the defocus map from a single defocused image. Conclusion Our method is robust to image noise, inaccurate edge location, and interference from neighboring edges. The proposed method generates more accurate scene depth maps than most existing methods based on a Gaussian model. Our results also demonstrate that a non-Gaussian model for the PSF is feasible.
Keywords
