Myopic deconvolution of retinal images acquired by adaptive optics, through Bayesian estimation.
When an optical system is designed, a theoretical Point Spread Function (PSF) is associated with it. Once the system is built according to this design, it is calibrated, and its experimental PSF differs slightly from the theoretical one (usually slightly degraded by imperfections of the manufacturing process). For classic use cases of optical systems, such as conventional photography, the in-use PSF is well approximated by the experimental one (modulo the aging of the system). However, for use cases involving media with varying characteristics (such as optical indices), the PSF changes with each specific medium. One way of correcting this change of the PSF, caused by the measuring environment, is to introduce adaptive optics (AO) into the design of the optical system. AO characterizes the perturbation (air temperature and winds for astronomical observation, where light propagates through the atmosphere; the eye lens and the ocular media for retinal imaging) and reconstructs the wavefront so that the in-use PSF is partly corrected for it.
This AO design produces better-resolved images, which can then be further improved by digital processing, usually involving deconvolution of the acquired image by the in-use PSF of the system. For deconvolution of images acquired in classic (static) media, the experimental PSF is used; here, however, an estimate of the PSF must be found before such filtering can be performed. This situation is called blind deconvolution (or myopic deconvolution, since the image still contains information about the PSF). For example, the scheme used in [1] exploits the available a priori information on the PSF, namely its positivity and estimates of its ensemble mean and power spectral density (PSD).
In retinal imaging, the observed objects are 3-dimensional, whereas the acquired images are only 2-dimensional (this holds for imaging in general). So, to look at a specific plane within the depth of focus of the imaging system (outside it, the object only contributes an added background photon flux to the relevant image), one can use the PSF associated with that plane. To a first approximation, this PSF can be computed by linear interpolation between two PSFs, e.g. the in-focus PSF and a defocused one. [[2]]
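To make the interpolation concrete, here is a minimal numpy sketch. The function name `interp_psf`, the mixing parameter `alpha`, and the Gaussian toy PSFs are hypothetical stand-ins for measured PSFs, not code from the papers:

```python
import numpy as np

def interp_psf(h_foc, h_defoc, alpha):
    """First-order approximation of the PSF at an intermediate depth as a
    linear combination of the in-focus and defocused PSFs.
    alpha in [0, 1] weights the in-focus contribution."""
    h = alpha * h_foc + (1.0 - alpha) * h_defoc
    return h / h.sum()  # renormalize to unit energy

# Toy Gaussian PSFs standing in for measured ones.
x = np.arange(-16, 16)
xx, yy = np.meshgrid(x, x)
h_foc = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
h_defoc = np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))
h_mid = interp_psf(h_foc / h_foc.sum(), h_defoc / h_defoc.sum(), 0.5)
```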
In stochastic approaches, the object is seen as one realization of a stochastic process and is endowed with an a priori distribution.
This leads to two commonly used object estimation methods: MAP estimation and MMSE estimation. On the one hand, MAP estimation defines the restored object as the most probable object given the data: $$ \hat{\mathbf{o}}_{\mathrm{map}}=\underset{\mathbf{o}}{\arg \max }\, p(\mathbf{o} \mid \mathbf{i}) $$
On the other hand, the MMSE estimator is defined as the one that minimizes, on average, the distance to the true object: $$ \hat{\mathbf{o}}_{\mathrm{mmse}}=\underset{\hat{\mathbf{o}}}{\arg \min }\, E\left(\|\hat{\mathbf{o}}-\mathbf{o}\|^2\right) $$ where $E(\cdot)$ stands for the mathematical expectation with respect to the object and to the image noise. It can be shown that this estimator is the mean object with respect to the a posteriori probability distribution: $$ \hat{\mathbf{o}}_{\mathrm{mmse}}=E(\mathbf{o} \mid \mathbf{i})=\int \mathbf{o}\, p(\mathbf{o} \mid \mathbf{i})\, \mathrm{d} \mathbf{o} $$
In general, the calculation of the MMSE estimator is not tractable unless the estimator is assumed to be linear. The minimization of Eq. (5) under this assumption leads to the Wiener filter.
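As a concrete illustration of the linear MMSE solution, here is a minimal Wiener filter in the Fourier domain. It assumes periodic boundary conditions, a PSF centered in its array, and a known ratio of noise to object PSDs; the function name and arguments are illustrative, not the papers' code:

```python
import numpy as np

def wiener_deconvolve(i_img, h, psd_ratio):
    """Linear MMSE (Wiener) restoration in the Fourier domain.
    psd_ratio = PSD_n / PSD_o (scalar or per-frequency array),
    which acts as the regularization term."""
    H = np.fft.fft2(np.fft.ifftshift(h))            # OTF (PSF assumed centered)
    I = np.fft.fft2(i_img)
    W = np.conj(H) / (np.abs(H) ** 2 + psd_ratio)   # Wiener filter
    return np.real(np.fft.ifft2(W * I))
```

At frequencies where the noise dominates (`psd_ratio` large relative to `|H|^2`), the filter attenuates instead of amplifying, which is what distinguishes it from naive inverse filtering.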
$$\begin{aligned} p(\mathbf{o} \mid \mathbf{i}) & \propto p(\mathbf{i} \mid \mathbf{o})\, p(\mathbf{o}) \\ & \propto \exp \left[-\tfrac{1}{2}(\mathbf{i}-H \mathbf{o})^t R_n^{-1}(\mathbf{i}-H \mathbf{o})\right] \\ & \quad \times \exp \left[-\tfrac{1}{2}\left(\mathbf{o}-\mathbf{o}_{\mathrm{m}}\right)^t R_o^{-1}\left(\mathbf{o}-\mathbf{o}_{\mathrm{m}}\right)\right], \end{aligned}$$
The maximization criterion is classically composed of two terms: the likelihood term, usually a least-squares term, and the regularization function. Choosing the regularization function consists in finding the right model for the regularization parameter, i.e. the PSD of the a priori distribution (of the object, here).
We therefore generalized the deconvolution scheme to the case of myopic deconvolution, in which both the object and the PSF have to be restored.
The last term (the regularization on the PSF) cannot be ignored; otherwise the myopic deconvolution usually leads to the trivial solution: a Dirac function for the PSF and an object equal to the image. $\mathrm{PSD}_{\mathbf{h}}$ is expressed simply as a function of the first two moments of the OTF: $$ \operatorname{PSD}_{\mathbf{h}}(f)=E\left[\left|\tilde{\mathbf{h}}(f)-\tilde{\mathbf{h}}_{\mathrm{m}}(f)\right|^2\right]=E\left[|\tilde{\mathbf{h}}(f)|^2\right]-\left|\tilde{\mathbf{h}}_{\mathrm{m}}(f)\right|^2 $$
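This second-moment formula is just the per-frequency variance of the OTF, so it can be estimated empirically from a set of OTF realizations. A minimal sketch (the function name is hypothetical; `otfs` is assumed to stack the realizations along the first axis):

```python
import numpy as np

def psf_psd_from_otfs(otfs):
    """Empirical PSD of the PSF from OTF realizations, using
    PSD_h(f) = E[|h~(f)|^2] - |h~_m(f)|^2  (the variance of the OTF).
    otfs: complex array of shape (n_realizations, ny, nx)."""
    otf_mean = otfs.mean(axis=0)                    # ensemble-mean OTF h~_m
    return (np.abs(otfs) ** 2).mean(axis=0) - np.abs(otf_mean) ** 2
```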
The restoration quality can be quantitatively evaluated by computing a distance to the true object.
In astronomy, the estimated unknowns are the object and the PSF, while in retinal imaging, they are the object and the parameter $\alpha$.
$$ \mathbf{i}_{3\mathrm{D}}=\mathbf{h}_{3\mathrm{D}} *_{3\mathrm{D}} \mathbf{o}_{3\mathrm{D}}+\mathbf{n} $$
$$ \mathbf{i}_{2\mathrm{D}}=\mathbf{h}_{2\mathrm{D}} *_{2\mathrm{D}} \mathbf{o}_{2\mathrm{D}}+\mathbf{n} $$
We assume that our object is shift invariant along the optical axis:
$$
o_{3 \mathrm{D}}(x, y, z)=o_{2 \mathrm{D}}(x, y) \alpha(z)
$$
The noise on the images is mainly photon noise, which has a Poisson distribution. However, AO retinal images are dominated by a strong and quite homogeneous background. In the following, we will therefore assume that the noise is stationary white Gaussian with variance $\sigma^2$.
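A quick numerical check of this approximation: for a strong homogeneous background, Poisson noise has the same first two moments as a Gaussian of matching variance (the background level of 1000 photons/pixel below is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
bg = 1000.0                                   # strong homogeneous background (photons/pixel)
n = 100_000
poisson = rng.poisson(bg, n)                  # actual photon noise
gauss = bg + np.sqrt(bg) * rng.standard_normal(n)  # Gaussian surrogate
# Both have mean ~ bg and standard deviation ~ sqrt(bg), so the photon
# noise is well approximated by stationary Gaussian noise of variance bg.
```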
$$ \hat{\mathbf{o}}(\alpha, \theta)=\left(\mathbf{H}^t \mathbf{H}+\sigma^2 \mathbf{R}_{\mathrm{o}}^{-1}\right)^{-1}\left(\mathbf{H}^t \mathbf{i}+\sigma^2 \mathbf{R}_{\mathrm{o}}^{-1} \mathbf{o}_{\mathrm{m}}\right) $$
Since the matrices involved are approximately circulant (the blur being shift-invariant), they are diagonalized by the discrete Fourier transform, and the estimate can be computed frequency by frequency.
$$ J_{\mathrm{jmap}}^{\prime}(\alpha)=\frac{1}{2} N^2 \ln S_{\mathrm{n}}+\frac{1}{2} \sum_v \ln S_{\mathrm{o}}(v)+\frac{1}{2} \sum_v \frac{1}{S_{\mathrm{o}}(v)} \frac{\left|\tilde{i}(v)-\tilde{h}(v)\, \tilde{o}_{\mathrm{m}}(v)\right|^2}{|\tilde{h}(v)|^2+\frac{S_{\mathrm{n}}}{S_{\mathrm{o}}(v)}} $$
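Since the only remaining unknown is the scalar PSF parameter, the criterion can be evaluated on a grid of candidate values. A minimal sketch, assuming a zero prior mean object, white object and noise PSDs passed in as `s_o` (per-frequency array) and `s_n` (scalar), and PSFs centered in their arrays; all names are illustrative:

```python
import numpy as np

def jmap_criterion(alpha, i_img, h_foc, h_defoc, s_o, s_n):
    """Criterion J'(alpha) of the equation above, up to additive constants,
    with the prior mean object o_m taken as zero for simplicity."""
    n2 = i_img.size                                   # N^2 pixels
    h = alpha * h_foc + (1.0 - alpha) * h_defoc       # interpolated PSF
    H = np.fft.fft2(np.fft.ifftshift(h))              # corresponding OTF
    I = np.fft.fft2(i_img)
    resid = np.abs(I) ** 2                            # |i~ - h~ o~_m|^2 with o_m = 0
    return (0.5 * n2 * np.log(s_n)
            + 0.5 * np.sum(np.log(s_o))
            + 0.5 * np.sum(resid / (s_o * (np.abs(H) ** 2 + s_n / s_o))))

# The scalar alpha can then be sought by a simple 1D grid search, e.g.:
# alphas = np.linspace(0.0, 1.0, 101)
# alpha_hat = min(alphas, key=lambda a: jmap_criterion(a, i_img, h_foc, h_defoc, s_o, s_n))
```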
The joint MAP solution is thus the pair $(\hat{\mathbf{o}}, \hat{\alpha})$.
How can we find
A simulated image is built in the following manner: $$ \mathbf{i}=\left(\alpha\, \mathbf{h}_{\mathrm{foc}}+(1-\alpha)\, \mathbf{h}_{\mathrm{defoc}}\right) * \mathbf{o}+\mathbf{n} $$
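The simulation above can be sketched directly with an FFT-based circular convolution and white Gaussian noise (function and argument names are illustrative; the PSFs are assumed centered in their arrays and normalized to unit sum):

```python
import numpy as np

def simulate_image(o, h_foc, h_defoc, alpha, sigma, rng):
    """Simulate i = (alpha*h_foc + (1-alpha)*h_defoc) * o + n, with circular
    (FFT) convolution and stationary white Gaussian noise of std sigma."""
    h = alpha * h_foc + (1.0 - alpha) * h_defoc
    H = np.fft.fft2(np.fft.ifftshift(h))              # OTF of the mixed PSF
    i_noiseless = np.real(np.fft.ifft2(H * np.fft.fft2(o)))
    return i_noiseless + sigma * rng.standard_normal(o.shape)
```

With unit-sum PSFs and `sigma = 0`, the total flux of the object is preserved, which is a convenient sanity check for the simulation.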
The defocus is equal to
We assume for the sake of this simulation that the object PSD
[1] Jean-Marc Conan, Laurent M. Mugnier, Thierry Fusco, Vincent Michau, and Gérard Rousset, "Myopic deconvolution of adaptive optics images by use of object and point-spread function power spectra," Applied Optics, Vol. 37, No. 21, 20 July 1998. © 1998 Optical Society of America.
[2] L. Blanco and L. M. Mugnier, "Marginal blind deconvolution of adaptive optics retinal images," Optics Express, Vol. 19, No. 23, p. 23227, 7 November 2011. © 2011 OSA.