Assemble images with simple image stitching#

This example demonstrates how a set of images can be assembled under the assumption of rigid body motions.

from matplotlib import pyplot as plt
import numpy as np
from skimage import data, util, transform, feature, measure, filters, metrics


def match_locations(img0, img1, coords0, coords1, radius=5, sigma=3):
    """Match image locations using SSD minimization.

    Areas from `img0` are matched with areas from `img1`. These areas
    are defined as patches located around pixels with Gaussian
    weights.

    Parameters
    ----------
    img0, img1 : 2D array
        Input images.
    coords0 : (m, 2) array_like
        Centers of the reference patches in `img0`, as ``(row, col)`` pairs.
    coords1 : (n, 2) array_like
        Centers of the candidate patches in `img1`, as ``(row, col)`` pairs.
    radius : int
        Radius of the considered patches.
    sigma : float
        Standard deviation of the Gaussian kernel centered over the patches.

    Returns
    -------
    match_coords : (m, 2) array
        The points in `coords1` that are the closest corresponding matches to
        those in `coords0` as determined by the (Gaussian weighted) sum of
        squared differences between patches surrounding each point.
    """
    y, x = np.mgrid[-radius : radius + 1, -radius : radius + 1]
    weights = np.exp(-0.5 * (x**2 + y**2) / sigma**2)
    weights /= 2 * np.pi * sigma * sigma

    match_list = []
    for r0, c0 in coords0:
        roi0 = img0[r0 - radius : r0 + radius + 1, c0 - radius : c0 + radius + 1]
        roi1_list = [
            img1[r1 - radius : r1 + radius + 1, c1 - radius : c1 + radius + 1]
            for r1, c1 in coords1
        ]
        # sum of squared differences
        ssd_list = [np.sum(weights * (roi0 - roi1) ** 2) for roi1 in roi1_list]
        match_list.append(coords1[np.argmin(ssd_list)])

    return np.array(match_list)
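As a quick illustrative check of this matcher (not part of the original example; the array `demo` and the point lists below are made up for demonstration), matching a shifted copy of a small random image back to itself recovers the expected offsets:

# Illustrative usage sketch (hypothetical data): patches around points of a
# random image are matched against a copy shifted by (2, 3). The points are
# kept at least `radius` pixels from the borders so every patch is full-sized.
rng = np.random.default_rng(42)
demo = rng.random((64, 64))
shifted = np.roll(demo, shift=(2, 3), axis=(0, 1))

pts0 = np.array([[20, 20], [40, 30]])            # reference locations in `demo`
pts1 = np.array([[22, 23], [42, 33], [10, 50]])  # candidate locations in `shifted`

print(match_locations(demo, shifted, pts0, pts1, radius=5))
# Expected: [[22 23]
#            [42 33]]  -- each reference point matched to its (2, 3)-shifted copy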

Data generation#

For this example, we generate a list of slightly tilted noisy images.

img = data.moon()

angle_list = [0, 5, 6, -2, 3, -4]
center_list = [(0, 0), (10, 10), (5, 12), (11, 21), (21, 17), (43, 15)]

img_list = [
    transform.rotate(img, angle=a, center=c)[40:240, 50:350]
    for a, c in zip(angle_list, center_list)
]
ref_img = img_list[0].copy()

img_list = [
    util.random_noise(filters.gaussian(im, sigma=1.1), var=5e-4, rng=seed)
    for seed, im in enumerate(img_list)
]

psnr_ref = metrics.peak_signal_noise_ratio(ref_img, img_list[0])

Image registration#

Note

This step is performed using the approach described in Robust matching using RANSAC, but any other method from the Image registration section could be applied, depending on your problem.
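As one possible alternative (a minimal sketch only, not used in the rest of this example): skimage.registration.phase_cross_correlation recovers a sub-pixel translation between two images. It does not model the rotations present here, so it is shown purely to illustrate the point made in the note above.

from skimage import registration

# Estimate the translation of the second image with respect to the reference.
shift = registration.phase_cross_correlation(
    img_list[0], img_list[1], upsample_factor=10
)[0]
print(f"Estimated (row, col) shift of image 1 relative to the reference: {shift}")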

Reference points are detected over all images in the list.

min_dist = 5
corner_list = [
    feature.corner_peaks(
        feature.corner_harris(img), threshold_rel=0.001, min_distance=min_dist
    )
    for img in img_list
]

The Harris corners detected in the first image are chosen as reference points. The points detected in the other images are then matched to these reference points.

img0 = img_list[0]
coords0 = corner_list[0]
matching_corners = [
    match_locations(img0, img1, coords0, coords1, min_dist)
    for img1, coords1 in zip(img_list, corner_list)
]

Once all the points are registered to the reference ones, robust relative transformations (here Euclidean, i.e. rigid-body) can be estimated with the RANSAC method.

src = np.array(coords0)
trfm_list = [
    measure.ransac(
        (dst, src),
        transform.EuclideanTransform,
        min_samples=3,
        residual_threshold=2,
        max_trials=100,
    )[0].params
    for dst in matching_corners
]
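Optionally (a small diagnostic sketch, not part of the original code), the boolean inlier mask also returned by measure.ransac can be inspected to see how many matched corners support each estimated transform:

# Re-run RANSAC to report the inlier count per image (RANSAC is stochastic,
# so counts may vary slightly between runs).
for idx, dst in enumerate(matching_corners):
    model, inliers = measure.ransac(
        (dst, src),
        transform.EuclideanTransform,
        min_samples=3,
        residual_threshold=2,
        max_trials=100,
    )
    print(f"image {idx}: {inliers.sum()}/{len(inliers)} inlier matches")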

fig, ax_list = plt.subplots(6, 2, figsize=(6, 9), sharex=True, sharey=True)
for idx, (im, trfm, (ax0, ax1)) in enumerate(zip(img_list, trfm_list, ax_list)):
    ax0.imshow(im, cmap="gray", vmin=0, vmax=1)
    ax1.imshow(transform.warp(im, trfm), cmap="gray", vmin=0, vmax=1)

    if idx == 0:
        ax0.set_title("Tilted images")
        ax0.set_ylabel(f"Reference Image\n(PSNR={psnr_ref:.2f})")
        ax1.set_title("Registered images")

    ax0.set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
    ax1.set_axis_off()

fig.tight_layout()
[Figure: Tilted images (left column) and Registered images (right column)]

Image assembly#

A composite image can be obtained using the positions of the registered images relative to the reference one. To do so, we define a global domain around the reference image and position the other images in this domain.

A global transformation is defined to move the reference image into the global domain image via a simple translation:

margin = 50
height, width = img_list[0].shape
out_shape = height + 2 * margin, width + 2 * margin
glob_trfm = np.eye(3)
glob_trfm[:2, 2] = -margin, -margin
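As an illustrative check (relying on the fact, used throughout this example, that transform.warp interprets a 3x3 matrix as the inverse map from output to input coordinates): the translation by -margin means the output-canvas point (margin, margin) samples pixel (0, 0) of the reference image, so the reference ends up surrounded by a margin-wide border.

# The inverse map sends canvas coordinates to reference-image coordinates.
pt = np.array([margin, margin, 1.0])
print(glob_trfm @ pt)  # -> [0. 0. 1.]: canvas point (margin, margin) maps to (0, 0)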

Finally, the relative position of the other images in the global domain is obtained by composing this global transformation with the relative transformations:
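A minimal sketch of this step, assuming each relative transform is composed with glob_trfm, the warped images are blended with a simple np.maximum, and the PSNR is measured on the central region against ref_img (the exact blending in the original example may differ):

global_img = np.zeros(out_shape)

for img, trfm in zip(img_list, trfm_list):
    # Compose the relative transform with the global translation and warp
    # each image into the padded output canvas.
    tmp_img = transform.warp(img, trfm @ glob_trfm, output_shape=out_shape)
    global_img = np.maximum(global_img, tmp_img)

fig, ax = plt.subplots(1, 1)
ax.imshow(global_img, cmap="gray", vmin=0, vmax=1)
ax.set_axis_off()

# PSNR of the reconstructed central region against the noise-free reference.
psnr = metrics.peak_signal_noise_ratio(
    ref_img, global_img[margin : margin + height, margin : margin + width]
)
ax.set_title(f"Reconstructed image (PSNR={psnr:.2f})")

fig.tight_layout()
plt.show()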

[Figure: Reconstructed image (PSNR=36.18)]

Total running time of the script: (0 minutes 1.040 seconds)
