Microsoft Research's Ziyu Wan, Bo Zhang, and colleagues have developed a new deep-learning approach for restoring old photos that suffer from severe degradation.
Unlike conventional restoration tasks that can be solved through supervised learning, the degradation in real photos is complex, and the domain gap between synthetic images and real old photos prevents networks trained only on synthetic data from generalizing.
Their technique introduces a novel triplet domain translation network that leverages real photos along with massive synthetic image pairs. Specifically, they train two variational autoencoders (VAEs) to transform old photos and clean photos, respectively, into two latent spaces, and the translation between these two latent spaces is learned from synthetic paired data.
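The pipeline described above can be sketched in miniature. This is an illustrative toy, not Microsoft's implementation: the real system uses deep convolutional VAEs, whereas here simple linear maps stand in for the two encoders, the clean-photo decoder, and the latent translator, just to show how a degraded image flows through the triplet of domains.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IMG, D_LAT = 64, 8  # toy image / latent dimensions (illustrative only)

# Stand-ins for the trained networks. In the paper these are deep VAEs;
# random linear maps keep the sketch self-contained and runnable.
E_old = rng.standard_normal((D_LAT, D_IMG)) * 0.1    # encoder: old photos -> latent space 1
D_clean = rng.standard_normal((D_IMG, D_LAT)) * 0.1  # decoder: latent space 2 -> clean image
T = rng.standard_normal((D_LAT, D_LAT)) * 0.1        # latent translation, learned on synthetic pairs

def restore(old_photo):
    """Old photo -> old-photo latent -> translated clean latent -> restored image."""
    z_old = E_old @ old_photo   # VAE 1: map the degraded photo into its compact latent space
    z_clean = T @ z_old         # translate between latent spaces (trained on synthetic pairs)
    return D_clean @ z_clean    # VAE 2 decoder: produce the restored, clean-domain image

restored = restore(rng.standard_normal(D_IMG))
print(restored.shape)  # (64,)
```

The key point the sketch captures is that the translation `T` operates only in the compact latent space, which is why it transfers from synthetic training pairs to real old photos.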
This translation generalizes well to real photos because the domain gap is closed in the compact latent space. To address the multiple degradations mixed in one old photo, they designed a global branch with a partial nonlocal block targeting structured defects, such as scratches and dust spots, and a local branch targeting unstructured defects, such as noise and blurriness. The two branches are fused in the latent space, improving the network's ability to restore old photos with multiple defects. The proposed method outperforms state-of-the-art methods in terms of visual quality for old photo restoration.
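The two-branch idea can also be illustrated with a toy sketch. Again, all names and sizes are assumptions for illustration: a "partial nonlocal" global branch fills scratch-masked positions by attending only to defect-free positions, a local branch smooths unstructured noise, and the two results are fused over the latent feature map.

```python
import numpy as np

rng = np.random.default_rng(1)
N, C = 16, 4  # toy: N spatial positions (flattened), C latent channels
z = rng.standard_normal((N, C))        # latent feature map
scratch_mask = rng.random(N) < 0.25    # True where a structured defect (scratch) sits

def global_branch(z, mask):
    """Partial nonlocal: repair masked positions from similarity-weighted clean positions."""
    sim = z @ z.T                              # pairwise similarity (toy attention scores)
    sim[:, mask] = -np.inf                     # attend only to defect-free positions
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # softmax over clean positions
    return w @ z                               # each position rebuilt from clean context

def local_branch(z):
    """Local filtering for unstructured defects (noise, blur) - a simple 3-tap average."""
    return (np.roll(z, 1, axis=0) + z + np.roll(z, -1, axis=0)) / 3.0

# Fuse: global branch where structured defects are, local branch elsewhere
fused = np.where(scratch_mask[:, None], global_branch(z, scratch_mask), local_branch(z))
print(fused.shape)  # (16, 4)
```

The fusion step mirrors the paper's intuition: scratches need long-range context from intact regions, while noise only needs local cleanup, and combining both in latent space handles mixed degradations in one pass.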
See the technique demonstrated in the video below:
Unfortunately, Microsoft has not made a demo site available to try out the technology, but hopefully, the company will take the hint.
Read much more detail at Microsoft here.