TY - GEN
T1 - Super-Resolution of Multiple Sentinel-2 Images Using Composite Loss Function
AU - Liu, Shuai
AU - Fonseca, Jose M.
AU - Mora, Andre
N1 - info:eu-repo/grantAgreement/FCT/Concurso de avaliação no âmbito do Programa Plurianual de Financiamento de Unidades de I&D (2017%2F2018) - Financiamento Base/UIDB%2F00066%2F2020/PT#
info:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/UIDP%2F00066%2F2020/PT#
Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - In the current domain of image super-resolution (SR), particularly in satellite image processing based on mainstream deep learning methods, most algorithms train their network models with pixel-difference loss functions such as the L1 loss (Mean Absolute Error) or the L2 loss (Mean Squared Error). However, when the details of the resulting images are magnified, the edges where different ground features meet, such as city roads, buildings, and various terrains, often appear blurred, making it difficult to distinguish the boundaries between these objects. In our previous experiments, by contrast, training with the Perceptual Loss function (based on the perceptual error between images) yielded images with better visual quality and clearer object separation; although the object edges were no longer blurred, the transitions at the edges were somewhat abrupt and distorted. In this paper, we therefore propose a composite loss function that combines the L1 loss and the Perceptual Loss, aiming to exploit the advantages of both: improving the visual quality of objects in satellite images while avoiding edge blurring and achieving higher object discrimination. In addition, we continue to optimize the SGNET (Sentinel-2 Google-Earth-Pro Network) architecture to further improve the super-resolution results.
AB - In the current domain of image super-resolution (SR), particularly in satellite image processing based on mainstream deep learning methods, most algorithms train their network models with pixel-difference loss functions such as the L1 loss (Mean Absolute Error) or the L2 loss (Mean Squared Error). However, when the details of the resulting images are magnified, the edges where different ground features meet, such as city roads, buildings, and various terrains, often appear blurred, making it difficult to distinguish the boundaries between these objects. In our previous experiments, by contrast, training with the Perceptual Loss function (based on the perceptual error between images) yielded images with better visual quality and clearer object separation; although the object edges were no longer blurred, the transitions at the edges were somewhat abrupt and distorted. In this paper, we therefore propose a composite loss function that combines the L1 loss and the Perceptual Loss, aiming to exploit the advantages of both: improving the visual quality of objects in satellite images while avoiding edge blurring and achieving higher object discrimination. In addition, we continue to optimize the SGNET (Sentinel-2 Google-Earth-Pro Network) architecture to further improve the super-resolution results.
KW - Google Earth Pro
KW - Multiple Images Super Resolution
KW - Perceptual Loss
KW - Sentinel-2
KW - SGNET
KW - Super Resolution
UR - http://www.scopus.com/inward/record.url?scp=85202347581&partnerID=8YFLogxK
U2 - 10.1109/YEF-ECE62614.2024.10625336
DO - 10.1109/YEF-ECE62614.2024.10625336
M3 - Conference contribution
AN - SCOPUS:85202347581
T3 - Proceedings - 8th International Young Engineers Forum on Electrical and Computer Engineering, YEF-ECE 2024
SP - 26
EP - 30
BT - Proceedings - 8th International Young Engineers Forum on Electrical and Computer Engineering, YEF-ECE 2024
PB - Institute of Electrical and Electronics Engineers (IEEE)
T2 - 8th International Young Engineers Forum on Electrical and Computer Engineering
Y2 - 5 July 2024 through 5 July 2024
ER -
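
For illustration, below is a minimal PyTorch-style sketch of the kind of composite loss the abstract describes: a weighted sum of the pixel-wise L1 loss and a VGG-feature Perceptual Loss. The weight alpha, the VGG-16 backbone, and the relu3_3 feature layer are assumptions made for this sketch, not details published in the record itself.

# Sketch of a composite loss combining pixel-wise L1 loss with a
# VGG-feature Perceptual Loss. The alpha weight and the VGG-16
# relu3_3 layer choice are assumptions, not the paper's settings.
import torch
import torch.nn as nn
from torchvision import models

class CompositeLoss(nn.Module):
    def __init__(self, alpha=0.1):
        super().__init__()
        # Frozen VGG-16 feature extractor up to relu3_3 (index 15).
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.l1 = nn.L1Loss()
        self.alpha = alpha  # weight of the perceptual term (assumed value)

    def forward(self, sr, hr):
        # Pixel term preserves intensities and edge positions;
        # perceptual term preserves texture and sharp transitions.
        pixel_loss = self.l1(sr, hr)
        perceptual_loss = self.l1(self.vgg(sr), self.vgg(hr))
        return pixel_loss + self.alpha * perceptual_loss

Used during training in place of a plain L1 criterion (e.g. loss = CompositeLoss()(sr_batch, hr_batch)), the pixel term discourages the abrupt, distorted edge transitions seen with a purely perceptual objective, while the perceptual term counteracts the edge blurring typical of purely pixel-wise losses.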