IEEE Transactions on Circuits and Systems for Video Technology - New TOC Alert for Publication #76
- IEEE Transactions on Circuits and Systems for Video Technology Publication Information, February 5, 2024 at 1:16 pm
- Editor-in-Chief Message, February 5, 2024 at 1:16 pm
It is a great honor and privilege for me to assume the role of Editor-in-Chief of the IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), a role I began in January 2024. I have been involved in the editorial services of TCSVT for over 23 years in various capacities. I attended my first TCSVT editorial board meeting at ISCAS 2000, and since then I have served as an Associate Editor of TCSVT for four terms, working with Editors-in-Chief Prof. Weiping Li, Prof. Thomas Sikora, Prof. Chang Wen Chen, Dr. Hamid Gharavi, and Prof. Dan Schonfeld. I have also served as a Guest Editor of TCSVT three times, and as Associate Editor-in-Chief of TCSVT under Editor-in-Chief Prof. Feng Wu from January 2020 to December 2021. Serving TCSVT has been an important academic activity throughout my career, and I look forward to this exciting yet challenging opportunity to lead TCSVT to the next level.
- IEEE Circuits and Systems Society Information, February 5, 2024 at 1:16 pm
- Table of Contents, February 5, 2024 at 1:16 pm
- CCAFusion: Cross-Modal Coordinate Attention Network for Infrared and Visible Image Fusion, July 7, 2023 at 2:07 pm
Infrared and visible image fusion aims to generate a single image with comprehensive information, maintaining both rich texture characteristics and thermal information. However, existing image fusion methods either sacrifice the salience of thermal targets and the richness of textures, or introduce useless information such as artifacts. To alleviate these problems, this paper proposes CCAFusion, an effective cross-modal coordinate attention network for infrared and visible image fusion. To fully integrate complementary features, a cross-modal image fusion strategy based on coordinate attention is designed, consisting of a feature-awareness fusion module and a feature-enhancement fusion module. Moreover, a multiscale skip-connection-based network is employed to obtain multiscale features from the infrared and visible images, fully utilizing multi-level information in the fusion process. To reduce the discrepancy between the fused image and the input images, a multiple-constrained loss function comprising a base loss and an auxiliary loss is developed to adjust the gray-level distribution and ensure the harmonious coexistence of structure and intensity in the fused images, thereby preventing contamination by useless information such as artifacts. Extensive experiments on widely used datasets demonstrate that CCAFusion outperforms state-of-the-art image fusion methods in both qualitative evaluation and quantitative measurement. Furthermore, application to salient object detection reveals the potential of CCAFusion for high-level vision tasks, where it can effectively boost detection performance.
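The abstract does not spell out the form of the multiple-constrained loss. As a rough illustration only of how such a two-term fusion loss (a base term shaping the gray-level distribution plus an auxiliary term preserving structure) is commonly structured, the following NumPy sketch makes hypothetical choices that are not taken from the paper: the element-wise maximum of the two inputs as the intensity target, and simple finite-difference gradients as the structure measure.

```python
import numpy as np

def gradient_magnitude(img):
    """Crude finite-difference gradient magnitude (an illustrative
    stand-in for the Sobel-style operators fusion losses often use)."""
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return gx + gy

def fusion_loss(fused, ir, vis, alpha=0.5):
    """Hypothetical two-term fusion loss, NOT the paper's exact loss.

    base: pulls the fused gray levels toward the element-wise maximum
          of the two inputs (keeps bright thermal targets salient).
    aux:  pulls the fused gradients toward the stronger of the two
          source gradients (keeps visible-band texture).
    """
    base = np.mean(np.abs(fused - np.maximum(ir, vis)))
    aux = np.mean(np.abs(gradient_magnitude(fused)
                         - np.maximum(gradient_magnitude(ir),
                                      gradient_magnitude(vis))))
    return base + alpha * aux
```

In a real training loop the analogous terms would be written with a differentiable framework and balanced by tuned weights; the weighting `alpha` here is arbitrary.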