
LarTap: A Luminance-Aware Framework With Text-Correlation Priors for Multi-Exposure Image Fusion

aut.relation.endpage: 1
aut.relation.issue: 99
aut.relation.journal: IEEE Transactions on Circuits and Systems for Video Technology
aut.relation.startpage: 1
aut.relation.volume: PP
dc.contributor.author: Wang, Enlong
dc.contributor.author: Li, Jiawei
dc.contributor.author: Yan, Tiantian
dc.contributor.author: Lei, Jia
dc.contributor.author: Zhou, Shihua
dc.contributor.author: Wang, Bin
dc.contributor.author: Liu, Jinyuan
dc.contributor.author: Kasabov, Nikola K
dc.date.accessioned: 2025-05-01T03:17:16Z
dc.date.available: 2025-05-01T03:17:16Z
dc.date.issued: 2025-04-21
dc.description.abstract: Conventional imaging devices often struggle to produce high-dynamic-range (HDR) images that accurately represent natural scenes. To overcome this limitation, multi-exposure image fusion (MEF) techniques have been introduced as a viable solution. Existing MEF approaches aim to enhance performance by optimizing or searching architectures. However, they face challenges in precise feature extraction and scene reconstruction, leading to distortion in the fused images. Additionally, most methods do not adequately address luminance variations across different image regions, which may result in the loss of essential details. To address these challenges, we present a novel luminance-aware MEF framework that integrates text-correlation priors (LarTap). By embedding textual information into the fusion process, the proposed framework enhances content extraction and comprehension. Specifically, it consists of two key components: the text-image correlation network (N1) and the multi-exposure fusion network (N2). First, N1 performs correlation training to achieve a holistic alignment between text and image pairs. Its iterative vision encoders (VEs) generate text-correlated prior knowledge to facilitate the fusion process in N2. Second, N2 leverages these priors for scene reconstruction and dynamically adjusts luminance based on comparative perception. Extensive experiments on multiple datasets demonstrate that LarTap outperforms state-of-the-art methods. (An illustrative sketch of this two-network pipeline follows the metadata listing below.)
dc.identifier.citation: IEEE Transactions on Circuits and Systems for Video Technology, ISSN: 1051-8215 (Print); 1558-2205 (Online), Institute of Electrical and Electronics Engineers (IEEE), PP(99), 1-1. doi: 10.1109/tcsvt.2025.3562564
dc.identifier.doi: 10.1109/tcsvt.2025.3562564
dc.identifier.issn: 1051-8215
dc.identifier.issn: 1558-2205
dc.identifier.uri: http://hdl.handle.net/10292/19130
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.uri: https://doi.org/10.1109/tcsvt.2025.3562564
dc.rights: This article has been accepted for publication in IEEE Transactions on Circuits and Systems for Video Technology. This is the author's version which has not been fully edited and content may change prior to final publication.
dc.rights.accessrights: OpenAccess
dc.subject: 40 Engineering
dc.subject: 46 Information and Computing Sciences
dc.subject: 4603 Computer Vision and Multimedia Computation
dc.subject: 4605 Data Management and Data Science
dc.subject: 4607 Graphics, Augmented Reality and Games
dc.subject: 0801 Artificial Intelligence and Image Processing
dc.subject: 0906 Electrical and Electronic Engineering
dc.subject: Artificial Intelligence & Image Processing
dc.subject: 4006 Communications engineering
dc.subject: 4009 Electronics, sensors and digital hardware
dc.title: LarTap: A Luminance-Aware Framework With Text-Correlation Priors for Multi-Exposure Image Fusion
dc.type: Journal Article
pubs.elements-id: 601331
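Illustrative sketch: the abstract above describes a two-stage architecture, a text-image correlation network (N1) whose iterative vision encoders produce text-correlated priors, and a fusion network (N2) that reconstructs the scene from those priors and adjusts luminance. The minimal PyTorch sketch below shows only an assumed data flow between the two networks; every module name, layer size, the 512-dimensional text embedding, and the luminance-adjustment rule are placeholders for illustration, not the authors' implementation.

# Minimal, hypothetical sketch of the two-network pipeline described in the abstract.
# All names, sizes, and the luminance rule are assumptions, not the published method.
import torch
import torch.nn as nn


class TextImageCorrelationNet(nn.Module):
    """N1 (assumed form): iterative vision encoders emitting text-correlated priors."""

    def __init__(self, channels: int = 32, iterations: int = 3):
        super().__init__()
        self.iterations = iterations
        self.encoder = nn.Sequential(
            nn.Conv2d(channels + 3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.text_proj = nn.Linear(512, channels)  # assumed text-embedding size

    def forward(self, image: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        b, _, h, w = image.shape
        # Broadcast the projected text embedding over the spatial grid.
        prior = self.text_proj(text_emb).view(b, -1, 1, 1).expand(b, -1, h, w)
        # Iteratively refine the prior against the image (stand-in for the paper's VEs).
        for _ in range(self.iterations):
            prior = self.encoder(torch.cat([image, prior], dim=1))
        return prior  # text-correlated prior handed to N2


class FusionNet(nn.Module):
    """N2 (assumed form): fuses exposures guided by priors, then adjusts luminance."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(6 + 2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, under, over, prior_u, prior_o):
        fused = self.fuse(torch.cat([under, over, prior_u, prior_o], dim=1))
        # Placeholder "comparative perception": shift the fused image's mean
        # brightness toward the average brightness of the two exposures.
        target = 0.5 * (under.mean(dim=(1, 2, 3), keepdim=True)
                        + over.mean(dim=(1, 2, 3), keepdim=True))
        return torch.clamp(fused + (target - fused.mean(dim=(1, 2, 3), keepdim=True)), 0, 1)


if __name__ == "__main__":
    n1, n2 = TextImageCorrelationNet(), FusionNet()
    under, over = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
    text_emb = torch.rand(1, 512)  # e.g. from a frozen text encoder (assumption)
    fused = n2(under, over, n1(under, text_emb), n1(over, text_emb))
    print(fused.shape)  # torch.Size([1, 3, 128, 128])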

Files

Original bundle

Name: LarTap_A_Luminance-aware_Framework_with_Text-correlation_Priors_for_Multi-Exposure_Image_Fusion.pdf
Size: 8.32 MB
Format: Adobe Portable Document Format
Description: Journal article
Name: Wang et al._2025_LarTap.pdf
Size: 10.12 MB
Format: Adobe Portable Document Format
Description: Evidence for verification