Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


FCN: Fusing Exponential and Linear Cross Network for Click-Through Rate Prediction

Honghao Li, Yiwen Zhang, Yi Zhang, Hanwei Li, Lei Sang, Jieming Zhu

2024-07-18 · Click-Through Rate Prediction
Paper · PDF · Code (official)

Abstract

As an important modeling paradigm in click-through rate (CTR) prediction, the Deep & Cross Network (DCN) and its derivative models have gained widespread recognition, primarily for their success in balancing computational cost and performance. This paradigm employs a cross network to explicitly model feature interactions with linear growth, while leveraging deep neural networks (DNN) to implicitly capture higher-order feature interactions. However, these models still face several key limitations: (1) The performance of existing explicit feature interaction methods lags behind that of the implicit DNN, so overall model performance is dominated by the DNN; (2) While these models claim to capture high-order feature interactions, they often overlook potential noise within these interactions; (3) The learning process for the different interaction network branches lacks appropriate supervision signals; and (4) The high-order feature interactions captured by these models are often implicit and non-interpretable due to their reliance on the DNN. To address these limitations, this paper proposes a novel model, called Fusing Cross Network (FCN), along with two sub-networks: Linear Cross Network (LCN) and Exponential Cross Network (ECN). FCN explicitly captures feature interactions with both linear and exponential growth, eliminating the need to rely on an implicit DNN. Moreover, we introduce the Self-Mask operation to filter noise layer by layer and reduce the number of parameters in the cross network by half. To effectively train these two cross networks, we propose a simple yet effective loss function called Tri-BCE, which provides tailored supervision signals for each network. We evaluate the effectiveness, efficiency, and interpretability of FCN on six benchmark datasets. Furthermore, by integrating LCN and ECN, FCN achieves a new state-of-the-art performance.
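The abstract describes Tri-BCE only at a high level: binary cross-entropy applied jointly to the fused prediction and to each sub-network (LCN and ECN), with tailored supervision for each branch. As a rough illustration of that idea (not the paper's exact formulation — the specific weighting of the three terms is an assumption here), a minimal NumPy sketch:

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy averaged over samples."""
    p = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p)))

def tri_bce(y_true, y_fused, y_lcn, y_ecn):
    """Illustrative three-term BCE: one term for the fused FCN output
    and one auxiliary term per sub-network (LCN, ECN). The unweighted
    sum used here is a simplifying assumption; the paper's 'tailored
    supervision signals' may weight the branch losses differently."""
    return bce(y_true, y_fused) + bce(y_true, y_lcn) + bce(y_true, y_ecn)

# Toy usage with hypothetical predicted probabilities:
y = np.array([1.0, 0.0, 1.0, 0.0])
p_fused = np.array([0.9, 0.1, 0.8, 0.2])
p_lcn = np.array([0.7, 0.3, 0.6, 0.4])
p_ecn = np.array([0.8, 0.2, 0.7, 0.3])
loss = tri_bce(y, p_fused, p_lcn, p_ecn)
```

Because each sub-network receives its own loss term, neither branch can hide behind the fused output during training, which is consistent with the abstract's motivation that individual interaction branches otherwise lack supervision.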

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Click-Through Rate Prediction | KKBox | AUC | 0.8557 | FCN |
| Click-Through Rate Prediction | Avazu | AUC | 0.797 | FCN |
| Click-Through Rate Prediction | Avazu | Log Loss | 0.3695 | FCN |
| Click-Through Rate Prediction | MovieLens 1M | AUC | 0.9074 | DCNv3 |
| Click-Through Rate Prediction | MovieLens 1M | Log Loss | 0.3001 | DCNv3 |
| Click-Through Rate Prediction | iPinYou | AUC | 0.7856 | FCN |
| Click-Through Rate Prediction | iPinYou | Log Loss | 0.005535 | FCN |
| Click-Through Rate Prediction | Criteo | AUC | 0.8162 | FCN |
| Click-Through Rate Prediction | Criteo | Log Loss | 0.4358 | FCN |
| Click-Through Rate Prediction | KDD12 | AUC | 0.8098 | FCN |
| Click-Through Rate Prediction | KDD12 | Log Loss | 0.1494 | FCN |

Related Papers

Generative Click-through Rate Prediction with Applications to Search Advertising (2025-07-15)
GIST: Cross-Domain Click-Through Rate Prediction via Guided Content-Behavior Distillation (2025-07-07)
An Audio-centric Multi-task Learning Framework for Streaming Ads Targeting on Spotify (2025-06-23)
MoE-MLoRA for Multi-Domain CTR Prediction: Efficient Adaptation with Expert Specialization (2025-06-09)
DLF: Enhancing Explicit-Implicit Interaction via Dynamic Low-Order-Aware Fusion for CTR Prediction (2025-05-25)
Revisiting Feature Interactions from the Perspective of Quadratic Neural Networks for Click-through Rate Prediction (2025-05-23)
Field Matters: A lightweight LLM-enhanced Method for CTR Prediction (2025-05-20)
1st Place Solution of WWW 2025 EReL@MIR Workshop Multimodal CTR Prediction Challenge (2025-05-06)