Description
Base boosting is a generalization of gradient boosting that fits a hybrid additive and varying coefficient model.
- Namely, gradient boosting fits an additive model: \begin{equation} h(X; \{\alpha, \theta\}) = \alpha_{0} + \sum_{k=1}^{K} \alpha_{k} b(X; \theta_{k}), \end{equation} where the boosting mechanism begins its optimization in function space at a constant model.
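To make the additive model concrete, here is a minimal sketch of gradient boosting with squared-error loss, hand-rolled with scikit-learn regression trees as the base learners $b(X; \theta_{k})$. The data, the fixed shrinkage coefficient, and the choice of depth-2 trees are illustrative assumptions, not part of the original description.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data (assumed for illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Optimization in function space starts at the constant model alpha_0
# (here, the sample mean). Each b(X; theta_k) is then fit to the current
# residuals and added with a shrunken coefficient alpha_k.
alpha_0 = y.mean()
pred = np.full_like(y, alpha_0)
alpha = 0.1  # fixed shrinkage standing in for the alpha_k
for k in range(50):
    b_k = DecisionTreeRegressor(max_depth=2).fit(X, y - pred)
    pred += alpha * b_k.predict(X)

print(np.mean((y - alpha_0) ** 2))  # training MSE of the constant model
print(np.mean((y - pred) ** 2))     # training MSE after boosting
```

The training error after boosting is substantially below that of the constant starting point, which is the behavior the additive expansion is designed to deliver.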
- In contrast, base boosting fits the hybrid additive and varying coefficient model: \begin{equation} h(X; \{\alpha, \theta\}) = \gamma(X) + \sum_{k=1}^{K} \alpha_{k} b(X; \theta_{k}), \end{equation} where the boosting mechanism begins its optimization in function space at a base model, which may be non-constant. A special case is the coordinate functional: \begin{equation} \gamma(X) = \pi_{j}(X) = X_{j}, \end{equation} where $X_{j}$ denotes a prediction generated by the base model.
- This setup facilitates knowledge transfer between the base model and the boosting mechanism.
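The contrast with plain gradient boosting can be sketched by simply swapping the constant starting point for a non-constant base model $\gamma(X)$. In this illustration a linear regression stands in for a domain-specific base model; the data and hyperparameters are assumptions, not the package's defaults.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data (assumed for illustration only).
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Base boosting: optimization in function space starts at gamma(X),
# a non-constant base model, and the boosting mechanism then corrects
# its residuals.
gamma = LinearRegression().fit(X, y)
pred = gamma.predict(X)
alpha = 0.1  # fixed shrinkage standing in for the alpha_k
for k in range(50):
    b_k = DecisionTreeRegressor(max_depth=2).fit(X, y - pred)
    pred += alpha * b_k.predict(X)

print(np.mean((y - gamma.predict(X)) ** 2))  # base model training MSE
print(np.mean((y - pred) ** 2))              # after base boosting
```

For comparison, scikit-learn's `GradientBoostingRegressor` exposes a related idea through its `init` parameter, which accepts an estimator to supply the initial predictions instead of the default constant.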