# ML
Duan T, Avati A, Ding DY, Thai KK, Basu S, Ng AY, et al. NGBoost: Natural Gradient Boosting for Probabilistic Prediction. arXiv [cs.LG]. 2019. Available: http://arxiv.org/abs/1910.03225
(It sat on my reading list for a long time; I didn't read it until today because the title and abstract are not attractive at all.)
But it is a good paper: it digs into the fundamental reasons why some methods work and others don't.
When inferring probability distributions, it is natural to work with parametrized families of distributions (statistical manifolds) and then tune the parameters so that the distribution fits the dataset best.
The problem is the choice of objective function and optimization method. The paper uses a generic class of objectives (proper scoring rules) and a framework that optimizes the model along the natural gradient instead of just the gradient w.r.t. the parameters.
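Concretely, writing the scoring rule as $\mathcal{S}(\theta, y)$, the natural gradient is (as I read it) the ordinary gradient preconditioned by the Riemannian metric of the statistical manifold; for the log score that metric is the Fisher information:

$$
\tilde\nabla_\theta \mathcal{S}(\theta, y) = \mathcal{I}(\theta)^{-1}\,\nabla_\theta \mathcal{S}(\theta, y),
\qquad
\mathcal{I}(\theta) = \mathbb{E}_{y \sim p_\theta}\!\left[\nabla_\theta \log p_\theta(y)\,\nabla_\theta \log p_\theta(y)^{\top}\right].
$$

The $\mathcal{I}(\theta)^{-1}$ factor is what makes the update direction invariant to how the distribution is parametrized.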
Different parametrizations of the objective are like coordinate transformations, and the plain chain rule only treats the parameter space as "flat"; such a "flat" space is not necessarily a good choice for a high-dimensional problem. For a space that is only approximately flat in each small region, we can define distances the way differential geometry does[^1]. And just as covariant derivatives generalize derivatives to curved spaces, an analogous notion exists on statistical manifolds: the natural gradient.
Descending along the natural gradient navigates the loss landscape more efficiently.
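A minimal sketch of that last point, under my own toy setup (this is not the NGBoost boosting procedure itself, just plain vs. natural gradient descent on a Gaussian negative log-likelihood; the data, parameter names, and helper functions are made up for illustration):

```python
# Fit a univariate Gaussian N(mu, sigma^2) to data by gradient descent on the
# mean negative log-likelihood, parametrized as theta = (mu, log sigma).
# The "natural" variant preconditions the gradient with the inverse Fisher
# information, which makes the update invariant to the parametrization.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=0.5, size=1000)  # toy dataset

def nll_grad(mu, log_sigma, x):
    """Gradient of the mean NLL w.r.t. (mu, log_sigma)."""
    sigma2 = np.exp(2.0 * log_sigma)
    g_mu = np.mean(-(x - mu) / sigma2)
    g_ls = np.mean(1.0 - (x - mu) ** 2 / sigma2)
    return np.array([g_mu, g_ls])

def fisher(log_sigma):
    """Fisher information of N(mu, sigma^2) in (mu, log_sigma) coordinates."""
    sigma2 = np.exp(2.0 * log_sigma)
    return np.diag([1.0 / sigma2, 2.0])

def fit(natural, steps=500, lr=0.05):
    theta = np.array([0.0, 0.0])  # start at mu = 0, sigma = 1
    for _ in range(steps):
        g = nll_grad(theta[0], theta[1], x)
        if natural:
            # Precondition: natural gradient = Fisher^{-1} * gradient.
            g = np.linalg.solve(fisher(theta[1]), g)
        theta = theta - lr * g
    return theta[0], np.exp(theta[1])

print("plain gradient:   mu=%.3f sigma=%.3f" % fit(natural=False))
print("natural gradient: mu=%.3f sigma=%.3f" % fit(natural=True))
```

Note that the plain-gradient step for `mu` scales like $1/\sigma^2$, so its effective step size depends on where you are on the manifold, whereas the natural-gradient step for `mu` is simply proportional to the residual.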
[^1]: That is, it is a Riemannian manifold.