A Bayesian wrapper would address the points raised in the paper. If you wanted to incorporate these AI algorithms into a Bayesian framework, then I think it's much more effective to treat the algorithms as further steps in data reduction. For example, train some neural net, treat the outputs of the net as the actual measurement, and then add the trained neural net to your likelihood.
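A minimal numpy sketch of that idea, with everything hypothetical: `net` stands in for a frozen pretrained network, and its scalar output is treated as the measurement in a conjugate Bayesian regression (known noise sd, wide normal prior), so the net enters the likelihood only through its output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained neural net (hypothetical; in practice this would
# be a model you fit earlier and then froze).
def net(x):
    return np.tanh(x @ np.array([0.5, -0.3, 0.8]))

# Raw data: inputs x, scalar outcome y generated for illustration.
x = rng.normal(size=(200, 3))
z = net(x)                                  # net output = the "measurement"
y = 1.0 + 2.0 * z + rng.normal(scale=0.5, size=200)

# Conjugate Bayesian linear regression of y on z with known noise sd 0.5
# and prior beta ~ N(0, 10^2 I): the net's output feeds the likelihood.
X = np.column_stack([np.ones_like(z), z])
s2, tau2 = 0.5**2, 10.0**2
post_cov = np.linalg.inv(X.T @ X / s2 + np.eye(2) / tau2)
post_mean = post_cov @ (X.T @ y / s2)       # posterior mean of (intercept, slope)
```

The point of the sketch is the division of labor: the net does the data reduction, and the downstream Bayesian model supplies the uncertainty quantification.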
My response: yes, I give that advice too, and I’ve used this method in consulting problems. Recently we had a pleasant example in which we started by using the output from the so-called machine learning as a predictor, then we fit a parametric model to the machine-learning fit, and now we’re transitioning toward modeling the raw data. Some interesting general lessons here, I think. It could also be spun into a neurological narrative.
In particular, machine-learning-type methods tend to be crap at extrapolation and can have weird flat behavior near the edge of the data. IOW, it's a setup for overfitting if there's enough data. And in a high-dimensional situation, there won't be much data near the edges, so the fits will probably be poor there. I wonder if, in real animals, the brain has a method for reducing the number of elements in the hidden layers to reduce these issues for particular cases. Don't use the classification itself but the underlying decision value, and enter that into a logistic regression. That was common back in the 90s. We really couldn't see the forest for the trees: rather than just saying we're conditioning high-dimensional predictors with a rank-reduced SVD and then using logistic regression, there's a pile of tortured math and even more roundabout justification.
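Stripped of the tortured math, that 90s recipe is short. A numpy sketch under illustrative assumptions (50 correlated predictors whose signal lives in a rank-3 subspace; all names and sizes are made up): take the top singular vectors of the predictor matrix, then run plain logistic regression on the reduced scores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: signal lives in a rank-3 subspace of 50 predictors.
n, p, k = 300, 50, 3
U = rng.normal(size=(n, k))                 # latent factors
X = U @ rng.normal(size=(k, p)) + 0.1 * rng.normal(size=(n, p))
logits = U @ np.array([2.5, -2.0, 1.5])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Step 1: rank-reduced SVD of the centered predictors.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                           # scores on the top-k singular vectors
Z /= Z.std(axis=0)                          # standardize for stable fitting

# Step 2: ordinary logistic regression on the scores (gradient ascent).
Zd = np.column_stack([np.ones(n), Z])
beta = np.zeros(k + 1)
for _ in range(2000):
    pfit = 1.0 / (1.0 + np.exp(-(Zd @ beta)))
    beta += 1.0 * Zd.T @ (y - pfit) / n     # gradient of mean log-likelihood
accuracy = np.mean((pfit > 0.5) == (y > 0.5))
```

Two familiar steps, no special justification needed: the SVD does the dimension reduction and the logistic regression does the classification.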
The real contribution of that paper was disambiguation. So weird how small the world is. I first became aware of Jennifer Chu-Carroll because I followed the math blog of her husband Mark, but I didn't realize what sort of work she'd done. Then I discovered she'd worked with you on NLP.
I have been wondering how to quantify uncertainty in predictions from machine learning algorithms. How relevant are Bayesian methods in deep learning? I have not seen many papers published on Bayesian deep learning. It turns out that dropout, a very useful regularization method without a good theoretical justification, can be justified as a stochastic variational Bayes approximation to a fully Bayesian analysis of a deep neural network model.
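A practical consequence of that variational-Bayes reading is the MC-dropout trick: keep dropout switched on at prediction time and treat the spread of the sampled outputs as predictive uncertainty. A minimal numpy sketch with a tiny hypothetical one-hidden-layer net (all weights are made up; a real use would load a trained model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "pretrained" weights for a tiny one-hidden-layer net.
W1 = rng.normal(size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)
p_keep = 0.8                                # dropout keep probability

def predict_mc(x, n_samples=200):
    """MC dropout: sample dropout masks at test time, average over them."""
    outs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1 + b1, 0.0)    # ReLU hidden layer
        mask = rng.random(h.shape) < p_keep
        h = h * mask / p_keep               # inverted-dropout scaling
        outs.append(h @ W2 + b2)
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)

x = rng.normal(size=(5, 3))
mean, std = predict_mc(x)                   # per-input mean and spread
```

The per-input standard deviation is the (approximate) predictive uncertainty; points far from the training data tend to get wider spreads, which speaks directly to the extrapolation worry above.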
It's a true picture of my mental model of data analysis. How can you be sure of that? Sounds great, thanks for the link. Also, ZhuSuan just came out a few days ago; I haven't read through it yet. Please let me know if you have tried it!