August 2008
Dbn_Tutorial
(via) Topics: Energy models, causal generative models vs. energy models in overcomplete ICA, contrastive divergence learning, score matching, restricted Boltzmann machines, deep belief networks
June 2008
YouTube - The Next Generation of Neural Networks
(via) In the 1980s, new learning algorithms for neural networks promised to
solve difficult classification tasks, like speech or object recognition,
by learning many layers of non-linear features. The results were
disappointing for two reasons: There was never enough labeled data to
learn millions of complicated features and the learning was much too slow
in deep neural networks with many layers of features. These problems can
now be overcome by learning one layer of features at a time and by
changing the goal of learning. Instead of trying to predict the labels,
the learning algorithm tries to create a generative model that produces
data which looks just like the unlabeled training data. These new neural
networks outperform other machine learning methods when labeled data is
scarce but unlabeled data is plentiful. An application to very fast
document retrieval will be described.
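The "one layer of features at a time" idea from the talk can be sketched with a stack of restricted Boltzmann machines trained by one-step contrastive divergence (CD-1). This is a minimal, hypothetical illustration: the binary units, layer sizes, learning rate, and random data are all made up for the example, and it omits the refinements (momentum, mini-batches, fine-tuning) a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=50):
    """Train a binary RBM with one-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        h_prob = sigmoid(data @ W + b_h)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one reconstruction step (CD-1).
        v_recon = sigmoid(h_sample @ W.T + b_v)
        h_recon = sigmoid(v_recon @ W + b_h)
        # Update: data correlations minus reconstruction correlations.
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
        b_v += lr * (data - v_recon).mean(axis=0)
        b_h += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_v, b_h

def hidden_features(data, W, b_h):
    return sigmoid(data @ W + b_h)

# Greedy layer-wise stacking: train layer 1 on the raw (unlabeled) data,
# then train layer 2 on layer 1's feature activations, and so on.
data = (rng.random((100, 6)) < 0.5).astype(float)
W1, bv1, bh1 = train_rbm(data, n_hidden=4)
layer1 = hidden_features(data, W1, bh1)
W2, bv2, bh2 = train_rbm(layer1, n_hidden=3)
layer2 = hidden_features(layer1, W2, bh2)
```

Each layer is trained as a generative model of its input, so no labels are needed until (optionally) fine-tuning at the end, which is exactly the shift in the goal of learning the abstract describes.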
March 2007
IBM Research | IBM Haifa Labs | Machine learning for healthcare (EuResist)
(via) Generative-discriminative Hybrid Technique
We plan to use a technique that combines two kinds of learning algorithms, discriminative and generative: Bayesian networks in the generative phase, and SVMs in the discriminative phase.
Algorithms under the generative framework try to find a statistical model that best represents the data; predictions are then based on likelihood scores derived from the model. This category includes algorithms such as Hidden Markov Models (HMM) [1], Gaussian Mixture Models (GMM) [2], and more complicated graphical models such as Bayesian networks [3].
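The likelihood-scoring idea can be sketched with the simplest such model: fit one diagonal Gaussian per class (a one-component GMM) and classify a point by which class model assigns it the higher likelihood. The two-class synthetic data, class means, and variance floor here are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_gaussian(x):
    """Fit a diagonal-covariance Gaussian to the samples of one class."""
    mu = x.mean(axis=0)
    var = x.var(axis=0) + 1e-6  # small floor avoids division by zero
    return mu, var

def log_likelihood(x, mu, var):
    """Per-sample log-likelihood under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=1)

# Two synthetic, well-separated classes in 2-D.
x0 = rng.normal(0.0, 1.0, size=(200, 2))
x1 = rng.normal(3.0, 1.0, size=(200, 2))

m0 = fit_gaussian(x0)
m1 = fit_gaussian(x1)

def predict(x):
    # Score each point under both class models; pick the likelier class.
    return (log_likelihood(x, *m1) > log_likelihood(x, *m0)).astype(int)

preds = predict(np.vstack([x0, x1]))
labels = np.array([0] * 200 + [1] * 200)
accuracy = (preds == labels).mean()
```

A discriminative model such as an SVM instead learns the decision boundary directly, which is the contrast the hybrid technique above exploits.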