Abstract: Deep belief networks are a powerful way to model complex probability distributions. However, it is difficult to learn the structure of a belief network, particularly one with hidden units.
The Indian buﬀet process has been used as a nonparametric Bayesian prior on the structure of a directed belief network with a single inﬁnitely wide hidden layer.
Here, we introduce the cascading Indian buffet process (CIBP), which provides a prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference.
We use the CIBP prior with the nonlinear Gaussian belief network framework to allow each unit to vary its behavior between discrete and continuous representations. We use Markov chain Monte Carlo for inference in this model and explore the structures learned on image data.
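To give intuition for the generative process the abstract describes, the following is a minimal sketch (not the paper's inference code) of drawing a network structure from a cascading IBP: each layer's units act as "customers" in a standard Indian buffet process whose "dishes" become the units of the next, deeper layer, and the cascade stops when a layer recruits no units. The function names, the `alpha` concentration parameter usage, and the `max_depth` safety cap are illustrative assumptions, not from the paper.

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng):
    """Draw a binary connectivity matrix Z from an Indian buffet process.

    Customer i (0-indexed) takes existing dish k with probability
    m_k / (i + 1), where m_k is the number of previous takers,
    then samples Poisson(alpha / (i + 1)) brand-new dishes.
    """
    dish_counts = []  # m_k for each dish seen so far
    rows = []
    for i in range(n_customers):
        row = [int(rng.random() < m / (i + 1)) for m in dish_counts]
        for k, took in enumerate(row):
            dish_counts[k] += took
        n_new = rng.poisson(alpha / (i + 1))
        row.extend([1] * n_new)
        dish_counts.extend([1] * n_new)
        rows.append(row)
    Z = np.zeros((n_customers, len(dish_counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

def sample_cibp_structure(n_visible, alpha, rng, max_depth=20):
    """Cascade IBPs: dishes of layer m become customers of layer m+1.

    Returns a list of binary matrices; layers[m][i, k] = 1 means unit i
    in layer m has a parent k in layer m+1. The cascade terminates when
    an IBP draw produces zero dishes (max_depth is a safety cap only).
    """
    layers = []
    n_units = n_visible
    for _ in range(max_depth):
        Z = sample_ibp(n_units, alpha, rng)
        if Z.shape[1] == 0:
            break
        layers.append(Z)
        n_units = Z.shape[1]
    return layers

rng = np.random.default_rng(0)
layers = sample_cibp_structure(n_visible=5, alpha=1.0, rng=rng)
widths = [5] + [Z.shape[1] for Z in layers]
print("layer widths (visible first):", widths)
```

Because the expected number of new units shrinks at each layer, the cascade terminates with probability one, which is what makes the unbounded-depth prior tractable.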