Ratio divergence learning using target energy in restricted Boltzmann machines: Beyond Kullback-Leibler divergence learning
We propose ratio divergence (RD) learning for discrete energy-based models, a method that utilizes both training data and a tractable target energy function. We apply RD learning to restricted Boltzmann machines (RBMs), which are a minimal model that satisfies... ...
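For background, here is a minimal sketch of the standard RBM parameterization and of the Kullback-Leibler (maximum-likelihood) objective that the title refers to; the symbols $W$, $\boldsymbol{b}$, $\boldsymbol{c}$, and $Z_\theta$ are our own notation and are not taken from this excerpt.

\begin{align}
  E_\theta(\boldsymbol{v},\boldsymbol{h}) &= -\boldsymbol{b}^{\top}\boldsymbol{v} - \boldsymbol{c}^{\top}\boldsymbol{h} - \boldsymbol{v}^{\top} W \boldsymbol{h},
  \qquad \boldsymbol{v}\in\{0,1\}^{N_v},\ \boldsymbol{h}\in\{0,1\}^{N_h},\\
  p_\theta(\boldsymbol{v}) &= \frac{1}{Z_\theta}\sum_{\boldsymbol{h}} e^{-E_\theta(\boldsymbol{v},\boldsymbol{h})},
  \qquad Z_\theta = \sum_{\boldsymbol{v},\boldsymbol{h}} e^{-E_\theta(\boldsymbol{v},\boldsymbol{h})},\\
  \theta^{\star} &= \operatorname*{arg\,min}_{\theta}\, D_{\mathrm{KL}}\!\left(p_{\mathrm{data}}\,\middle\|\,p_\theta\right)
  = \operatorname*{arg\,min}_{\theta}\, -\mathbb{E}_{\boldsymbol{v}\sim p_{\mathrm{data}}}\!\left[\log p_\theta(\boldsymbol{v})\right] + \mathrm{const.}
\end{align}

KL-divergence learning in this form relies only on samples from the training data; per the abstract, RD learning additionally exploits a tractable target energy function.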