# October 17th, 2012 @ 11:30am Shay Cohen (Columbia)

**Consistent and Efficient Algorithms for Latent-Variable PCFGs**

In the past few years, there has been increasing interest in the machine learning community in spectral algorithms for estimating models with latent variables. Examples include algorithms for estimating mixtures of Gaussians or the parameters of a hidden Markov model.

Until the introduction of spectral algorithms, the EM algorithm had been the mainstay of estimation with latent variables, but because EM is not guaranteed to converge to the global maximum of the likelihood function, it generally does not provide consistent estimates of the model parameters. Spectral algorithms, on the other hand, can often be shown to be consistent.

In this talk, I will present a spectral algorithm for latent-variable PCFGs, a model widely used for parsing in the NLP community. This model augments the nonterminals in a PCFG with latent states. These latent states refine the nonterminal categories in order to capture subtle syntactic nuances in the data. This model has been successfully implemented in state-of-the-art parsers such as the Berkeley parser.
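To make the latent-state refinement concrete, here is a minimal toy sketch (the grammar, rules, latent-state count, and all probabilities below are invented for illustration and are not from the talk): each nonterminal is split into latent variants, and the probability of an observed tree is obtained by marginalizing over all latent-state assignments.

```python
from itertools import product

M = 2  # number of latent states per nonterminal (toy size)

# Hypothetical latent-annotated rule probabilities for the single
# binary rule S -> NP VP, indexed by (parent state, left state, right state).
binary = {
    ("S", "NP", "VP"): {
        (0, 0, 0): 0.6, (0, 0, 1): 0.1, (0, 1, 0): 0.2, (0, 1, 1): 0.1,
        (1, 0, 0): 0.3, (1, 0, 1): 0.3, (1, 1, 0): 0.2, (1, 1, 1): 0.2,
    },
}
# Hypothetical emission probabilities P(word | annotated preterminal)
lexical = {
    ("NP", "dogs"): {0: 0.5, 1: 0.4},
    ("VP", "bark"): {0: 0.7, 1: 0.6},
}
# Distribution over the root nonterminal's latent state
root = {0: 0.5, 1: 0.5}

def tree_probability():
    """P(tree) for the fixed tree (S (NP dogs) (VP bark)),
    summing out every latent-state assignment."""
    total = 0.0
    for h, h1, h2 in product(range(M), repeat=3):
        total += (root[h]
                  * binary[("S", "NP", "VP")][(h, h1, h2)]
                  * lexical[("NP", "dogs")][h1]
                  * lexical[("VP", "bark")][h2])
    return total
```

The latent states never appear in the observed tree; only their marginal effect on the tree probability is visible, which is exactly what makes estimation with EM (or spectral methods) necessary.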

Our spectral algorithm for latent-variable PCFGs is based on a novel tensor formulation designed for inference with PCFGs. This tensor formulation yields an “observable operator model” for PCFGs which can be readily used for spectral estimation.
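To convey the flavor of such a tensor formulation (this is a simplified sketch under invented numbers, not the paper's exact operators): inside probabilities of a subtree become vectors indexed by latent state, and a binary rule becomes a third-order tensor that is contracted with the two children's inside vectors to produce the parent's inside vector.

```python
M = 2  # latent states per nonterminal (toy size)

# Hypothetical rule tensor T[h][h1][h2]: probability that a parent in
# latent state h rewrites with children in latent states h1 and h2.
T = [[[0.6, 0.1], [0.2, 0.1]],
     [[0.3, 0.3], [0.2, 0.2]]]

# Inside-probability vectors of the two children, indexed by latent state.
inside_left = [0.5, 0.4]
inside_right = [0.7, 0.6]

def apply_rule_tensor(T, left, right):
    """Contract the rule tensor with the children's inside vectors,
    yielding the parent's inside vector over latent states."""
    m = len(T)
    return [sum(T[h][h1][h2] * left[h1] * right[h2]
                for h1 in range(m) for h2 in range(m))
            for h in range(m)]

inside_parent = apply_rule_tensor(T, inside_left, inside_right)
```

Chaining such contractions bottom-up over a tree gives the inside probabilities needed for parsing, without ever enumerating latent-state assignments explicitly.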

The algorithm we developed is considerably faster than EM, as it makes only one pass over the data. Statistics are collected from the data in this pass, and singular value decomposition is performed on matrices containing these statistics. Our algorithm is also provably consistent in the sense that, given enough samples, it will estimate probabilities for test trees close to their true probabilities under the latent-variable PCFG model.
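As a rough illustration of the SVD step only (the matrix below is invented; the actual statistics in the algorithm are specific empirical co-occurrence counts collected in the single pass over the data), here is a pure-Python power-iteration sketch that extracts the leading singular value of a small statistics matrix:

```python
import math

# Invented toy "statistics" matrix, a stand-in for empirical counts
# gathered in one pass over a treebank.
A = [[3.0, 0.0],
     [0.0, 2.0]]

def top_singular_value(A, iters=100):
    """Power iteration on A^T A: returns the largest singular value of A."""
    n = len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        # Compute w = A^T (A v), then renormalize.
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(len(A))]
        w = [sum(A[i][j] * Av[i] for i in range(len(A))) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(len(A))]
    return math.sqrt(sum(x * x for x in Av))
```

In practice one would use a library SVD routine and keep the top singular subspace rather than a single singular value; the point is only that this step is a standard linear-algebra computation, not an iterative likelihood maximization like EM.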

If time permits, I will also present a method aimed at improving the efficiency of parsing with latent-variable PCFGs. This method relies on a tensor decomposition of the latent-variable PCFG. Because this decomposition is approximate, the resulting parser is approximate as well. Still, the quality of the approximation can be guaranteed theoretically by analyzing how approximation errors propagate through the parse trees.
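The efficiency gain from a tensor decomposition can be illustrated with a toy rank-1 case (all vectors below are invented, and the tensor is constructed to be exactly rank 1 so the shortcut is exact here; in the actual method the decomposition is approximate and higher-rank): if a rule tensor factors as T[i][j][k] = u[i]·v[j]·w[k], the O(m³) contraction with two child vectors collapses to two dot products and a scaling, i.e. O(m) work.

```python
# Invented rank-1 components (u, v, w) and child inside vectors (x, y).
u, v, w = [1.0, 2.0], [0.5, 0.25], [0.2, 0.8]
x, y = [0.3, 0.7], [0.9, 0.1]

# Full tensor T[i][j][k] = u[i] * v[j] * w[k]
T = [[[ui * vj * wk for wk in w] for vj in v] for ui in u]

# Exact O(m^3) contraction: out[i] = sum_{j,k} T[i][j][k] * x[j] * y[k]
exact = [sum(T[i][j][k] * x[j] * y[k]
             for j in range(2) for k in range(2))
         for i in range(2)]

# Factored O(m) shortcut using the decomposition directly:
# out[i] = u[i] * (v . x) * (w . y)
vx = sum(vj * xj for vj, xj in zip(v, x))
wy = sum(wk * yk for wk, yk in zip(w, y))
fast = [ui * vx * wy for ui in u]
```

When the decomposition is only approximate, `fast` deviates from `exact`, and the theoretical analysis mentioned above is about bounding how such per-rule deviations accumulate over a whole parse tree.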