We study the problem of learning graphical models with latent variables. We give the \emph{first} efficient algorithms for learning: 1) ferromagnetic Ising models with latent variables under \emph{arbitrary} external fields, and 2) ferromagnetic Potts models with latent variables under unidirectional non-negative external fields. Our algorithms have optimal dependence on the dimension but suffer from a sub-optimal dependence on the sparsity of the underlying graph. Our results rely on two structural properties of the underlying graphical models. These properties in turn allow us to design an influence function that can be maximized greedily to recover the structure of the underlying graphical model. These structural results may be of independent interest.