
Learning Neural Networks with Two Nonlinear Layers in Polynomial Time

We give a polynomial-time algorithm for learning neural networks with one layer of sigmoids feeding into any Lipschitz, monotone activation function (e.g., sigmoid or ReLU). We make no assumptions on the structure of the network, and the algorithm …
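
For concreteness, here is a minimal sketch (in Python/NumPy) of the function class the abstract describes, not the paper's learning algorithm; the names `two_layer_net`, `W`, and `a`, and the choice of ReLU as the outer activation, are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def two_layer_net(x, W, a):
    # First nonlinear layer: sigmoids applied to W @ x (W is k x n).
    # Second nonlinearity: any Lipschitz, monotone outer activation;
    # ReLU is used here purely as an example.
    return max(0.0, float(a @ sigmoid(W @ x)))
```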

Learning One Convolutional Layer with Overlapping Patches

We give the first provably efficient algorithm for learning a one hidden layer convolutional network with respect to a general class of (potentially overlapping) patches. Additionally, our algorithm requires only mild conditions on the underlying …
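
The following sketch, again only an illustration and not the paper's algorithm, shows what a one hidden layer convolutional network over potentially overlapping patches can look like in one dimension; the averaging output layer, the ReLU activation, and the `stride` parameter are assumptions made for the example.

```python
import numpy as np

def conv_patch_net(x, w, stride=1):
    # One hidden layer: a shared filter w is applied to each length-k
    # window of x; windows overlap whenever stride < k. The average of
    # the ReLU outputs is an illustrative choice of output layer.
    k = len(w)
    outs = [max(0.0, float(w @ x[i:i + k]))
            for i in range(0, len(x) - k + 1, stride)]
    return float(np.mean(outs))
```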

Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks

We consider the problem of learning function classes computed by neural networks with various activations (e.g., ReLU or sigmoid), a task believed to be computationally intractable in the worst case. A major open problem is to understand the minimal …
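
To make the title's notion concrete, here is a toy sketch of inspecting the eigenvalue decay of a kernel Gram matrix; the RBF kernel, Gaussian data, and sample size are arbitrary choices for the illustration, and the link between such decay and polynomial-time learnability is the paper's contribution, not something this snippet establishes.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                      # toy sample
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / 2.0)                         # RBF kernel Gram matrix
eigvals = np.linalg.eigvalsh(K)[::-1]               # eigenvalues, descending
print(eigvals[:10] / eigvals[0])                    # note the rapid decay
```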

Reliably Learning the ReLU in Polynomial Time

We give the first dimension-efficient algorithms for learning Rectified Linear Units (ReLUs), which are functions of the form $x \rightarrow \max(0, w \cdot x)$ with $w \in S^{n-1}$. Our algorithm works in the challenging Reliable Agnostic learning …
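
A direct transcription of the hypothesis class as defined in the abstract; the helper name `relu` and the explicit normalization step are only for illustration, with the latter enforcing the constraint $w \in S^{n-1}$.

```python
import numpy as np

def relu(x, w):
    # The class from the abstract: x -> max(0, w . x), where w lies on
    # the unit sphere S^{n-1}; normalizing w enforces that constraint.
    w = w / np.linalg.norm(w)
    return max(0.0, float(w @ x))
```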