Surbhi Goel

Postdoctoral Researcher

Microsoft Research, New York City

I am currently a postdoctoral researcher at Microsoft Research NYC in the Machine Learning group. In Spring 2023, I will be starting as the Magerman Term Assistant Professor of Computer and Information Science at the University of Pennsylvania.

My research interests lie at the intersection of theoretical computer science and machine learning, with a focus on developing theoretical foundations for modern machine learning paradigms, especially deep learning.

Prior to joining MSR, I obtained my Ph.D. in the Computer Science department at the University of Texas at Austin, advised by Adam Klivans. My dissertation was awarded UTCS's Bert Kay Dissertation Award. My Ph.D. research was generously supported by the JP Morgan AI Fellowship and several fellowships from UT Austin. During my Ph.D., I visited IAS for the Theoretical Machine Learning program and the Simons Institute for the Theory of Computing at UC Berkeley for the Foundations of Deep Learning program (supported by the Simons-Berkeley Research Fellowship). Before that, I received my Bachelor's degree from the Indian Institute of Technology (IIT) Delhi, majoring in Computer Science and Engineering.

For prospective students who are interested in working with me: if you are a UPenn student, send me an email; if you are applying for grad school this cycle, please apply to UPenn CIS and list me as a potential research advisor.

Download my résumé.

Interests
  • Theory
  • Machine Learning
Education
  • PhD in Computer Science, 2020

    University of Texas at Austin

  • MS in Computer Science, 2019

    University of Texas at Austin

  • BTech in Computer Science and Engineering, 2015

    Indian Institute of Technology, Delhi

Recent Publications & Preprints

(2022). Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms. NeurIPS 2022.

(2022). Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit. NeurIPS 2022.

(2022). Inductive Biases and Variable Creation in Self-Attention Mechanisms. ICML 2022.

(2022). Understanding Contrastive Learning Requires Incorporating Inductive Biases. ICML 2022.

(2022). Anti-Concentrated Confidence Bonuses for Scalable Exploration. ICLR 2022.

(2022). Investigating the Role of Negatives in Contrastive Representation Learning. AISTATS 2022.

Outreach

Co-organizer
Co-founded a community-building and mentorship initiative for the learning theory community. Co-organized mentorship workshops at ALT 2021, COLT 2021, and ALT 2022, as well as in Fall 2022. Co-organized a graduate applications support program in collaboration with WiML-T.
Mentor

Professional Services

Program Committee
Program Committee
Program Committee
Virtual Experience Chair
Co-organized the virtual component of the hybrid conference, including its two-day virtual-only program.
Program Committee
Program Committee
Treasurer