Eran Malach

Selected Publications

The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains

With: Ben Edelman, Ezra Edelman, Surbhi Goel, Nikolaos Tsilivis

NeurIPS 2024

Transcendence: Generative Models Can Outperform The Experts That Train Them

With: Edwin Zhang, Vincent Zhu, Naomi Saphra, Anat Kleiman, Ben Edelman, Milind Tambe, Sham Kakade

NeurIPS 2024

Repeat after me: Transformers are better than state space models at copying

With: Samy Jelassi, David Brandfonbrener, Sham Kakade

ICML 2024

Auto-Regressive Next Token Predictors are Universal Learners

ICML 2024

Pareto frontiers in neural feature learning: Data, compute, width, and luck

With: Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Cyril Zhang

NeurIPS 2023

Knowledge distillation: Bad models can be good role models

With: Gal Kaplun, Preetum Nakkiran, Shai Shalev-Shwartz

NeurIPS 2022

Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit

With: Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Cyril Zhang

NeurIPS 2022

Efficient Learning of CNNs using Patch Based Features

With: Alon Brutzkus, Amir Globerson, Alon Regev Netzer, Shai Shalev-Shwartz

ICML 2022

When Hardness of Approximation Meets Hardness of Learning

With: Shai Shalev-Shwartz

JMLR 2022

On the Power of Differentiable Learning versus PAC and SQ Learning

With: Emmanuel Abbe, Pritish Kamath, Colin Sandon, Nathan Srebro

NeurIPS 2021 (Spotlight)

Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels

With: Pritish Kamath, Emmanuel Abbe, Nathan Srebro

ICML 2021 (Spotlight)

The Connection Between Approximation, Depth Separation and Learnability in Neural Networks

With: Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir

COLT 2021

Computational Separation Between Convolutional and Fully-Connected Networks

With: Shai Shalev-Shwartz

ICLR 2021

Learning Parities with Neural Networks

With: Amit Daniely

NeurIPS 2020 (Oral presentation)

The Implications of Local Correlation on Learning some Deep Functions

With: Shai Shalev-Shwartz

NeurIPS 2020

Proving the Lottery Ticket Hypothesis: Pruning is All You Need

With: Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir

ICML 2020

ID3 Learns Juntas for Smoothed Product Distributions

With: Alon Brutzkus, Amit Daniely

COLT 2020

Is Deeper Better only when Shallow is Good?

With: Shai Shalev-Shwartz

NeurIPS 2019

SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data

With: Alon Brutzkus, Amir Globerson, Shai Shalev-Shwartz

ICLR 2018

Decoupling “when to update” from “how to update”

With: Shai Shalev-Shwartz

NIPS 2017
