Talks and Presentations

  1. T. Suzuki, S. Akiyama: Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods. The International Conference on Learning Representations 2021, (virtual), May 2021.

  2. S. Akiyama, T. Suzuki: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting. International Conference on Machine Learning 2021, (virtual), Jul. 2021.

  3. S. Akiyama, T. Suzuki: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting. Japanese Joint Statistical Meeting, Nagasaki (virtual), Sep. 2021.

  4. S. Akiyama, T. Suzuki: Training Two-Layer ReLU Neural Networks in Teacher-Student Settings through Unadjusted Langevin Algorithm. The 24th Information-Based Induction Science Workshop, online, Nov. 2021.

  5. S. Akiyama, T. Suzuki: Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods. Japanese Joint Statistical Meeting, Tokyo, Sep. 2022.

  6. S. Akiyama, K. Oko and T. Suzuki: Benign Overfitting of Two-Layer Neural Networks under Inputs with Intrinsic Dimension. The 25th Information-Based Induction Science Workshop, Ibaraki, Nov. 2022.

  7. K. Oko, S. Akiyama, T. Murata and T. Suzuki: Reducing Communication in Nonconvex Federated Learning with a Novel Single-Loop Variance Reduction Method. OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), New Orleans, USA, Dec. 2022.