Publications
Preprints
K. Oko, S. Akiyama, T. Suzuki: Diffusion Models are Minimax Optimal Distribution Estimators. [arXiv]
K. Oko, S. Akiyama, T. Murata, T. Suzuki: Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning. [arXiv]
S. Akiyama, M. Obara, Y. Kawase: Optimal Design of Lottery with Cumulative Prospect Theory. [arXiv]
International Conference Papers (accepted)
T. Suzuki, S. Akiyama: Benefit of Deep Learning with Non-convex Noisy Gradient Descent: Provable Excess Risk Bound and Superiority to Kernel Methods. International Conference on Learning Representations 2021. (selected as spotlight). [arXiv]
S. Akiyama, T. Suzuki: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting. International Conference on Machine Learning 2021. [arXiv]
K. Oko, S. Akiyama, T. Murata, T. Suzuki: Reducing Communication in Nonconvex Federated Learning with a Novel Single-Loop Variance Reduction Method. OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), New Orleans, USA, Dec. 2022.
S. Akiyama, T. Suzuki: Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods. International Conference on Learning Representations 2023. [arXiv]