
Copyright and Open Access Statement

As an open-access academic journal, all articles are published under the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits users to freely share and reuse the content provided the original authors are credited. All articles are freely available for readers and institutions to read, download, cite, and distribute, and EWA Publishing does not charge readers or institutions any fees for the publication or distribution of the journal.