Improve your chess with cutting-edge real-time move analysis and strategy assistance
Feedback for script 🏆 [#1 Chess Assistant] A.C.A.S (Advanced Chess Assistance System)
Right now the Lc0 port uses the CPU backends (OpenBLAS/Eigen), so only low block/filter counts are supported.
However, I'll see about adding some of those before the new year. I'm also looking to make Lc0 more stable in general.
I'll probably need to study neural networks a bit and then work out my own port of Lc0, since as far as I know no public ports support GPU.
Oh, that's an amazing idea! If you're building your own port, I recommend the ONNX backend: it's the most actively maintained, has the best network compatibility, and handles both GPU and CPU the best, I would say. And since it's a web-based environment, maybe this can help you: https://github.com/dignissimus/lc0.js/
And regarding the models I mentioned: after a review I've noticed that only two would work. The first is "T1-256x10-distilled-swa-2432500", an official small network listed on lczero.org; it weighs 30-40 MB, is similar in size to TK6430, and was made for both CPU and GPU 🙂. The second, "ID11258-112x9-se", is an even smaller model: 9 blocks × 112 filters, about 20 MB, and more recent and stronger than TK6430, if I'm not mistaken.
I think the first one is the better choice though; it's the same size but with much more modern training. It's on lczero.org's Best Net page.
Oh, and if you need to study neural networks, I think the Lc0 Discord teaches that? I'm not sure, but what I am sure about is that the conversion from Lc0's protobuf format to ONNX is built into the engine, which might save you hours of work haha
No pressure though, I just wanted to share some info. Have an amazing day! And I apologize, I'm a very wordy person.
And I apologize, I'm a very wordy person.
No worries, I enjoy in-depth discussions.
The conversion from Lc0's protobuf format to ONNX is built into the engine
I'm guessing you're talking about leela2onnx? I agree that the ONNX format is great; the "Maia 2" engine I added recently uses an ONNX 18x8x8 tensor for the board, similar to many Lc0 models. I'm sure it'll be a bit more complicated than just converting an Lc0 model to ONNX and running it on code similar to "Maia 2", though, so we'll see how that goes. It would be awesome to be able to run any Lc0 model within the browser.
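For readers unfamiliar with that 18x8x8 shape: such inputs are stacks of 8×8 binary "planes", one per piece type plus a few auxiliary planes. The sketch below shows the general idea in plain Python; the specific plane order and the choice of auxiliary planes (castling rights, side to move, a constant plane) are my assumptions for illustration, not Maia 2's or Lc0's actual layout.

```python
# Sketch of an 18x8x8 board-plane encoding, similar in spirit to the
# inputs many Lc0/Maia-style ONNX networks expect. The plane order and
# the auxiliary planes chosen here are assumptions for illustration.

PIECES = "PNBRQKpnbrqk"  # 12 piece planes: white PNBRQK, then black

def encode_board(piece_at, white_to_move, castling):
    """piece_at: dict mapping (rank, file) -> piece letter, e.g. (0, 4): 'K'.
    castling: 4 bools (white K-side, white Q-side, black K-side, black Q-side).
    Returns an 18x8x8 nested list of floats."""
    planes = [[[0.0] * 8 for _ in range(8)] for _ in range(18)]
    # Planes 0-11: one-hot piece occupancy per piece type.
    for (rank, file), piece in piece_at.items():
        planes[PIECES.index(piece)][rank][file] = 1.0
    # Planes 12-15: castling rights, broadcast over the whole board.
    for i, right in enumerate(castling):
        if right:
            planes[12 + i] = [[1.0] * 8 for _ in range(8)]
    # Plane 16: side to move; plane 17: constant ones.
    if white_to_move:
        planes[16] = [[1.0] * 8 for _ in range(8)]
    planes[17] = [[1.0] * 8 for _ in range(8)]
    return planes

# Tiny example: just the two kings from the starting position.
planes = encode_board({(0, 4): "K", (7, 4): "k"}, True, (True,) * 4)
print(len(planes), len(planes[0]), len(planes[0][0]))  # 18 8 8
```

In a real port this nested list would be flattened into a Float32Array (or NumPy array) and fed to the ONNX session as a 1×18×8×8 tensor.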
I have my own studies I'm focusing on currently so I'll have to see when I have time to research and test this.
Based on what I know, Lc0 already has the functionality to export to ONNX (using the "--print--onnx" option in the engine), so you don't need leela2onnx, but yes, you would still need custom code…
If you ever remember the topic, I'd be happy to help you test networks or brainstorm ideas. Good luck with your studies!
I love the script, I really like it, but I'm very curious about the power of the current version of Lc0. I know it can't be used because it's too resource-intensive, but I know that TK6430 is a smaller, less powerful neural network created by the community from one of the older networks Lc0 used (T60). And I know there are other community networks that are much more powerful than TK6430, such as "T1-256x10-distilled-swa-2432500", which is also very small, or "ID11258-112x9-se", which is smaller but more powerful than TK6430... or some of the latest networks from the BT4 or T82 series, but in a smaller size so they can run in the script. You can choose not to do it; it's just my curiosity.