This project is currently in progress. (edited Jan. 19th, 2019)
As of now (Jan. 19th, 2019), it can only run on Python + TensorFlow, with few features and some limitations.
I'm going to make a standalone executable or something similar, but I don't know when it will be done.
This thread might be updated anytime xD
BAD News:
My accuracy calculation was totally messed up before. I had actually noticed something was weird earlier, but I didn't think about it in that specific way...:
tl;dr: training accuracy got mixed up with validation accuracy; the model was actually completely overfitted and did nothing better than random guessing.
Things went wrong with tf.metrics.accuracy. It accumulates all historical data for computing the accuracy; it does NOT compute accuracy over only the current batch.
Thus, my training accuracy and validation accuracy were all averaged together.
I fixed that bug and found the model was doing nothing better than random guessing.
I hadn't thought of it that way even though it felt really weird. I double-checked all my code to make sure I wasn't contaminating the validation set, found nothing, and assumed everything was fine...
OMG.
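To illustrate the bug, here is a minimal plain-numpy stand-in (not the real TF op) showing how a streaming metric that keeps running totals blends training and validation numbers together:

```python
import numpy as np

class StreamingAccuracy:
    """Stand-in for the behavior of TF1's tf.metrics.accuracy: it keeps
    running totals across ALL update calls, so feeding it both training
    and validation batches averages the two accuracies together."""
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, labels):
        self.correct += int((preds == labels).sum())
        self.total += len(labels)
        return self.correct / self.total  # running average since the start

def batch_accuracy(preds, labels):
    """Accuracy over a single batch only."""
    return float((preds == labels).mean())

acc = StreamingAccuracy()
labels = np.array([1, 1, 1, 1])
train_report = acc.update(np.array([1, 1, 1, 1]), labels)  # 4/4 = 1.0
val_report = acc.update(np.array([0, 1, 0, 1]), labels)    # (4+2)/8 = 0.75
# The validation batch alone is only 0.5 accurate, but the streaming
# metric reports 0.75 because it still includes the training batch.
```

The fix is to either reset the metric's internal variables before each evaluation pass, or compute accuracy per batch directly.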
Current progress:
I created a 2nd version of the model, using (one-hot) embedded vectors instead of name strings.
I also introduced permutation invariance to the model using the method presented in [1].
The current best accuracy is 61.9% on gen7ou (this should not suffer from the above problem... it should not).
I'm very glad to receive any advice from you xD (maybe I'll post on r/ml ... idk)
I'll upload some graphs later, probably.
Structure:
- one-hot vectors of embedded pokemon data as input "x"
- get a latent vector for each pokemon through a network "P" (currently -128-dt-)
- combine the 6 pokemon vectors into 1 single team vector "t" using sum or max-pool
- concat the 2 team vectors and feed them into a classifier network "B" (p1 or p2 wins) (currently -512-512-2-)
- The network overfits a lot.
- max-pooling outperforms summation (avg-pooling), probably because this is a classification task? Inspired by [2].
- L1 or L2 regularization might not work ... ?
- The best accuracy was achieved via 0.7 / 0.5 dropout on network B (last 2 layers) with max-pooling.
- Currently the structure varies a lot and lacks experiments, so the above is just for reference.
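A rough numpy sketch of this permutation-invariant structure (random untrained weights, a single linear layer for "P" for brevity, and an assumed vocabulary size — not the actual trained model):

```python
import numpy as np

# Hypothetical sizes; the post mentions P: -128- and B: -512-512-2-.
N_SPECIES = 900   # size of the one-hot pokemon vocabulary (assumed)
D_LATENT = 128    # latent vector size per pokemon

rng = np.random.default_rng(0)
W_p = rng.normal(0, 0.01, (N_SPECIES, D_LATENT))  # network P (one layer here)
W_b1 = rng.normal(0, 0.01, (2 * D_LATENT, 512))   # network B, layer 1
W_b2 = rng.normal(0, 0.01, (512, 512))            # network B, layer 2
W_out = rng.normal(0, 0.01, (512, 2))             # p1-wins / p2-wins logits

def team_vector(one_hot_team):
    """one_hot_team: (6, N_SPECIES) -> (D_LATENT,) via shared P + max-pool.
    Max-pooling over the 6 members makes the result order-invariant."""
    latents = np.maximum(one_hot_team @ W_p, 0.0)  # shared per-pokemon network P
    return latents.max(axis=0)                     # permutation-invariant pooling

def predict(team1, team2):
    """Concat the two team vectors "t", run classifier B, softmax to win probs."""
    t = np.concatenate([team_vector(team1), team_vector(team2)])
    h = np.maximum(t @ W_b1, 0.0)
    h = np.maximum(h @ W_b2, 0.0)
    logits = h @ W_out
    e = np.exp(logits - logits.max())
    return e / e.sum()                             # (P(p1 wins), P(p2 wins))

# Shuffling a team's order must not change the prediction:
team1 = np.eye(N_SPECIES)[[1, 5, 10, 42, 77, 100]]
team2 = np.eye(N_SPECIES)[[3, 8, 21, 55, 88, 144]]
shuffled = team1[[4, 0, 3, 5, 1, 2]]
```

The final assertion is exactly the Deep Sets property from [1]: pooling with a symmetric function (sum or max) makes the team representation independent of slot order.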
[1]. Zaheer, Manzil, et al. "Deep sets." Advances in Neural Information Processing Systems. 2017.
[2]. Qi, Charles R., et al. "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.
Plans:
- Increase data-set size & quality (probably movesets, items, orders, etc.?)
- Any trick that works.
- I probably want to mask out some pokemon to empty slots while training...? idk if this helps
- any regularization (L1 & L2 seem not to work, I'll try orthogonal / spectral etc., but idk if it helps ...)
- Balance the data-set? (appearance / usage freq.)
- Currently I have no idea how to do this.
- Other structures
- fc layers don't make me feel good though ...
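The masking idea above could look something like this (a hypothetical augmentation sketch; the EMPTY_ID token and masking rate are made-up illustrations, not part of the actual code):

```python
import random

EMPTY_ID = 0  # hypothetical "empty slot" token; an assumption for illustration

def mask_team(team_ids, p=0.15, rng=random):
    """Randomly blank out some team slots during training, acting like
    dropout on the input so the model can't over-rely on any one member."""
    return [EMPTY_ID if rng.random() < p else pid for pid in team_ids]

random.seed(0)
team = [25, 6, 9, 3, 94, 130]
masked = mask_team(team)  # same length; some slots may now be EMPTY_ID
```

This only makes sense if the embedding has a dedicated "empty" entry, which the model would also need for teams with fewer than 6 members.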
I've trained a neural network that predicts battle results, which could be understood as an AI guessing who wins.
It learned how to predict results from (currently) thousands of battles in a specific tier.
The input is simple: a string containing the pokemon names (it might be impossible to get moves / spreads / items from a replay?), and the output is a win rate.
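For reference, pulling just the species names out of a replay could be sketched like this (assuming the Showdown sim protocol's `|poke|PLAYER|DETAILS|ITEM` team-preview lines; the exact format is an assumption here):

```python
def team_species(replay_log):
    """Extract team-preview species names from a Pokemon Showdown replay log.
    Lines look like '|poke|p1|Garchomp, M|' -> species 'Garchomp'."""
    teams = {"p1": [], "p2": []}
    for line in replay_log.splitlines():
        parts = line.split("|")
        if len(parts) >= 4 and parts[1] == "poke" and parts[2] in teams:
            # DETAILS is 'Species, gender/level/...'; keep only the species
            teams[parts[2]].append(parts[3].split(",")[0].strip())
    return teams

log = "|poke|p1|Garchomp, M|\n|poke|p1|Rotom-Wash|\n|poke|p2|Ferrothorn, F|"
teams = team_species(log)
```

Moves, spreads, and items are indeed absent from team preview, which is why only names are usable as input here.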
For details please see:
https://github.com/linkzeldagg/Pokemon-Showdown-Win-Rate-Prediction
Sorry for my poor English.
Thank you for reading this nonsense project > <