ReneDescartes wrote on 12/08/17 at 17:01:15:
But I agree it doesn't matter; if it didn't beat Stockfish today, it will tomorrow.
Yup, that in the end is "what matters".
I wrote my starting post in a relatively sceptical manner because Stockfish might indeed have won or drawn under better conditions: running the current dev build rather than the year-old commercial release; on a machine suited to its needs rather than the very weird high-thread-count, low-RAM hardware; with a time control it was built for (some of its time-allocation features provide Elo, and those simply disappear with a fixed amount of time per move); and with an opening book and tablebases enabled (Stockfish's creators know those exist and are used, so they naturally care less about the engine performing well in those stages, while AlphaZero never used them and was trained without them in mind - so simply disabling them favours DeepMind), etc.
But in the end, all that would likely accomplish is slowing Google down: forcing them to put some more work into optimizing the engine, letting it play against itself for longer, adding the "classic" engine features that traditional engines get right, adding support for opening books and tablebases, and so on.
The big thing here was the demonstration that the deep learning / Monte Carlo approach works for chess, when it was previously thought to peter out quickly (several older attempts got stuck around ~2400). This is an engine not specifically optimized for chess at all, and it has the very clear potential (if not already achieved) to be the strongest in the world; the remaining fine-tuning over whether it can truly beat Stockfish with Black is mostly an advertising matter.
Here's to hoping Google either sticks with it or lets other people get their hands on it, rather than saying "Ok yeah, we did it" and then scrapping the rest of the project - I'd love to play around with it myself.
Vaguely related: it'd be interesting to see how the engine handles being in a worse position; otherwise its concrete usefulness for human players could turn out to be relatively limited. As far as I understand, the AlphaGo program basically imploded when in a "losing" position: it picks moves purely by estimated winning percentage, so once it evaluated a position as sufficiently bad, it no longer cared much about the concrete moves, since none of them were anywhere close to winning - and it could collapse within a few moves from a merely slightly worse position. Of course, in chess that may well not apply at all, given the huge margin for draws.
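To make that failure mode concrete, here's a tiny hypothetical sketch (not AlphaGo's actual code; all move names and numbers are made up): if move selection is driven purely by estimated win probability, then in a lost position every candidate scores near zero, the gap between the "best" and "worst" move shrinks to noise level, and the choice between them carries almost no signal.

```python
# Hypothetical sketch of win-probability-based move selection.
# In a roughly equal position the probabilities clearly separate the
# moves; in a lost position they are all near zero, so the top move is
# barely distinguishable from the rest and tiny evaluation noise can
# decide what gets played. Move names and values are invented.

def pick_move(move_winrates):
    """Pick the move with the highest estimated win probability."""
    return max(move_winrates, key=move_winrates.get)

def margin(move_winrates):
    """Gap between the best and worst move's win probability."""
    vals = sorted(move_winrates.values(), reverse=True)
    return vals[0] - vals[-1]

# Equal-ish position: a clear gap between good and bad moves.
equal_pos = {"Nf3": 0.52, "e4": 0.49, "h4": 0.31}

# Lost position: everything is close to 0; the gap collapses.
lost_pos = {"Kg1": 0.012, "Rd8+": 0.011, "a3": 0.009}

print(pick_move(equal_pos), round(margin(equal_pos), 3))
print(pick_move(lost_pos), round(margin(lost_pos), 3))
```

A chess engine's evaluation has the draw as a buffer between "winning" and "losing", so the scores may never flatten out this completely - which is exactly why the effect might not carry over from Go.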