Very Hot Topic (More than 25 Replies) Stockfish dethroned (big time)?! (Read 1089 times)
dfan
God Member
*****
Offline


"When you see a bad move,
look for a better one"

Posts: 691
Location: Boston
Joined: 10/04/05
Re: Stockfish dethroned (big time)?!
Reply #20 - 12/08/17 at 13:22:50
ReneDescartes wrote on 12/08/17 at 12:13:55:
bragesjo wrote on 12/08/17 at 09:08:48:
The one thing I still don't understand is why Stockfish got 64 threads and only 1 GB of RAM?

My presumption is that some pre-testing was done and that those were the conditions that produced the best-looking results.

This would surprise me immensely. I'm not going to say it has zero probability, but it would be considered a really strong and nasty accusation in the machine-learning community, and if they did it they would be aware that they were severely violating academic standards.

I do think it's quite likely that they set up Stockfish naively without worrying much about optimizing its performance, but it would really astonish me if they tried out lots of Stockfishes and picked the one that performed the worst.
  
dfan
Re: Stockfish dethroned (big time)?!
Reply #19 - 12/08/17 at 13:13:52
GabrielGale wrote on 12/08/17 at 08:48:25:
@bonsai, without having read the paper as yet, I agree that the whole exercise looks as if it was not optimised, a bit rushed?

I think they were less interested in creating the best possible chess engine than in showing that their approach was sufficient to create a better-than-state-of-the-art chess engine.

For example, adding tablebases would make it stronger (there's basically no downside), but from an academic point of view it's not interesting at all and in fact would lessen the magnitude of the result, because it would test the neural-net/MCTS framework less by not forcing it to learn low-material endgames.

(Maybe if they had beefed up Stockfish more, they would have had to do more work to optimize AlphaZero!)

Quote:
perhaps rushing for bragging rights before the end of the year?

The paper was released during NIPS, the biggest AI conference of the year, and I'm sure the timing was not coincidental.
  
ReneDescartes
God Member
*****
Offline


Qu'est-ce donc que je
suis? Une chose qui pense.

Posts: 843
Joined: 05/18/10
Gender: Male
Re: Stockfish dethroned (big time)?!
Reply #18 - 12/08/17 at 12:13:55
bragesjo wrote on 12/08/17 at 09:08:48:
The one thing I still don't understand is why Stockfish got 64 threads and only 1 GB of RAM?
When I analyse chess games I give it access to way more RAM on my laptop.
It affects gameplay greatly, at least on "normal laptops", but I have no idea what happens on supercomputers with 64 threads, or at short time controls...

My presumption is that some pre-testing was done and that those were the conditions that produced the best-looking results. Don't forget, an enormous amount of prestige comes with this, worth many millions of dollars in corporate reputation. Google know exactly what they are doing with respect to hardware. AlphaZero may not be ready to beat Stockfish at 40 ply yet. Nevertheless, it's still terrifying that it did this in four hours, even if that's the equivalent of a year on a normal cluster: AlphaZero was not designed for chess but is a generalist.
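As a rough sanity check on the "four hours" figure, here is a back-of-the-envelope calculation in Python, assuming the roughly 5,000 first-generation TPUs the paper reports using for self-play generation (the exact hardware mix is an assumption on my part):

```python
# Translate 4 wall-clock hours of massively parallel self-play into
# equivalent serial time on a single device.
tpus = 5_000                                     # from the paper (self-play)
hours = 4
device_hours = tpus * hours                      # 20,000 device-hours
years_on_one_device = device_hours / (24 * 365)  # roughly 2.3 years serially
print(f"{device_hours} device-hours ~= {years_on_one_device:.1f} years on one device")
```

So "four hours" of training corresponds to a couple of years of compute on a single device, which is broadly consistent with the "year on a normal cluster" framing above.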
  
bragesjo
God Member
*****
Offline


Long live the Nimzo Indian

Posts: 1488
Location: Eskilstuna
Joined: 06/30/06
Gender: Male
Re: Stockfish dethroned (big time)?!
Reply #17 - 12/08/17 at 09:08:48
The one thing I still don't understand is why Stockfish got 64 threads and only 1 GB of RAM?
When I analyse chess games I give it access to way more RAM on my laptop.
It affects gameplay greatly, at least on "normal laptops", but I have no idea what happens on supercomputers with 64 threads, or at short time controls...


  
GabrielGale
Senior Member
****
Offline


Who was Thursday?

Posts: 463
Location: Sydney
Joined: 02/28/08
Gender: Male
Re: Stockfish dethroned (big time)?!
Reply #16 - 12/08/17 at 08:56:45
PS: as has been noted, the man behind DeepMind is a former highly talented junior chess player (No. 2 in the world at the time??) who turned to computing.
For your consideration: Parimarjan Negi, the GM who currently holds the record as the second-youngest person to become a GM, also a highly regarded chess author and noted opening theoretician, is studying computing at Stanford.
What can we expect?!
Perhaps Google should snap him up before graduation ......
  

http://www.toutautre.blogspot.com/
A Year With Nessie ...... aka GM John Shaw's The King's Gambit (http://thekinggambit.blogspot.com.au/)
GabrielGale
Re: Stockfish dethroned (big time)?!
Reply #15 - 12/08/17 at 08:48:25
@dfan, thanks for your reply. I missed that bit about the authors.
I agree that deep learning/machine learning is currently the method. I am not familiar with developments after AlphaGo, but my initial thought, hence my query, was that deep learning is not that compatible with the current approach of chess engine programmers, where the engine is merely crunching plies, which is why GM consultants are needed to provide parameters? I may be wrong??
@bonsai, without having read the paper as yet, I agree that the whole exercise looks as if it was not optimised; a bit rushed, perhaps rushing for bragging rights before the end of the year?
I agree that with a proper GM trainer as consultant, i.e. providing scaffolding on how to train, the results could have been even more impressive. (Yes, imagine Aagaard as consultant!?)
From bonsai's suggestions, I see great advantages for chess players and for chess improvement the world around: if this ever becomes affordable (trickle-down effect), we are looking at AI-powered personal chess trainers with personalised training regimes targeted at specific weaknesses and strengths (throw in deliberate practice), and I think we are possibly looking at human performance beyond Elo 3000. Of course this may also mean GMs are going to get younger and younger. Also think what this means for older chess players who want to improve: personal trainers.
  

Bonsai
God Member
*****
Offline



Posts: 621
Joined: 03/13/04
Gender: Male
Re: Stockfish dethroned (big time)?!
Reply #14 - 12/08/17 at 06:40:22
To some extent, what these guys did was not even that optimized for chess (!). You would think it should be possible to do what they did in a more domain-specific way and to add other tweaks to make it even stronger (if nothing else, train for longer, or train against variants of itself and/or really good chess engines). I.e., as a statistician with a little bit of involvement in machine learning, I suspect that one could train the neural net for longer in more sophisticated ways and get even better performance. I guess they also have not truly fully evaluated its strength in all areas of play. E.g., are points being left on the table by sub-optimal endgame play? Are there openings/pawn structures it does not play as well as the rest (and then, how could one fix that, e.g. link in tablebases somehow, make it play lots of endgames or games in specific openings, etc.)? Those things could theoretically be weaknesses, unless the program gets into these situations often enough during training.

I'm sure there are a lot of things one could try here. Perhaps you could also train flashy, creative neural nets (perhaps seeding from Tal's games and giving a slightly higher score to a flashier game with sacrifices - however you judge that). You might even manage to get more human-like, dumbed-down AIs with certain personalities. And finally, to really get carried away into a lack of realism: we give a neural net a large database as the data, some opening positions and the matching Quality Chess books as the training set, and then give it a new opening position and hope it writes us a book.  Wink
  
dfan
Re: Stockfish dethroned (big time)?!
Reply #13 - 12/07/17 at 21:35:26
GabrielGale wrote on 12/07/17 at 20:21:04:
Two curious questions for those with the expertise: Is this so-called deep neural net viable for future AI?

Deep Learning is arguably the most successful current AI technique (it certainly has the most buzz within the larger field) and shows no sign of becoming obsolete soon. Who knows what will be prevalent in ten years, though.

Quote:
How does this compare to the modest effort of a single postgrad's paper last year on a similar undertaking (the one with Giraffe in the title)?

The Giraffe author (Matthew Lai) joined DeepMind and is a coauthor of the AlphaZero paper. Smiley

Quote:
BTW, DM seems to think the current method of programming chess engines will become obsolete.

It's a little hard to claim that when current-tech engines are already much better than the strongest humans. Certainly programmers vying to create the world's strongest engine will be pretty interested in these techniques, though. When the first AlphaGo paper came out, the best other Go programs jumped hundreds of Elo effectively overnight by copying AlphaGo's ideas, but I think that will be much harder to do in the domain of chess.
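For context on what "hundreds of Elo" means, here is the standard Elo expected-score formula (a general rating-system fact, not anything specific to the paper):

```python
# Standard Elo expected-score formula: the higher-rated player's
# expected score as a function of the rating difference.
def expected_score(rating_diff: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

# A jump of a few hundred points is enormous:
print(round(expected_score(300), 2))  # 0.85 - about an 85% expected score
```

In other words, a program that gains 300 Elo overnight goes from roughly even to scoring about 85% against its former self.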
  
GabrielGale
Re: Stockfish dethroned (big time)?!
Reply #12 - 12/07/17 at 20:21:04
Chessbase has a report, but I think Chess24 was one of the first to report. Their report seems to suggest Stockfish was not too disadvantaged, except perhaps in opening books. Caveat: I have yet to read the academic paper. In modern computer chess the opening book is important, but bear in mind that in the TCEC the openings for the first two rounds are fixed by an independent third party and the contestants have to play from a given position four moves deep.
Dana Mackenzie also has a report, and he has some interesting thoughts on the (in)famous Table 2 (my prediction of its future fame). He seems to have read the academic paper and, being a professional mathematician, will probably understand more of it than me. There is no explanation of why AlphaZero gave up on the Caro-Kann or the French, nor why it seemed to avoid the Indian Defences or the Sicilian. It seems the QGD was favoured (again without any explanation, but good news for sales of the recently published book from QC Smiley). Indeed, one of DM's comments was that computer programs are as yet unable to articulate "why", which he thinks is a crucially human skill, and therefore AI is not human yet! (Caveat: at the same time he is also publicising his co-authored book on cause and effect!!)
Personally, I am impressed and think this is an important step.
Two curious questions for those with the expertise: Is this so-called deep neural net viable for future AI?
How does this compare to the modest effort of a single postgrad's paper last year on a similar undertaking (the one with Giraffe in the title)?
BTW, DM seems to think the current method of programming chess engines will become obsolete.
  

gewgaw
God Member
*****
Offline


I love ChessPublishing.com!

Posts: 628
Location: europe
Joined: 09/09/04
Re: Stockfish dethroned (big time)?!
Reply #11 - 12/07/17 at 19:57:54
Very interesting link;
page 19:

Positions searched per second (Table S4):

Program      Chess      Shogi      Go
AlphaZero    80k        40k        16k
Stockfish    70,000k    -          -
Elmo         -          35,000k    -

Stockfish calculated 70 million positions per second - that should be enough.
Another observation: this algorithm isn't interested in Indian positions or the Sicilian Defence, and the stronger it gets, the more it dislikes the French and the Caro-Kann and prefers the Berlin.
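Taking the chess numbers in that table at face value (a rough comparison, not an exact benchmark), the gap in raw search speed works out to:

```python
# Positions per second in chess, as read off the table above.
alphazero_nps = 80_000
stockfish_nps = 70_000_000

ratio = stockfish_nps / alphazero_nps
print(f"Stockfish searches ~{ratio:.0f}x as many positions per second")  # ~875x
```

So AlphaZero reached its result while examining nearly three orders of magnitude fewer positions, relying on its evaluation rather than raw search volume.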
  

The older, the better - over 2200 and still rising.
JEH
God Member
*****
Offline


"Football is like Chess,
only without the dice."

Posts: 1393
Location: Reading
Joined: 09/22/05
Gender: Male
Re: Stockfish dethroned (big time)?!
Reply #10 - 12/07/17 at 13:40:58
IsaVulpes wrote on 12/06/17 at 12:19:53:
https://arxiv.org/pdf/1712.01815.pdf if you want a more detailed explanation of .. everything


Thanks, very interesting, especially

"Finally, we analysed the chess knowledge discovered by AlphaZero. Table 2 analyses the
most common human openings (those played more than 100,000 times in an online database
of human chess games (1)). Each of these openings is independently discovered and played
frequently by AlphaZero during self-play training. When starting from each human opening,
AlphaZero convincingly defeated Stockfish, suggesting that it has indeed mastered a wide spectrum
of chess play."

and what it discovered  Shocked
  

Those who want to go by my perverse footsteps play such pawn structure with fuzzy atypical still strategic orientations

Clowns to the left of me, jokers to the right, stuck in the middlegame with you
h4rl3k1n
YaBB Newbies
*
Offline


I Love Lurking On ChessPublishing!

Posts: 10
Joined: 09/05/16
Re: Stockfish dethroned (big time)?!
Reply #9 - 12/07/17 at 10:39:09
Interestingly, the machine spent quite some time overcoming the 2000-2200 threshold: it improved faster before that range and faster after it. There does seem to be some barrier there.
  
MartinC
God Member
*****
Offline


I Love ChessPublishing!

Posts: 1925
Joined: 07/24/06
Re: Stockfish dethroned (big time)?!
Reply #8 - 12/07/17 at 10:15:38
The training is the really computationally intensive bit with neural nets - I'm fairly sure that actually running them once trained doesn't require anything too crazy.

I genuinely didn't think this approach would work quite so well in chess - brute-force search is awfully effective, of course. It seems chess is a bit more interesting than we'd thought Smiley
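A toy illustration of that point (a hypothetical tiny network, nothing to do with AlphaZero's actual architecture): once the weights are fixed, inference is just a handful of multiply-adds, with none of the gradient bookkeeping that makes training expensive.

```python
import random

random.seed(0)

# Fixed, "already trained" weights for a tiny 4 -> 8 -> 1 net.
W1 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]
W2 = [random.gauss(0, 1) for _ in range(8)]

def evaluate(x):
    """Forward pass only: multiply-adds and a ReLU, no gradients."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

score = evaluate([0.1, -0.2, 0.3, 0.4])
print(type(score))  # <class 'float'>
```

Training, by contrast, repeats this forward pass millions of times while also computing and applying gradients for every weight, which is where the heavy hardware goes.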
  
Dink Heckler
God Member
*****
Offline


Love-Forty

Posts: 750
Joined: 02/01/07
Gender: Male
Re: Stockfish dethroned (big time)?!
Reply #7 - 12/07/17 at 09:10:24
Discussion of hashes, opening books, etc. somewhat obscures the bigger point: even if their machine had only attained, say, Elo 2000 in this way, it would have been an immensely impressive demonstration.

The authors went for maximum publicity and maybe cut a few corners in doing so, but any way you want to slice it, this looks incredibly impressive.
  

'Am I any good at tactics?'
'Computer says No!'
 
Bonsai
Re: Stockfish dethroned (big time)?!
Reply #6 - 12/07/17 at 06:59:20
Pretty impressive, even if there seems to be a question about the fairness of the Stockfish comparison. I'm inclined to believe they are not making this up, given that this is the DeepMind team, who created AlphaGo in a similar, if slightly more domain-specific, way. It would be fascinating to see whether it does anything very differently from current engines somewhere.
  