Stockfish dethroned (big time)?! (Read 34473 times)
Pages: 1 [2] 3 4
fjd (God Member; Posts: 551; Ottawa; joined 09/22/16)
Re: Stockfish dethroned (big time)?!
Reply #39 - 01/19/18 at 19:30:30
For any chess24 premium members, one of the AlphaZero games is featured in Peter Svidler's "best games of 2017", and Jan Gustafsson has a video series out covering its treatment of the White side of the Queen's Indian.
  
dfan (God Member; Posts: 766; Boston; joined 10/04/05)
Re: Stockfish dethroned (big time)?!
Reply #38 - 12/16/17 at 14:07:18
GabrielGale wrote on 12/16/17 at 02:39:36:
Quote:
There is one other conceptual difference between the way that AlphaZero and other engines do business, which I think is really important. Engines provide an assessment in terms of material equivalent; e.g., White is 1.5 pawns ahead. AlphaZero evaluates the position in terms of its expected winning percentage. If White wins 40 percent of the random games, draws 50 percent, and loses 10 percent, then the evaluation is 0.65 points, the expected number of points that White will score per game. (0.65 = 1 x 0.40 + ½ x 0.50) This is much more logical than evaluating the position in terms of pawns; after all, we play chess to win games, not to win pawns.

What do you think?

All engines report their assessments in centipawns because that's what the UCI protocol supports. I think some of them use winning probability under the hood and convert it to centipawns for the UI; I know Houdini, at least, has operated like this in the past, though I don't know whether it still does.
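A sketch of what such a conversion could look like: an Elo-style logistic mapping between centipawns and expected score. The shape of the curve and the scale constant of 400 are illustrative assumptions here, not Houdini's actual internal formula.

```python
import math

def expected_score_from_cp(cp, scale=400.0):
    """Map a centipawn evaluation to an expected game score in [0, 1].

    Uses an Elo-style logistic curve; the scale constant is an
    illustrative assumption, not any engine's real internal mapping.
    """
    return 1.0 / (1.0 + 10.0 ** (-cp / scale))

def cp_from_expected_score(p, scale=400.0):
    """Inverse mapping: expected score (0 < p < 1) back to centipawns."""
    return -scale * math.log10(1.0 / p - 1.0)
```

With this mapping a score of 0 centipawns corresponds to an expected score of 0.5, and the two functions invert each other, which is all an engine needs to keep winning probability internally while reporting centipawns over UCI.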
  
GabrielGale (Senior Member; Posts: 471; Sydney; joined 02/28/08)
Re: Stockfish dethroned (big time)?!
Reply #37 - 12/16/17 at 02:39:36
These two comments from DM are interesting:
Quote:
horizon effect? To evaluate a position, it simply plays hundreds of random games from that position. To you or me this may seem like a crazy idea, but actually it makes a certain amount of sense. In some positions there may be only one “correct” way for White to win — but often in these positions Black is visibly in trouble anyway.
Because AlphaZero is playing complete games, albeit random and imperfect ones, it is not susceptible to the horizon effect. Consider, for example, blockaded or “fortress” positions. Give AlphaZero one of these same blockaded positions, and it will see that in the blockaded position White only wins 1 percent of the games and 99 percent end in draws. Therefore, it will avoid the blockaded position. Presto, one of the weaknesses of chess computers goes away.

Quote:
There is one other conceptual difference between the way that AlphaZero and other engines do business, which I think is really important. Engines provide an assessment in terms of material equivalent; e.g., White is 1.5 pawns ahead. AlphaZero evaluates the position in terms of its expected winning percentage. If White wins 40 percent of the random games, draws 50 percent, and loses 10 percent, then the evaluation is 0.65 points, the expected number of points that White will score per game. (0.65 = 1 x 0.40 + ½ x 0.50) This is much more logical than evaluating the position in terms of pawns; after all, we play chess to win games, not to win pawns.

What do you think?
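The arithmetic in the second quote can be sketched directly. Here `random_playout` is a hypothetical stand-in for a function that plays one random game to completion from a position; note that the real AlphaZero guides its search with a trained network rather than with uniformly random playouts, so this is only an illustration of the evaluation idea being quoted.

```python
def evaluate_by_playouts(position, random_playout, n_games=200):
    """Estimate a position's value as White's expected score over many
    playouts, as in the quoted description.

    `random_playout(position)` must return 1 (White win), 0.5 (draw)
    or 0 (White loss). Both arguments are hypothetical stand-ins.
    """
    total = sum(random_playout(position) for _ in range(n_games))
    return total / n_games

# The worked example from the quote: 40% wins, 50% draws, 10% losses
# gives an expected score of 1*0.40 + 0.5*0.50 + 0*0.10 = 0.65.
expected = 1 * 0.40 + 0.5 * 0.50 + 0 * 0.10
```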
  

http://www.toutautre.blogspot.com/
A Year With Nessie ...... aka GM John Shaw's The King's Gambit (http://thekinggambit.blogspot.com.au/)
GabrielGale (Senior Member; Posts: 471; Sydney; joined 02/28/08)
Re: Stockfish dethroned (big time)?!
Reply #36 - 12/16/17 at 00:41:07
Moving beyond the discussion of whether Stockfish was treated unfairly, to the question of whether deep learning provides a different, and improved (?), way to learn and analyse chess, Dana Mackenzie has another blog post on the issue:
http://www.danamackenzie.com/blog/?p=5072

I also note that GM Matthew Sadler is looking at the AlphaZero games. Hopefully he will provide some analyses and conclusions.
I am also hoping that Ken Regan will have a look and report his conclusions on his joint blog.
  

IsaVulpes (Senior Member; Posts: 345; joined 12/09/07)
Re: Stockfish dethroned (big time)?!
Reply #35 - 12/12/17 at 13:19:48
https://www.chess.com/news/view/alphazero-reactions-from-top-gms-stockfish-autho...
Some statements by top players; at the bottom there is also a long statement by the Stockfish developer.
  
IsaVulpes (Senior Member; Posts: 345; joined 12/09/07)
Re: Stockfish dethroned (big time)?!
Reply #34 - 12/09/17 at 09:52:02
MartinC wrote on 12/09/17 at 09:38:21:
Well, that's actually quite a classic practical use for human players in chess too - create a mess and hope to get lucky

If that is in reference to my last paragraph, I meant more something like
[chess diagram - the position is not recoverable from the page capture]
If you give this position to Stockfish, it says "White is much better" and wants to play ..Qb7, ..a6, ..Qc7, ..Bb7, etc.

If you give this position to AlphaZero, it *potentially* evaluates the winning percentage for ..Qb7 as 0 and then basically tosses a coin between playing ..Qb7, ..Qxa4, ..Qxb5, ..Bxd5, etc. (all of which it also evaluates at 0% winning chances), moves which obviously lose instantly; to the program it doesn't matter, because it thinks it's lost either way. (I don't know that it does this - none of the games had AZ trying to defend a worse position, perhaps it's better than the Go program in that respect, and as mentioned, the drawing margin in chess may make everything look different.)

IF that were the case, then it'd be very hard to use this program for human analysis: you'd e.g. try out an opening novelty, get what looks like a slight advantage, and rather than defending with good moves, the program would just randomly sacrifice pieces and be lost three moves later.
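The failure mode described above can be sketched in a few lines. The move names and winning percentages are hypothetical illustrations of the argument, not AlphaZero's actual move-selection code:

```python
import random

def pick_move(move_winrates):
    """Choose the move with the highest estimated winning percentage,
    breaking ties randomly - the policy being described above.

    `move_winrates` maps candidate moves to estimated win probabilities.
    """
    best = max(move_winrates.values())
    candidates = [m for m, p in move_winrates.items() if p == best]
    return random.choice(candidates)

# In a position the program already considers lost, every move scores 0%,
# so the stubborn defence and the instant blunders are indistinguishable:
lost_position = {"Qb7": 0.0, "Qxa4": 0.0, "Qxb5": 0.0, "Bxd5": 0.0}
```

When the probabilities differ, `pick_move` plays the best move; when they are all zero, it is literally a coin toss among losing moves, which is exactly why such an engine could be awkward for analysing worse positions.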
« Last Edit: 12/09/17 at 17:58:27 by IsaVulpes »  
MartinC (God Member; Posts: 2073; joined 07/24/06)
Re: Stockfish dethroned (big time)?!
Reply #33 - 12/09/17 at 09:38:21
Well, that's actually quite a classic practical use for human players in chess too - create a mess and hope to get lucky.

Doing that well would of course require a model of a fallible opponent, which I guess the AlphaZeros never really get (they train against themselves!).
  
IsaVulpes (Senior Member; Posts: 345; joined 12/09/07)
Re: Stockfish dethroned (big time)?!
Reply #32 - 12/09/17 at 09:33:54
ReneDescartes wrote on 12/08/17 at 17:01:15:
But I agree it doesn't matter; if it didn't beat Stockfish today, it will tomorrow.

Yup, that in the end is "what matters".

I wrote my opening post in a relatively sceptical manner because indeed, maybe Stockfish would have won or drawn under fairer conditions: with the current dev build (rather than the year-old commercial release); on a machine suited to its needs (rather than the very weird high-thread / almost-no-RAM hardware); at a time control it is built for (e.g. it has some time-allocation features that are worth some Elo, and those simply disappear with a fixed amount of time per move); and with an opening book and tablebases enabled (the Stockfish developers know those exist and are used, so they understandably care less about the engine performing well in those stages, while AlphaZero never used them and was "created" (created itself?) without them in mind - so simply disabling them favours DeepMind); etc.

But in the end, all that would likely accomplish is slowing Google down: forcing them to put more work into optimizing the engine - letting it play against itself for longer, adding the "classic" engine things that conventional programs do right, adding support for opening books and tablebases, etc.

The big thing here was the demonstration that the deep learning / Monte Carlo approach works for chess, when it was previously thought to peter out quickly (several older attempts got stuck around ~2400). This is an engine not optimized for chess at all, which has the very clear potential (if it hasn't achieved it already) of being the strongest in the world; the remaining fine-tuning over whether it can truly beat Stockfish with Black is mostly advertising business.

Here's hoping Google either sticks with it or lets other people get their hands on it, rather than saying "OK, we did it" and then scrapping the rest of the project - I'd love to play around with it myself.

Vaguely related: it'd be interesting to see how the engine handles being in a worse position; otherwise its concrete use for human players could turn out to be rather limited. As far as I understand, the AlphaGo program basically imploded when in a "losing" position, because it chooses moves purely by winning percentage: once it evaluated a position as sufficiently bad, it no longer cared much about the concrete moves, since none of them came anywhere close to winning, and it could collapse within a few moves from a merely slightly worse position. Of course, in chess that may not apply at all, thanks to the huge margin for draws.
  
GabrielGale (Senior Member; Posts: 471; Sydney; joined 02/28/08)
Re: Stockfish dethroned (big time)?!
Reply #31 - 12/08/17 at 21:37:43
FYI:
A reconstructed transcript of a Skype conference call held on November 21, 2017 between Nelson Hernandez of TCEC, Robert Houdart (developer of Houdini), Mark Lefler (programmer of Komodo) and GM Larry Kaufman (developer of Komodo):
http://www.chessdom.com/interview-with-robert-houdart-mark-lefler-and-gm-larry-k...

Quote:
I have one final question which you are all uniquely qualified to talk about. What changes do you foresee coming to computer chess in the next five years and what ramifications might they have? Mark, you first.
Mark: I think there is a lot that can still be done in terms of data mining. Taking a game and trying to extract information suggesting evaluation terms or pruning ideas or extensions, things like that. They have already started doing that.
We’re also really interested in Monte Carlo. I mean, what do you do when you now have a 44-core server? What is it going to be next year? At some point more cores don’t help very much. There is a website by Andreas Strangmuller who has done a lot of experiments. We have gone one to two, two to four, up to 32 processors, Stockfish and Komodo might gain 15 to 20 Elo or something going from 16 to 32, Stockfish even less. What do you do with all this hardware to use it more effectively? In Monte Carlo [garbled] statistics that might increase your winning chances. I think those are things you could work on.
Robert: Well, I think we are all waiting for artificial intelligence to pop up in chess after having seen the success of the artificial intelligence approach of Google for the Go game. And so basically what I would expect if some of these giant corporations would be interested is that in the next five years chess also might see that kind of development. For example the artificial intelligence for the evaluation of a position, it could produce some very surprising results in chess. And so, we’re probably waiting for that and then we can retire our old engines. Look at the AlphaChess engine that will be 4000 Elo. [chuckles]
Nelson: Yep, at that point we can all fade back into history. Larry, anything to add?
Larry: Well, I also followed closely the AlphaGo situation. The guy who is the head of it at Google Mind is a chess master himself, Demis Hassabis. Although Go is thought to be a much harder game than chess to beat the best humans at, and they have certainly proven that they can do that, it is so far yet to be proven that a learning program such as the latest one from DeepMind [can replicate that in chess]. Their latest learning program beat the pants off all other, previous Go programs. But that does not apply to chess. Nobody has a self-teaching chess program that can fight with Houdini or Komodo. That’s a fantasy. Maybe that’s the challenge, to get Google to prove that it applies to chess too. But who knows.
  

CanadianClub (Senior Member; Posts: 416; joined 11/11/12)
Re: Stockfish dethroned (big time)?!
Reply #30 - 12/08/17 at 19:32:52
For me, the results are outstanding. Beating even Stockfish 8, which runs on the 4-core processor of my Android smartphone, with such AI techniques is simply relevant for the future of computer chess and chess in general. I agree the conditions were not completely fair, but that's not the most relevant thing. I suppose they used Stockfish rather than Komodo or Houdini simply to avoid formal accusations of unfairness from the companies behind copyrighted engines.

Is THIS useful for us non-professional players? Not sure when, but of course it will be.

For me, opening play would be the first area affected.

We'll see.
  
fling (God Member; Posts: 1591; joined 01/21/11)
Re: Stockfish dethroned (big time)?!
Reply #29 - 12/08/17 at 18:51:16
tp2205 wrote on 12/08/17 at 17:35:02:
ReneDescartes wrote on 12/08/17 at 17:01:15:
dfan wrote on 12/08/17 at 13:22:50:
ReneDescartes wrote on 12/08/17 at 12:13:55:
bragesjo wrote on 12/08/17 at 09:08:48:
The one thing I still dont understand is why Stockfish got 64 threads and only 1 GB ram?

My presumption is that some pre-testing testing was done and that those were the conditions that produced the best-looking results.

This would surprise me immensely. I'm not going to say it has non-zero probability, but it would be considered a really strong and nasty accusation in the machine-learning community and if they did it they would be aware that they were severely violating academic standards.

I do think it's quite likely that they set up Stockfish naively without worrying much about optimizing its performance, but it would really astonish me if they tried out lots of Stockfishes and picked the one that performed the worst.


Naive? You're talking about Google and some of the greatest AI experts in the world. They don't know what a hash table does? And what machine (from this earth) has 64 processors and 1G of RAM? The artificiality of this artificial intelligence test is glaring.

But I agree it doesn't matter; if it didn't beat Stockfish today, it will tomorrow. I think Kurzweil is fatuous, but this result is still terrifying. Technologies of such power serving the tender mercies and reserved wisdom of those whose hands they fall into...


Agreed. I don't think these result are meant for researchers. (64 processors -- not cores -- processors and 1GB is indeed a joke.) Google wants to sell its services (probably trying to replace IBM's Watson) and "4 hours + Google most powerful machines" >> "years and years of research by many people" is easy to remember. The decision makers in many companies are not researchers so minor points like missing details and weird machine specs may be overlooked.

Until more details emerge I consider the whole thing not much more than reasonably clever advertising.   


The thing is that they wanted to claim AlphaZero crushed the reigning world champion of computer chess. Really clever marketing. But giving it 1 GB of hash seems to me like claiming you beat the world champion when you have in fact played Magnus Carlsen in a bullet game where you could see the board while Magnus played blindfolded and still had to move his own pieces. If you win such a game, sure, you have beaten the world champion, but there is a tiny difference from a real match.
  
Keano (God Member; Posts: 2915; Toulouse; joined 05/25/05)
Re: Stockfish dethroned (big time)?!
Reply #28 - 12/08/17 at 18:51:07
I have tried Stockfish, Komodo and so on, but I still prefer the evaluations given by Houdini 1.5.
  
tp2205 (Full Member; Posts: 218; joined 09/11/11)
Re: Stockfish dethroned (big time)?!
Reply #27 - 12/08/17 at 17:48:11
tp2205 wrote on 12/08/17 at 17:35:02:
...
Agreed. I don't think these result are meant for researchers. (64 processors -- not cores -- processors and 1GB is indeed a joke.) Google wants to sell its services (probably trying to replace IBM's Watson) and "4 hours + Google most powerful machines" >> "years and years of research by many people" is easy to remember. The decision makers in many companies are not researchers so minor points like missing details and weird machine specs may be overlooked.

Until more details emerge I consider the whole thing not much more than reasonably clever advertising.   


Sorry, some corrections. I looked at the paper for details, and it says 64 threads and a 1 GB hash size. It doesn't say anything about the number of processors or cores the 64 threads were running on, or about the machine's memory. Since you can run thousands of threads on one core, these are pretty meaningless numbers.
  
tp2205 (Full Member; Posts: 218; joined 09/11/11)
Re: Stockfish dethroned (big time)?!
Reply #26 - 12/08/17 at 17:35:02
ReneDescartes wrote on 12/08/17 at 17:01:15:
dfan wrote on 12/08/17 at 13:22:50:
ReneDescartes wrote on 12/08/17 at 12:13:55:
bragesjo wrote on 12/08/17 at 09:08:48:
The one thing I still dont understand is why Stockfish got 64 threads and only 1 GB ram?

My presumption is that some pre-testing testing was done and that those were the conditions that produced the best-looking results.

This would surprise me immensely. I'm not going to say it has non-zero probability, but it would be considered a really strong and nasty accusation in the machine-learning community and if they did it they would be aware that they were severely violating academic standards.

I do think it's quite likely that they set up Stockfish naively without worrying much about optimizing its performance, but it would really astonish me if they tried out lots of Stockfishes and picked the one that performed the worst.


Naive? You're talking about Google and some of the greatest AI experts in the world. They don't know what a hash table does? And what machine (from this earth) has 64 processors and 1G of RAM? The artificiality of this artificial intelligence test is glaring.

But I agree it doesn't matter; if it didn't beat Stockfish today, it will tomorrow. I think Kurzweil is fatuous, but this result is still terrifying. Technologies of such power serving the tender mercies and reserved wisdom of those whose hands they fall into...


Agreed. I don't think these results are meant for researchers. (64 processors - not cores - and 1 GB is indeed a joke.) Google wants to sell its services (probably trying to replace IBM's Watson), and "4 hours + Google's most powerful machines" >> "years and years of research by many people" is easy to remember. The decision makers in many companies are not researchers, so minor points like missing details and weird machine specs may be overlooked.

Until more details emerge I consider the whole thing not much more than reasonably clever advertising.
  
ReneDescartes (God Member; Posts: 1236; joined 05/17/10)
Re: Stockfish dethroned (big time)?!
Reply #25 - 12/08/17 at 17:01:15
dfan wrote on 12/08/17 at 13:22:50:
ReneDescartes wrote on 12/08/17 at 12:13:55:
bragesjo wrote on 12/08/17 at 09:08:48:
The one thing I still dont understand is why Stockfish got 64 threads and only 1 GB ram?

My presumption is that some pre-testing testing was done and that those were the conditions that produced the best-looking results.

This would surprise me immensely. I'm not going to say it has non-zero probability, but it would be considered a really strong and nasty accusation in the machine-learning community and if they did it they would be aware that they were severely violating academic standards.

I do think it's quite likely that they set up Stockfish naively without worrying much about optimizing its performance, but it would really astonish me if they tried out lots of Stockfishes and picked the one that performed the worst.


Naive? You're talking about Google and some of the greatest AI experts in the world. They don't know what a hash table does? And what machine (from this earth) has 64 processors and 1G of RAM? The artificiality of this artificial intelligence test is glaring.

But I agree it doesn't matter; if it didn't beat Stockfish today, it will tomorrow. I think Kurzweil is fatuous, but this result is still terrifying. Technologies of such power serving the tender mercies and reserved wisdom of those whose hands they fall into...
  