Disclaimer: very little of this has been independently verified so far!
According to its developers, AlphaZero (as far as I understand it, the successor to AlphaGo, the Go program that beat an elite human player for the first time in history) learned chess "in 4 hours", then played a 100-game match against Stockfish 8 and scored +28 -0 =72 (including 3 wins with Black)!
The time control was supposedly 1 minute/move, with AZ reportedly pulling further ahead of Stockfish the longer the time control.
It uses a drastically different algorithm: rather than searching ~70 million positions per second (Stockfish), it examines just ~80 thousand, but uses its deep neural network to focus the search much more efficiently on promising variations (it plays like a human, so to speak).
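To make the "focused search" idea concrete, here is a rough sketch of the PUCT-style selection rule that AlphaZero's Monte Carlo tree search uses to decide which move to explore next: each candidate move balances its measured value (Q) against the network's prior belief in it (U). The field names and the constant `c_puct` are my own illustration, not taken verbatim from the paper:

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child node maximizing Q + U (PUCT rule).
    Each child is a dict with visit count N, total value W, and
    neural-network prior P (illustrative field names)."""
    total_n = sum(ch["N"] for ch in children)
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] > 0 else 0.0          # average value so far
        u = c_puct * ch["P"] * math.sqrt(total_n) / (1 + ch["N"])  # exploration bonus
        return q + u
    return max(children, key=score)

# Toy example: an unvisited move with a high network prior beats
# an already-visited mediocre one, so the search explores it next.
children = [
    {"N": 10, "W": 4.0, "P": 0.2},  # visited, average value 0.4
    {"N": 0,  "W": 0.0, "P": 0.6},  # unvisited, high prior
]
best = puct_select(children)  # picks the second child
```

The prior P is what lets the network steer the search toward a handful of candidate moves instead of brute-forcing everything, which is why 80k positions/second can compete with 70M.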
If all of this is true, and it ever makes its way into human hands (currently it's not exactly available, and it's very unclear how well it would fare if, e.g., it had to run on fewer than 64 cores), that would be big news indeed.
https://lichess.org/study/EOddRjJ8 You can check out 10 sample games of the 100-game match here (G7 + G8 are two of the three Black wins)
https://arxiv.org/pdf/1712.01815.pdf if you want a more detailed explanation of… everything
Right now it looks a fair bit suspicious to me, as we have no independent party confirming anything, and the games seem a bit strange -
https://lichess.org/0LUhNlLB here, for example, is one of Stockfish's White losses, which the quick Lichess analysis (also running Stockfish) evaluates as 5 inaccuracies + 1 mistake & 15 ACPL; basically unheard of in elite engine chess. That it might misevaluate AlphaZero's moves, if that program is indeed stronger than Stockfish itself, I can understand, but I see little reason why Stockfish would criticize *its own* moves to that extent (to compare: analysis of G94 in Houdini vs Stockfish from TCEC 2016
https://lichess.org/Fp9UmSGv)
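For anyone unfamiliar with the metric: ACPL (average centipawn loss) is, roughly, the mean drop in the engine's evaluation across a player's own moves. A minimal sketch of the idea (my own simplification, not Lichess's exact implementation, which also caps losses and handles mate scores):

```python
def acpl(evals_cp):
    """Average centipawn loss for White.
    evals_cp: (eval_before, eval_after) pairs in centipawns, from
    White's perspective, one pair per White move.
    Illustrative only -- real analyzers cap losses and handle mates."""
    losses = []
    for before, after in evals_cp:
        # A move only "loses" centipawns if the eval got worse for the mover.
        losses.append(max(0, before - after))
    return sum(losses) / len(losses)

# Example: three moves losing 0, 30, and 15 centipawns respectively.
print(acpl([(20, 20), (50, 20), (10, -5)]))  # 15.0
```

So 15 ACPL means Stockfish thinks the losing side was bleeding a noticeable amount of evaluation per move by its own standards, which is exactly what makes these games look odd for top-engine play.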