Having followed this thread for a bit, I would say that something like the following might work reasonably well:
Take all games from a person, but look only at the parts of each game where computer help was most plausible (i.e. exclude the opening moves that are still found in the games database, and also positions once the computer evaluation is hugely in favour of one side - even a cheater might play out the K+2P vs. K endgame without assistance).
Then you want to see how often the person's move matches the computer's (let's say Rybka's) choice, particularly in positions where there are several nearly equally good moves. As observed previously, it is plausible that a strong honest player finds the same decisive tactics as the engine; it is much less plausible that someone evaluates positional nuances exactly as Rybka does.
Top-3 matching or similar methods make sense in part because a cheater might let his computer run for hours, which is of course infeasible for someone trying to check lots of games for potential cheating. The shorter the time you can invest in checking a position, the more you get, firstly, more or less random fluctuations in the evaluations, and secondly, somewhat poorer moves available to the checker.
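To make the filtering and matching steps above concrete, here is a minimal sketch. Everything in it is an assumption for illustration, not an existing tool: the input format (each game as a list of `(player_move, engine_candidates, engine_evals_cp)` tuples, candidates ordered best-first, evaluations in centipawns) and the three cutoff constants are all made up and would need tuning.

```python
# Hypothetical cutoffs - these are illustrative guesses, not validated values.
BOOK_PLIES = 12        # assumed depth to which moves are "still in the database"
DECIDED_CP = 300       # skip positions already hugely in one side's favour
NEAR_EQUAL_CP = 20     # threshold for "a lot of nearly equally good moves"

def match_rates(games):
    """Return (top1_rate, top3_rate, n_positions) over the filtered moves.

    A position is counted only if it is past the assumed book depth, not
    yet decided, and has at least two candidate moves within
    NEAR_EQUAL_CP of the engine's best evaluation.
    """
    top1 = top3 = total = 0
    for game in games:
        for ply, (played, candidates, evals) in enumerate(game):
            if ply < BOOK_PLIES:
                continue                  # likely still in opening theory
            if abs(evals[0]) >= DECIDED_CP:
                continue                  # game effectively decided
            # keep only positions with several near-equal candidate moves
            near_equal = [m for m, e in zip(candidates, evals)
                          if evals[0] - e <= NEAR_EQUAL_CP]
            if len(near_equal) < 2:
                continue
            total += 1
            if played == candidates[0]:
                top1 += 1
            if played in candidates[:3]:
                top3 += 1
    if total == 0:
        return 0.0, 0.0, 0
    return top1 / total, top3 / total, total
```

In practice the candidate moves and evaluations would come from running an engine in multi-PV mode over each position; the sketch only shows the counting logic once those numbers exist.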
All of this then needs to be put into a proper statistical model, which will then be able to tell you how the percentage of computer-matching moves compares to
- pre-computer age high-level correspondence chess
- very recent computer age correspondence chess (perhaps not too high-level, so as to look at players who might be very likely to accept a lot of computer suggestions - let's say 2100 to 2300 rating?)
In particular you might use the comparison between those two baselines to figure out what kind of statistical model actually fits this data well. Some logistic regression model (match/no match per move) might be an option.
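As a sketch of the logistic regression idea in its simplest form: regress the per-move match indicator on a single binary group variable (baseline CC games vs. the suspect's games). With one binary predictor the maximum-likelihood fit has a closed form, so no fitting library is needed; the function names and interface below are my own invention for illustration.

```python
import math

def logit(p):
    """Log-odds transform."""
    return math.log(p / (1 - p))

def fit_group_logit(matches_a, n_a, matches_b, n_b):
    """Logistic regression of 'move matched engine' on a binary group
    indicator (group A = baseline CC players, group B = suspect).

    With a single binary predictor the MLE is closed-form:
    intercept = logit of the baseline match rate, and the coefficient
    is the log odds ratio between the two groups. Returns
    (intercept, coefficient, Wald z-statistic).
    """
    p_a = matches_a / n_a
    p_b = matches_b / n_b
    intercept = logit(p_a)
    coef = logit(p_b) - logit(p_a)
    # standard error of the log odds ratio (Wald approximation)
    se = math.sqrt(1 / matches_a + 1 / (n_a - matches_a)
                   + 1 / matches_b + 1 / (n_b - matches_b))
    return intercept, coef, coef / se
```

A large positive z-statistic would say the suspect's odds of matching the engine are significantly higher than the baseline's; a fuller model would add covariates such as position complexity or the number of near-equal moves.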
However, even if you conclude that the play matches computer play better than past high-level CC, you would still have the problem of deciding why that is:
- the player in question has a different style that more closely matches the computer (I guess less likely, but at least possible)
- the player in question is more precise than past CC players (hopefully accounted for if one looks only at situations where there were a lot of very similarly evaluated moves, since even a very precise human should not consistently mirror the computer's "positional" decisions there)
- the player is quite strong and the game sample just happens to be "unlucky", producing a lot of matches with the computer's moves by chance.
- the player used computer help.
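The "unlucky sample" explanation in the list above can at least be quantified. Assuming (hypothetically) that a clean player of this strength matches the engine with some baseline probability per filtered position, the chance of seeing an observed match count or higher by luck alone is a binomial tail probability:

```python
from math import comb

def tail_prob(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that a clean player
    with baseline per-position match rate p would show k or more engine
    matches in n filtered positions purely by luck."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))
```

For example, if the baseline rate were 40% and a suspect matched in 60 of 100 filtered positions, `tail_prob(60, 100, 0.4)` gives the probability of that happening by chance; a tiny value would make the "unlucky sample" explanation hard to sustain, though it still would not distinguish the remaining explanations from each other.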
What you would of course have a hard time detecting is a player who makes all the positional decisions himself and only checks whether the computer sees something important...