The human eye can readily distinguish a white object from a grey one under almost any lighting conditions. However, it is generally held that a computer cannot make this distinction from camera sensor data alone when the object fills the entire field of view. I've been told the two are indistinguishable unless there is some other detail in the field of view to compare against. Is there an algorithmic way to tell a grey surface apart from a white surface under arbitrary lighting conditions?
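To make the ambiguity concrete: under a simple linear sensor model, a pixel records roughly reflectance × illumination, so a grey surface under bright light produces exactly the same reading as a white surface under dimmer light. A minimal sketch (the reflectance and illumination values are hypothetical, chosen only to illustrate the point):

```python
def pixel_value(reflectance, illumination):
    """Idealised linear camera response: ignores noise, gamma, and saturation."""
    return reflectance * illumination

# A white surface (reflectance ~0.9) under dim light...
white_dim = pixel_value(0.9, 100.0)

# ...and a grey surface (reflectance ~0.45) under light twice as bright.
grey_bright = pixel_value(0.45, 200.0)

# Both produce the identical sensor reading of 90.0, so from a single
# uniform patch the camera cannot recover reflectance and illumination
# separately.
print(white_dim, grey_bright)  # → 90.0 90.0
```

This is why the usual answer invokes extra context: without a second surface, a known light source, or multiple exposures, the product is all the sensor sees, and infinitely many (reflectance, illumination) pairs explain it.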