Visual judgments critically depend on (1) the detection of meaningful items against cluttered backgrounds and (2) the discrimination of an item from highly similar alternatives. Learning and experience are known to facilitate these processes, but the specificity with which learning operates is poorly understood. Here we use psychophysical measurements in human participants to test learning in two types of commonly used tasks that target segmentation (signal-in-noise, or “coarse” tasks) versus the discrimination of highly similar items (feature difference, or “fine” tasks). First, we consider the processing of binocular disparity signals, examining performance on signal-in-noise and feature difference tasks after a period of training on one of these tasks. Second, we consider the generality of learning across different visual features, testing performance on both task types for displays defined by disparity, motion, or orientation. We show that training on a feature difference task also improves performance on signal-in-noise tasks, but only for the same visual feature.