Common Dorsal Stream Substrates for the Mapping of Surface Texture to Object Parts and Visual Spatial Processing

Valentinos Zachariou, Christine V Nikas, Zaid N Safiullah, Marlene Behrmann, Roberta Klatzky, Leslie G Ungerleider

Abstract

Everyday objects are often composed of multiple parts, each with a unique surface texture. The neural substrates mediating the integration of surface features on different object parts are not fully understood, and both the ventral and dorsal visual pathways may contribute. To explore these substrates, we collected fMRI data while human participants performed a difference detection task on two objects with textured parts. The objects could differ either in the assignment of the same textures to different object parts ("texture-location") or in the types of texture ("texture-type"). In the ventral stream, comparable BOLD activation levels were observed in response to texture-location and texture-type differences. In contrast, in a priori localized spatial processing regions of the dorsal stream, activation was greater for texture-location than for texture-type differences, and the magnitude of this activation correlated with behavioral performance. Subsequent psychophysical experiments confirmed that the mapping of surface texture to object parts relies on spatial processing mechanisms: participants detected a difference in the spatial distance of an object relative to a reference line while distracter objects, differing in either texture-location or texture-type, occasionally appeared. Distracter texture-location differences slowed detection of spatial distance differences, whereas texture-type differences did not. Importantly, these distracter effects were observed only when texture-location differences were presented within whole shapes and not between separated shape parts at distinct spatial locations. We conclude that both the mapping of texture features to object parts and the representation of object spatial position are mediated by common neural substrates within the dorsal visual pathway.

Trial registration: ClinicalTrials.gov NCT00001360.

Figures

Figure 1.
(A) Sample stimulus display of the main task of the fMRI study, consisting of two object images, each of which is overlaid with two texture features. Participants were asked to compare the presented objects and decide if they differed in terms of their texture. (B) Sample stimulus display with a texture-location difference between the two objects. On the left chair, the backrest area is covered with texture A and the seat area with texture B. On the right chair, the backrest area has the B texture and the seat area has the A texture. (C) Sample stimulus display with a texture-type difference between the two objects. On the left chair, the backrest area is covered with texture A and the seat area with texture B. On the right chair, the backrest area and seat are covered with textures C and D, which differ from A and B. (D) All 22 objects used in Experiments 1 and 2.
Figure 2.
(A) Two sample object outline stimuli. (B) Two sample texture cube stimuli. (C) A series of three sample trials from a texture cube block of the texture localizer, interleaved with fixation. The last trial depicts different items, and a response is required. (D) A series of three sample trials from an object outline block of the texture localizer, interleaved with fixation. The last trial depicts different items, and a response is required. (E) A series of three sample trials from a distance-matching block of the location localizer, interleaved with fixation. The last trial depicts a distance mismatch between the ball and line across the two panels, and a response is required. (F) A series of three sample trials from a brightness-matching block of the location localizer, interleaved with fixation. The last trial depicts a mismatch in the brightness of the ball across the two panels, and a response is required.
Figure 3.
(A) Cortical activation map (magnitude of activity; difference in beta-weight coefficients) revealed by the fMRI contrast of texture cubes > object outlines from the texture localizer. (B) Cortical activation map revealed by the fMRI contrast of texture cubes > distance-matching from the texture and location localizer tasks. (C) Cortical activation map revealed by the fMRI contrast of distance-matching > brightness-matching from the location localizer.
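For readers who want a concrete sense of how such a contrast map is built, the sketch below shows one conventional way to compute a group-level contrast as a per-voxel difference in beta-weight coefficients tested across participants with a paired t-test. The arrays, participant count, and threshold are illustrative placeholders, not values or code from the study.

```python
import numpy as np
from scipy import stats

# Illustrative per-participant beta weights (participants x voxels) for the two
# localizer conditions; in practice these would come from each participant's
# first-level GLM. Values here are random placeholders.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 16, 5000
beta_texture_cubes = rng.normal(0.5, 1.0, (n_subjects, n_voxels))
beta_object_outlines = rng.normal(0.3, 1.0, (n_subjects, n_voxels))

# Contrast: per-voxel difference in beta-weight coefficients, tested across
# participants with a paired t-test (texture cubes > object outlines).
diff = beta_texture_cubes - beta_object_outlines
t_vals, p_vals = stats.ttest_rel(beta_texture_cubes, beta_object_outlines, axis=0)

# Uncorrected threshold purely for illustration; a real analysis would apply
# an appropriate multiple-comparisons correction.
activation_map = np.where(p_vals < 0.001, diff.mean(axis=0), 0.0)
print(f"{int((activation_map != 0).sum())} voxels above threshold")
```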
Figure 4.
(A) Cortical activation maps (magnitude of activity; difference in beta-weight coefficients) revealed by the fMRI contrast of texture-location > texture-type, constrained within the localizer ROIs. (B) Cortical activation maps revealed by the fMRI contrast of texture-location > texture-type, at the level of the whole brain. In A and B, positive activations (yellow–orange) correspond to regions more active for texture-location difference detections compared to texture-type difference detections. Negative activations (cyan–blue) correspond to regions more active for texture-type compared to texture-location difference detections. The cyan outlines illustrate the brain regions that comprise the location localizer ROI, the yellow outlines illustrate the ROI identified by the fMRI contrast of texture cubes > distance-matching (the extended ventral stream ROI), and the green outlines illustrate the texture localizer ROI.
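The ROI-constrained version of this contrast amounts to masking the whole-brain map with the localizer-defined voxels before any statistics are computed. The sketch below illustrates the idea with synthetic data; the mask proportion and values are arbitrary placeholders, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 5000

# Synthetic stand-ins: a whole-brain contrast map (texture-location > texture-type;
# difference in beta-weight coefficients) and a boolean localizer ROI mask.
contrast_map = rng.normal(0.0, 1.0, n_voxels)
location_roi_mask = rng.random(n_voxels) < 0.05  # ~5% of voxels fall in the ROI

# Constrain the analysis to the a priori localizer ROI: only voxels inside the
# mask contribute to the ROI-level statistic.
roi_values = contrast_map[location_roi_mask]
print(f"Mean contrast within ROI: {roi_values.mean():.3f} over {roi_values.size} voxels")
```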
Figure 5.
(A) Correlation maps revealed by the interaction term in the linear mixed model contrasting the brain activity–behavior correlations between the texture-location and texture-type tasks. Separate analyses were run within each of the localizer ROIs. Positive activations (yellow–orange) correspond to regions, within a localized ROI, where the correlation between brain activity (beta-weight coefficients) and RT (msec) was stronger for texture-location than for texture-type difference detections (the unit is the difference in r values: the texture-location activity–RT correlation minus the texture-type activity–RT correlation). Negative activations (cyan–blue) correspond to regions, within a localized ROI, where the correlation between brain activity (beta-weight coefficients) and RT (msec) was stronger for texture-type than for texture-location difference detections. (B) Correlation maps of brain activity from the fMRI contrast of texture-location > texture-type (difference in beta-weight coefficients) correlated with RT (msec) for texture-location. No significant brain activity–behavior correlation was observed between the same activity and RT (msec) for texture-type. Warm colors indicate positive brain activity–behavior correlations, whereas cool colors indicate negative correlations. (C) Summary of B with average activity (from the fMRI contrast texture-location > texture-type) extracted separately for each participant using the group-level distance estimation localizer ROIs and correlated with RT (msec). The left panel depicts the correlation with RT (msec) for texture-location change detections, and the right panel depicts the correlation with RT (msec) for texture-type change detections.
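As a rough illustration of the brain activity–behavior analysis described above, the sketch below correlates per-participant ROI activity (beta weights) with RT in each condition and fits a mixed model with an activity-by-condition interaction. The data are synthetic and the model specification is an assumption made for illustration; it need not match the exact linear mixed model used in the study.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subjects = 16

# Synthetic per-participant data: ROI activity (beta-weight coefficients) and
# mean RT (msec) for the two difference-detection conditions. Placeholder
# values only; the study's data are not reproduced here.
df = pd.DataFrame({
    "subject": np.tile(np.arange(n_subjects), 2),
    "condition": ["texture_location"] * n_subjects + ["texture_type"] * n_subjects,
    "beta": rng.normal(0.5, 0.2, 2 * n_subjects),
    "rt": rng.normal(900, 100, 2 * n_subjects),
})

# Brain activity-behavior correlation within each condition.
for cond, grp in df.groupby("condition"):
    r, p = pearsonr(grp["beta"], grp["rt"])
    print(f"{cond}: r = {r:.2f}, p = {p:.3f}")

# Mixed model with an activity-by-condition interaction: does the activity-RT
# relationship differ between texture-location and texture-type detections?
model = smf.mixedlm("rt ~ beta * condition", df, groups=df["subject"]).fit()
print(model.summary())
```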
Figure 6.
(A) Sample stimulus display of Experiment 2. Each panel contained two objects separated by a black horizontal line. The rows of the two panels always matched with respect to the shape of the objects depicted, although the corresponding objects themselves could differ in terms of their texture features. (B) Stimulus display with one distance difference between the object pairs. A distance difference, if present, could be either easy or difficult to detect. (C) Stimulus display with a texture-location difference between the object pairs. (D) Stimulus display with a texture-type difference between the object pairs. (E) Stimulus display with two differences between the pairs, one in texture (texture-location depicted) and one in distance relative to the reference line. (F) Enlarged texture-location and texture-type differences. Gray brackets in the figure are used to highlight the corresponding objects with a difference between them.
Figure 7.
(A) ACC (% correct) and (B) RT (in msec) for Section 1 trials of Experiment 2 with a texture-type distracter difference, a texture-location distracter difference, or no difference (identical pairs) between the panels. In Section 1 of Experiment 2, participants performed spatial distance judgments (distance of objects relative to a reference line), and texture differences between objects acted as distracters. C and D depict ACC and RT, respectively, for Section 2 trials of Experiment 2, where differences in spatial distance (labeled "Location" on the graphs) acted as distracters and texture differences were the targets. The error bars denote ±1 SE.
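A minimal sketch of the kind of distracter analysis these panels summarize: each distracter condition's RTs are compared against the identical (no-difference) baseline with paired t-tests. All values below are synthetic placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n_subjects = 20

# Synthetic per-participant mean RTs (msec) for distance judgments under the
# three conditions of Experiment 2, Section 1. Placeholder values only.
rt = pd.DataFrame({
    "texture_location": rng.normal(990, 80, n_subjects),  # distracter present
    "texture_type": rng.normal(950, 80, n_subjects),      # distracter present
    "identical": rng.normal(950, 80, n_subjects),          # no difference
})

# Compare each distracter condition against the identical (no-difference)
# baseline; the prediction is slower RTs only with texture-location distracters.
for cond in ["texture_location", "texture_type"]:
    t, p = ttest_rel(rt[cond], rt["identical"])
    print(f"{cond} vs identical: t = {t:.2f}, p = {p:.3f}")
```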
Figure 8.
(A) Sample stimulus displays from Experiment 3. The left panel illustrates a sample whole-shape trial, and the right panel illustrates a sample parts trial. (B) Sample stimulus displays (left, whole shapes; right, parts condition) with one distance difference between the shape pairs. (C) A sample texture-type difference between two shapes, illustrated both in whole-shape and parts configuration. (D) A sample texture-location difference between two shapes, illustrated both in whole-shape and parts configuration. C and D, which appear within the stimulus displays shown in A and B, are depicted enlarged here for clarity. (E) A sample of six shapes (out of 18) together with their constituent parts configurations used in Experiment 3.
Figure 9.
(A) ACC (% correct) and (B) RT (in msec) for Section 1 trials of Experiment 3 with a texture-type distracter difference, a texture-location distracter difference, or no differences (identical). In Section 1, participants performed spatial distance judgments and texture differences between shapes acted as distracters. C and D depict ACC (% correct) and RT (in msec), respectively, for Section 2 trials of Experiment 3. In Section 2, texture differences were always targets, and there were no distracter differences present. The error bars denote ±1 SE.
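For Experiment 3, the critical question is whether texture-location distracters interfere only when textures appear on whole shapes rather than on spatially separated parts, i.e., a configuration-by-distracter interaction. The sketch below shows how such an interaction could be tested with a repeated-measures ANOVA; the data are synthetic placeholders and the analysis is illustrative, not the study's own.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(4)
n_subjects = 20
configurations = ["whole_shape", "parts"]
distracters = ["texture_location", "texture_type", "identical"]

# Synthetic mean RTs (msec) for the 2 (configuration) x 3 (distracter) design
# of Experiment 3, Section 1. Placeholder values only.
rows = [{"subject": s, "configuration": c, "distracter": d, "rt": rng.normal(950, 80)}
        for s in range(n_subjects) for c in configurations for d in distracters]
df = pd.DataFrame(rows)

# The key prediction is an interaction: texture-location distracters slow the
# distance judgments for whole shapes but not for spatially separated parts.
anova = AnovaRM(df, depvar="rt", subject="subject",
                within=["configuration", "distracter"]).fit()
print(anova)
```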

Source: PubMed
