The Activity Inventory: an adaptive visual function questionnaire

Robert W Massof, Lohrasb Ahmadian, Lori L Grover, James T Deremeik, Judith E Goldstein, Carol Rainey, Cathy Epstein, G David Barnett

Abstract

Purpose: The Activity Inventory (AI) is an adaptive visual function questionnaire that consists of 459 Tasks nested under 50 Goals that in turn are nested under three Objectives. Visually impaired patients are asked to rate the importance of each Goal, the difficulty of Goals that have at least some importance, and the difficulty of Tasks that serve Goals that have both some importance and some difficulty. Consequently, each patient responds to an individually tailored set of questions that provides both a functional history and the data needed to estimate the patient's visual ability. The purpose of the present article is to test the hypothesis that all combinations of items in the AI, and by extension all visual function questionnaires, measure the same visual ability variable.

Methods: The AI was administered to 1880 consecutively recruited low vision patients before their first visit to the low vision rehabilitation service. Of this group, 407 were also administered two other visual function questionnaires randomly chosen from among the Activities of Daily Vision Scale (ADVS), National Eye Institute Visual Functioning Questionnaire (NEI VFQ), 14-item Visual Functioning Index (VF-14), and Visual Activities Questionnaire (VAQ). Rasch analyses were performed on the responses to each VFQ, on all responses to the AI, and on responses to various subsets of items from the AI.
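
For readers unfamiliar with the estimation step, the sketch below illustrates the kind of Rasch model fit described above, reduced to the simplest possible case: difficulty ratings dichotomized to 0/1 and estimated by joint maximum likelihood with plain gradient ascent. The function name rasch_jml, the 0/1 coding, and the gradient-ascent loop are illustrative assumptions for this sketch; the published analyses used polytomous rating-scale Rasch models fit with dedicated estimation software, not this code.

    import numpy as np

    def rasch_jml(X, n_iter=500, lr=0.05):
        """Fit a dichotomous Rasch model by gradient ascent on the joint
        log-likelihood.  X is a persons-by-items 0/1 matrix, where 1 means
        the person reports being able to do the item without difficulty
        (a simplified dichotomization of the AI's difficulty ratings)."""
        n_persons, n_items = X.shape
        theta = np.zeros(n_persons)          # person measures (visual ability, logits)
        beta = np.zeros(n_items)             # item measures (required visual ability, logits)
        for _ in range(n_iter):
            # Model probability of a "no difficulty" response for every person-item pair.
            p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
            resid = X - p                    # observed minus expected responses
            theta += lr * resid.sum(axis=1)  # d(logL)/d(theta_n) = sum_i (x_ni - p_ni)
            beta -= lr * resid.sum(axis=0)   # d(logL)/d(beta_i)  = -sum_n (x_ni - p_ni)
            beta -= beta.mean()              # anchor the scale: mean item measure = 0
        return theta, beta

In practice, persons or items with extreme raw scores (every item difficult, or none) have unbounded maximum likelihood estimates and are handled separately by dedicated Rasch software.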

Results: The pattern of fit statistics for AI item and person measures suggested that the estimated visual ability variable is not unidimensional. Reading-related items and other items requiring high visual resolution had smaller residual errors than expected, and mobility-related items had larger residual errors than expected. The pattern of person measure residual errors could not be explained by the disorder diagnosis. When items were grouped into subsets representing four visual function domains (reading, mobility, visual motor, visual information), and separate person measures were estimated for each domain as well as for all items combined, visual ability was equivalent to the first principal component, which accounted for 79% of the variance. However, confirmatory factor analysis showed that visual ability is a composite variable with at least two factors: one on which mobility loads most heavily and another on which reading loads most heavily. These two factors can account for the pattern of residual errors. High product-moment and intraclass correlations were observed when comparing different subsets of items within the AI and when comparing different VFQs.
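
The infit statistics referred to above, and plotted in Figs. 2, 3, and 5 to 7, can be computed from a fitted model as sketched below. This continues the simplified dichotomous example from the Methods sketch: the information-weighted (infit) mean square for each item, followed by its conversion to an approximate standard normal deviate (z-score) via the Wilson-Hilferty cube-root transformation. The function name infit_zstd and the dichotomous variance formulas are assumptions of this sketch, not the authors' software; person infit is obtained the same way by summing over items instead of persons.

    import numpy as np

    def infit_zstd(X, theta, beta):
        """Item infit mean square and its approximate z-score for a
        dichotomous Rasch fit (X, theta, beta as in rasch_jml above)."""
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        W = p * (1.0 - p)                  # model variance of each 0/1 response
        C = W * (1.0 - 3.0 * W)            # model fourth central moment of each 0/1 response
        # Information-weighted (infit) mean square for each item; expected value is 1.0.
        infit = ((X - p) ** 2).sum(axis=0) / W.sum(axis=0)
        # Model standard deviation of the mean square, then the Wilson-Hilferty
        # cube-root transformation to an approximate standard normal deviate.
        q = np.sqrt((C - W ** 2).sum(axis=0) / W.sum(axis=0) ** 2)
        zstd = (infit ** (1.0 / 3.0) - 1.0) * (3.0 / q) + q / 3.0
        return infit, zstd

Transformed values outside roughly ±2 SD of the expected value flag items (or persons) whose responses fit the unidimensional model worse or better than chance predicts, which is the interpretation applied to the dashed bounds in Figs. 2 and 5.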

Conclusions: Visual ability is a composite variable with two factors; one most heavily influences reading function and the other most heavily influences mobility function. Subsets of items within the AI and different VFQs all measure the same visual ability variable.

Figures

FIGURE 1.
Person-item map that illustrates the relationship between the distribution of person measures (visual ability histogram on the left) and the distribution of item measures (required visual ability histogram on the right). For the item measures, the white bars on the right represent the relative distribution of required visual ability among the 50 Goals and the black bars on the right represent the relative distribution of required visual ability among the 459 Tasks.
FIGURE 2.
Scatter plot of item measures (ordinate) for AI Goals (large gray circles) and Tasks (small black circles) vs. corresponding infit mean square transformed to a standard normal deviate (z-score) relative to the expected χ2 distribution (abscissa). The expected transformed infit mean square is 0.0 and the expected standard deviation is 1.0. The dashed vertical lines bound the region ±2 SD around the expected value.
FIGURE 3.
Scatter plot of item measures (ordinate) for AI Tasks vs. infit mean square transformed to a standard normal deviate (z-score) relative to the expected χ2 distribution. Each Task was classified according to visual function: reading (filled circles), visual information (asterisks), visual motor (open circles), or mobility (gray circles).
FIGURE 4.
Factor plot for person measures estimated from responses to Tasks representing different visual functions: reading (solid triangle), visual motor (open square), visual information (open circle), mobility (open triangle), and all Tasks combined for visual ability (filled circles). The cosine of the angle between any pair of vectors corresponds to the partial correlation between those measures when all but these two factors are fixed. The dashed line represents the location of the first principal component.
FIGURE 5.
Scatter plot of visual ability person measures estimated from difficulty ratings of AI Goals and Tasks (ordinate) vs. infit mean squares transformed to a standard normal deviate (z-score) relative to the expected χ2 distribution (abscissa). The vertical dashed lines bound ±2 SD around the expected value of the transformed infit mean square.
FIGURE 6.
Histogram of the transformed infit mean squares plotted in Fig. 5 (solid line), shown along with the expected standard normal density function (dashed line). The difference between the distributions is one of kurtosis (3.83 for the transformed infit mean squares).
FIGURE 7.
Scatter plot of person measures vs. transformed infit mean squares estimated from difficulty ratings of AI Tasks and Goals. The three most prevalent disorder diagnoses are compared: age-related macular degeneration (black circles), glaucoma (gray circles), and diabetic retinopathy (open circles). There is no significant difference among these diagnostic groups in either the visual ability or the infit mean square distributions.
FIGURE 8.
Scatter plot of person measures estimated from difficulty ratings of AI Tasks vs. person measures estimated from difficulty ratings of AI Goals. All of the points would fall on the solid identity line if the two sets of estimates were in perfect agreement.
FIGURE 9.
Left panel, Scatter plot of person measures estimated from difficulty ratings of the subset of AI Goals that fall under the Social Interactions Objective vs. person measures estimated from difficulty ratings of the subset of AI Goals that fall under the Daily Living Objective. All points would fall on the solid identity line if the two sets of measures were in perfect agreement. Middle panel, Similar scatter plot comparing person measures estimated from AI Recreation Goal difficulty ratings vs. Daily Living Goal difficulty ratings. Right panel, Same as the other two panels except comparing person measures estimated from AI Recreation Goal difficulty ratings vs. Social Interactions Goal difficulty ratings.
FIGURE 10.
Same as Fig. 9 but for person measures estimated from subsets of Tasks that fall under each of the three Objectives.
FIGURE 11.
Scatter plot of factor contribution index vs. person measure infit mean square (expressed as a z-score). There is a slight negative correlation (r = −0.17) with the trend represented by the regression line. This trend indicates that worse person measure fit is associated with a greater proportion of mobility Task items rated by the low vision subject. However, the regression line accounts for only about 3% of the variability in the z-transformed infit mean square distribution.
FIGURE 12.
Scatter plots of person measures estimated from ratings of items in each of the four VFQs vs. person measures estimated from difficulty ratings of AI Goals and Tasks: ADVS (upper left panel), NEI VFQ (upper right panel), VAQ (lower left panel), and VF-14 (lower right panel). The solid identity line is plotted along with the data in each panel.

Source: PubMed
