This study was designed to evaluate the reliability across formats of four questionnaires that are commonly used in hearing-aid rehabilitation and research. Hearing-aid users are, as a group, typically older and tend to use computers and the Internet less than younger age groups, for whom Internet administration of questionnaires has previously been validated. The present study indicates that at least a proportion of hearing-aid users are willing and able to use the Internet to complete questionnaires. It is worth noting that although the invitation letters were sent to equal numbers of men and women, a majority of the responses (80%) came from men, which may reflect a greater willingness to use the Internet among men in this age group.
Our results showed no order effects of presentation format: it did not matter whether participants completed the online or the paper versions of the questionnaires first.
The results of the current study are consistent with previous studies of other target groups [26, 27] in which the two administration formats have been compared. Scores on the outcome measures showed no significant differences across questionnaire formats for three of the four questionnaires administered in the current study. The psychometric results for the HHIE questionnaire are consistent with those reported in previous studies using the HHIE in the paper format [15, 28]. A significant main effect of format showed that participants on average rated their HHIE score 3.9 points higher, on a 100-point scale, in the Internet format than in the paper format. Earlier studies have hinted that participants may reveal more about themselves when communicating via a computer, which might explain the higher degree of hearing difficulty reported by participants using the online format. The effect size of this difference was small; the practical relevance of a difference of this magnitude therefore depends on the context in which the questionnaire is used. Buchanan's suggestion that separate norms should be derived for Internet-based and paper-based questionnaires may therefore be relevant for hearing-related measures as well, and particularly important for the HHIE questionnaire.
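For a paired design such as this one, a standardised effect size for the format difference can be computed as the mean of the paired score differences divided by their standard deviation (Cohen's d for dependent samples). The sketch below illustrates the calculation; the score values are hypothetical and are not data from this study.

```python
import statistics

def cohens_d_paired(scores_a, scores_b):
    """Cohen's d for paired scores: mean difference / SD of the differences."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

# Hypothetical HHIE totals for six respondents in each format
internet = [42, 30, 58, 22, 66, 38]
paper    = [38, 36, 50, 20, 54, 40]
print(round(cohens_d_paired(internet, paper), 2))
```

By conventional benchmarks, d around 0.2 is small, 0.5 medium, and 0.8 large, which is why a statistically significant 3.9-point format difference can still be of limited practical importance.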
Internal consistencies, as evaluated using Cronbach's α, were above .70 for all questionnaires (.75–.92). According to earlier research, this indicates that the internal consistency reliability of each questionnaire across formats is good (IOI-HA, SADL and HADS) or excellent (HHIE). The Cronbach's α values from the online versions of the questionnaires were in line with results from earlier studies using the paper-and-pencil format, all of which administered the questionnaires to a similar population and in the same language as the current study [15, 17, 22, 27, 32, 33].
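Cronbach's α is computed from the per-item variances and the variance of the total score: α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜₒₜₐₗ), where k is the number of items. A minimal sketch, using a hypothetical response matrix rather than data from this study:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]                          # number of items
    item_vars = X.var(axis=0, ddof=1)       # per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 4 items on a 1-5 scale
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5],
          [1, 2, 1, 2],
          [3, 3, 4, 3]]
print(round(cronbach_alpha(scores), 2))
```

Values of .70 and above are conventionally taken as acceptable, with the .75–.92 range observed here falling in the good-to-excellent band.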
Pearson product–moment correlations of 0.74 and above indicate high reliability across the two forms of administration. The results from the questionnaires in this study are well within the acceptable range for validity tests of this nature. The correlations for two of the questionnaires (HHIE and IOI-HA) were above 0.74; the correlations for the remaining two (SADL and HADS) were below 0.74 but still significant. The relatively small number of test subjects could account for the somewhat lower correlations obtained for the SADL and HADS questionnaires. The 3-week interval between questionnaire administrations could also have lowered the correlations for the HADS questionnaire, because separating the administration dates required participants to report their moods for two different weeks. The interval between tests 1 and 2 was chosen by the experimenters to be short enough to exclude clinical change but long enough to reduce recall bias. It is likely that the interval length affected the questionnaire results, and follow-up studies should examine other interval options.
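Cross-format reliability of this kind is the Pearson correlation between each respondent's total score in the two formats. A small self-contained sketch (the paired totals below are hypothetical, not data from this study):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paper vs. Internet total scores for six respondents
paper    = [20, 35, 50, 15, 40, 30]
internet = [24, 33, 54, 18, 42, 28]
print(round(pearson_r(paper, internet), 2))
```

An r at or above the 0.74 benchmark cited above would be read as high cross-format agreement; lower but significant values, as for SADL and HADS here, indicate weaker agreement that may partly reflect sample size.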
In this study, the total score for each questionnaire was analysed. For three of the four questionnaires used (HHIE, SADL and HADS), the items can be divided into subscales that would allow format differences to be analysed at the subscale level, but that analysis was not performed because of the low statistical power of this study. The conclusions drawn here are therefore based on the total score for each questionnaire and not on any subscales.
The participants were all clinical patients recruited via the public health care system, which suggests that the group is representative of the hearing-aid population. However, possession of an e-mail account was required for participation in the study. This requirement limits generalisability: even though the participants are clinically representative, the results can only be generalised to hearing-impaired adults with some type of regular Internet activity. Including more female participants would have improved the generalisability of the conclusions. Equal numbers of men and women were invited to participate, but more men than women responded, indicating that men may be more willing than women to participate in computer-related research studies.