Original Investigation | Clinical Sciences

Readability Assessment of Online Ophthalmic Patient Information

Matthew R. Edmunds, MRCP, MRCOphth1; Robert J. Barry, BMedSci(Hons), MBChB1; Alastair K. Denniston, MA, MRCP, FRCOphth, PhD1,2
Author Affiliations
1Academic Unit of Ophthalmology, College of Medical and Dental Sciences, University of Birmingham, Birmingham and Midland Eye Centre, Birmingham, England
2University Hospitals Birmingham NHS Foundation Trust, Birmingham, England
JAMA Ophthalmol. 2013;131(12):1610-1616. doi:10.1001/jamaophthalmol.2013.5521.

Importance  Patients increasingly use the Internet to access information related to their disease, but poor health literacy is known to impact negatively on medical outcomes. Multiple agencies have recommended that patient-oriented literature be written at a fourth- to sixth-grade (9-12 years of age) reading level to assist understanding. The readability of online patient-oriented materials related to ophthalmic diagnoses is not yet known.

Objective  To assess the readability of online literature specifically for a range of ophthalmic conditions.

Design and Setting  Body text of the top 10 patient-oriented websites for 16 different ophthalmic diagnoses, covering the full range of ophthalmic subspecialties, was analyzed for readability, source (United Kingdom vs non–United Kingdom, not for profit vs commercial), and appropriateness for sight-impaired readers.

Main Outcomes and Measures  Four validated readability formulas were used: Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and Gunning Fog Index (GFOG). Data were compared with the Mann-Whitney test (for 2 groups) and Kruskal-Wallis test (for more than 2 groups) and correlation was assessed by the Spearman r.

Results  None of the 160 webpages had readability scores within published guidelines, with 83% assessed as being of “difficult” readability. Not-for-profit webpages were of significantly greater length than commercial webpages (P = .02), and UK-based webpages had slightly superior readability scores compared with those of non-UK webpages (P = .004 to P < .001, depending on the readability formula used). Of all webpages evaluated, only 34% included a facility to adjust text size to assist visually impaired readers.

Conclusions and Relevance  To our knowledge, this is the first study to assess readability of patient-focused webpages specifically for a range of ophthalmic diagnoses. In keeping with previous studies in other medical conditions, we determined that readability scores were inferior to those recommended, irrespective of the measure used. Although readability is only one aspect of how well a patient-oriented webpage may be comprehended, we recommend the use of readability scoring when producing such resources in the future. Minimum readability policies and inclusion of facilities within webpages to maximize viewing potential for visually impaired readers are important to ensure that online ophthalmic patient information is accessible to the broadest audience possible.


It has long been recognized that the Internet is a key resource for patients seeking information related to their health problems.1 With increasing demands on ophthalmic services, shorter clinic appointments, and recent advancements in ophthalmic treatments and technologies (eg, anti–vascular endothelial growth factor drugs), there may be less time available for effective patient education. As a result, use of the Internet to supplement health care information provided during the face-to-face encounter has increased. Indeed, in 2012, it was determined that 81% of US adults use the Internet, with 71% searching for health information.2

This is understandable, because educational material can reinforce verbal communication and improve understanding and recall of what has been said, with subsequent reduction in anxiety, increased patient satisfaction, and improved decision making.3,4 Moreover, patient education is crucial to increase understanding and treatment compliance and to generally enhance the physician-patient relationship.5,6

However, it is difficult for those without medical knowledge to judge the accuracy, quality, and relevance of online information.7 Furthermore, although physicians may encourage patients to seek out material related to their medical condition,8 they may also overestimate their patients’ health literacy by using medical words and phrases, both spoken and written, that exceed patients’ capacity for understanding.9 Certainly, excessive levels of reading difficulty have been previously noted in patient information.10 In addition, online information may be affected by biases related to commercial interests, health beliefs, or unsubstantiated conjecture.11 As online health care information becomes increasingly accessible to patients, it is vital to make the available resources more readable and, consequently, more comprehensible.

Health literacy has been defined as “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.”12 No information currently exists on the level of health literacy in the United Kingdom. However, the Skills for Life Survey demonstrated that England has low levels of general literacy, with 46% of participants having literacy below that required to achieve their full potential and 3% being functionally illiterate.13 It is thought very likely that levels of health literacy are similarly low. This is significant, given that those with lower health literacy have been shown to have less knowledge of their disease, more risk of hospitalization, inferior treatment compliance, poorer health, and higher mortality than people with adequate health literacy.13,14

A key aspect of health literacy is readability. Readability is defined as “the ease with which written materials are read” and is a critically important factor in assessing how well a patient-focused resource may be comprehended, with increased readability being associated with increased comprehension.15,16 In response to high levels of inadequate adult health literacy, the US Department of Health and Human Services (USDHHS) recommends that health information for patients should be written at or below the sixth-grade (11-12 years of age) reading level, categorized as “easy to read,” to optimize comprehension. Any material between the seventh- and ninth-grade (12-15 years of age) levels is classified as “average difficulty” and above ninth-grade, as “difficult” (Table 1).15
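The USDHHS bands described above can be expressed as a small lookup on the FRES. This is an illustrative sketch only: the exact cutoffs (80 or more for “easy,” 60 to 79 for “average difficulty,” below 60 for “difficult”) are inferred from the text and Table 1, and the function name is our own.

```python
def usdhhs_rating(fres: float) -> str:
    """Map a Flesch Reading Ease Score to the USDHHS difficulty band.

    Cutoffs inferred from the article: FRES >= 80 corresponds to
    sixth grade or below ("easy"); 60-79 to seventh through ninth
    grade ("average difficulty"); below 60 to above ninth grade
    ("difficult").
    """
    if fres >= 80:
        return "easy"
    if fres >= 60:
        return "average difficulty"
    return "difficult"
```

On this scale, the study's median FRES of 52.1 falls in the “difficult” band, consistent with the finding that 83% of webpages scored below 60.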

Table 1.  FRES With Equivalent US Education Grade Level, UK School Age, and USDHHS Readability Rating

Because there are no current UK guidelines for readability of patient-oriented literature, most studies assessing online patient information literature have used USDHHS standards. Multiple studies have analyzed the content of online patient information for a variety of medical and surgical conditions using a number of different readability indices and demonstrated that the great majority are written above minimum USDHHS readability levels, and certainly beyond that of the average individual.15,17-22

We aimed to assess the readability of a wide range of patient-oriented online ophthalmic information. We also aimed to evaluate the use of images, figures, and illustrations used to enhance text and the incorporation, or otherwise, of appropriate text fonts and sizes to optimize viewing potential for visually impaired patients.

The readability of the content of the 10 highest-ranked English-language patient-oriented online resources was analyzed for 16 different ophthalmic conditions (a total of 160 webpages), encompassing the full range of ophthalmic subspecialties. Relevant webpages were identified in December 2012 using specific Google search terms for each condition. Webpage text was analyzed irrespective of country of origin or whether the webpage was related to a commercial or not-for-profit organization. No webpage intended, in any part, for health care professionals was included for assessment. All webpage searches, text extraction, compilation of readability statistics, and subsequent analysis were undertaken by 2 researchers (M.R.E. and R.J.B.), following confirmation of low interobserver variability.

Note was made of the type of font used (serif or sans serif), the text size, whether there was a facility to increase or decrease text size, and the use and number of illustrations, images, or videos. All extraneous text (eg, hyperlinks, citations, affiliations, figures, legends, disclaimers, acknowledgments, copyright notices, and author information) was removed. The relevant body text was copied into Microsoft Word 2010 and analyzed for word count, number of words per sentence, and Flesch Reading Ease Score (FRES) using the “spelling and grammar” function. The same text was then analyzed with an online readability calculator (http://www.harrymclaughlin.com), as used in previous studies,17 providing a further 3 distinct readability scores: Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and Gunning Fog Index (GFOG).

Data were analyzed using GraphPad Prism version 5.0 (GraphPad Software). Degree of readability difficulty was categorized according to USDHHS standards. Word count and words per sentence were not normally distributed according to the D'Agostino-Pearson test. Therefore, data were compared with the Mann-Whitney test (for 2 groups) or the Kruskal-Wallis test with the Dunn posttest (for more than 2 groups), and correlation was assessed by the Spearman r. Results were considered statistically significant at a P value of .05 or less.
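The nonparametric comparisons described above can be reproduced in outline with SciPy (an assumption of this sketch; the study itself used GraphPad Prism, and the data below are simulated purely to show the shape of the calls).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical FRES samples for two groups of webpages,
# sized to mirror the study's 100 UK vs 60 non-UK split.
uk = rng.normal(55, 8, 100)
non_uk = rng.normal(50, 8, 60)

# Two groups: Mann-Whitney U test.
u_stat, p_mw = stats.mannwhitneyu(uk, non_uk)

# More than two groups: Kruskal-Wallis H test
# (Dunn's posttest would follow a significant result).
g1, g2, g3 = uk[:30], uk[30:60], non_uk[:30]
h_stat, p_kw = stats.kruskal(g1, g2, g3)

# Correlation: Spearman r between, eg, words per sentence
# and a grade-level score built from it plus noise.
wps = rng.normal(16, 3, 160)
fkgl = 0.4 * wps + rng.normal(0, 1, 160)
rho, p_sp = stats.spearmanr(wps, fkgl)
```

Rank-based tests such as these are appropriate here precisely because the word counts were not normally distributed.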

The 4 readability formulas chosen have been widely used in a variety of previous studies.15,17-22 Each assesses readability according to word difficulty and sentence length, although with different emphasis and weighting factors. The FRES is a 100-point scale that measures the average number of syllables per word and words per sentence, with higher scores indicating more easily understood text (Table 1). The remaining 3 measures, FKGL, SMOG, and GFOG, indicate the US academic grade level (number of years of education) necessary to comprehend the written material. For example, a score of 8.0 indicates that a pupil in the eighth grade (13-14 years of age) can understand such a document. The GFOG measures the mean number of words per sentence and the percentage of polysyllabic words, while the SMOG counts the number of words of 3 or more syllables in three 10-sentence samples near the beginning, middle, and end of a piece of text. If there are fewer than 30 sentences, the entire article is examined. The square root of the count is then calculated before adding 3 to obtain the grade level. The results of the FRES should correlate inversely with the FKGL, SMOG, and GFOG, so that text with a high FRES has lower FKGL, SMOG, and GFOG scores.17
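To illustrate how these indices weight word and sentence length, the four formulas can be sketched in Python. This is a minimal sketch, not the validated implementations used in the study: the syllable counter is a crude vowel-group heuristic, and the SMOG line rescales the whole-text polysyllable count to the formal 30-sentence sample rather than sampling the text.

```python
import re
from math import sqrt

def count_syllables(word: str) -> int:
    # Crude heuristic: each run of vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    poly = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word
    return {
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
        # Whole-text SMOG, rescaled to a 30-sentence sample.
        "SMOG": 3 + sqrt(poly * (30 / len(sentences))),
        "GFOG": 0.4 * (wps + 100 * poly / len(words)),
    }
```

Short, monosyllabic text yields a high FRES and low grade-level scores; polysyllabic medical prose pushes the FRES down and the three grade-level indices up, which is the inverse correlation noted above.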

One hundred sixty webpages (10 webpages for each of the 16 ophthalmic diagnoses) were assessed. Of these, 118 (74%) were from not-for-profit and 42 (26%) were from commercial webpages. One hundred (63%) were from UK-based organizations and 60 (37%) were from non-UK locations. The number of commercial webpages varied according to ophthalmic diagnosis: for example, “corneal graft” and “cataract” each had 5 commercial webpages among the 10 websites evaluated, whereas “diabetic retinopathy” and “glaucoma” had none in their top 10.

Overall, the median word count (with interquartile range [IQR] and minimum and maximum values) for all webpages was 1071 (IQR, 1082; range, 144-25 223), with median words per sentence of 16.6 (IQR, 3.8; range, 12.1-29.5). Median FRES was 52.1 (IQR, 12.3; range, 28.6-70.3); FKGL score, 11.3 (IQR, 2.2; range, 8.5-15.1), SMOG score, 12.7 (IQR, 1.5; range, 10.3-15.5), and GFOG score, 13.1 (IQR, 2.2; range, 9.9-17.6). The Figure presents the distribution of each readability score for all of the webpages evaluated, and readability scores for each individual ophthalmic condition are summarized in Table 2. None of the webpages adhered to USDHHS guidelines for any of the readability scores, with no individual webpage achieving the recommended FRES of 80 or more or FKGL, SMOG, or GFOG scores of 6 or less.

Figure.
Distribution of Readability Scores in Terms of Number and Percentage for All Ophthalmic Webpages Evaluated and for Each of the Different Readability Scores

A, Flesch Reading Ease Score (FRES). B, Flesch-Kincaid Grade Level (FKGL). C, Simple Measure of Gobbledygook (SMOG). D, Gunning Fog Index (GFOG).

Table 2.  Summary Table of Median Readability Statistics (With IQR) and Range of Minimum and Maximum Values for the 10 Webpages of Each of the 16 Ophthalmic Search Terms Evaluated

There were statistically significant differences in median readability scores of webpages for different ophthalmic conditions. In particular, between-group nonparametric analysis revealed differences in FRES (“keratoconus” significantly lower than “age-related maculopathy” [P < .05], “cataract” [P < .01], and “macular hole” [P < .05]); FKGL (“uveitis” significantly higher than “cataract” [P < .05] and “macular hole” [P < .01]); SMOG (“uveitis” significantly higher than “age-related maculopathy” [P < .05] and “cataract” [P < .05]); and GFOG (“uveitis” significantly higher than “age-related maculopathy” [P < .05], “cataract” [P < .05], and “basal cell carcinoma” [P < .01]). There was no statistically significant difference in word count or words per sentence between any of the different ophthalmic diagnoses.

There was statistically significant correlation between word count and FRES (Spearman r = 0.19; P = .02) and GFOG (Spearman r = 0.16; P = .04), but words per sentence correlated with all readability indices (Spearman r for FRES = −0.35, for FKGL = 0.50, for SMOG = 0.49, and for GFOG = 0.51; P < .001 for each). Not-for-profit webpages were of significantly greater length than commercial webpages, but there were no other significant differences between the 2 groups in any of the readability measures. However, although UK and non-UK webpages were of equivalent median word count, UK webpages had slightly superior readability scores (Table 3).

Table 3.  Median Readability Statistics for Not-for-Profit as Compared With Commercial Webpages and for UK as Compared With Non-UK Webpages

Of all the webpages evaluated, only 2 (1.25%) used a complicated serif font, contrary to guidelines such as those of the Royal National Institute of Blind People. However, only 34% of the webpages included a facility to adjust the size of text to account for visually impaired readers. This was variable among webpages for the different ophthalmic diagnoses, with 7 of the 10 “optic neuropathy” webpages including this function but none of those for “diabetic retinopathy.” Only 59% of webpages included images or illustrations to aid understanding. Again, this varied, with, for example, 7 of the 10 “retinal vein occlusion” webpages having illustrations (with 54 images in total) but only 3 of the 10 webpages for “posterior vitreous detachment” (a total of 3 images).

To our knowledge, this is the first study to assess the readability of online patient-focused literature related specifically to a range of ophthalmic diseases. A previous study assessing the quality and readability of information related to age-related macular degeneration found SMOG scores of around 16, indicative that they would be difficult for the average person to understand.23 Likewise, an evaluation of webpages related to retinopathy of prematurity found the majority of the information provided to be “fair or poor.”24

In other health care arenas, the readability of patient-oriented literature has been assessed in heart disease, cancer, stroke, chronic obstructive pulmonary disease, diabetes, orthopedic conditions, dermatology, thyroid surgery, craniofacial conditions, epilepsy, and Parkinson disease.15,17-22 Our study is in keeping with each of these in finding that the readability of online patient-focused information is set at a level above that of the majority of the population. Indeed, none of the webpages examined had a readability score with any of the measures that was in keeping with USDHHS guidelines. The highest (most readable) FRES was 70.3 and the lowest (most readable) FKGL score was 8.5; SMOG score, 10.3; and GFOG score, 9.9, all well short of the recommended FRES of 80 or grade level at or below the sixth grade. Furthermore, 83% of webpages had a FRES of less than 60, classified as “difficult” by the USDHHS. This is important because such information may not be understood or, perhaps worse, may be misinterpreted by patients.9 The SMOG and GFOG scores were consistently higher than the FKGL score, in keeping with previous reports.3

Except for word count, there was no significant difference between not-for-profit and commercial webpages. This contrasts with the observation of Fitzsimmons et al,17 who reported that commercial websites had slightly better readability than not-for-profit websites, possibly because the writing style of these webpages aimed to ensure the widest possible audience. However, this group, in contrast with our study, found no difference in readability scores between UK and US webpages.

It is reassuring that the great majority of ophthalmic webpages use clear, large, uncomplicated fonts as recommended by agencies such as the Royal National Institute of Blind People. However, one would expect more websites dedicated to clarifying ophthalmic health issues for patients to take account of those who are visually impaired by including a facility to adjust text size.

Patient education is a critical part of the physician-patient relationship.3 However, a significant proportion of the adult Western population is known to have limited literacy.13 As a result, they may have greater difficulty fully understanding online health information related to ophthalmic disease, even when this information has apparently been prepared for patient consumption rather than that of health professionals. In addition, although the quantity of online information is vast, the quality can be variable and the prevalence of incorrect information is unknown. Certainly, the material assessed by this study appeared to be accurate throughout, although grammatical errors and spelling mistakes, sometimes present, were ignored in the analysis.

Given these results, it is perhaps surprising that a previous survey found that more than 80% of respondents rated the quality of ophthalmic information obtained on websites as either “excellent” or “good.”25 This may be a reflection of the context in which these patients were exposed to a particular webpage. For example, it is more likely that a patient will find a webpage more readable following a consultation with a health care professional rather than before that same interaction. It is telling, however, that the great majority also indicated that they would like their physicians to provide them with relevant links to webpages with reliable information.25

The limitations of this study are mainly related to the readability scores currently in existence. Each of the scores measures only whether words can be effectively read, not whether they can be understood. A shorter word is not necessarily more understandable, and a longer word is not necessarily more difficult to understand.19 It is reassuring, however, that each of the readability indices, except for the GFOG, correlated well with the others. Other readability formulas are available. For example, the New Dale-Chall readability formula uses a list of 3000 “easy” words familiar to 80% of fourth-grade students (9-10 years of age), with a score calculated on the basis of the percentage of “unfamiliar” words and the mean sentence length.19

Readability is only one facet of comprehension and other factors such as the layout, organization, structure, and visual presentation of a webpage may supplement and overcome literacy barriers. It is known, for example, that photographs improve patient comprehension and recall of information.15 It is also possible that there may have been confounding due to the inclusion of medical terms within webpage text. Although anatomical descriptions (eg, “choroidal vasculature”) or medication names (eg, “ranibizumab,” and “benzalkonium chloride”) may have been adequately defined before further use within a webpage, they may still have skewed readability data because of their unfamiliarity and inherently polysyllabic nature. Conversely, one may also argue that some patients may desire to read webpages of greater complexity, with the risk that some webpages may be too “simplified” for some. Guidelines do exist to improve readability of patient-oriented education resources. For example, the National Institutes of Health suggests a number of strategies, such as finding alternatives for complex words, keeping sentences to only 10 to 15 words, using terms in a consistent manner, and providing simple pronunciation guides.26

In this study, the results for each ophthalmic condition were limited strictly to the first 10 websites identified by the search engine because these were felt to be the most likely to be viewed by a patient, and to avoid selection bias. However, the relatively small number of webpages examined for each condition is more prone to data skewing if, for example, 1 of the 10 webpages was exceptionally long or of particularly poor readability. Nevertheless, even if these outliers were removed from the analysis, it is likely that overall readability would remain poorer than published guidelines recommend. It is also the case that our Google search terms may not have been those used by patients. For example, whereas we used the term basal cell carcinoma, patients may have searched for “rodent ulcer.” We do not believe that this would have made any difference to the study outcome or conclusions. Of greater importance, Kahana and Gottlieb11 found that commercial websites dominated the higher-ranking positions of web searches, although this depended on the search engine used. These webpages may have inherent bias related to their commercial interest, but our study did not find such a relationship.

In conclusion, we have determined that the great majority of online ophthalmic patient information, over a range of subspecialties, is written at a level above that which the average person can understand. If online ophthalmic information for patients were readable at the recommended grade level, there may be a number of benefits, including patient empowerment, increased understanding, and greater treatment compliance.17 Together these may be expected to have a positive overall impact on prognosis. By highlighting the apparent gap between the readability of such online patient resources and general estimates of health literacy in the adult population, we hope to increase the awareness and use of readability indices when producing such literature in the future. We recommend minimum readability policies based on USDHHS guidelines to ensure that resources are accessible to the broadest possible audience.

Corresponding Author: Matthew R. Edmunds, MRCP, MRCOphth, Academic Unit of Ophthalmology, College of Medical and Dental Sciences, University of Birmingham, Birmingham and Midland Eye Centre, Dudley Road, Birmingham B18 7QU, England (m.r.edmunds@bham.ac.uk).

Submitted for Publication: March 26, 2013; final revision received May 13, 2013; accepted May 18, 2013.

Published Online: October 31, 2013. doi:10.1001/jamaophthalmol.2013.5521.

Author Contributions: Dr Edmunds had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: All authors.

Acquisition of data: All authors.

Analysis and interpretation of data: All authors.

Drafting of the manuscript: All authors.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: All authors.

Administrative, technical, or material support: All authors.

Study supervision: Denniston.

Conflict of Interest Disclosures: None reported.

Funding/Support: The Academic Unit of Ophthalmology is supported by the Birmingham Eye Foundation (UK Registered Charity 257549). Dr Edmunds is funded by the Wellcome Trust.

Role of the Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.

References

1. Coiera E. The Internet’s challenge to health care provision. BMJ. 1996;312(7022):3-4.
2. Trend data (adults). Pew Internet and American Life Project website. http://www.pewinternet.org/Static-Pages/Trend-Data-(Adults)/Usage-Over-Time.aspx. Accessed May 7, 2013.
3. Patel CR, Cherla DV, Sanghvi S, Baredes S, Eloy JA. Readability assessment of online thyroid surgery patient education materials. Head Neck. 2013;35(10):1421-1425.
4. O’Connor AM, Rostom A, Fiset V, et al. Decision aids for patients facing health treatment or screening decisions: systematic review. BMJ. 1999;319(7212):731-734.
5. McMullan M. Patients using the Internet to obtain health information: how this affects the patient-health professional relationship. Patient Educ Couns. 2006;63(1-2):24-28.
6. Winnick S, Lucas DO, Hartman AL, Toll D. How do you improve compliance? Pediatrics. 2005;115(6):e718-e724.
7. Bessell TL, McDonald S, Silagy CA, Anderson JN, Hiller JE, Sansom LN. Do Internet interventions for consumers cause more harm than good? a systematic review. Health Expect. 2002;5(1):28-37.
8. Patel PP, Hoppe IC, Ahuja NK, Ciminello FS. Analysis of comprehensibility of patient information regarding complex craniofacial conditions. J Craniofac Surg. 2011;22(4):1179-1182.
9. Walsh TM, Volsko TA. Readability assessment of internet-based consumer health information. Respir Care. 2008;53(10):1310-1315.
10. Davis TC, Crouch MA, Wills G, Miller S, Abdehou DM. The gap between patient reading comprehension and the readability of patient education materials. J Fam Pract. 1990;31(5):533-538.
11. Kahana A, Gottlieb JL. Ophthalmology on the internet: what do our patients find? Arch Ophthalmol. 2004;122(3):380-382.
12. The Joint Commission. What Did the Doctor Say? Improving Health Literacy to Protect Patient Safety. Health Care at the Crossroads series. http://www.jointcommission.org/assets/1/18/improving_health_literacy.pdf. Published 2007. Accessed November 2, 2012.
13. Why is health literacy important? Health Literacy Group website. http://www.healthliteracy.org.uk. Accessed December 8, 2012.
14. Ad Hoc Committee on Health Literacy for the Council on Scientific Affairs, American Medical Association. Health literacy: report of the Council on Scientific Affairs. JAMA. 1999;281(6):552-557.
15. Tulbert BH, Snyder CW, Brodell RT. Readability of patient-oriented online dermatology resources. J Clin Aesthet Dermatol. 2011;4(3):27-33.
16. Baker GC, Newton DE, Bergstresser PR. Increased readability improves the comprehension of written information for patients with skin disease. J Am Acad Dermatol. 1988;19(6):1135-1141.
17. Fitzsimmons PR, Michael BD, Hulley JL, Scott GO. A readability assessment of online Parkinson’s disease information. J R Coll Physicians Edinb. 2010;40(4):292-296.
18. Boulos MN. British internet-derived patient information on diabetes mellitus: is it readable? Diabetes Technol Ther. 2005;7(3):528-535.
19. Kasabwala K, Misra P, Hansberry DR, et al. Readability assessment of the American Rhinologic Society patient education materials. Int Forum Allergy Rhinol. 2013;3(4):325-333.
20. Misra P, Kasabwala K, Agarwal N, Eloy JA, Liu JK. Readability analysis of internet-based patient information regarding skull base tumors. J Neurooncol. 2012;109(3):573-580.
21. Sabharwal S, Badarudeen S, Unes Kunju S. Readability of online patient education materials from the AAOS web site. Clin Orthop Relat Res. 2008;466(5):1245-1250.
22. Elliott JO, Shneker BF. A health literacy assessment of the epilepsy.com website. Seizure. 2009;18(6):434-439.
23. Rennie CA, Hannan S, Maycock N, Kang C. Age-related macular degeneration: what do patients find on the internet? J R Soc Med. 2007;100(10):473-477.
24. Martins EN, Morse LS. Evaluation of internet websites about retinopathy of prematurity patient education. Br J Ophthalmol. 2005;89(5):565-568.
25. Narendran N, Amissah-Arthur K, Groppe M, Scotcher S. Internet use by ophthalmology patients. Br J Ophthalmol. 2010;94(3):378-379.
26. How to write easy-to-read health materials. MedlinePlus website. http://www.nlm.nih.gov/medlineplus/etr.html. Accessed December 8, 2012.

Figures

Place holder to copy figure label and caption
Figure.
Distribution of Readability Scores in Terms of Number and Percentage for All Ophthalmic Webpages Evaluated and for Each of the Different Readability Scores

A, Flesch Reading Ease Score (FRES). B, Flesch-Kincaid Grade Level (FKGL). C, Simple Measure of Gobbledygook (SMOG). D, Gunning Fog Index (GFOG).

Graphic Jump Location

Tables

Table Graphic Jump LocationTable 1.  FRES With Equivalent US Education Grade Level, UK School Age, and USDHHS Readability Ratinga
Table Graphic Jump LocationTable 2.  Summary Table of Median Readability Statistics (With IQR) and Range of Minimum and Maximum Values for the 10 Webpages of Each of the 16 Ophthalmic Search Terms Evaluated
Table Graphic Jump LocationTable 3.  Median Readability Statistics for NFP as Compared With Commercial Webpages and for UK as Compared With Non-UK Webpagesa

References

Coiera E. The Internet's challenge to health care provision. BMJ. 1996;312(7022):3-4.
Trend data (adults). Pew Internet and American Life Project website. http://www.pewinternet.org/Static-Pages/Trend-Data-(Adults)/Usage-Over-Time.aspx. Accessed May 7, 2013.
Patel CR, Cherla DV, Sanghvi S, Baredes S, Eloy JA. Readability assessment of online thyroid surgery patient education materials. Head Neck. 2013;35(10):1421-1425.
O'Connor AM, Rostom A, Fiset V, et al. Decision aids for patients facing health treatment or screening decisions: systematic review. BMJ. 1999;319(7212):731-734.
McMullan M. Patients using the Internet to obtain health information: how this affects the patient-health professional relationship. Patient Educ Couns. 2006;63(1-2):24-28.
Winnick S, Lucas DO, Hartman AL, Toll D. How do you improve compliance? Pediatrics. 2005;115(6):e718-e724.
Bessell TL, McDonald S, Silagy CA, Anderson JN, Hiller JE, Sansom LN. Do Internet interventions for consumers cause more harm than good? a systematic review. Health Expect. 2002;5(1):28-37.
Patel PP, Hoppe IC, Ahuja NK, Ciminello FS. Analysis of comprehensibility of patient information regarding complex craniofacial conditions. J Craniofac Surg. 2011;22(4):1179-1182.
Walsh TM, Volsko TA. Readability assessment of internet-based consumer health information. Respir Care. 2008;53(10):1310-1315.
Davis TC, Crouch MA, Wills G, Miller S, Abdehou DM. The gap between patient reading comprehension and the readability of patient education materials. J Fam Pract. 1990;31(5):533-538.
Kahana A, Gottlieb JL. Ophthalmology on the internet: what do our patients find? Arch Ophthalmol. 2004;122(3):380-382.
The Joint Commission. What Did the Doctor Say? Improving Health Literacy to Protect Patient Safety. Health Care at the Crossroads series. http://www.jointcommission.org/assets/1/18/improving_health_literacy.pdf. Published 2007. Accessed November 2, 2012.
Why is health literacy important? Health Literacy Group website. http://www.healthliteracy.org.uk. Accessed December 8, 2012.
Ad Hoc Committee on Health Literacy for the Council on Scientific Affairs, American Medical Association. Health literacy: report of the Council on Scientific Affairs. JAMA. 1999;281(6):552-557.
Tulbert BH, Snyder CW, Brodell RT. Readability of patient-oriented online dermatology resources. J Clin Aesthet Dermatol. 2011;4(3):27-33.
Baker GC, Newton DE, Bergstresser PR. Increased readability improves the comprehension of written information for patients with skin disease. J Am Acad Dermatol. 1988;19(6):1135-1141.
Fitzsimmons PR, Michael BD, Hulley JL, Scott GO. A readability assessment of online Parkinson's disease information. J R Coll Physicians Edinb. 2010;40(4):292-296.
Boulos MN. British internet-derived patient information on diabetes mellitus: is it readable? Diabetes Technol Ther. 2005;7(3):528-535.
Kasabwala K, Misra P, Hansberry DR, et al. Readability assessment of the American Rhinologic Society patient education materials. Int Forum Allergy Rhinol. 2013;3(4):325-333.
Misra P, Kasabwala K, Agarwal N, Eloy JA, Liu JK. Readability analysis of internet-based patient information regarding skull base tumors. J Neurooncol. 2012;109(3):573-580.
Sabharwal S, Badarudeen S, Unes Kunju S. Readability of online patient education materials from the AAOS web site. Clin Orthop Relat Res. 2008;466(5):1245-1250.
Elliott JO, Shneker BF. A health literacy assessment of the epilepsy.com website. Seizure. 2009;18(6):434-439.
Rennie CA, Hannan S, Maycock N, Kang C. Age-related macular degeneration: what do patients find on the internet? J R Soc Med. 2007;100(10):473-477.
Martins EN, Morse LS. Evaluation of internet websites about retinopathy of prematurity patient education. Br J Ophthalmol. 2005;89(5):565-568.
Narendran N, Amissah-Arthur K, Groppe M, Scotcher S. Internet use by ophthalmology patients. Br J Ophthalmol. 2010;94(3):378-379.
How to write easy-to-read health materials. Medline Plus website. http://www.nlm.nih.gov/medlineplus/etr.html. Accessed December 8, 2012.
