Original Investigation

Assessment of Online Patient Education Materials From Major Ophthalmologic Associations

Grace Huang, BS1; Christina H. Fang, BS1; Nitin Agarwal, MD2; Neelakshi Bhagat, MD1; Jean Anderson Eloy, MD3,4,5; Paul D. Langer, MD1
Author Affiliations
1Institute of Ophthalmology and Visual Science, Rutgers New Jersey Medical School, Newark
2Department of Neurological Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
3Department of Otolaryngology–Head and Neck Surgery, Rutgers New Jersey Medical School, Newark
4Center for Skull Base and Pituitary Surgery, Neurological Institute of New Jersey, Rutgers New Jersey Medical School, Newark
5Department of Neurological Surgery, Rutgers New Jersey Medical School, Newark
JAMA Ophthalmol. 2015;133(4):449-454. doi:10.1001/jamaophthalmol.2014.6104.

Importance  Patients are increasingly using the Internet to find medical information, which can be complex and requires a high level of reading comprehension. Online ophthalmologic materials from major ophthalmologic associations should be written at an appropriate reading level.

Objectives  To assess online patient education materials (PEMs) on major ophthalmologic association websites and to determine whether they are written above the reading level recommended by the American Medical Association and National Institutes of Health.

Design, Setting, and Participants  Descriptive and correlational design. Patient education materials from major ophthalmology websites were downloaded from June 1, 2014, through June 30, 2014, and assessed for level of readability using 10 scales. The Flesch Reading Ease test, Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook test, Coleman-Liau Index, Gunning Fog Index, New Fog Count, New Dale-Chall Readability Formula, FORCAST scale, Raygor Readability Estimate Graph, and Fry Readability Graph were used. Text from each article was pasted into Microsoft Word and analyzed using the software Readability Studio professional edition version 2012.1 for Windows.

Main Outcomes and Measures  Flesch Reading Ease score, Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook grade, Coleman-Liau Index score, Gunning Fog Index score, New Fog Count, New Dale-Chall Readability Formula score, FORCAST score, Raygor Readability Estimate Graph score, and Fry Readability Graph score.

Results  Three hundred thirty-nine online PEMs were assessed. The mean Flesch Reading Ease score was 40.7 (range, 17.0-51.0), which correlates with a difficult level of reading. The mean readability grade levels ranged as follows: 10.4 to 12.6 for the Flesch-Kincaid Grade Level; 12.9 to 17.7 for the Simple Measure of Gobbledygook test; 11.4 to 15.8 for the Coleman-Liau Index; 12.4 to 18.7 for the Gunning Fog Index; 8.2 to 16.0 for the New Fog Count; 11.2 to 16.0 for the New Dale-Chall Readability Formula; 10.9 to 12.5 for the FORCAST scale; 11.0 to 17.0 for the Raygor Readability Estimate Graph; and 12.0 to 17.0 for the Fry Readability Graph. Analysis of variance demonstrated a significant difference (P < .001) between the websites for each reading scale.

Conclusions and Relevance  Online PEMs on major ophthalmologic association websites are written well above the recommended reading level. Consideration should be given to revision of these materials to allow greater comprehension among a wider audience.


Preservation of health requires a combination of quality medical care and informed, preventive lifestyle choices. To patients, medical conditions and associated procedures are complex and the terminology is often difficult to understand. Patients are increasingly using Internet resources to supplement health care knowledge. A survey from 2006 estimated that 98 million American adults have used the Internet to find health care information.1

Unfortunately, medical information on the Internet is itself often complex and requires a high level of reading comprehension. Health literacy is defined in an Institute of Medicine report as “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.”2 Studies have shown that the average American adult reads between the seventh- and eighth-grade levels.3,4 In addition, the average American adult reads at 3 to 5 grades below the highest grade of schooling completed. The American Medical Association and National Institutes of Health recommend presenting patient education materials (PEMs) at the fourth- to sixth-grade level.3-5 A key aspect of literacy is readability, or the ease with which written materials are read. Material is considered easy to read if written below the sixth-grade level; of average difficulty if written between the seventh- and ninth-grade levels; and difficult if written above the ninth-grade level.3

Seventy percent of Americans who use the Internet to obtain health information have said that it influenced their decision about how to treat an illness or condition.6 These data are significant because patients with low health literacy who rely on Internet content to make health decisions may be compromised by an incomplete understanding of the information. Patients with lower health literacy have been shown to have less knowledge of their disease, greater risk of hospitalization, inferior treatment compliance, poorer health, and higher mortality than people with adequate health literacy.7 These patients incur medical expenses up to 4 times greater than patients with adequate literacy skills, costing the health care system billions of dollars annually in unnecessary physician visits and hospital stays.8

A recent study from the United Kingdom investigated the readability of a Google web search of 16 ophthalmologic diagnoses.9 None of the webpages examined had a readability score within any of the recommended guidelines. There were several limitations to that study, however. The results for each ophthalmic diagnosis were limited to the first 10 websites identified. Data are more likely to be skewed when a small sample of webpages is used for each condition and if the articles vary in length or readability. In addition, the Google search terms used may not have been those used by patients.10 For example, whereas ophthalmologists may use the term amblyopia, patients may have searched for lazy eye.

In this study, we evaluated PEMs from the websites of 10 major ophthalmologic associations: the American Academy of Ophthalmology (AAO), American Association of Ophthalmic Oncologists and Pathologists, American Association for Pediatric Ophthalmology and Strabismus (AAPOS), American Glaucoma Society (AGS), American Society of Cataract and Refractive Surgery, American Society of Ophthalmic Plastic and Reconstructive Surgery, American Society of Retina Specialists, American Uveitis Society, Cornea Society, and North American Neuro-Ophthalmology Society (NANOS). The purpose of this study was to analyze the reading level of publicly accessible Internet-based ophthalmologic information articles using 10 assessment tools.

From June 1, 2014, through June 30, 2014, we browsed the websites of major ophthalmologic associations and downloaded all available Internet-based PEMs. These organizations are the AAO, American Association of Ophthalmic Oncologists and Pathologists, AAPOS, AGS, American Society of Cataract and Refractive Surgery, American Society of Ophthalmic Plastic and Reconstructive Surgery, American Society of Retina Specialists, American Uveitis Society, Cornea Society, and NANOS. The Table lists the organizations along with the number of PEMs downloaded from each website. This study qualified for exempt status under the nonhuman subject research protocol of the Institutional Review Board of Rutgers New Jersey Medical School.

Table. Ophthalmologic Associations Used

Only material directed toward patients that was found under the patient section of the website was downloaded. Media-directed articles such as press releases or articles directed toward clinicians were excluded. The text from each article was pasted as plain text into a new Microsoft Word document (Microsoft Corp). Text sections of nonmedical information such as copyright notices, author information, citations, references, disclaimers, acknowledgments, and webpage navigation were excluded from assessment. Figures, figure legends, and captions were also excluded.
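
The authors performed this extraction by hand; purely for illustration, a minimal Python sketch of an equivalent automated cleanup, assuming the page is fetched with requests and parsed with BeautifulSoup (the URL is hypothetical, and the tags worth stripping vary by site):

```python
# Hypothetical illustration of the cleanup described above; the study
# itself copied text into Microsoft Word manually.
import requests
from bs4 import BeautifulSoup

url = "https://example.org/patients/cataract"  # hypothetical PEM page

html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Strip non-medical page furniture: scripts, navigation, footers, figures.
for tag in soup(["script", "style", "nav", "header", "footer", "figure", "aside"]):
    tag.decompose()

# Keep only the readable body text, roughly matching what was pasted
# into Word as plain text.
plain_text = soup.get_text(separator=" ", strip=True)
print(plain_text[:500])
```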

A readability assessment was then performed using the software package Readability Studio professional edition version 2012.1 for Windows (Oleander Software, Ltd). The readability level was assessed using 8 numerical scales and 2 graphical scales. The Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG) test, Coleman-Liau Index (CLI), Gunning Fog Index (GFI), New Fog Count (NFC), New Dale-Chall Readability Formula (NDC), and FORCAST scale use different formulas to generate a readability grade. The FRE uses sentence length and syllable count to calculate a score between 0 and 100. Higher scores indicate greater ease of reading. A score of 90 to 100 indicates a readability level of very easy, whereas a score of 0 to 30 indicates a level of very difficult.11 The FKGL assessment uses the same independent variables as the FRE to determine grade level. For example, an FKGL score of 8 indicates that the reader requires an eighth-grade level of education to read and understand the article.11 The SMOG test uses sentence length and the number of complex words (words with ≥3 syllables), and the CLI uses sentence and word counts to determine the grade level of a written document.12,13 The GFI assessment uses the total number of sentences and complex words (≥3 syllables).14 The NFC uses the number of complex words, the number of easy words (<3 syllables), and the number of sentences. The NDC considers sentence length and the frequency of unfamiliar words. Words are considered unfamiliar if they do not appear on a preset list of 3000 common words recognized and known by the average fourth grader.15 The FORCAST scale counts the number of single-syllable words in a 150-word sample of text from a document to estimate grade level.16
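
For concreteness, a minimal sketch of how 4 of these numerical scales are computed, using the published formulas for the FRE, FKGL, SMOG, and FORCAST; the syllable counter here is a rough vowel-group heuristic rather than the dictionary-based counting a package such as Readability Studio performs, so scores will differ slightly:

```python
# Rough sketch of 4 numerical readability scales using their standard
# published formulas; the syllable counter is an approximation.
import math
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as vowel groups, dropping a silent final e."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    wps = n_words / sentences   # words per sentence
    spw = syllables / n_words   # syllables per word

    # Flesch Reading Ease: 0-100, higher = easier (Flesch, 1948)
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    # Flesch-Kincaid Grade Level: same inputs, expressed as a US grade
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    # SMOG grade (McLaughlin, 1969), normalized to 30 sentences
    smog = 1.0430 * math.sqrt(complex_words * 30 / sentences) + 3.1291
    # FORCAST: single-syllable words in a 150-word sample
    sample = words[:150]
    singles = sum(1 for w in sample if count_syllables(w) == 1)
    forcast = 20 - singles / 10

    return {"FRE": fre, "FKGL": fkgl, "SMOG": smog, "FORCAST": forcast}

print(readability("The macula is the part of the retina that provides sharp central vision."))
```

Under this scheme, a grade-level output above 6 would flag a PEM as exceeding the recommended fourth- to sixth-grade level.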

The Raygor Readability Estimate (RRE) Graph and the Fry Readability Graph were used to visually display the reading grade level. The RRE Graph estimates a grade level from the average number of sentences and long words (≥6 letters) per 100 words in the document.17 The Fry Readability Graph uses the average number of sentences and syllables per 100 words.18 Both graphs plot the intersection of these 2 independent variables to determine a document’s grade level.
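
As a sketch, the two plotted coordinates can be computed directly; reading the grade off the published Raygor or Fry chart is a lookup step that this fragment does not reproduce, and the syllable counter is the same rough heuristic as above:

```python
# Sketch of the per-100-word coordinates plotted on the Raygor and Fry
# graphs; the grade level is then read off the published chart.
import re

def count_syllables(word: str) -> int:
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def graph_coordinates(text: str) -> dict:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    per100 = 100 / max(len(words), 1)

    return {
        # Both graphs share one axis: sentences per 100 words.
        "sentences_per_100": sentences * per100,
        # Raygor's other axis: long words (6 or more letters) per 100 words.
        "long_words_per_100": sum(1 for w in words if len(w) >= 6) * per100,
        # Fry's other axis: syllables per 100 words.
        "syllables_per_100": sum(count_syllables(w) for w in words) * per100,
    }

print(graph_coordinates("Glaucoma damages the optic nerve. Early treatment protects vision."))
```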

One-way analysis of variance was performed using Microsoft Excel (Microsoft Corp) to determine differences in assessment scale metrics between the various ophthalmologic association websites. A post hoc Tukey honestly significant difference analysis was performed for analysis of variance results with a significance level of P < .05.
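
A minimal sketch of the same two-step comparison in Python rather than Excel, using scipy for the one-way analysis of variance and statsmodels for the post hoc Tukey honestly significant difference test; the per-website score lists are hypothetical stand-ins for the per-article grade-level values:

```python
# Hypothetical data: per-article grade-level scores grouped by website.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = {
    "AAO":   [10.8, 11.2, 10.1, 11.5],
    "AAPOS": [10.2, 10.9, 10.5, 10.0],
    "AGS":   [13.1, 12.6, 12.9, 13.4],
}

# One-way ANOVA across the website groups.
f_stat, p_value = f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Post hoc Tukey HSD at alpha = .05 to identify which pairs differ.
values = np.concatenate(list(scores.values()))
labels = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```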

Three hundred thirty-nine PEMs from the ophthalmologic association websites were downloaded and analyzed for their level of readability using the 10 assessment techniques (Table). No PEMs were downloaded from the American Association of Ophthalmic Oncologists and Pathologists, American Society of Cataract and Refractive Surgery, or Cornea Society. The mean FRE score of the PEMs was 40.7, ranging from 17.0 to 51.0 (Figure 1); this value corresponds to a difficult reading level. The mean FKGL scores ranged from 10.4 to 12.6. The mean CLI scores ranged from 11.4 to 15.8. The mean SMOG scores ranged from 12.9 to 17.7. The mean GFI scores ranged from 12.4 to 18.7. The mean NDC grade levels were between 11.2 and 16.0. The mean FORCAST grade levels for each website ranged from 10.9 to 12.5. The mean NFC readability scores ranged from 8.2 to 16.0 (Figure 2). Readability of the articles was visually demonstrated using the RRE Graph (Figure 3), with mean readability scores ranging from 11.0 to 17.0. Readability assessments using the Fry Readability Graph demonstrated mean scores between 12.0 and 17.0 (Figure 4).

Figure 1.
Mean Flesch Reading Ease Readability Scores of Ophthalmologic Association Websites

AAO indicates American Academy of Ophthalmology; AAPOS, American Association for Pediatric Ophthalmology and Strabismus; AGS, American Glaucoma Society; ASOPRS, American Society of Ophthalmic Plastic and Reconstructive Surgery; ASRS, American Society of Retina Specialists; AUS, American Uveitis Society; and NANOS, North American Neuro-Ophthalmology Society.

Figure 2.
Mean Grade Level of Online Patient Education Materials (PEMs) Found on Each Ophthalmologic Association Website Using 7 Numerical Scales

AAO indicates American Academy of Ophthalmology; AAPOS, American Association for Pediatric Ophthalmology and Strabismus; AGS, American Glaucoma Society; ASOPRS, American Society of Ophthalmic Plastic and Reconstructive Surgery; ASRS, American Society of Retina Specialists; AUS, American Uveitis Society; NANOS, North American Neuro-Ophthalmology Society; and SMOG, Simple Measure of Gobbledygook.

Figure 3.
Raygor Readability Estimate Graph of All Online Patient Education Materials Used in the Study

The Raygor Readability Estimate Graph visually demonstrates the readability of the articles by the intersection of the number of long words per 100 words and the number of sentences per 100 words. Circles indicate reading levels. Numbers within the graph indicate the approximate reading grade level.

Figure 4.
Fry Readability Graph Assessment of All Online Patient Education Materials Used in the Study

The Fry Readability Graph visually demonstrates the readability of articles by the intersection of the number of syllables per 100 words and the number of sentences per 100 words. Circles indicate reading levels.


Analysis of variance indicated a significant difference (P < .001) between the websites (AAO, AAPOS, AGS, American Society of Ophthalmic Plastic and Reconstructive Surgery, American Society of Retina Specialists, American Uveitis Society, and NANOS) for each individual readability assessment (CLI, FKGL, FRE, FORCAST scale, GFI, NDC, NFC, SMOG test, Fry Readability Graph, and RRE Graph). Post hoc Tukey honestly significant difference testing showed several differences (P < .001) between the websites on the different assessment scales. The AGS website was more difficult to read than all of the other websites as measured by the NFC and the FRE (P < .001). It was also more difficult to read than the AAPOS and NANOS websites using the FKGL (P < .001) and more difficult than the AAO and NANOS websites using the GFI (P < .001). The NANOS website was written at a lower reading level than all of the other websites as evaluated by the FRE scale (P < .001). We found no difference between the websites when evaluating with the CLI, FORCAST scale, NDC, SMOG test, Fry Readability Graph, and RRE Graph (P > .05).

Patients are increasingly using the Internet to find health information because it allows them to obtain information at any hour, from any location, and anonymously.6 Furthermore, clinicians are more often referring their patients to online sources of PEMs.19 Patients’ education and understanding of their medical conditions are critical to optimizing the patient-physician relationship and patients’ overall health.

Readability above the average American literacy level is not limited to ophthalmology; it is a widespread issue that has been identified in PEMs across other specialties. Prior studies in other surgical subspecialties such as otolaryngology,20-22 urology,23 orthopedic surgery,24-26 and neurosurgery27 have shown similar results: the reading level of PEMs found on their respective websites is significantly higher than the recommended reading level. Similarly, studies on heart disease, cancer, stroke, chronic obstructive pulmonary disease, and diabetes mellitus found comparable results.3,28

Our analysis of PEMs from major ophthalmologic association websites is consistent with previous studies of ophthalmology-related online PEMs showing that readability exceeds the reading level of the average American.9,29,30 Analyses using the FKGL, SMOG, CLI, GFI, NDC, NFC, and FORCAST numerical scales showed that PEMs were written at mean grade ranges all greater than the fourth- to sixth-grade reading level recommended by the American Medical Association and National Institutes of Health. The FRE score corresponded to a difficult reading level. In addition, analysis of variance showed that the ophthalmologic association websites differed among themselves in average readability, which may reflect the varying complexity of the topics each association addresses and differences in authorship. Overall, this analysis demonstrated that the PEMs from major ophthalmologic association websites are too advanced for the average American patient to comprehend.

This study has certain limitations despite using 10 different assessment techniques. Readability level is a key, but not the sole, component of literacy. Overall readability may also be influenced by factors such as images, content organization, layout, and design.31 The Suitability Assessment of Materials32 and PMOSE/IKIRSCH33 are instruments that may be used in the future to measure the readability of charts, graphs, video, and audiotaped instructions; however, they have yet to be fully validated in the medical literature.34 New readability measures being developed will take medical vocabulary, cohesion, and style into consideration when evaluating health texts in an attempt to create a gold standard of readability.31 Until the impact of multimedia on patient comprehension is better understood, effort should be focused on the written text of PEMs on patient-oriented websites.

Another limitation of this study is that 4 of the numerical scales (FRE, FKGL, SMOG test, and CLI) are influenced by the number of syllables in a word. A shorter word such as drusen or ptosis is not necessarily easier to understand, and a longer word is not necessarily more difficult; this algorithm might therefore underestimate the reading level required to comprehend the text. On the other hand, many terms in ophthalmology, such as cataract, are inherently technical and may be unavoidable. Even if the author defines such a word, each subsequent use is still counted as a difficult word, which would overestimate the reading level. However, in this study we used other readability assessment tools that examined parameters other than difficult words.

A further limitation is that the literacy level of potential ophthalmology patients may be higher than that of the average American. For example, a study of patient demographic characteristics found that the average patient undergoing cosmetic surgery at a private practice had a college education or beyond.35 Although no comparable study exists in ophthalmology, this specific subgroup of Americans (ophthalmology patients who use the Internet to obtain health information) may indeed possess higher reading comprehension than the average American. Consequently, one could argue that PEMs written at a higher level may be more beneficial to this subgroup. Because it is difficult to determine who is accessing these websites for PEMs, however, it seems prudent to target the general population and write PEMs at the recommended grade level. Despite the limitations discussed, it is reassuring that all of the readability results were well correlated with each other and were consistent with results from other studies.

Recommendations for improving the presentation of health education information include changes in word choice and sentence structure. Medical terms may be difficult to avoid in ophthalmology; however, descriptive or instructive phrases can be substituted to enhance readability. Other difficult word types to avoid are concept words such as normal range, category words such as β-blockers, and value judgment words such as excessive; these terms may be well defined in the minds of clinicians but not in the minds of patients.36 Simple and clear supporting audio and video multimedia should also be incorporated when appropriate. A potential follow-up to this study would be to survey readers of online PEMs to characterize the population using these websites, and additional surveys could later assess the impact of revising the PEMs. Until that impact can be assessed, efforts should be directed at developing more appropriate and targeted PEMs that cater to the needs of the general population. Addressing the problem of limited health literacy on our websites could improve the understanding of eye disease and positively affect the eye health of individuals and populations.

Corresponding Author: Jean Anderson Eloy, MD, Department of Otolaryngology–Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St, Ste 8100, Newark, NJ 07103 (jean.anderson.eloy@gmail.com).

Submitted for Publication: July 10, 2014; final revision received December 22, 2014; accepted December 23, 2014.

Published Online: February 5, 2015. doi:10.1001/jamaophthalmol.2014.6104.

Author Contributions: Ms Fang and Dr Agarwal had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Huang, Fang, Agarwal, Eloy, Langer.

Acquisition, analysis, or interpretation of data: Huang, Fang, Agarwal, Bhagat.

Drafting of the manuscript: Huang, Fang, Agarwal.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Huang, Fang, Agarwal, Bhagat.

Administrative, technical, or material support: Agarwal.

Study supervision: Eloy, Langer.

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.

References

Sultz HA, Young KM. Health Care USA: Understanding Its Organization and Delivery. 7th ed. Sudbury, MA: Jones & Bartlett Learning; 2010.
Nielsen-Bohlman L, Panzer AM, Kindig DA. Health Literacy: A Prescription to End Confusion. Washington, DC: National Academies Press; 2004.
Walsh TM, Volsko TA. Readability assessment of Internet-based consumer health information. Respir Care. 2008;53(10):1310-1315.
Weiss BD. Health Literacy: A Manual for Clinicians. Chicago, IL: American Medical Association, American Medical Foundation; 2003.
US National Library of Medicine. How to write easy-to-read health materials. http://www.nlm.nih.gov/medlineplus/etr.html. Accessed June 6, 2014.
Fox S, Rainie L. The online health care revolution: how the web helps Americans take better care of themselves. http://www.pewinternet.org/2000/11/26/the-online-health-care-revolution/. Accessed April 23, 2013.
Ad Hoc Committee on Health Literacy for the Council on Scientific Affairs, American Medical Association. Health literacy: report of the Council on Scientific Affairs. JAMA. 1999;281(6):552-557.
Edmunds MR, Barry RJ, Denniston AK. Readability assessment of online ophthalmic patient information. JAMA Ophthalmol. 2013;131(12):1610-1616.
Kahana A, Gottlieb JL. Ophthalmology on the Internet: what do our patients find? Arch Ophthalmol. 2004;122(3):380-382.
Flesch R. A new readability yardstick. J Appl Psychol. 1948;32(3):221-233.
McLaughlin GH. SMOG grading: a new readability formula. J Read. 1969;12(8):638-646.
Coleman M, Liau TL. A computer readability formula designed for machine scoring. J Appl Psychol. 1975;60(2):283-284. doi:10.1037/h0076540.
Gunning R. The Technique of Clear Writing. New York, NY: McGraw-Hill; 1952.
Chall JS. Readability Revisited: The New Dale-Chall Readability Formula. Cambridge, MA: Brookline Books; 1995.
Caylor JS, Sticht TG, Fox LC, Ford JP. Methodologies for Determining Reading Requirements of Military Occupational Specialties. Alexandria, VA: Human Resources Research Organization; 1973.
Raygor AL. The Raygor Readability Estimate: a quick and easy way to determine difficulty. In: Pearson PD, ed. Reading: Theory, Research, and Practice: Twenty-Sixth Yearbook of the National Reading Conference. Clemson, SC: National Reading Conference; 1977:259-263.
Fry E. A readability formula that saves time. J Read. 1968;11(7):513-516, 575-578.
Patel PP, Hoppe IC, Ahuja NK, Ciminello FS. Analysis of comprehensibility of patient information regarding complex craniofacial conditions. J Craniofac Surg. 2011;22(4):1179-1182.
Eloy JA, Li S, Kasabwala K, et al. Readability assessment of patient education materials on major otolaryngology association websites. Otolaryngol Head Neck Surg. 2012;147(5):848-854.
Cherla DV, Sanghvi S, Choudhry OJ, Jyung RW, Eloy JA, Liu JK. Readability assessment of Internet-based patient education materials related to acoustic neuromas. Otol Neurotol. 2013;34(7):1349-1354.
Cherla DV, Sanghvi S, Choudhry OJ, Liu JK, Eloy JA. Readability assessment of Internet-based patient education materials related to endoscopic sinus surgery. Laryngoscope. 2012;122(8):1649-1654.
Colaco M, Svider PF, Agarwal N, Eloy JA, Jackson IM. Readability assessment of online urology patient education materials. J Urol. 2013;189(3):1048-1052.
Sabharwal S, Badarudeen S, Unes Kunju S. Readability of online patient education materials from the AAOS web site. Clin Orthop Relat Res. 2008;466(5):1245-1250.
Vives M, Young L, Sabharwal S. Readability of spine-related patient education materials from subspecialty organization and spine practitioner websites. Spine (Phila Pa 1976). 2009;34(25):2826-2831.
Badarudeen S, Sabharwal S. Assessing readability of patient education materials: current role in orthopaedics. Clin Orthop Relat Res. 2010;468(10):2572-2580.
Agarwal N, Chaudhari A, Hansberry DR, Tomei KL, Prestigiacomo CJ. A comparative analysis of neurosurgical online education materials to assess patient comprehension. J Clin Neurosci. 2013;20(10):1357-1361.
Agarwal N, Hansberry DR, Sabourin V, Tomei KL, Prestigiacomo CJ. A comparative analysis of the quality of patient education materials from medical specialties. JAMA Intern Med. 2013;173(13):1257-1259.
Edmunds MR, Denniston AK, Boelaert K, Franklyn JA, Durrani OM. Patient information in Graves’ disease and thyroid-associated ophthalmopathy: readability assessment of online resources. Thyroid. 2014;24(1):67-72.
Hansberry DR, Agarwal N, Shah R, et al. Analysis of the readability of patient education materials from surgical subspecialties. Laryngoscope. 2014;124(2):405-412.
Kandula S, Zeng-Treitler Q. Creating a gold standard for the readability measurement of health texts. AMIA Annu Symp Proc. 2008:353-357.
Doak CC, Doak LG, Root JH. Teaching Patients With Low Literacy Skills. 2nd ed. Philadelphia, PA: JB Lippincott Co; 1996.
Mosenthal PB, Kirsch IS. A new measure for assessing document complexity: the PMOSE/IKIRSCH document readability formula. J Adolesc Adult Lit. 1998;41(8):638-657.
Kasabwala K, Agarwal N, Hansberry DR, Baredes S, Eloy JA. Readability assessment of patient education materials from the American Academy of Otolaryngology–Head and Neck Surgery Foundation. Otolaryngol Head Neck Surg. 2012;147(3):466-471.
Schlessinger J, Schlessinger D, Schlessinger B. Prospective demographic study of cosmetic surgery patients. J Clin Aesthet Dermatol. 2010;3(11):30-35.
Doak CC, Doak LG, Friedell GH, Meade CD. Improving comprehension for cancer patients with low literacy skills: strategies for clinicians. CA Cancer J Clin. 1998;48(3):151-162.
