In linkage studies, one uses statistical variables that reflect the probability that the disease locus is closely linked to 1 or more genetic markers based on 1 or more inheritance models (a parametric approach) or the degree to which sharing of a locus among affected individuals within a family is not accounted for by chance (a nonparametric approach). These statistics are affected by several factors, including the appropriateness of the genetic model, the sample size, and the strength of the effect of the particular locus on the disease itself. A low LOD (logarithm of odds) score may reflect weak evidence of linkage, a relatively weak effect of the locus on the disease, or insufficient power in the study to show a strong effect. A high LOD score provides more convincing evidence that a disease-related gene exists within a locus, but it does not tell us how important the variations in that gene are with respect to disease risk.

Typically, we rely on 2 measures, the OR and the attributable risk, to estimate the impact of a genetic variant on a disease. Most clinicians are somewhat familiar with these concepts from other genetic, environmental, and dietary association studies, which compare cases and controls. However, the OR can be misleading in genetic studies of common disorders because it usually represents an upper bound of another measure, the relative risk, which is what people generally think of with risk factors. An OR compares the odds of carrying a risk allele between the case and control groups. For a common disorder, such as ARM, this OR is usually not the same as the relative risk, which is the increased likelihood that a person will develop ARM if that person carries the high-risk allele compared with individuals who carry the “normal” or low-risk allele(s).
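The gap between the OR and the relative risk can be made concrete with a small numeric sketch. All of the counts below are invented for illustration and do not come from any ARM study; they simply show that for a common disease the OR from a 2×2 table overstates the relative risk, whereas for a rare disease the two nearly coincide.

```python
# Illustrative 2x2-table calculations; all counts are hypothetical,
# not taken from any ARM study.

def odds_ratio(exp_cases, exp_noncases, unexp_cases, unexp_noncases):
    """OR: odds of disease among risk-allele carriers ("exposed")
    divided by the odds among noncarriers ("unexposed")."""
    return (exp_cases / exp_noncases) / (unexp_cases / unexp_noncases)

def relative_risk(exp_cases, exp_noncases, unexp_cases, unexp_noncases):
    """RR: risk of disease among carriers divided by the risk among
    noncarriers (meaningful only with cohort-style counts)."""
    risk_exposed = exp_cases / (exp_cases + exp_noncases)
    risk_unexposed = unexp_cases / (unexp_cases + unexp_noncases)
    return risk_exposed / risk_unexposed

# Common disease: 40% risk in carriers vs 20% in noncarriers.
common = (400, 600, 200, 800)
print(odds_ratio(*common), relative_risk(*common))  # OR ~2.67 > RR 2.0

# Rare disease: 2% risk in carriers vs 1% in noncarriers.
rare = (20, 980, 10, 990)
print(odds_ratio(*rare), relative_risk(*rare))      # OR ~2.02, close to RR 2.0
```

The same doubling of risk (RR = 2.0) yields an OR of about 2.67 when the disease is common but only about 2.02 when it is rare, which is why the OR is a reasonable stand-in for relative risk only under the rare-disease assumption.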
Relative risk cannot be measured from case-control association studies; it requires prospectively monitoring a group of people for the development of disease with knowledge of their genotypes and/or exposures. Similarly, the population-attributable risk calculated from a case-control association study predicts how much of the disease would be eliminated from the case-control population if the high-risk genotype were not present. However, that is not the same as saying that a certain percentage of AMD is caused by a specific variant. This is why we see reports of attributable risks of 40% to 60% for CFH or LOC387715 variants, and yet the 2 genes together do not account for 100% of the risk of developing ARM. Only a few of the present studies have been designed to provide accurate relative risk assessments and population-attributable risks for genetic variants associated with ARM. We can make some initial estimates from population-based longitudinal studies, such as the Beaver Dam Eye Study and the Blue Mountains Eye Study, but the relatively small proportion of individuals in these populations with advanced ARM remains a limiting factor. These population-based studies are essential for gathering the kind of risk assessment information that will be crucial for clinical genetic counseling and for decisions regarding the appropriateness of genetic-based diagnostics and care. These distinctions between ORs and relative risks are important for the clinician, because the inappropriate use of ORs to calculate clinical risk can be misleading. As previously noted, the prevalence of the high-risk CFH allele (in a heterozygous state) in the control population is roughly comparable to that in the affected group.
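One way to see why per-variant attributable risks of 40% to 60% can coexist without summing to 100% is Levin's population-attributable risk formula, which is not additive across risk factors. The sketch below uses hypothetical carrier frequencies and relative risks, not actual estimates for CFH or LOC387715:

```python
# Levin's population-attributable risk (PAR); the inputs are hypothetical,
# chosen only to show that PARs for separate variants can sum past 100%.

def par(p_exposed, rr):
    """Fraction of disease in the population attributable to an exposure
    with prevalence p_exposed and relative risk rr (Levin's formula)."""
    return p_exposed * (rr - 1) / (1 + p_exposed * (rr - 1))

variant_a = par(0.4, 3.0)  # carrier frequency 40%, RR 3 -> PAR ~0.44
variant_b = par(0.5, 4.0)  # carrier frequency 50%, RR 4 -> PAR 0.60
print(variant_a, variant_b, variant_a + variant_b)  # the sum exceeds 1.0
```

Each PAR answers its own counterfactual ("remove this genotype, holding everything else fixed"), and carriers of one variant may also carry the other, so the two fractions overlap rather than partitioning the disease burden.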