Scientific and legal communities often use similar words differently. This resource is designed to help lawyers navigate scientific concepts that frequently arise in climate litigation. The glossary clarifies these concepts by providing clear definitions, outlining their scientific basis, and highlighting how they may be understood or applied in legal contexts. It is intended as a practical reference to build familiarity with scientific terms, better assess expert evidence, and avoid misinterpretations of science in the courtroom. This glossary relies on peer-reviewed studies, general scientific knowledge, and reports from authoritative bodies like the Intergovernmental Panel on Climate Change (IPCC) and the US National Climate Assessment (US NCA).
The glossary is organized into seven thematic sections that reflect how scientific evidence typically enters legal arguments. Foundational Scientific Concepts introduces core ideas—such as uncertainty, causality, probability, and significance—that often cause confusion when scientific language is applied to legal standards of proof. Measurement, Data, and Methods explains how scientific evidence is generated, including data collection, baselines, models, and analytical techniques. Climate and Environmental Science Concepts covers the physical processes that drive climate change and its impacts, including emissions, climate forcing, feedback, thresholds, and attribution science. Risk and Impact Concepts focuses on how hazards translate into real-world harms, addressing risk, vulnerability, resilience, and loss and damage in ways relevant to liability and remedies. Public Health introduces population-level tools used to link environmental exposure to health outcomes, such as epidemiology, relative risk, and health disparities. Indigenous Knowledge and Cultural Heritage provides guidance on concepts related to Indigenous Knowledge systems, cultural harm, and knowledge sovereignty, which increasingly arise in climate cases. Socioeconomic and Policy Concepts addresses terms commonly used in regulatory, economic, and accountability research, including disinformation, emissions accounting, externalities, and valuation tools.
Readers looking for a key term or specific concept can use the index at the end of the glossary to quickly locate definitions and related entries across sections.
Foundational Scientific Concepts
Best Available Science
Best available science refers to the most reliable, valid, up-to-date, and relevant empirical knowledge. It reflects the evolving nature of science, building on continuous cycles of research, data collection, and the refinement of methods. It incorporates evidence from multiple sources, undergoes peer review, and draws on expertise across disciplines to ensure credibility and robustness.
Why it matters: Best available science is not only a scientific concept but also a legal standard. It appears explicitly in laws, regulations, and court rulings; however, no formalized framework exists to identify best available science. In practice, lawyers can demonstrate their use of best available science by showing that the work relies on peer-reviewed or widely accepted methodologies, aligns with major scientific assessments, uses transparent and reproducible data, and addresses both assumptions and uncertainty using established scientific techniques.
Bias
In science, bias refers to systematic error that skews results or findings away from the true value. This can include statistical bias (when errors in data analysis distort estimates), selection bias (when the sample studied is not representative of the population), and observer bias (when researchers' expectations influence how data are recorded or interpreted). Scientists work to identify and minimize bias through study design, statistical analysis, peer review, transparency, and replication. Bias in this sense is about accuracy and the reliability of results. Bias correction, a set of statistical methods, is used in climate modeling to better align model output with observational data.
Why it matters: Bias in law typically refers to a lack of impartiality or a conflict of interest. When talking to a scientific expert, lawyers should be prepared to ask whether potential sources of scientific bias were accounted for and how they might affect results. Similarly, opposing counsel may conflate scientific bias with personal or institutional bias to discredit findings.
Causality and Correlation
Correlation means that two variables change together, while causation means that one variable directly influences the other. When variable Y tends to increase in value as variable X increases, X and Y are said to be positively correlated. If Y falls when X increases, they are negatively correlated. Establishing causation requires ruling out confounding factors; demonstrating a plausible mechanism; and sometimes using counterfactual analysis, which involves assessing what would have happened absent a specific factor, such as anthropogenic climate change. See Bradford Hill Criteria.
Why it matters: Correlation does not imply causation. Moreover, legal causation (proximate cause, defined with reference to a particular statute or common law cause of action) may differ from scientific causation, which is grounded in empirical evidence and probabilistic assessment. Climate attribution science is focused on causality and specifically asks whether greenhouse gas emissions increased the risk or magnitude of an event, either expressed probabilistically (e.g., climate change made this flood three times more likely) or based on intensity (e.g., climate change made this peak flow of the flood two times larger). Research can robustly quantify increased risks and attributable damages. Scientific studies that help explain a causal chain (e.g., emissions → greenhouse effect → increased heat → wildfire intensity) can provide empirical evidence that may meet legal standards of causation. See Attribution Science.
Cherry-Picking
Cherry-picking in science refers to the selective use of data, studies, or results to support a predetermined conclusion, while ignoring or omitting information that may contradict it. This practice almost always distorts findings by presenting an incomplete or misleading picture of the evidence and drawing conclusions based on a subset of available data. It may involve highlighting a single study while disregarding a larger body of research, choosing favorable time periods in a dataset, or emphasizing outliers in place of representative trends. See Bias.
Why it matters: Cherry-picking can improperly influence legal actors using incomplete or misleading scientific information. To identify cherry-picking, it can be helpful to ask: 1) Was the full range of relevant data considered or only a subset? 2) How were studies selected for inclusion or exclusion? 3) Were time frames or geographic areas chosen in a way that skews results? 4) Does the expert's conclusion align with the broader peer-reviewed literature? If an expert relies heavily on a narrow slice of evidence while dismissing or ignoring established findings, this usually signals cherry-picking.
Confidence Levels and Confidence Intervals
In Intergovernmental Panel on Climate Change (IPCC) reports, confidence levels reflect how strongly the evidence supports a finding by combining two dimensions: evidence (the amount, consistency, and quality of data, observations, and models) and agreement (the degree to which independent studies and experts converge on the same conclusion). This produces categories such as low, medium, or high confidence. Separately, the IPCC uses calibrated likelihood terms to assign a numerical probability. These terms help to standardize communication about uncertainty within the IPCC. In contrast, a confidence interval reflects statistical analysis of the range of values around an estimate and is associated with a level of probability. Confidence intervals can be one or two sided—reflecting, for example, either a 2.5 percent interval on both ends of a distribution or a 5 percent interval on one end—for the same overall confidence level. See Uncertainty; Probability and Likelihood.
Why it matters: While confidence levels provide an important means of communication for the scientific community, their translation to a legal setting requires careful consideration of legal standards of proof, which are often less rigorous than scientific ones and vary by jurisdiction and claim. Low and medium confidence levels, for instance, may still meet a "more likely than not" or "preponderance of evidence" standard, reinforcing their utility in legal settings and the importance of communicating directly with scientists about confidence in a given finding or statement.
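As a minimal sketch of a statistical confidence interval (as distinct from IPCC confidence levels), the code below computes a two-sided 95 percent interval for a mean using the common normal approximation (multiplier 1.96). All values are hypothetical.

```python
import math
from statistics import mean, stdev

# Hypothetical repeated estimates of a temperature anomaly, in degrees C.
values = [1.02, 0.98, 1.10, 1.05, 0.95, 1.08, 1.01, 0.99, 1.04, 1.06]

m = mean(values)
se = stdev(values) / math.sqrt(len(values))   # standard error of the mean
lo, hi = m - 1.96 * se, m + 1.96 * se         # two-sided 95% interval (normal approx.)
```

Read aloud, the result would be stated as "the estimate is m, with a 95 percent confidence interval of lo to hi"; a one-sided interval at the same confidence level would place all 5 percent of the probability on a single end of the distribution.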
Consensus
Consensus in science refers to the broad agreement that emerges when multiple, independent lines of evidence converge on the same conclusion, reinforced through peer review and replication. It does not mean there is unanimity or that all questions are settled, but rather indicates where the weight of evidence is strongest. In climate science, the Intergovernmental Panel on Climate Change (IPCC) provides a conservative, international consensus that synthesizes data, model projections, scenario analyses, and policy-relevant findings. Consensus for the IPCC is achieved through an exacting process in which every word, figure, and number is reviewed by scientists and government representatives, and no language remains to which any participant objects. In this sense, consensus is not unanimity of approval but unanimity in not disagreeing. A scientific finding can carry very high confidence and be peer reviewed—and thus be sound—but there may not yet be a consensus on it if the research is relatively new. Lack of consensus does not mean solid scientific findings should be dismissed.
Why it matters: Consensus in science reflects an extraordinary convergence of evidence and findings that include well-supported confidence or likelihood statements. The IPCC is one example of scientific consensus. US National Climate Assessments and formal studies by the National Academies of Sciences, Engineering, and Medicine are other examples that can provide more geographically focused insights. Credible, peer-reviewed science remains valid and probative even if it has not yet been synthesized into a formal consensus document, which can take years to produce (about six years for the IPCC and four for US National Climate Assessments).
Error/Error Bars
Error is present in any scientific study and can include both random and systematic error. Random error can be due to natural variability or sample size and is reported through statistics, for instance, as a mean or median +/- error (which can be expressed as standard deviation, standard error, or confidence intervals, each with slightly different interpretations). This type of error reduces the precision of measurement or estimates (i.e., how close repeated measurements are to each other, regardless of how close they are to the true value). Systematic error, however, affects accuracy (i.e., how close measurements are to the true value) and cannot be addressed with statistics. Origins of systematic error include improperly calibrated instruments or misuse of a measurement tool. Consistency in experimental procedures and conditions can reduce systematic error, as can randomized sampling. See Uncertainty.
Why it matters: Interpreting error is a key component of understanding and contextualizing scientific results, and being specific about the type and expression of that error is important for avoiding misinterpretation of the results. The absence of an expression of error or uncertainty when reporting scientific findings should be a warning to legal teams and warrants additional investigation and consultation with scientific experts.
Outliers
An outlier is a data point or finding that is statistically distinct from the rest of the observations or from consensus findings. Outliers arise for different reasons: they can reveal important new insights, like the presence of a rare but dangerous extreme event, or they can result from errors in measurement, missing information, or natural variation that does not change the overall trend.
Why it matters: Interpretations of outliers can be debated. An extreme event that looks like an outlier may actually be the strongest evidence of climate change impacts—showing how warming is pushing conditions beyond anything in the historical record. At the same time, such data points may be dismissed as "just outliers" in arguments that they do not prove a broader pattern. In other cases, they can be cherry-picked to weaken expert testimony based on scientific consensus. With climate change, however, the increasing incidence of outliers can serve as evidence of a tipping point or threshold in an underlying climatic process. Regardless, outliers should be treated with caution and justify additional scrutiny. See Trends and Variability.
Peer Review
Peer review is the process by which scientific findings are evaluated and critiqued. It occurs during the publication process of an individual study in a journal (e.g., Nature) but also during the production of large reports like the US National Climate Assessment and Intergovernmental Panel on Climate Change (IPCC) reports. For a journal article, peer review typically involves two to three experts in a field evaluating whether a methodology for a given research question is appropriate and whether the evidence presented adequately supports the conclusions drawn by the authors. This process can involve multiple iterations of comments and responses, usually mediated by a journal editor. If the critiques suggest significant flaws in a research study, the paper may be rejected by a journal editor. Reports such as those from the IPCC and the US National Climate Assessment undergo especially rigorous peer review, with large expert panels and structured responses to public input. While peer review is not perfect, it remains one of the strongest and most widely recognized forms of scientific quality control.
Why it matters: Peer-reviewed research signals that peer scientists, who are not involved in a specific study, have reviewed its methods, findings, and conclusions. This process lends rigor to scientific findings because the author team will have defended their research to reviewers and addressed concerns from other scientists. In legal contexts, such a review supports the credibility of scientific evidence. A study may use a "peer-reviewed methodology" applying a previously vetted scientific approach to new evidence, thereby drawing on the rigor of the underlying method even when the specific analysis has not been peer reviewed. Here, consultation by outside experts can ensure the methods were appropriately applied.
Probability and Likelihood
Probability and likelihood describe the degree of confidence that a particular result or outcome is true, based on quantitative analysis of available data. Probability refers to the chance that a particular outcome occurs based on the values of parameters in a statistical model; likelihood refers to how well a sample provides support for parameter estimates in a model. The Intergovernmental Panel on Climate Change uses terms such as likely (66–100 percent probability) or very likely (90–100 percent) to standardize communication of uncertainty. These terms are based on statistical evidence, model simulations, and expert judgment, and they quantify the degree to which the data support the conclusion that an event or the effect is real.
Why it matters: Legal standards of proof, such as "preponderance of the evidence," do not map neatly onto how scientists express confidence. In attribution science, statements such as "Climate change very likely made an event more likely to occur" or "Climate change very likely made an event more extreme" reflect high statistical confidence, typically corresponding to probabilities of 90 percent or greater. While this language may sound cautious to nonexperts, it represents a level of certainty that exceeds what is required to meet most civil legal thresholds. The legal context may require only that something is "more likely than not," meaning a probability of over 50 percent. Lawyers play a critical role in translating this scientific confidence into legal arguments, ensuring that probabilistic findings are properly understood as strong evidence of causation rather than as uncertainty.
Scientific Method
The scientific method incrementally advances knowledge through the testing and rejection of hypotheses. Background research is used to develop a hypothesis based on a credible explanation. Evidence is then gathered to test the hypothesis, typically through observations, experiments, or model simulations. If a hypothesis is repeatedly tested and corroborated by evidence, and has withstood testing against alternative hypotheses, it may be accepted as a scientific theory. This result requires years or decades of research and broad assessment by the scientific community.
Why it matters: In the scientific method, a theory is not merely an idea or hypothesis; it has been repeatedly and consistently supported by evidence and explanation. A scientific theory should not be confused with the common, everyday usage of the term theory. A scientific theory provides a logical explanation for why something happens. It is different from a scientific law, like the law of gravity, which describes what will happen.
Significance
Statistical significance is used to test whether an observed effect could have happened by chance. Researchers start with a null hypothesis, often the assumption that the observed effect arose by chance. They then calculate the probability of observing the data if that null hypothesis were true. If the probability is very low (often less than 5 percent, or p < 0.05), the result is considered statistically significant, indicating that the data provide strong evidence against the null hypothesis. Strong statistical significance does not necessarily mean an effect is large or particularly meaningful from a nonstatistical perspective; it simply shows that the probability of the effect occurring by chance is low.
Why it matters: Legal significance involves materiality and relevance to the case, not statistical thresholds. Even if a study does not meet the strict 95 percent confidence threshold frequently used in research, it can still be powerful legal evidence if it shows that harm was more likely than not caused or worsened by a defendant's actions. Ensuring that experts retained as witnesses or consultants can distinguish between scientific and legal standards of proof will minimize confusion and opportunities for intentional manipulation of the term significance.
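The logic of a significance test can be sketched with a deliberately simple, hypothetical example: if warm and cool years were equally likely (the null hypothesis), how surprising would it be to see 9 of the last 10 years fall in the warm half of the record? The exact binomial calculation below uses only the standard library.

```python
from math import comb

# Hypothetical observation: 9 of the last 10 years rank in the "warm" half
# of the historical record. Null hypothesis: warm and cool years are equally
# likely each year (probability 0.5), so extremes like this arise by chance.
n, k = 10, 9
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n  # P(X >= 9) under the null
significant = p_value < 0.05   # about 0.011, well below the 5 percent threshold
```

A p-value of roughly 0.011 means that, if chance alone were at work, a result this extreme would occur only about 1 time in 100; it does not, by itself, say how large or legally material the warming effect is.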
Trends and Variability
Variability (e.g., climate variability) is the deviation of observations around a baseline value over time or across a population. A trend is a change in observations over time in the same direction from the baseline, often indicating that the process has changed fundamentally and the old baseline no longer applies. Warmer decadal average temperatures illustrate a long-term trend, while year-to-year temperature fluctuations illustrate variability. In observations, variability can occur within a trend, and a trend can emerge even amid ongoing variability. See Baseline Data.
Why it matters: Both trends and variability describe patterns in data but are used to arrive at different conclusions. Acknowledging the difference and using precise language when describing data are critical for accurately drawing conclusions.
Uncertainty
Uncertainty in scientific research is different from uncertainty in common usage. Scientists communicate the limits of a given study to emphasize the strength of their findings. Scientific uncertainty is expressed in quantitative forms (e.g., error bars, confidence intervals, statistical tests) or qualitative forms (e.g., contextual presentation of alternative hypotheses, questions for future research, confidence statements).
For interpretation:
- Error bars show how much a measurement could vary. For example, "sea level rise of 3 mm ± 0.5 mm per year" means the value is likely within the range of 2.5–3.5 millimeters.
- Confidence intervals (CI) express how certain scientists are about an estimate. A 95 percent CI means scientists are 95 percent confident the true value lies within the provided range. For example, "The increase in temperature was 1°C–1.2°C (95 percent CI)" means there is only a 5 percent chance the true value is below 1°C or above 1.2°C.
- Statistical tests (like t-tests) assess whether results are likely due to chance and express that probability in terms of p-values. For example, p < 0.05 means there is less than a 5 percent probability the finding occurred randomly.
Broader assessments of the state of science (e.g., those of the Intergovernmental Panel on Climate Change) can communicate uncertainty with the terms very likely (90–100 percent probability) or extremely likely (95–100 percent probability) when the underlying data are sufficiently large and scientists can numerically evaluate how different studies agree or conflict with one another, using statistical methods rather than qualitative judgment alone. In other cases, assessment authors can use standardized confidence language to summarize subjective judgments drawn from a range of relevant studies (e.g., high confidence = multiple consistent studies; medium confidence = mixed evidence). In other words, when differences among studies can be evaluated numerically, scientists can move from broader, qualitative statements like "Most studies seem to agree" to more detailed, quantitative statements like "There is a 95–100 percent probability, based on quantified evidence across studies."
Why it matters: While uncertainty in conversation means something one does not know, uncertainty in science is a sign of rigorous research and describes how well something is known. It is critical for lawyers to convey to judges, policymakers, and the public that stated uncertainty in a scientific conclusion does not mean the science is untrustworthy. Powerful actors have exploited popular understanding of uncertainty to undermine confidence in climate science. Conversely, a small range of uncertainty does not imply the finding is substantively important or indicative of cause and effect. For example, using 10 digital thermometers may show with high certainty that someone's body temperature rose between 0.1°F and 0.3°F, but that precise shift does not describe what caused the change or provide information about the individual's health. See Causality and Correlation.
Measurement, Data, and Methods
Baseline Data
In research, baseline is the reference state against which change is measured, providing a starting point for detecting shifts in physical, biological, or social systems. In climate science, a common baseline is the preindustrial period (i.e., 1850–1900), before large-scale fossil fuel use increased atmospheric greenhouse gas concentrations. The baseline period for a given study, however, varies depending on the research question at hand and the availability of relevant data. A baseline allows scientists to determine whether current conditions represent a significant departure from historical patterns. When baseline data are missing or incomplete, measuring change becomes more difficult. Scientists may reconstruct baselines from proxy evidence (like tree rings or ice cores) or use statistical methods to fill gaps, but these reconstructions can increase uncertainty. The amount and type of uncertainty depend on how the reconstruction is done and what data are available.
Why it matters: The presence or absence of a reliable baseline affects the strength of scientific evidence. Use of a baseline in this context is similar to how courts assess damages by considering proof of prior conditions to determine whether harm occurred and to what extent. Baselines can also be intentionally or unintentionally manipulated by selecting shorter or more recent time periods to downplay long-term trends or mislead audiences about the scale of human-caused change.
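The basic arithmetic of a baseline is simple: average conditions over a reference period, then express later observations as departures (anomalies) from that average. The sketch below uses hypothetical temperatures.

```python
from statistics import mean

# Hypothetical global temperatures in degrees C.
baseline_years = [13.6, 13.7, 13.5, 13.6, 13.6]   # reference (baseline) period
recent_years   = [14.6, 14.7, 14.8]               # current conditions

baseline = mean(baseline_years)
anomalies = [round(t - baseline, 2) for t in recent_years]  # departures from baseline
```

Note how sensitive the result is to the choice of reference period: averaging over a warmer, more recent window would shrink the apparent anomalies, which is exactly how baseline selection can downplay long-term change.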
Bias Correction
Bias correction is the process of adjusting climate model output to account for systematic deviations from observed means or statistical distributions of climate variables (such as temperature or precipitation). It is commonly applied to outputs from global climate models (GCMs) but also used for regional climate models and downscaled climate products. These biases can arise from incomplete representation of physical processes or limits in computational resolution. Bias correction and downscaling are distinct techniques that may be applied separately or in combination to refine climate projections. See Downscaling.
Why it matters: Legal decisions often require climate information about not only the magnitude of change, but also expected future values at specific locations. Bias correction can produce more actionable projections for assessing localized risks, damages, and impacts. However, it also introduces additional uncertainty, as each modeling step relies on assumptions. For example, a common assumption is that biases in future GCM simulations resemble those present in historical simulations.
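One common, simple form of bias correction (sometimes called the "delta" method) shifts model output so that its historical mean matches observations, then applies the same shift to future projections. The sketch below uses hypothetical values; operational bias correction often adjusts entire statistical distributions (e.g., quantile mapping), not just the mean.

```python
from statistics import mean

# Hypothetical values: the model runs systematically 2 degrees warm.
obs_hist   = [10.0, 11.0, 12.0, 11.5, 10.5]   # observed historical temperatures
model_hist = [12.0, 13.0, 14.0, 13.5, 12.5]   # model output over the same period
model_fut  = [14.0, 15.0, 16.0]               # raw future projection

bias = mean(model_hist) - mean(obs_hist)       # systematic offset (here +2.0)
corrected_fut = [v - bias for v in model_fut]  # assumes the bias persists in the future
```

The final comment flags the key assumption noted above: that biases present in the historical simulation will resemble those in future simulations.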
Counterfactuals
A counterfactual answers the question, What would the system look like if a particular factor were absent? Some types of research call these tools controls. In climate science, counterfactuals typically involve modeling the climate as it would operate without human-caused greenhouse gas emissions. Counterfactuals are widely used in fields such as medicine, economics, and social science to analyze causal relationships.
Why it matters: By comparing observed reality to this constructed "but for" world, scientists can assess the extent to which human influence altered the probability or severity of events or trends. Counterfactuals allow scientists to identify the contribution of a specific factor to an observed outcome, thereby providing evidence relevant to foreseeability, proximate cause, and proportional responsibility.
Data Assimilation
Data assimilation refers to several statistical methods that combine observational data with model estimates to produce the best estimate of a variable. This blend of data and modeling helps reduce the limitations seen in observations (e.g., sparse coverage, gaps) and model forecasts (e.g., approximations). Assimilation is typically done to initialize models for weather forecasting and to develop long-term historical climate datasets. For weather forecasting, data assimilation incorporates the most accurate and complete set of data available at the time of model initialization to produce the most accurate forecast. For long-term historical climate datasets derived from reanalysis products, the criteria for including data in the assimilation differ. See Reanalysis Products.
Why it matters: Data assimilation reduces uncertainty and fills gaps in the climate record, providing reliable datasets that can serve as credible evidence in legal contexts, particularly when establishing causation or evaluating responsibility.
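The core idea of combining an observation with a model estimate can be sketched as an inverse-variance weighted average: the more reliable source (smaller error variance) gets more weight. The numbers are hypothetical, and operational assimilation systems are vastly more complex than this one-variable illustration.

```python
# Hypothetical temperature estimates with associated error variances.
obs, obs_var     = 20.4, 0.5    # observation: more reliable (smaller variance)
model, model_var = 21.0, 1.0    # model forecast: less reliable

# Weight each source by the inverse of its error variance.
w_obs = (1 / obs_var) / (1 / obs_var + 1 / model_var)   # 2/3 weight on the observation
analysis = w_obs * obs + (1 - w_obs) * model            # blended "best estimate"
```

The blended estimate lands between the two inputs but closer to the observation, reflecting its smaller stated error: this is the same weighting logic that underlies more sophisticated assimilation schemes.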
Data Gaps
Data gaps are limitations in the availability, quality, or coverage of observations. They can be spatial (missing data for certain regions), temporal (incomplete records over time), or methodological (changes in how variables are measured). In climate and health research, data gaps may arise from sparse monitoring networks, limited historical records, or challenges collecting consistent health statistics. Gaps often reflect broader societal and global inequities.
Why it matters: Data gaps highlight where evidence is incomplete, but they do not erase the weight of the evidence that does exist. Scientists use established methods to address gaps (such as statistical modeling, proxies, or a combination of multiple datasets), so findings can be robust even with incomplete records.
Downscaling
Downscaling is the process of translating global or regional climate model outputs to finer spatial or temporal resolutions. This can be done through dynamical downscaling (using higher-resolution regional models) or statistical downscaling (applying empirical relationships between large-scale climate variables and local conditions).
Why it matters: Legal decisions often require climate information at the local or regional level rather than at the coarse resolution of global models. Downscaling provides more actionable projections for assessing localized risks, damages, and impacts. However, the process also introduces uncertainty, because each additional layer of modeling requires assumptions. For example, any biases or errors in a global model will propagate into the downscaled version. In some cases, the resolution of global models may already be sufficient for robust conclusions, especially when examining broad patterns or long-term trends.
Experimental Data
Experimental data are empirical measurements generated through controlled studies designed to isolate and test specific processes or relationships. In climate and environmental science, these data come from laboratory experiments (such as studies of radiative properties, material responses to heat, or chemical reactions) and field experiments (such as ecosystem manipulation studies or controlled emissions tests).
Why it matters: When evaluating expert evidence, lawyers may wish to ask how experimental conditions relate to real-world settings, whether results have been replicated, and how findings are integrated with observational data and models to support causal claims.
Longitudinal Observation
Longitudinal observation is the systematic collection of information about environmental, ecological, or social conditions for a given unit (e.g., a place, individual, or entity) over extended periods of time. In Western scientific contexts, this may involve repeated measurements using instruments, monitoring stations, or surveys. In Indigenous Knowledge systems, longitudinal observation is often based on generations of place-based experience, oral histories, and the continuous monitoring of indicators such as seasonal cycles, species behavior, or landscape changes. Both approaches provide a timeline of information that can reveal trends, variability, and shifts in baseline conditions.
Why it matters: Longitudinal observation is essential for establishing environmental baselines and detecting long-term change, such as climate impacts on ecosystems, species, or community health. In legal contexts, courts may be presented with evidence from longitudinal observations from both instrumental records and Indigenous Knowledge.
Mixed-Methods Research (Integrative Approaches)
Mixed-methods research combines multiple data sources and methodologies to answer complex questions. In environmental and climate science, this often includes integrating quantitative measurements (e.g., climate models, remote sensing) with qualitative or experiential knowledge (e.g., Traditional Ecological Knowledge, oral histories, participatory mapping). Mixed-methods approaches are valued because they cross-check results and capture both measurable and context-specific dimensions of change. See Indigenous Knowledge and Cultural Heritage.
Why it matters: In legal contexts, mixed-methods research outputs may be introduced as expert evidence, especially in cases where Indigenous and Western science are integrated to assess impacts or damages. Courts may need to evaluate how different data sources were combined, what standards of reliability were applied, and what relative weight was given to distinct epistemologies in reaching conclusions.
Model Calibration and Validation
Climate models should undergo rigorous development and testing, including calibration and validation, to ensure reliability and accuracy. In calibration (also called model training, model tuning, or parameter tuning), a model's parameters are adjusted so its outputs closely match a predefined subset of observed data. In validation (also called model testing), the model's results are compared against separate observations not used during calibration, providing an independent test of performance.
Why it matters: In legal and policy settings, such rigorous testing underpins the scientific validity of model-based evidence, helping to demonstrate that findings meet standards of reliability and transparency. Lawyers should challenge the reliability of claims based on a model lacking scientific calibration and validation.
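The train-and-hold-out logic behind calibration and validation can be sketched with a toy example. The data, the simple linear model, and all numbers below are synthetic and purely illustrative, not a real climate model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": a warming trend plus noise (illustrative only).
years = np.arange(1960, 2020)
observed = 0.02 * (years - 1960) + rng.normal(0, 0.1, years.size)

# Calibration (training): tune the model's parameters on the first 40 years.
train_years, train_obs = years[:40], observed[:40]
slope, intercept = np.polyfit(train_years, train_obs, 1)

# Validation (testing): compare predictions against the held-out 20 years
# that were NOT used during calibration.
test_years, test_obs = years[40:], observed[40:]
predicted = slope * test_years + intercept
rmse = np.sqrt(np.mean((predicted - test_obs) ** 2))
print(f"validation error (RMSE): {rmse:.3f} degrees")
```

The key point for evaluating expert evidence is the separation: if the same data were used both to tune and to test the model, the reported accuracy would not be an independent check.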
Observational Data
Observational data are empirical measurements collected to record conditions in the natural world, forming the basis for scientific analysis of climate and environmental change. These data come from instrumental records (such as weather stations, tide gauges, and buoys), satellite observations (which have provided consistent global coverage since the late 1970s), and proxy records (such as tree rings, ice cores, or sediments that extend knowledge of past climates). In legal contexts, observational data are used to establish the occurrence of change, quantify its magnitude, and support causal assessments. Strengths of these data include reproducibility, transparency, and the ability to identify long-term trends, while limitations may arise from gaps in coverage, measurement issues, or uncertainties in proxy interpretation. For instance, decades of tide-gauge and satellite observations showing steadily rising sea levels can establish that coastal flooding risk has increased over time, even before models are used to assess why the change occurred or how it will evolve.
Why it matters: When evaluating expert evidence, lawyers may wish to ask how the data were collected, how gaps or uncertainties were addressed, and what methods were used to validate or cross-check results.
Oral Histories
Oral histories are structured accounts of events, environmental changes, or cultural practices transmitted through storytelling, memory, and lived experience. In Western scientific contexts, oral histories are often treated as qualitative data sources and analyzed alongside archival or instrumental records. In Indigenous Knowledge systems, oral histories are a primary mode of transmitting place-based environmental knowledge across generations, often incorporating ecological observations, seasonal cycles, and adaptation to change. See Indigenous Knowledge and Cultural Heritage.
Why it matters: Oral histories can extend records of environmental baselines and describe shifts beyond the limitations of written or instrumental data. In legal contexts, they may be introduced as evidence of long-term ecological knowledge, land use, or cultural impacts, raising questions about admissibility, corroboration with scientific data, and the weight given to oral testimony compared to written records.
Participatory Mapping (Community Mapping)
Participatory mapping combines local or Indigenous Knowledge with spatial data to document land use, resource distribution, or environmental change. Maps are developed collaboratively with community members, often integrating Traditional Knowledge, oral histories, and ecological observations with tools such as GIS, aerial imagery, or surveys. Scientifically, participatory mapping, sometimes termed "citizen science" or "community science," is recognized as a mixed-methods approach that bridges experiential and technical data. See Mixed-Methods Research.
Why it matters: Participatory mapping can be used to establish land rights (particularly in jurisdictions without comprehensive historical legal records of land transfers), document how communities rely on land and natural resources, demonstrate environmental harms, or track climate impacts. In litigation, such maps may be presented as evidence of resource use, harms experienced, or cultural attachment to landscapes, raising issues around data validity, ownership, and the authority of community-generated versus state-generated maps.
Reanalysis Products
Reanalysis products are a type of long-term historical climate dataset that scientists create through data assimilation of observational data and model estimates. These products are widely used to monitor the climate and understand trends and variability. Data assimilation for long-term climate records includes only data that have been observed consistently for the full length of the reanalysis product so that any resulting variations over time can be more confidently interpreted as actual environmental variations, rather than measurement artifacts.
Why it matters: Reanalysis products provide a comprehensive record of historical climate conditions, integrating diverse observations into a consistent dataset. In law, these products can serve as evidence demonstrating long-term trends, distinguishing natural variability from human-driven change and supporting expert testimony. They are also used to evaluate the reliability of climate models.
Scale and Resolution
Scale refers to the dimensions of the object being modeled (e.g., a global-scale model or mesoscale weather, such as thunderstorms). Resolution refers to the dimensions at which data are measured or modeled. Spatial (horizontal) resolution describes the size of the geographic area represented by each unit of observation or model grid (e.g., meters to hundreds of kilometers). Vertical resolution describes the thickness of the vertical layers used to represent the atmosphere or the ocean, expressed as a number of levels. Temporal resolution refers to the frequency of observations or time steps (e.g., hourly, daily, annually, or longer). For example, a climate scientist could use a regional-scale model with a 1 x 1 kilometer horizontal resolution, 20 vertical levels, and a monthly temporal resolution. Higher or finer resolution provides more detail but may require more data and computational power, while lower or coarser resolution captures broader patterns but may miss local variability.
Why it matters: The resolution of data and the studies they inform affect how precisely science can describe climate impacts. Coarser resolutions capture broad patterns but may miss small-scale processes and local details, while finer resolutions add precision but may be limited in scope and require a lot of computational power. As an example, quantifying sea level rise using global data can provide a clear picture of broad impacts but lacks the granular resolution to provide meaningful information about sea level rise in a single location. Legal arguments should consider whether the resolution of evidence matches the geographic and temporal focus of a case.
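The computational cost of finer resolution can be illustrated with rough arithmetic. The region size, level count, and output frequency below are hypothetical:

```python
# Rough illustration (hypothetical numbers): how data volume grows as
# horizontal resolution is refined over a 1,000 km x 1,000 km region.
region_km = 1000
levels = 20        # vertical levels
timesteps = 12     # monthly output for one year

for res_km in (100, 10, 1):  # coarse -> fine horizontal resolution
    cells = (region_km // res_km) ** 2
    values = cells * levels * timesteps
    print(f"{res_km:>3} km resolution: {cells:>9,} grid cells, "
          f"{values:>12,} values per year")
```

Halving the grid spacing quadruples the number of horizontal cells, which is one reason fine-resolution evidence tends to be limited in geographic scope.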
Climate and Environmental Science Concepts
1.5° Celsius
The 1.5°C target is a policy benchmark agreed upon under the Paris Agreement and assessed by the Intergovernmental Panel on Climate Change. The target represents the level of global average warming, measured as a 30-year average relative to preindustrial (1850–1900) conditions, at which the risks of climate impacts increase substantially. Scientifically, 1.5°C is used as a reference point for comparing scenarios, evaluating risks to ecosystems and societies, and understanding the benefits of limiting warming rather than allowing higher levels of warming, such as 2°C. Crossing 1.5°C does not represent a tipping point or physical threshold, but it is associated with increasingly severe and widespread risks, and with greater chances of crossing actual Earth system tipping points.
Why it matters: The 1.5°C benchmark provides a common language for scientists, policymakers, and the public to assess climate risks. It anchors research on mitigation pathways and adaptation needs, and it helps quantify the difference in impacts between 1.5°C and higher levels of warming. Because 1.5°C is a policy target reflected in the Paris Agreement, in national legislation in many countries, and by the International Court of Justice, it also has legal significance, including as a reference point for claims seeking to require compliance by states and private actors. Scientifically, it highlights that climate impacts escalate along a continuum rather than at a single point and that every fraction of a degree matters for limiting harm.
Adaptation and Mitigation
Adaptation and mitigation are two distinct but complementary responses to climate change. Mitigation refers to efforts that reduce the sources or enhance the removals of heat-trapping emissions. Such actions include transitioning to renewable energy, improving energy efficiency, and protecting forests and wetlands so they can absorb carbon dioxide. Mitigation addresses the root cause of climate change by limiting the release and enhancing the absorption of greenhouse gases. Adaptation, in contrast, refers to adjustments in human or natural systems to cope with the impacts of a changing climate. Examples include redesigning infrastructure to withstand stronger storms, developing drought-resistant crops, and creating early warning systems. Adaptation addresses the consequences of climate change, helping communities and ecosystems reduce vulnerability and increase resilience to unavoidable impacts.
Why it matters: Understanding the difference between mitigation and adaptation is crucial for judges' and lawyers' evaluations of responsibility and remedies. Mitigation determines how much climate change will occur, while adaptation determines how societies and ecosystems can withstand its effects. This distinction helps clarify whether a case focuses on preventing future harm by limiting emissions or compensating for, and responding to, present and future harms caused by inadequate adaptation measures. Framing issues through mitigation and adaptation also helps to assess foreseeability, duty of care, and proportionality. See Resilience.
Attribution Science
Attribution science uses quantitative methods to examine the causal links among human activities, climate change, and resulting harms. Its four subdisciplines provide distinct but interrelated forms of evidence. Trend attribution evaluates long-term shifts in the climate system, such as rising global temperatures, melting glaciers, or sea level rise, and determines the extent to which those trends are attributable to human emissions versus natural variability. Event attribution, the most well-known type, assesses whether and to what extent climate change influenced the probability or intensity of a specific event, such as a heat wave or storm. Impact attribution quantifies climate change's contribution to specific impacts, like area burned in a wildfire or economic losses from heat waves. Source attribution traces climate change back to identifiable actors, for example, quantifying how much of the observed warming or ocean acidification stems from the emissions of specific corporations, sectors, or nations. End-to-end attribution integrates these four approaches to connect emissions to concrete climate impacts, showing, for example, how emissions from a set of companies contributed to sea level rise that caused flooding and economic losses in a particular jurisdiction. Rapid attribution provides a methodology for responding quickly and consistently to public and media requests immediately after extreme events such as wildfires, hurricanes, tropical cyclones, and severe convective storms; rapid studies currently use peer-reviewed event attribution methods.
Why it matters: Attribution science can provide courts with scientifically robust findings on causation and impact that can be used as evidence to inform determinations of standing, responsibility, apportionment of damages, and the suitability of mitigation or adaptation measures. For example, event attribution provides evidence on the role of climate change in acute disasters; trend attribution supports claims about systemic and foreseeable risks; source attribution connects those harms to responsible parties; impact attribution relates climate change to specific harms; and end-to-end attribution offers a comprehensive causal chain linking specific emissions to specific damages. Attribution science can provide an important part of the evidentiary foundation for establishing causation.
Carbon Capture and Storage
Carbon capture and storage (CCS) and carbon capture, utilization, and storage (CCUS) both refer to a range of technologies that separate, collect, and store carbon dioxide from industrial processes, preventing that gas from entering the atmosphere. Some of these technologies have been used for decades in methane gas separation and ethanol and fertilizer production, while other forms (like capture from power generation) are relatively new applications. The bulk of captured carbon to date has been injected into aging wells to extract additional oil and gas (enhanced oil recovery) while storing the carbon dioxide underground. Both CCS and CCUS are mentioned in Intergovernmental Panel on Climate Change reports, but with stringent capture rates that have rarely been achieved. Current research is exploring direct air capture (DAC) technology to capture carbon dioxide from the air; however, because the source is not an industrial process, DAC is classified as a carbon dioxide removal strategy.
Why it matters: Both CCS and CCUS, to date, have come with significant environmental and health risks and have been used as a pretext to extend reliance on fossil fuels. Industry often points to both CCS and CCUS as evidence of climate action from industry, but prior forms of these technologies have captured only a small subset of emissions and rarely achieve stated emissions reductions. CCS or CCUS technologies should be carefully scrutinized to ensure the stated benefit translates to meaningful emissions reduction.
Carbon Dioxide Removal
Carbon dioxide removal (CDR) refers to a broad range of approaches that remove carbon from the atmosphere and store it over the long term in biomass, underground, in the ocean, or in other products. The approaches range from "conventional" strategies (like reforestation and soil carbon sequestration) to emerging technologies (like direct air capture and enhanced rock weathering). The durability of storage is a key component and risk, as carbon can leak from geologic formations and ecosystem-based removals can experience climate impacts (like wildfires), leading to reversal and the rerelease of carbon to the atmosphere.
Why it matters: CDR is essential for bringing temperatures back to 1.5°C following overshoot, but it has not been deployed at anywhere near the scale required to do so. Large-scale deployment of CDR runs the risk of replicating existing systems of inequity and injustice that burden vulnerable frontline and fenceline communities and ecosystems.
Cascading Events
Cascading events occur when an initial hazard triggers a chain reaction causing a secondary hazard (e.g., an earthquake triggering a tsunami; heavy rains triggering a landslide; a hurricane triggering flash flooding). Cascading events can amplify impacts and place additional strain on emergency response systems as multiple disasters unfold in sequence.
Why it matters: Climate change is increasing the risk of some cascading events by amplifying the intensity and frequency of triggering hazards such as heat waves, droughts, and wildfires.
Climate Forcing
Climate forcing refers to any factor that alters the global energy balance in Earth's climate system, driving changes in temperature and other climate processes. The most common way to measure it is through radiative forcing, the difference between the energy coming in from the Sun and the energy leaving Earth (back to space), expressed in units of watts per square meter (W/m²). A positive radiative forcing (e.g., from heat-trapping emissions, such as carbon dioxide) adds energy to the climate system and tends to warm the planet. A negative radiative forcing (e.g., from volcanic eruptions or aerosols that prevent light from reaching Earth's surface by reflecting it back into space) removes energy from the climate system and tends to cool the planet. The net effect of all forcings determines whether the climate warms or cools over time.
Why it matters: Climate forcing provides the most direct way to measure how human activities alter Earth's energy balance and drive increases in average global surface temperature. Understanding it allows scientists to quantify the role of human emissions in increasing temperature and to counter arguments that exaggerate temporary or minor natural influences to downplay fossil fuel responsibility.
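The bookkeeping behind "net effect of all forcings" is a simple sum of positive and negative terms. The values below are illustrative placeholders, not authoritative forcing estimates:

```python
# Illustrative-only forcing values in W/m^2 (not real published estimates).
forcings = {
    "carbon dioxide": 2.2,   # positive: traps outgoing heat
    "methane": 0.5,          # positive
    "aerosols": -1.1,        # negative: reflects incoming sunlight
}

# The net forcing is the sum of all positive and negative contributions.
net = sum(forcings.values())
print(f"net forcing: {net:+.1f} W/m^2 -> "
      f"{'warming' if net > 0 else 'cooling'}")
```

The sign of the net sum, not the size of any single term, determines whether the system tends toward warming or cooling, which is why isolated natural cooling influences do not offset a larger positive human-caused forcing.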
Climate Models
Climate models are computer simulations that use equations based primarily on physics or other sciences to represent how Earth's climate system works and how it might change in the future. Climate models are built from component models (or "base models") that represent individual parts of Earth's climate system (e.g., the atmosphere, ocean, land, or sea ice). The components can run individually or be "coupled," with multiple components interacting with each other. There are two common types of climate models: general or global circulation models and Earth system models (ESMs). Global circulation models simulate only the physical climate system (e.g., atmosphere, ocean, land surface, and sea ice), while ESMs also include other processes, such as vegetation, human activities, and biogeochemical processes (e.g., the carbon cycle).
The Coupled Model Intercomparison Project (CMIP) provides a way to compare the output of different global circulation models and ESMs that simulate the climate and understand patterns and trends that they capture. Reduced complexity models—like the Model for the Assessment of Greenhouse Gas Induced Climate Change (MAGICC), OSCAR, and Finite Amplitude Impulse Response (FaIR)—have been used by the Intergovernmental Panel on Climate Change to project future climates based on a range of emissions scenarios and represent another way to simulate past and future changes to Earth's climate.
Why it matters: Climate models are a primary tool that scientists use to understand past changes to Earth's climate, develop scenarios of future climate changes, describe how climate change is affecting extreme events, and trace emissions from specific sources. Climate models form the basis of attribution science. They underlie a substantial proportion of litigation-relevant research and continue to be improved by the scientific community.
Compound Events
Compound events, or compound extreme events, occur when two or more hazards (e.g., a heat wave and a wildfire or a heat wave and a drought) happen together, exposing communities to new types of risks. Such events can also include non-climate-driven events, like wildfires during the COVID-19 pandemic or typhoons during armed conflict.
Why it matters: Climate change has increased the likelihood of compound events, and projections suggest that they will continue to become more frequent. These events pose unique risks to communities and natural environments, because climate change is causing hazards to co-occur that historically did not. Such events can overwhelm adaptations and hamper response and recovery efforts.
Cumulative Emissions
Cumulative emissions are the total amount of greenhouse gases released into the atmosphere over a defined period, typically from the beginning of the industrial era to the present. They are calculated by summing annual emissions across years and are expressed in units of mass (such as gigatons). Because greenhouse gases like carbon dioxide remain in the atmosphere for long periods, cumulative emissions are a key determinant of long-term atmospheric concentrations and associated warming.
Why it matters: Cumulative emissions provide a measure of historical responsibility, because temperature change is driven by the accumulation of emissions in the atmosphere. Courts and regulators may use cumulative emissions to assess specific actors' long-term contributions to climate change. Annual emissions show present-day behavior, but cumulative emissions capture the enduring legacy of past actions and their role in shaping current conditions. This distinction is particularly important for questions of equity, liability, and intergenerational justice, since climate harms are driven by the total buildup of emissions over time, not just recent outputs.
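The calculation itself is a running sum of annual emissions over the chosen period. The annual figures below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical annual emissions (GtCO2 per year) for one actor over 5 years.
annual = [1.0, 1.2, 1.5, 1.4, 1.6]

# Cumulative emissions are the running total of annual emissions.
cumulative = []
total = 0.0
for e in annual:
    total += e
    cumulative.append(total)

print(f"annual (final year): {annual[-1]} GtCO2")
print(f"cumulative (all years): {cumulative[-1]:.1f} GtCO2")
```

Note how the final-year figure (present-day behavior) and the cumulative figure (historical responsibility) answer different questions, which is the distinction the entry above draws.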
Emissions/Concentration
Emissions refer to the release of greenhouse gases into the atmosphere from human or natural sources, typically measured in units of mass per unit time (e.g., metric tons of carbon dioxide equivalent per year). Concentration refers to the amount of a gas present in the atmosphere at a specific time. This can be expressed as a percentage of the total atmosphere for the major atmospheric constituents (e.g., oxygen is ~20 percent of the atmosphere), but for trace gases (such as carbon dioxide), which constitute a smaller proportion of the atmosphere, this is commonly expressed as parts per million (ppm). The magnitude of the concentration of a gas is not representative of its importance to Earth's climate; trace gases such as carbon dioxide are extremely important. Emissions are inputs to the system. Concentrations are the cumulative levels in the atmosphere, influenced by both emissions and natural processes that remove gases from the atmosphere (such as absorption by oceans and vegetation).
Why it matters: Emissions data often form the basis of responsibility and liability analyses, since they can be directly linked to specific sources (like those compiled in the Carbon Majors database or Global Carbon Project). Concentration data, however, provide evidence of cumulative atmospheric change and the global context in which harms occur. Legal arguments may rely on emissions to attribute responsibility to actors and on concentrations to establish the broader causal chain connecting human activities to observed climate impacts.
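The distinction between emissions (a flow) and concentration (a stock shaped by emissions and natural removals) can be sketched with a toy model. All numbers are hypothetical, including the unrealistically constant sink fraction:

```python
# Toy stock-and-flow sketch. Numbers are illustrative, not real
# atmospheric values.
concentration = 400.0      # starting concentration (ppm, hypothetical)
emissions_per_year = 2.5   # annual addition (in ppm-equivalent, hypothetical)
sink_fraction = 0.5        # share of each year's addition absorbed by oceans
                           # and vegetation (hypothetical constant)

for year in range(10):
    # Emissions are the input; sinks remove part of it; the remainder
    # accumulates in the atmospheric stock.
    concentration += emissions_per_year * (1 - sink_fraction)

print(f"concentration after 10 years: {concentration:.1f} ppm")
```

Even in this toy version, the stock keeps rising as long as emissions exceed removals, which is why concentrations record the cumulative result of many actors' flows.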
Extreme Events
An extreme event refers to a physical event (such as high or low temperature, rainfall, drought, storm surge, or wind speed) that has a low probability of occurrence based on the historical observations for a given region and time of year. Because "extreme" is defined statistically, what counts as an extreme event can change over time. As larger storms become more common, some may argue that such storms should no longer be classified as statistically extreme, but lawyers should emphasize to judges and policymakers that their real-world impacts remain severe and that damages are increasing as these storms become more frequent.
Why it matters: An extreme event may be linked to climate change and particular sources of emissions through attribution science. For legal purposes, understanding the statistical rarity of an event is not the only factor that matters. Even if an event is no longer classified as statistically extreme because it has become more common, research can still show how and why the event occurred and caused harm, and that connection can be central to establishing causation and responsibility.
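One common statistical operationalization of "extreme" is exceeding a high percentile of the historical record. The sketch below uses synthetic rainfall data and an assumed 95th-percentile cutoff purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical record: 50 years of annual-maximum daily rainfall (mm),
# drawn from a skewed distribution as rainfall extremes often are.
annual_max_rain = rng.gamma(shape=2.0, scale=30.0, size=50)

# A common statistical definition of "extreme": exceeding a high
# percentile (here the 95th) of the historical record for that place.
threshold = np.percentile(annual_max_rain, 95)
extreme_years = int((annual_max_rain > threshold).sum())
print(f"95th-percentile threshold: {threshold:.1f} mm; "
      f"{extreme_years} of 50 years exceed it")
```

Because the threshold is derived from the historical record itself, a shifting climate moves the threshold over time, which is exactly why statistical rarity alone is an unstable basis for assessing harm.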
Feedback Loops
A feedback loop amplifies or dampens the effects of climate change once triggered. In a positive feedback loop, warming leads to changes that cause even more warming. In feedback loops, the output of the process feeds back into the system as a new input, creating a continuous cycle. That circular causation makes these loops powerful drivers of climate dynamics. An example is the loss of sea ice: Ice reflects more radiation (higher albedo) back to space than open water or other land cover types, so the loss of ice from warming decreases albedo (due to open water), leading to more warming. Similarly, warming leads to permafrost thaw, which releases methane and carbon dioxide, both heat-trapping gases. A negative feedback loop can remove greenhouse gases from the atmosphere (as in the case of increased plant uptake of carbon dioxide due to greater carbon dioxide concentrations or improved growing conditions in some locations). While such feedbacks can temper some changes within the climate system, they are limited by other factors that affect plant growth, like nitrogen and water limitation, and do not offset the effects of human-caused greenhouse gas emissions or stop overall warming.
Why it matters: Feedback loops show that the impacts of climate change are not linear and may not be easily contained. Actors may claim their emissions had only a limited effect, but feedback mechanisms demonstrate that emissions can unleash larger, self-reinforcing harms. Recognizing feedback loops helps to explain why climate damages escalate over time and why early knowledge of these dynamics heightens the duty of care for major emitters.
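The arithmetic of a self-reinforcing loop can be sketched as a geometric series. The numbers below are illustrative, not physical estimates of any real feedback:

```python
# Toy positive-feedback sketch: each increment of warming triggers a
# fraction of additional warming (illustrative numbers only).
initial_warming = 1.0    # warming from direct forcing (hypothetical)
feedback_factor = 0.4    # each degree of warming adds 0.4 more (hypothetical)

total = 0.0
increment = initial_warming
for _ in range(30):      # iterate; the series converges for factors < 1
    total += increment
    increment *= feedback_factor

# For a feedback factor f < 1, total warming approaches initial / (1 - f),
# i.e., the feedback amplifies but does not run away indefinitely.
print(f"amplified warming: {total:.3f} (limit: {1.0 / (1 - 0.4):.3f})")
```

The sketch shows why an emitter's "limited" direct contribution understates its effect: the feedback multiplies the initial input, here by a factor of about 1.67.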
Global Warming Potential (GWP)
Global Warming Potential (GWP) is the total radiative forcing accumulated by one ton of a greenhouse gas compared to one ton of carbon dioxide over a specified period. For methane, the GWP over 100 years (the time frame used in the 2015 Paris Agreement) is approximately 30, indicating that methane is a far more potent greenhouse gas than carbon dioxide (with a GWP of 1). GWP will change slightly over time as concentrations change and as better estimates are obtained for the lifetime of gases in the atmosphere.
Why it matters: Carbon dioxide is the most abundant greenhouse gas from human activities, but methane, nitrous oxide, and halogenated compounds confer a greater greenhouse effect on a mass emissions basis. GWP is the standard way to compare the effects of the different gases.
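Converting a mix of gases to a single carbon-dioxide-equivalent figure is a weighted sum using each gas's GWP. The 100-year GWP values below are approximate (consult current IPCC reports for authoritative values), and the emissions are hypothetical:

```python
# Approximate 100-year GWP values (see IPCC reports for current figures).
gwp_100 = {"CO2": 1, "CH4": 30, "N2O": 273}

# Hypothetical emissions in metric tons of each gas.
emissions_tons = {"CO2": 1000.0, "CH4": 10.0, "N2O": 1.0}

# CO2-equivalent = mass of each gas x its GWP, summed across gases.
co2e = sum(emissions_tons[gas] * gwp_100[gas] for gas in emissions_tons)
print(f"total: {co2e:,.0f} tons CO2e")
```

In this example, 10 tons of methane contribute 300 tons CO2e, nearly a third as much as 1,000 tons of carbon dioxide, which is why emissions inventories report gases on a CO2e basis.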
Overshoot
Overshoot refers to a scenario in which global temperatures temporarily exceed a target level of warming (such as 1.5°C or 2°C above preindustrial levels) before later being reduced. These scenarios assume that large-scale carbon dioxide removal or other negative emissions technologies will be developed, deployed, and sustained at scale to reduce atmospheric concentrations of greenhouse gases and temperatures.
Why it matters: While concentrations and temperatures may return below a given threshold, the real-world impacts of overshoot—such as the loss of ice sheets, species extinction, sea level rise, and the triggering of tipping points—may be irreversible. Overshoot is therefore not simply a temporary detour but a pathway with long-term risks and damages that cannot be undone by eventual cooling.
Scenarios/Pathways
Scenarios or pathways are projections calculated under specific conditions of future human-caused emissions and their effects on Earth's climate system. These scenarios are not predictions but "what if" frameworks that allow scientists and policymakers to test the likely outcomes of various choices and actions. Climate models and climate impact studies use emissions scenarios—estimates of potential future changes in heat-trapping emissions—to help us see how choices made about emissions today can shape tomorrow's climate. There are two main types of emissions scenarios used in climate modeling. Representative Concentration Pathways (RCPs) describe possible futures through concentrations of heat-trapping gases in the atmosphere, expressed as radiative forcing. The number associated with an RCP (e.g., RCP2.6 or RCP8.5) represents the level of radiative forcing that Earth would experience by 2100; a larger number indicates a greater amount of forcing. Shared Socioeconomic Pathways (SSPs) are societal narratives that incorporate changes in population, technology, policy, and economic and adaptation/mitigation practices, which affect emissions. The number associated with an SSP (e.g., SSP1) represents a societal storyline, ranging from a sustainability-focused future (SSP1) to fossil-fuel-driven growth (SSP5). Recent models, including CMIP6, combine the SSP storyline and the RCP radiative forcing into an integrated scenario (e.g., SSP1-2.6) to create a more complete picture of possible climate futures. See Climate Forcing; Climate Models.
Why it matters: Scenarios provide a structured way for judges and policymakers (as well as scientists) to evaluate a range of potential climate futures, understand adaptation and mitigation challenges, and assess the consequences of policy and legal choices. These scenarios focus on the consequences of mitigation scenarios or the lack thereof and assess temperature change and impact. Scenarios can help provide the evidence base for assessing foreseeability, weighing risks, and evaluating which strategies are most effective in addressing climate change.
Slow-Onset Events
Slow-onset events are gradual, cumulative changes in the climate system that unfold over decades to centuries. Sea level rise is a primary example, caused by thermal expansion of seawater as it warms, land subsidence, and the melting of glaciers and ice sheets. Unlike acute events such as hurricanes, the harms from sea level rise (such as tidal flooding, erosion, saltwater intrusion, and displacement) can accumulate incrementally and persist over long timescales. Sea level changes are effectively irreversible on the timescale of centuries, because once oceans warm and land ice melts, the processes cannot be quickly undone.
Ocean acidification is another prominent example: Absorption of carbon dioxide by the ocean reduces overall pH, threatening coral reefs, fisheries, and economies. Other slow-onset climate events include glacial retreat, desertification, salinization, permafrost thaw, biodiversity loss, and long-term shifts in rainfall or drought patterns, all of which unfold gradually but produce profound and lasting impacts. Scientific studies can predict and measure slow-onset events.
Why it matters: Unlike acute events, slow-onset harms unfold slowly, but scientific observations and models establish both their trajectory and their link to identifiable sources. The fact that major contributors were aware of the dangers yet continued to emit at scale strengthens foreseeability and liability arguments. Lawyers working with slow-onset events should include scientific evidence in their cases, including historical baselines, modeled counterfactuals, and future projections. They may also need to request creative remedies that acknowledge irreversibility, such as significant damages and adaptation measures or ongoing injunctive relief. For example, in a sea level rise case, remedies might include damages to fund long-term coastal adaptation (such as elevating infrastructure or restoring wetlands), combined with ongoing injunctive relief requiring emissions reductions or monitoring, recognizing that the underlying physical changes to the coastline cannot be reversed.
System/System Dynamics
A system includes a set of components that interact with each other, along with the processes by which they interact. At its broadest, the components of Earth's climate system are the atmosphere, hydrosphere, lithosphere, biosphere, and human systems. Interactions between the atmosphere and the biosphere might include photosynthesis, respiration, transpiration, decay or deep burial of organic matter, reflection of solar radiation by different vegetation, and atmospheric pollution from forest fires. An ecosystem would include both organic (e.g., plants, animals, portions of the soil) and inorganic (e.g., portions of the soil, rock) components that coexist in a particular region, and their interactions might include the exchange of energy and nutrients and the provision of habitat for animals by plants. The study of system dynamics focuses on how systems change over time depending on the interactions between components and external forcing. A researcher will often define the system being studied by including only those components and interactions believed to be relevant to the question.
Why it matters: Understanding the most pressing climate and environmental issues, and crafting solutions to them, requires a broad systems approach that adequately includes the most relevant components and processes of Earth's climate system and their interactions.
Thresholds
A threshold is a point or boundary at which a system shifts from one state to another, often in a nonlinear or abrupt way. Below the threshold, changes may be incremental or reversible, but once the threshold is crossed, impacts can accelerate, compound, or become irreversible. Thresholds can apply to both natural and human systems.
Why it matters: Thresholds mark the line between slow, limited impacts and rapidly escalating harms, or between risk and realized harm. Crossing a physical threshold provides a clear, science-based moment at which damages become measurable, foreseeable, and attributable. Thresholds can strengthen legal arguments by showing that harms were not speculative but foreseeable, and that a particular defendant should be found liable because its specific actions pushed a system past a measurable boundary. Thresholds also help courts distinguish background variability from actionable change, making it easier to demonstrate causation and responsibility. In litigation, identifying a threshold, such as heat indices that exceed human survival limits or erosion that undermines infrastructure, can be a compelling way to argue that the law must respond to concrete, irreversible harms. Scientific thresholds should not be confused with policy targets or regulatory standards. For example, the Paris Agreement set a policy target of limiting warming to 1.5°C, but climate-related harms are already occurring below that target, and individual physical thresholds may be crossed either below or above it.
Tipping Points
In physical science, a tipping point is a critical threshold beyond which a small additional change causes a large, self-reinforcing, and often irreversible shift. For example, once a glacier retreats past a certain point, its complete collapse may become unavoidable, even if warming slows. Such tipping points are defined scientifically by system dynamics, such as feedback loops, thresholds, and stability limits. In social science, the term describes a moment when gradual changes in behavior, norms, or institutions lead to rapid and widespread social transformation. See 1.5° Celsius; Thresholds.
Why it matters: Risks associated with tipping points build well before a tipping point is reached, and consequences can become irreversible after it is crossed. Emphasizing irreversible climate tipping points (like the glacier example in this entry) can influence judicial and legislative decisionmaking by underscoring the significant consequences of delayed action. Importantly, policy benchmarks (such as 1.5°C), while informed by science, are not physical tipping points.
Risk and Impact Concepts
Ecosystem Services
Ecosystem services are the benefits that people obtain from natural systems, including provisioning services (such as food, water, timber), regulating services (such as climate regulation, flood control, water purification), cultural services (such as recreation, heritage), and supporting services (such as habitat, water and nutrient cycling, soil formation). The ecosystem services framework values nature through a Western socioeconomic lens, and it stands in contrast to Indigenous relational understandings of human-nature relationships.
Why it matters: Ecosystem services can provide a framework for translating environmental harm into legally cognizable damages. Courts may use ecosystem service valuations in damage assessments, cost-benefit analyses, or arguments about standing. For example, demonstrating the loss of coastal wetlands as natural storm barriers can strengthen claims about foreseeability, proximate cause, and the economic value of protective services destroyed by climate change or industrial activity. Engagement with Indigenous communities is required to apply ecosystem services accounting in a way that reflects a relational understanding of human-nature relationships, including responsibilities, reciprocity, and cultural values that extend beyond economic metrics.
Hazard and Impact
Hazard refers to a potentially harmful event or physical condition and is distinct from risk; risk reflects the probability of a hazard occurring, combined with the severity of its potential impacts. In climate science, this might be an extreme heat wave, a hurricane, a wildfire, or heavy rainfall. A hazard is characterized by its physical attributes, such as magnitude, duration, frequency, and spatial extent. Impact refers to the consequences that arise when a hazard interacts with human or natural systems. Impacts include the damages, losses, or benefits (in rare cases) that result from exposure to the hazard. A hurricane (hazard) may cause the flooding of homes, economic losses, and injuries (impacts). The hazard is the event, and the impact is what happens because of it.
Why it matters: Liability and damages are tied not just to the existence of a hazard but to its impacts. In event attribution science, researchers can distinguish whether an anthropogenic factor, such as fossil fuel emissions, increased the likelihood or severity of the hazard. Courts and impact attribution science may focus on the quantifiable impacts when considering damages, compensation, or responsibility.
Loss and Damage
Loss and damage refers to the observed and projected harms caused by climate change that cannot be, or have not been, avoided through mitigation (reducing emissions) or adaptation (adjusting systems to cope with impacts). Losses can be economic (such as the destruction of property or reduced agricultural yields) or noneconomic (such as the loss of cultural heritage, biodiversity, or human lives). This framing emphasizes measurable impacts tied to climate hazards and their consequences across human and natural systems.
Why it matters: Scientifically, loss and damage provides evidence of the real-world consequences of climate hazards and can distinguish avoidable impacts from irreversible harms. Legally, it offers a framework for connecting climate science to remedies by showing not just that climate change made a hazard more likely or severe, but that it caused tangible, compensable losses. In international contexts, the concept establishes a legal and political precedent for responsibility and reparations; in US domestic cases, it can strengthen arguments for damages by linking harms directly to the best available science.
Resilience
Resilience describes the ability of an entity (e.g., organism, ecosystem, infrastructure) to maintain structure and function following a disturbance. This can occur through resistance—the ability to absorb a disturbance with little change—or redundancies in function. Socially, resilience refers to the ability of communities and their built environments to withstand or recover from disturbances, for example, hurricanes, extreme heat, or large amounts of precipitation.
Why it matters: Climate change is increasing the frequency and intensity of disturbances, in some cases pushing them beyond the historical bounds for a given system. In ecosystems, for example, severe wildfires are changing how landscapes recover after a fire. The same dynamic applies to the built environment: roads and bridges engineered to withstand one range of historical conditions are now buckling and cracking as climate change pushes conditions beyond those limits.
Risk
Risk in climate change and climate action conceptually takes account of both the probability of an outcome and the severity of that outcome's impact. When data are available, risk can be calculated as the likelihood that an event will occur multiplied by the consequences of that event (calibrated in a well-defined metric, such as currency or human life). In climate science, risk is also understood as a function of hazard, exposure, and vulnerability. Recognition of all these components makes it easier to evaluate the complementarity of adaptation and mitigation actions. For example, investments in mitigation reduce the likelihood that an impact will occur, while adaptation reduces the consequences of its occurrence.
Why it matters: Scientists and lawyers tend to think of risk differently. In law, the risk of liability, regulatory enforcement, or case outcomes is rarely expressed in formal probabilistic terms and is often assessed qualitatively. Scientific risk is a measurable product of probability and consequence rooted in empirical evidence.
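The likelihood-times-consequence formulation in this entry can be sketched numerically. All probabilities and dollar figures below are hypothetical, chosen only to show how mitigation and adaptation act on different factors of the same product:

```python
# Risk as expected loss: likelihood multiplied by consequence.
# Every figure here is an invented illustration, not an estimate.

def expected_loss(probability, consequence_usd):
    """Risk calculated as likelihood x consequence, in dollars."""
    return probability * consequence_usd

baseline = expected_loss(0.10, 5_000_000)   # 10% annual chance of a $5M flood loss
mitigated = expected_loss(0.05, 5_000_000)  # mitigation halves the likelihood
adapted = expected_loss(0.10, 2_000_000)    # adaptation reduces the consequence

print(round(baseline))   # 500000
print(round(mitigated))  # 250000
print(round(adapted))    # 200000
```

Because mitigation and adaptation act on different factors of the product, both reduce the same risk metric, which is one way to see the complementarity the entry describes.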
Tolerable Risk
Tolerable risk is the level of risk that a society, community, or institution is willing to accept under current social, economic, political, cultural, and technical conditions. It reflects judgments about what is acceptable given available resources, values, priorities, societal aversion to risk, and the perceived costs of alternative actions. As circumstances change, the objective of policy responses can be to maintain actual risk below the limits of tolerance. Expressions of this tolerance can be seen in building codes, speed limits, and acceptance of some level of incidence (e.g., the US public accepting that an average of 25,000 people in the country die each year from the flu). However, discussions of tolerable risk create room for subjective interpretation and societal dispute.
Why it matters: In the climate context, tolerable risk frames the boundaries within which climate responses operate: adaptation diminishes consequences, mitigation reduces the likelihood of residual consequences, and recovery programs strengthen ex post recovery. Tolerable risk may not serve as a stand-alone legal threshold, but it can inform litigation by contextualizing why certain harms are viewed as unacceptable or foreseeable within a given society.
Vulnerability
Vulnerability describes the degree to which a system, community, or individual is susceptible to harm from climate-related hazards. It is dynamic through time and typically assessed through three components: exposure (the extent to which people, assets, or ecosystems are in harm's way—for example, when living in a floodplain), sensitivity (how severely those exposed are affected—for example, older adults who are unable to cope with extreme heat or infrastructure easily damaged by flooding), and adaptive capacity (the ability to anticipate, cope with, and recover from impacts—for example, through protective infrastructure, resources, or governance).
Why it matters: Vulnerability analysis can help answer key questions about who is at risk and why. Its strengths lie in providing a structured, evidence-based framework that identifies not only the presence of a hazard but also the natural and societal conditions that make certain populations or assets more or less susceptible to harm. This can illuminate disparities in the underlying determinants of adaptive capacity by showing that some harms fall disproportionately on particular groups or regions. For example, two communities may face the same hazard (e.g., extreme heat), but differences in exposure (e.g., outdoor workers vs. office workers) shape the scale and type of impacts. However, vulnerability assessments often rely on complex datasets, alternative assumptions embedded in the evaluation of harm, and indicators of vulnerability that may vary across studies, locations, and time. This creates room for subjective interpretation or societal dispute.
Public Health
Bradford Hill Criteria
The Bradford Hill criteria are nine principles to help determine whether an observed association between a potential cause and an effect is likely to be causal. These criteria are strength of association, consistency across studies, specificity, temporality (cause precedes effect), biological gradient (dose-response relationship), plausibility, coherence with existing knowledge, experimental evidence, and analogy to similar causal relationships. The principles do not constitute a checklist in which all criteria must be met, but rather a framework for weighing evidence.
Why it matters: The Bradford Hill criteria provide a scientifically recognized method for distinguishing correlation from causation in population-level studies, which is particularly relevant to climate litigation when plaintiffs present evidence of climate-related health impacts or community-level harms. Courts frequently scrutinize whether the evidence presented shows a true causal link rather than a coincidental association. Understanding these criteria allows lawyers to demonstrate that causation in science, while evaluated differently than in law, is grounded in rigorous, widely accepted principles.
Co-morbidities
Co-morbidities are two or more health conditions present in the same individual at the same time. They can increase vulnerability to additional stressors, such as extreme heat or air pollution, and worsen overall health outcomes.
Why it matters: Co-morbidities help explain why some people experience more severe outcomes from climate-related hazards, but they can also make it harder to pinpoint a single cause of illness or death.
Epidemiology
Epidemiology is the scientific study of the distribution and determinants of health-related states or events in populations, and the application of this study to control health problems. It uses statistical and observational methods to identify patterns of disease, investigate cases, and evaluate interventions. Epidemiology often relies on measures such as incidence (new cases), prevalence (total cases), relative risk, and odds ratios to describe and compare health outcomes across groups. In the context of climate change, epidemiology can be used to examine how exposures to hazards such as heat, air pollution, or vector-borne diseases affect public health at population scales.
Why it matters: Epidemiology systematically connects environmental exposures to health outcomes, informing arguments about causation and foreseeability. Courts are often concerned with whether there is reliable evidence linking an exposure to specific harms. Epidemiological studies can provide that link by quantifying risk across populations and showing patterns that are unlikely to be explained by random variation.
Exposure Pathways
An exposure pathway describes how a person or population comes into contact with a hazard. It includes the source of the hazard, the environmental medium through which it moves (e.g., air, water, soil, food), the route of exposure (e.g., inhalation, ingestion, skin contact), and the exposed population.
Why it matters: Mapping exposure pathways helps to demonstrate how harms occur and who is affected. In court, showing a clear pathway from source to exposure to health outcome strengthens causation arguments and helps to distinguish between background risks and those tied to defendants' actions.
Health Disparities
Health disparities are systemic differences in health outcomes across groups of people and are often associated with factors such as socioeconomic status, race, ethnicity, gender, age, occupation, and geography. These differences may be observed in rates of disease, life expectancy, access to health care, or overall health status. They often arise from a combination of biological, environmental, social, economic, and behavioral factors. In climate and environmental research, health disparities can be measured by comparing how different populations experience and respond to exposures such as extreme heat, air pollution, or flooding.
Why it matters: Health disparities provide scientifically documented evidence that harms are not experienced equally across populations. Courts often review whether risks and damages were foreseeable, and disparities data can help show that certain groups were predictably more affected because of measurable differences in exposure, vulnerability, or access to resources. This evidence can strengthen arguments about causation and damages by demonstrating that harms were not random but followed clear, documented patterns across populations.
Morbidity and Mortality
Morbidity refers to illness or disease within a population, while mortality refers to death. Public health research uses both to measure the burden of health impacts, often expressed as rates per 100,000 population over time.
Why it matters: Morbidity and mortality data provide quantifiable evidence of harm, but in the context of climate change, the listed cause of death on a certificate often will not reflect the underlying driver. For example, a heat-related death might be recorded as cardiac arrest or kidney failure, even though extreme heat triggered the fatal condition.
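The rate convention mentioned in this entry is simple arithmetic; the counts below are invented solely to illustrate the calculation:

```python
# Converting raw case counts into the standard per-100,000 rate.
# The death count and population are hypothetical.

def rate_per_100k(cases, population):
    """Morbidity or mortality rate expressed per 100,000 population."""
    return cases / population * 100_000

heat_deaths = 42           # assumed count for illustration
county_population = 850_000

print(round(rate_per_100k(heat_deaths, county_population), 2))  # 4.94
```

Expressing counts as rates allows comparison across jurisdictions of different sizes, which is why public health evidence is usually presented this way.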
Relative Risk and Odds Ratio
Relative risk and odds ratio are statistical measures used in epidemiology to quantify the strength of an association between an exposure and an outcome; critically, however, correlation is not causation. Relative risk compares the probability of an outcome in an exposed group to the probability in an unexposed group. An odds ratio compares the odds of an outcome in one group to the odds in another. Both measures help indicate whether exposure to a factor, such as extreme heat or air pollution, is linked to a higher likelihood of illness or death. See Causality and Correlation.
Why it matters: These measures can show clearly how much more likely harm becomes when people are exposed to a hazard compared to when they are not, and they translate well to legal claims.
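Both measures can be computed from a standard two-by-two exposure-outcome table. The counts below are hypothetical, chosen only to show the arithmetic and how the two measures differ:

```python
# Relative risk and odds ratio from a 2x2 table. All counts are invented.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Probability of the outcome in the exposed group over the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

def odds_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Odds of the outcome in the exposed group over the unexposed group."""
    odds_exposed = exposed_cases / (exposed_total - exposed_cases)
    odds_unexposed = unexposed_cases / (unexposed_total - unexposed_cases)
    return odds_exposed / odds_unexposed

# Hypothetical heat wave: 30 of 1,000 outdoor workers fell ill,
# versus 10 of 1,000 office workers.
rr = relative_risk(30, 1000, 10, 1000)
or_ = odds_ratio(30, 1000, 10, 1000)

print(round(rr, 2))   # 3.0  -- exposed group three times as likely to fall ill
print(round(or_, 2))  # 3.06 -- close to the relative risk when the outcome is rare
```

For rare outcomes the two measures converge, which is why odds ratios from case-control studies are often read as approximate relative risks.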
Indigenous Knowledge and Cultural Heritage
Biocultural Diversity
Biocultural diversity describes the interconnected biological systems and cultural practices that Indigenous Peoples actively maintain through land stewardship, ecological knowledge, language, and cultural practices tied to specific places. Indigenous communities have cultivated such relationships over millennia, and Western disciplines later documented these linkages in measurable frameworks.
Why it matters: Biocultural diversity is central to claims in which ecosystem loss is connected to cultural loss—for example, with fisheries, sacred landscapes, or species tied to ceremonies. Biocultural rights frameworks provide legal mechanisms for asserting the collective rights of Indigenous Peoples and local communities to maintain their cultural identity through ecosystems protection. Biocultural diversity provides a framework based on cultural harm rather than property damage alone by linking rights to land, resources, and cultural survival.
Cultural Heritage
Cultural heritage refers to the tangible and intangible expressions of a community's identity, including traditions, languages, sacred sites, artifacts, and practices passed across generations. It encompasses both material objects (such as monuments, landscapes, or artifacts) and living traditions (such as ceremonies, oral histories, and skills).
Why it matters: Cultural heritage is affected by environmental harm, development, and climate change. In legal contexts, it provides a basis for claims about rights violations, damages, and loss of identity. Lawyers may use cultural heritage arguments to demonstrate that harm extends beyond physical damage to include impacts on cultural continuity, spiritual practices, and collective rights. Unlike Western frameworks (which treat heritage as historical artifacts requiring preservation), Indigenous cultural heritage often requires ongoing access, active practice, and intergenerational transmission.
Knowledge Sovereignty
Knowledge sovereignty is the principle that Indigenous communities retain absolute authority over how their knowledge is collected, interpreted, and used, including rights to refuse disclosure. Litigation frameworks may be incompatible with knowledge sovereignty: discovery rules, cross-examination, and public record requirements can all erode it. In scientific contexts, knowledge sovereignty parallels data governance and intellectual property frameworks, and it highlights how Indigenous communities, as equal cocreators, can inform science project design and implementation.
Why it matters: The importance of knowledge sovereignty raises admissibility questions when Traditional Ecological Knowledge is shared in litigation and shapes standards for consent, confidentiality, and respect for knowledge systems. Practitioners should establish pre-litigation agreements on knowledge boundaries, disclosure limits, and the community's right to free, prior, and informed consent over specific evidence uses.
Relationality
Relationality is a foundational principle in many Indigenous worldviews holding that all components of an ecosystem (i.e., all living and nonliving elements of a region, including humans, nonhuman life, the surrounding environment, and natural processes) are interconnected in reciprocal relationships. This includes relationships with nonhuman beings (such as animals, rivers, mountains, and other elements of the natural world) that may be understood as sentient, ancestral, or relationally significant. Relationality is at the center of Indigenous scholarship describing environmental impacts that extend beyond material loss, and it makes Western models of valuation an incomplete tool for understanding how the degradation and loss of nature affect Indigenous communities.
Why it matters: Relationality provides a different lens for assessing harm in legal contexts. For example, climate impacts on species or landscapes may also harm Indigenous livelihoods, cultural identity, spirituality, social cohesion, or governance systems. In some legal systems that recognize elements of nature as rights-bearing entities or legal persons, these nonhuman beings themselves may also be understood as directly affected by environmental harm.
Traditional Knowledge
Traditional Knowledge (TK) refers to the cumulative body of knowledge, practices, and beliefs developed by Indigenous Peoples and local communities through long-term interaction with specific lands, waters, and ecosystems. It encompasses ecological, cultural, spiritual, and social dimensions, and is transmitted across generations through oral traditions, ceremonies, and lived practices. Traditional Ecological Knowledge (TEK) relates directly to ecosystems, biodiversity, and environmental stewardship. In scientific contexts, TEK constitutes a distinct knowledge system with independent methodologies, theoretical frameworks, and validity criteria that can inform and be informed by other knowledge systems, including Western science.
Why it matters: TK can serve as evidence in environmental and climate litigation involving land rights, resource management, cultural heritage, and climate impacts. However, Western evidentiary standards create structural tensions: Oral transmission may conflict with documentation requirements; collective knowledge holders may have conflicts with the individual expert witness model; Indigenous protocols may prohibit certain information from being disclosed. Courts may need to assess how to admit TK or TEK alongside Western scientific evidence, which raises questions of epistemic justice, standards of admissibility, and respect for knowledge sovereignty. Demonstrating how TK or TEK has been applied in land stewardship or climate adaptation can help establish historical baselines, document harm, and assert rights tied to ecosystems and cultural survival.
Two-Eyed Seeing (Etuaptmumk)
Two-eyed seeing is a framework developed by Mi'kmaq elders that emphasizes using the strengths of Indigenous Knowledge systems and Western science, without subsuming one under the other. Two-eyed seeing is a holistic approach to creating multidisciplinary and transcultural understanding of the natural world.
Why it matters: Two-eyed seeing is increasingly cited in environmental co-management regimes and research collaborations, so courts may encounter it when evidence integrates multiple epistemologies. Because Traditional Knowledge may be preserved by living knowledge keepers, it can be less accessible in judicial processes than Western knowledge transmitted through books and standard curricula.
Socioeconomic and Policy Concepts
Cost-Benefit Analysis
A cost-benefit analysis (CBA) systematically compares the expected costs and benefits of a policy, project, or action. Both costs and benefits are expressed in monetary terms when possible, allowing decisionmakers to evaluate net outcomes.
Why it matters: CBA influences regulatory and policy decisions, but results depend heavily on assumptions, such as discount rates, time horizons, the definition of externalities, and the valuation of nonmarket impacts (e.g., human health or biodiversity). In legal contexts, understanding these assumptions is critical, since they can shape whether a policy is deemed cost-effective or whether climate harms appear overstated or understated.
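A minimal sketch of the discounting arithmetic behind a CBA can make this sensitivity concrete. The project, costs, and benefits below are invented; the point is only that the same cash flows can pass or fail depending on the assumed discount rate:

```python
# Net present value of a stream of annual net benefits. All figures are
# hypothetical, chosen so the sign of the result flips with the rate.

def npv(net_benefits_by_year, rate):
    """Sum each year's net benefit, discounted back to the present."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits_by_year))

# A seawall: $10M cost up front (year 0), then $800k/year in avoided
# damages for 20 years.
cash_flows = [-10_000_000] + [800_000] * 20

print(npv(cash_flows, 0.03) > 0)  # True: project passes at a 3% rate
print(npv(cash_flows, 0.08) > 0)  # False: the same project fails at 8%
```

Identical facts, opposite conclusions: this is why the entry flags discount rates and valuation assumptions as critical when CBA results appear in legal or regulatory argument.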
Discount Rate
The discount rate is a percentage used to convert future costs and benefits into present values. In economic analysis, a higher discount rate reduces the weight given to long-term impacts, while a lower rate places greater emphasis on future outcomes. In climate contexts, the choice of discount rate directly affects estimates of the economic value of avoided damages and the social cost of carbon.
Why it matters: Because climate change involves long-term harms, the discount rate can drastically alter cost-benefit calculations and policy decisions. In legal and regulatory settings, understanding which discount rate is applied is essential, since it shapes whether mitigation measures appear economically justified or whether future harms are undervalued.
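The mechanics of discounting are a single formula: a future value divided by (1 + rate) raised to the number of years. The damage figure and rates below are assumptions chosen only to show how strongly the rate matters over climate-relevant horizons:

```python
# Present value of a future harm under different discount rates.
# The $100M damage figure, rates, and horizon are illustrative assumptions.

def present_value(future_value, rate, years):
    """Discount a future amount back to today's dollars."""
    return future_value / (1 + rate) ** years

damage = 100_000_000  # $100M in climate damages occurring 50 years from now

print(round(present_value(damage, 0.02, 50) / 1e6, 1))  # 37.2 -- $37.2M at 2%
print(round(present_value(damage, 0.07, 50) / 1e6, 1))  # 3.4  -- $3.4M at 7%
```

A higher rate shrinks the same future harm by an order of magnitude in present-value terms, which is why the choice of discount rate can determine whether mitigation appears economically justified.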
Disinformation vs. Misinformation
Disinformation refers to false information that is deliberately created and disseminated with the intent to mislead or manipulate public understanding. Misinformation refers to false or inaccurate information that is shared without intent to deceive (e.g., when someone unknowingly repeats an incorrect claim).
Why it matters: The distinction between misinformation and disinformation is crucial in determining liability and intent. In litigation, showing that a defendant knowingly engaged in disinformation can support allegations of fraud, conspiracy, or deceptive practices. By contrast, the presence of misinformation may be relevant in understanding public reliance, damages, or the spread of confusion but may not carry the same evidentiary weight regarding culpability.
Emissions Accounting (Net-Zero; Scopes 1, 2, and 3)
Emissions accounting refers to the measurement and reporting of greenhouse gas emissions linked to an entity or activity. A commonly applied emissions accounting framework is the Greenhouse Gas Protocol, which distinguishes among Scope 1 (direct emissions from owned or controlled sources), Scope 2 (indirect emissions from purchased energy), and Scope 3 (all other indirect emissions across a value chain, including suppliers and product use). For companies in some industries, Scope 3 emissions can account for a majority of the company's total emissions. For example, 80–95 percent of oil and gas company emissions fall under Scope 3. Net-zero emissions goals generally require net greenhouse gases resulting from Scopes 1–3 activities to be balanced by carbon removals through natural sinks or carbon capture technologies. However, some net-zero emissions goals cover a limited range of emissions (e.g., only Scopes 1 and 2, or Scopes 1, 2, and some categories of 3). Net-zero goals require accurate emissions accounting.
Why it matters: Accurate accounting is essential for defining responsibility and tracking progress toward targets. Scope emissions and net-zero pledges have meaning only if both emissions and removals are measured with rigor and double counting is avoided. Emissions accounting may be crucial in claims of deceit or fraud. However, accounting relies on baseline data that often needs to be supplied by the emitter, which can be a significant barrier, particularly in jurisdictions without mandated disclosure frameworks.
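The scope arithmetic can be illustrated with invented figures loosely patterned on the oil and gas example in this entry. The tonnages and removal claims below are hypothetical, not data for any real company:

```python
# Scope accounting in the spirit of the GHG Protocol framing.
# All tonnage figures are invented for illustration.

emissions_mt = {       # million metric tons CO2-equivalent per year
    "scope1": 50,      # direct: owned operations (e.g., flaring, refining)
    "scope2": 5,       # indirect: purchased electricity
    "scope3": 445,     # value chain: chiefly combustion of sold products
}

total = sum(emissions_mt.values())
scope3_share = emissions_mt["scope3"] / total
removals = 12          # claimed removals via sinks or capture

print(total)                   # 500
print(round(scope3_share, 2))  # 0.89 -- most emissions sit in Scope 3
print(total - removals)        # 488 -- net emissions, far from net zero
```

A pledge covering only Scopes 1 and 2 in this sketch would address 55 of 500 million tons, which is why the scope boundary of a net-zero claim matters as much as the claim itself.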
Externalities
An externality is an indirect (or unintended) cost or benefit of an activity that affects parties who did not participate in (or choose to implement) the activity. Externalities can be negative (e.g., air pollution harming public health) or positive (e.g., urban trees providing cooling).
Why it matters: Externalities highlight the gap between private actions and public consequences. In legal contexts, a negative externality represents a real-world cost that must be borne by someone. Courts often play a central role in determining who pays that cost—the polluter whose actions created the harm, the public, taxpayers, or the local communities who are left to absorb it. Identifying climate change as a negative externality helps explain why unregulated markets fail to account for the unintended costs of social and environmental harms—and why regulatory and liability frameworks should be modified to correct this imbalance.
Greenwashing
Greenwashing is the practice of making misleading or unsubstantiated claims about the environmental benefits of a product, service, or company's overall operations. Greenwashing can involve selective disclosure, exaggeration, or omission of information to present an organization as more environmentally responsible than it is. Greenwashing is studied scientifically by analyzing discrepancies between corporate communications and measurable actions.
Why it matters: Greenwashing claims are increasingly central to litigation and regulatory action. For lawyers, the concept provides a basis for arguing deception, fraud, or unfair business practices under consumer protection laws, securities regulations, or false advertising standards.
Life Cycle Analysis (LCA)
Life cycle analysis (LCA), also known as life cycle assessment, is a systematic, science-based method for evaluating the total environmental impacts of a product, process, or service across its entire lifespan, from raw material extraction and manufacturing to use and disposal. It quantifies energy use, emissions, and resource consumption at each stage to assess overall environmental performance and identify opportunities for improvement.
Why it matters: LCA results depend on the quality and completeness of input data, the boundaries defined for the analysis, and methodological choices, such as allocation and impact weighting. These factors can lead to uncertainty or bias, making it important for legal practitioners to interpret LCA findings carefully and understand their assumptions when using them as evidence.
Mortality Cost of Carbon (MCC)
The mortality cost of carbon (MCC) estimates the number of excess/premature human deaths expected to result from the release of one metric ton of carbon dioxide into the atmosphere. It translates climate impacts into direct human health terms by linking temperature increases caused by emissions to mortality outcomes such as heat-related deaths, disease spread, and food insecurity.
Why it matters: MCC is a newer concept but can inform arguments in tort, negligence, or human rights cases by framing greenhouse gas emissions not only as damaging to the climate, but as a measurable threat to life and health. However, MCC relies on complex climate and epidemiological models that incorporate significant uncertainties in climate sensitivity, socioeconomic projections, and regional health responses. As a global average, it may obscure local disparities in vulnerability and exposure, and should be interpreted as an indicator, rather than a precise measure of mortality risk linked to emissions.
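The MCC arithmetic itself is a single multiplication. The MCC value below is hypothetical, chosen to be on the order of published central estimates, and real estimates carry wide uncertainty ranges:

```python
# Expected excess deaths from an emissions quantity, given an MCC estimate.
# Both numbers are illustrative assumptions, not measurements.

mcc = 2.0e-4                    # assumed: excess deaths per metric ton CO2
lifetime_emissions = 1_000_000  # metric tons CO2 from a hypothetical facility

expected_excess_deaths = mcc * lifetime_emissions
print(round(expected_excess_deaths))  # 200
```

The simplicity of the calculation is deceptive: the MCC input is the output of chained climate and epidemiological models, so the uncertainties described in this entry propagate directly into any headline death figure.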
Precautionary Principle
The precautionary principle is an epistemological, philosophical, or legal framework grounded in acting with caution in situations in which scientific knowledge is incomplete and/or the consequences of action can be catastrophic. In science, the principle aligns with rigorous standards of evidence and transparency—acknowledging uncertainty, testing competing hypotheses, and applying the best available data, even when knowledge is incomplete. Acting with precaution means recognizing limits while avoiding paralysis in the face of risk.
Why it matters: In law and policy, the principle guides decisionmakers to prevent harm before it occurs, shifting the burden of proof toward those proposing potentially damaging activities. The precautionary principle creates a bridge between scientific understanding and legal responsibility, ensuring that uncertainty is not a reason for inaction. In other words, both scientists and policymakers have a duty to act responsibly amid uncertainty.
Social Cost of Carbon/Social Cost of Greenhouse Gases
The social cost of carbon quantifies the net economic consequences (both positive and negative) of emitting one additional ton of carbon dioxide (or carbon dioxide equivalent, when other greenhouse gases are included) into the atmosphere. It is generally derived using an integrated assessment model and calculated as the discounted present value of the damages caused by that one ton over the many decades it persists in the atmosphere while slowly dissipating. The metric integrates climate science, economics, and demographics to quantify impacts such as health effects, agricultural losses, property damage, and ecosystem changes, typically expressed in monetary terms (dollars per ton of carbon dioxide).
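The discounted-value calculation described above can be sketched in a simplified schematic form (our notation, not a standard one; actual integrated assessment models are far more detailed):

```latex
% Schematic social cost of carbon for one extra ton emitted in year t_0:
% D_t = marginal damages in year t attributable to that ton
% r   = discount rate; H = time horizon (often a century or more)
\mathrm{SCC}(t_0) \;=\; \sum_{t = t_0}^{t_0 + H} \frac{D_t}{(1 + r)^{\,t - t_0}}
```

The sketch makes visible why the choice of discount rate is so consequential: at r = 3%, a damage occurring 100 years out is discounted by roughly a factor of 19, while at r = 7% the factor is roughly 870, so seemingly technical modeling choices can change the resulting dollar figure by orders of magnitude.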
Why it matters: It provides a standardized metric for comparing regulatory choices, but outcomes depend on modeling choices such as discount rates, socioeconomic scenarios, and the range of climate damages included. The social cost of carbon is sometimes incorrectly presented as the optimal price for carbon, but it is not the product of a benefit-cost optimization exercise. To avoid this error, any estimate should be identified with the year from which the calculation begins and associated with a scenario of how the economy and therefore future emissions will develop (future costs depend on future emissions and future atmospheric concentrations).
Authors
Delta Merner, PhD; Carly A. Phillips, PhD; William S. Beckett, MD, MPH; Charles A. Brown, PE; Robert Byron, MD; Johnnie Chamberlin, PhD; Allan Frei, PhD; Dargan M. W. Frierson, PhD; Warren G. Lavey, JD; Grace Lindsay, PhD; Murray H. Loew; Paasha Mahdavi, PhD; Carlos Martinez, PhD; Adele Simmons; Christina Tonitto, PhD; Emily L. Williams, PhD; and Gary W. Yohe, PhD
Acknowledgments
With gratitude to those who have provided reviews and other guidance in the creation of this resource including Amanda Fencl, PhD; Gabriel Filippelli, PhD; Sarah Goodspeed; Bobbie Mooney, JD; J. Pablo Ortiz-Partida, PhD; Ashley Otilia Nemeth, JD; Julie Taylor, JD; Noah Walker-Crawford, PhD; and Cynthia Williams.
The Climate Law Accelerator (CLX), New York University School of Law
Grantham Research Institute on Climate Change and the Environment, The London School of Economics and Political Science
Organizational affiliations are listed for identification purposes only. The opinions expressed herein do not necessarily reflect those of the organizations that funded the work or the individuals who reviewed it. The authors bear sole responsibility for the report's content. Reports of New York University clinics, centers, or programs do not purport to represent the institutional views of New York University School of Law, if any.