The internal validity of particular evaluation designs may be threatened by biases such as history and maturation. The history effect refers to the influence of events outside the study, occurring between repeated measurements of the dependent variable, on participants' behavior and perceptions (Issel & Wells, 2017). The maturation effect refers to natural changes in sample members over time (Issel & Wells, 2017). Both history and maturation can therefore introduce unanticipated factors that influence the dependent variable.
For instance, a randomized experiment is one evaluation design that can be biased by the effects mentioned above. It should be noted that this design is the most reliable and valid with respect to statistical conclusions and treatment effects (Issel & Wells, 2017). It involves control and experimental groups, of which only the latter is exposed to the independent variable. Typically, the design requires two or more measurements, and it is this property that makes it susceptible to history and maturation: events unrelated to the experiment, as well as natural changes in participants over time, may affect the dependent variable.
Another design that can be biased in the same way is the time-series design. This evaluation design is not randomized; it is based on observing changes over time and identifying specific trends, so the research group is observed several times before conclusions are drawn. However, researchers are usually interested in one particular factor affecting the group, and because of the time gaps between observations, the group is also exposed to other external events and to internal natural changes.
Determining the sample size is of paramount importance and is based on the research objectives. According to Pye, Taylor, Clay-Williams, and Braithwaite (2016), “the importance of an accurate sample size calculation when designing quantitative research is well documented,” and “without a carefully considered calculation, results can be missed, biased or just plain incorrect” (p. 90.1). For example, testing the efficacy of a new medication or vaccine requires a large sample, which allows researchers to estimate the treatment's effects on the population. Thus, the sample size corresponds to the research objectives and must be determined by researchers in each case.
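As an illustration of such a calculation (not drawn from the cited sources), a commonly used approximation for the per-group sample size when comparing two group means, assuming equal variances, a two-sided significance level of alpha, power of 1 − beta, and a minimal detectable difference delta, is:

```latex
n \approx \frac{2\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}\,\sigma^{2}}{\delta^{2}}
```

For example, with a standard deviation of 10, a minimal detectable difference of 5, a significance level of 0.05 (so that the first critical value is 1.96), and 80% power (so that the second is 0.84), the formula yields approximately 63 participants per group, illustrating how the research objectives directly shape the required sample size.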
Issel, L. M., & Wells, R. (2017). Health program planning and evaluation (4th ed.). Burlington, MA: Jones & Bartlett Learning.
Pye, V., Taylor, N., Clay-Williams, R., & Braithwaite, J. (2016). When is enough, enough? Understanding and solving your sample size problems in health services research. BMC Research Notes, 9(1), 90.1-90.7.