This resource illustrates how a Science Online investigation can be adapted to provide opportunities for students to strengthen their capability to critique evidence in the context of science.
The Nature of Science strand
| Aims | Achievement objectives relevant to this resource |
| --- | --- |
| **Investigating in science:** Carry out science investigations using a variety of approaches: classifying and identifying, pattern seeking, exploring, investigating models, fair testing, making things or developing systems. | L5: Begin to evaluate the suitability of the investigative methods chosen. |

| Aims | Achievement objectives relevant to this resource |
| --- | --- |
| **Physical inquiry and physics concepts:** Explore and investigate physical phenomena in everyday situations. | L5: Identify and describe the patterns associated with physical phenomena found in simple everyday situations involving movement/motion of objects. |
Students critique the number of trials needed to reach a more-or-less stable average.
This practical investigation already suggests a number of ways in which students might be supported to think critically about the design and results of science investigations. The following adaptation could help strengthen their capabilities to critique evidence, by encouraging them to think about the adequacy of the data they need to collect to provide a stable measure of central tendency (i.e., an average that no longer changes noticeably with each new trial).
Adapting the resource:
Students choose several different balls to compare. Ask them to predict which one might show more variability in its bounce and to say why they think this. [For example: a ball with an irregular knobbly surface will bounce more unpredictably than a smooth ball because it will land differently each time.]
They then set up a fair test but with the following addition. After each test bounce, have them record the height of the bounce and recalculate the average height (from the second test onwards). At first they are likely to see some variability in the average but with repeated trials this should begin to settle around a central measure that changes less with each successive bounce. These running averages could be plotted on a graph to determine how many bounces would be enough to satisfy the investigator that a reasonably accurate measure has been obtained. How would the graph help them choose? [It should flatten out as the average stabilises.]
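The running-average procedure described above can be sketched in a few lines of code. This is a minimal illustration using made-up bounce heights (in cm); real values would come from students' own measurements.

```python
# Made-up bounce heights (cm) for one ball, one per trial.
heights = [62, 58, 65, 60, 59, 63, 61, 60, 62, 61]

# Recalculate the average after each trial, as the adaptation suggests.
running_averages = []
total = 0.0
for i, h in enumerate(heights, start=1):
    total += h
    running_averages.append(total / i)

# Each new bounce changes the average a little less; plotting these
# values against trial number should give a curve that flattens out.
for trial, avg in enumerate(running_averages, start=1):
    print(f"Trial {trial}: running average = {avg:.1f} cm")
```

Plotting `running_averages` against trial number gives the graph the students are asked to interpret: the point where the line flattens suggests enough trials have been taken.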
Do the different balls need to be trialled the same number of times or does one reach this stable measure with fewer bounces than the other?
What are the implications for designing other investigations where multiple measures need to be taken?
It is not enough just to know how to do a “fair test”: students also need to know why repeat measures are an important part of experimental design, and to gain a feel for the inevitability of variation in measurements taken with the human senses, no matter how carefully these are made.
Developing an appreciation of how evidence in science is generated supports students to become scientifically literate, i.e., to participate as critical, informed, and responsible citizens in a society in which science plays a significant role. (This is the purpose of science in NZC.)
Can students calculate and plot running averages?
Can they identify and justify a point at which the evidence suggests that sufficient measures have been taken?
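One way students might justify a stopping point is with an explicit rule, for example: stop once the running average has changed by less than some tolerance for several trials in a row. The sketch below implements this rule; the tolerance, run length, and data are illustrative choices, not an official criterion.

```python
def trials_until_stable(heights, tolerance=1.0, run_length=3):
    """Return the trial number at which the running average has changed
    by less than `tolerance` cm for `run_length` consecutive trials,
    or None if it never settles within the data given.
    (A simple illustrative rule, not a prescribed criterion.)"""
    total = 0.0
    prev_avg = None
    stable_count = 0
    for i, h in enumerate(heights, start=1):
        total += h
        avg = total / i
        if prev_avg is not None and abs(avg - prev_avg) < tolerance:
            stable_count += 1
            if stable_count >= run_length:
                return i
        else:
            stable_count = 0
        prev_avg = avg
    return None

# Made-up bounce heights (cm) for a ball whose average settles quickly.
print(trials_until_stable([62, 58, 65, 60, 59, 63, 61, 60, 62, 61]))  # → 6
```

Comparing the result for two different balls directly addresses the question of whether both need the same number of trials.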
For suggestions about adapting tasks in ways that allow students to show progress in critiquing evidence, see Progressions.
The capability 3 resource, NCEA Level 1 Investigations (which is based on Level 1 Chemistry assessment resources), also looks critically at investigative design.
Fair testing, statistics