More from productive discussions at Aarhus University


Thinking about the big picture

I cannot overstate how valuable it is to talk to people in other disciplines, in other schools, with other perspectives. I was challenged yesterday to think about the big picture of my work and philosophy on labs, and to really step back and see the forest rather than the trees. What I have come to is this:

The way we teach in labs is not new. There is a lot of fundamental research telling us that the components we use are all helpful and productive for learning. The key features are as follows:

  • Students make comparisons
  • They reflect on them and make sense of them
  • Then they decide what to do about them (how to act), ultimately leading to new decisions

Figure from Holmes et al. (2015) in PNAS, with “Decisions” added to the center to reinforce the student autonomy involved.

The importance of making comparisons is well documented (e.g. Gick & Holyoak, 1980; Bransford et al., 1989; Bransford & Schwartz, 1999), as is the importance of revising and iterating with feedback (e.g. Ericsson et al., 1993; Schwartz et al., 1999; I should have more references than this… hmm…). Clearly, combining the two is going to be helpful.

And while we focused very specifically on the sorts of comparisons that introductory physics students make with data and models, these comparisons could take many forms. For example, students could come up with multiple experimental designs, compare them, determine which is best, and then try one of them. Of course, they then have to evaluate the outcome of acting on it, to determine whether it worked as well as expected (comparing to expectations or predictions). This may then lead to trying the other design or modifying it in a small way. The initial comparison, however, can happen before students take any data in a lab, which can be beneficial when iterating and repeating measurements is costly (e.g. non-reusable materials in chemistry or biology labs). Another option would be for groups to compare designs with each other, each try their own design, and then compare how well the designs worked to determine the optimal one.

Comparing to predictions is also useful, but the problem with most courses is that students stop at the comparison. They reflect, but often draw funny conclusions, such as “they disagree because of human error.” [What does that mean?!] Encouraging students to then test that interpretation (or act on it in some way) is going to make those comparisons much more meaningful.

I’m pretty sure I wrote about a lot of this in our papers, but I think I needed to be pushed on it to really come to believe or understand it.


2 thoughts on “More from productive discussions at Aarhus University”

  1. Not to draw attention away from the big picture of the labs that you are discussing, but I really like the idea of having groups use different designs for shorter measurements, compare their results, and let that inform how to get high-quality data. An example that springs to mind is having them compare the magnitude of their uncertainties in the pendulum experiment by doing 2 trials of 8 swings, 4 trials of 4 swings, and 8 trials of 2 swings (a rough numerical sketch of this comparison follows below).

    • Absolutely – I think it depends on what your goal is, right? And for non-physics labs, that sort of uncertainty focus is just not the goal. Repeating experiments in biology and chemistry can waste resources and take a whole lot of time.
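To make the pendulum comparison from the first comment concrete: under a simple (assumed) model in which each timed trial carries one fixed start/stop reaction-time error, the uncertainty in the mean period scales roughly as δt / (n_swings · √n_trials), so the three schemes give noticeably different spreads even though each uses 16 swings in total. Here is a minimal Monte Carlo sketch of that comparison; the period, timing error, and function name are illustrative assumptions, not values from the post.

```python
import numpy as np

# Illustrative numbers only (not from the post):
TRUE_PERIOD = 2.0    # s, "true" pendulum period
TIMING_SIGMA = 0.15  # s, assumed start/stop reaction-time error per trial
N_SIM = 10_000       # number of simulated experiments

rng = np.random.default_rng(0)

def mean_period_uncertainty(n_trials, n_swings):
    """Spread of the mean period across simulated experiments,
    where each trial times n_swings swings with one timing error."""
    timing_errors = rng.normal(0.0, TIMING_SIGMA, size=(N_SIM, n_trials))
    # Each trial's period estimate: (total timed duration) / (number of swings)
    periods = (n_swings * TRUE_PERIOD + timing_errors) / n_swings
    mean_periods = periods.mean(axis=1)   # average over the trials in each experiment
    return mean_periods.std()             # spread across simulated experiments

for n_trials, n_swings in [(2, 8), (4, 4), (8, 2)]:
    sigma = mean_period_uncertainty(n_trials, n_swings)
    print(f"{n_trials} trials of {n_swings} swings -> uncertainty ≈ {sigma:.4f} s")
```

Under this assumed model, more swings per trial wins (the timing error is divided by the number of swings before averaging), which is exactly the kind of conclusion students could reach by comparing results across groups.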
