The calm before the storm…

The first big project of my postdoc begins next week, so I thought I’d squeeze in another blog post, since I expect the next 10 weeks to be crazy. For this post, I thought I’d write about some of the things I’ve been thinking about lately, in an attempt to start some conversations and also just to get my thoughts down somewhere.

TA training and course development

The new project I’m working on has involved taking an existing course and restructuring it around the pedagogy and structure of my dissertation course. There are a couple of neat challenges here:

  1. The UBC course was 24 weeks long; this one is 10 weeks.
  2. The UBC lab sections were 3 hours long with no pre-lab or homework; these ones are 2 hours long with pre-lab assignments.
  3. The UBC course was graded; this one is pass/fail.

Trying to fit goals developed over 24 weeks of 3-hour lab sections into 10 weeks of 2-hour labs has forced me to really evaluate which goals are important and necessary. Hopefully this process will make our lab structure more broadly useful to lab instructors. Depending on whether it works, I suppose.

I also spent several weeks testing each of the experiments to see whether they were workable:

  • They need to work. None of this ‘human error’ stuff.
  • Students need to be able to make precise and accurate measurements.
  • Some of the experiments need to offer opportunities to revise models once the data quality is high enough.
  • They also need to highlight particular analysis tools or ideas (e.g. fitting, comparing pairs of measurements, different sources of uncertainty…); see the sketch after this list for the sort of comparison I mean.
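
To make “comparing pairs of measurements” concrete: one common approach (and the one I have in mind here) is to divide the difference between two values by their combined uncertainty. Here’s a minimal Python sketch of that idea alongside a weighted fit; the pendulum numbers are made up for illustration, and none of this is actual course code.

```python
import numpy as np
from scipy.optimize import curve_fit

def compare(a, da, b, db):
    """Difference between two measurements divided by their combined
    uncertainty. Values near 0 suggest agreement; values well beyond
    ~3 suggest a real discrepancy (or underestimated uncertainties)."""
    return (a - b) / np.sqrt(da**2 + db**2)

# Made-up pendulum data: period T vs. length L, expecting T = (2*pi/sqrt(g)) * sqrt(L)
L = np.array([0.20, 0.40, 0.60, 0.80, 1.00])   # length (m)
T = np.array([0.91, 1.28, 1.55, 1.81, 2.01])   # period (s)
dT = np.full_like(T, 0.03)                     # uncertainty in each period (s)

# Weighted fit of the model T = slope * sqrt(L)
model = lambda length, slope: slope * np.sqrt(length)
popt, pcov = curve_fit(model, L, T, sigma=dT, absolute_sigma=True)
slope, dslope = popt[0], np.sqrt(pcov[0, 0])
print(f"slope = {slope:.3f} +/- {dslope:.3f} s/sqrt(m)")

# Compare the fitted slope with the prediction 2*pi/sqrt(9.81) ~= 2.006
print(compare(slope, dslope, 2 * np.pi / np.sqrt(9.81), 0.0))
```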

Several labs got thrown out, and several got turned into two-week labs to give students enough time to explore the ideas. I expect more changes throughout the course. As the goals of the course were refined, the lab documents started to come together. The first week of TA training, a 1.5-hour intro to the course, went swimmingly! The TAs seemed to buy in to what I was doing after a quick discussion about how experiments work and interact with theory. With only 5 TAs, it was also fairly easy to have productive and engaging discussions about the goals and content. Next week we get to dig into data and feedback.

I’m also dealing with the little things: creating a syllabus, posting it on the online course management system, and coordinating lab and pre-lab documents, for example. There are also neat challenges when the course doubles as a research site: we need to inform students about the research, figure out how to coordinate access to students’ materials around TAs marking them, etc. All in all – a great learning experience so far!

Lab goals and structures

With the AAPT’s endorsement of new recommendations for lab goals and curriculum, I’ve been re-evaluating my previously held goals for labs. Compared with the AAPT’s 1998 document of the same nature, the new document is much more focused on developing skills (technical skills, modeling skills, data analysis skills…). My current opinion is that, while lab experiments should try to resemble authentic experimentation (a loaded term, mind you; see “What is science?” below), we can’t drop novices straight into expert behaviours and activities. Labs should aim to develop those skills and behaviours slowly and deliberately, until students are ready to engage in real scientific experimentation. Even graduate students and postdocs rarely carry out a full experimental design process on their own (identifying research questions, designing the experiment from scratch, and so on); it’s almost always guided by a mentor or a team. There’s no need to ask students to do that alone while they’re still learning the basics of data and uncertainty.

How can we assess these goals?

My dissertation work involved lots and lots of coding and analysis of students’ written lab notes. I looked at whether students iterated to improve their measurements or models during the lab, the sophistication of their analyses and interpretations of methods and results, whether they could identify physical assumptions of a model that had been violated, and whether and how they made and corrected measurement errors. While that was very fruitful for a PhD, coding lab books is not a useful or generalizable assessment for instructors looking to evaluate their labs. So I’ve been trying to figure out whether there’s an FCI-like (Force Concept Inventory) way of assessing labs. There are several validated lab-related assessments, such as the Concise Data Processing Assessment (CDPA) or the Physics Measurement Questionnaire (PMQ). The CDPA, however, is hard and measures conceptual understanding of specific data-handling concepts. The PMQ is open-ended and requires the instructor or researcher to code student responses. I’m currently working to develop an assessment that sits somewhere between these two but also includes a focus on modeling through experimentation. It will be piloted next week! Stay tuned!

What is science?

Discussions with supervisors and colleagues have sent me down a rabbit hole of philosophy of science. In trying to disentangle whether the activities students do in the lab are actually “authentic science,” I made the mistake of asking what “authentic science” really is. Is it a noun or a verb? A set of facts and theories or a process for developing those facts and theories? [My current idea is that it’s both…] This, of course, has spun into much bigger questions that are only partially relevant to labs, but it’s been fun to think about anyway. My PhD advisor sent me to this fascinating 1978 documentary interview between Bryan Magee and Hilary Putnam about the philosophy of science, which I highly recommend. Please send along other favourite nature of science resources for me to check out!