Brown, A. R. (2007). Software development as music education research. International Journal of Education & the Arts, 8(6).
My thesis is supposed to include a quantitative research component, and this has been causing me some anxiety. My app is educational, creative software. What exactly could I measure? I had a vague notion of testing people’s rhythmic ability before and after using the app. But how do you quantify rhythmic ability? Even if I had a meaningful numerical representation, how could I possibly test a large enough sample over a long enough period to get a statistically significant result? The development of my app is going fine, but I was really stressing about the experimental component.
Then my advisor introduced me to Andrew Brown’s notion of software development as research, or SoDaR. As Brown puts it, “SoDaR involves computers, but is about people.” Humans are complex, our interactions with computers are complex, the way we learn is complex. The only method of inquiry that can encompass all that complexity is qualitative, anthropological inquiry, involving a substantial amount of introspection on the part of the researcher.
Software development is a wonderful way to test educational concepts. The software itself is a concrete manifestation of the designer’s theories and assumptions, stated and unstated. Brown describes software as “a mirror on researcher understanding.” Seeing the software in action puts those theories and assumptions to the test, and gives the designer a lot of opportunity for ongoing reflection. Rather than waiting for your study to be over before you draw conclusions, you gather conclusions constantly and apply them to each iteration of your design.
Brown draws parallels between SoDaR and ethnography, case study, and design-based research. All of these research methods deal with the messiness of people in real-world settings. Controlled laboratory environments are great for examining specific components of our cognition and social functioning, but we can only get the full picture by looking at the world. Unfortunately, that means going without control groups, clean separation of cause and effect, and other seemingly basic requirements for empirical objectivity. The results of SoDaR research are difficult to generalize, since they are necessarily so dependent on context. Nevertheless, for studying people in natural settings, qualitative observations are the best data we have.
So how do you minimize bias in your qualitative analyses? Brown advises you to collect as much information in as many different forms as you can: interviews, observations, artifact analysis, database analysis, and surveys.
The software development process itself is an integral part of the research. Software development forces you to externalize your ideas, to do continual small-scale experiments, and to reflect on those experiences. My collaborator Chris and I represent the two richest sources of observational data. As we struggle with each iteration of each function of the drum machine, we’re performing extensive user testing on ourselves. We’re the world’s leading experts on this particular piece of software, so if a feature or metaphor doesn’t make sense to us, it certainly won’t make sense to anyone else.
The SoDaR approach has three stages. Each one includes description, data collection, and reflection.
1) Identify the learning opportunity.
The learning opportunity is the gap you can fill with software. You then devise an approach to seize that opportunity.
For me, the learning opportunity is the rhythm tutorial that I wish I’d had when I was trying to figure out drum programming, and drumming generally. I looked at existing teaching materials and software, evaluated their strengths and weaknesses, and did a lot of background reading on music pedagogy and interface design, on music visualization and notation, and on the Afrocentric roots of American dance music.
2) Actually design and produce your software.
The software design process began with a lot of concept images and flowcharts. Chris and I built a prototype using Max and JavaScript, and I ran some small-scale evaluations. We then rebuilt the app for iOS, keeping the successful features and altering the unsuccessful ones. This stage is ongoing, and will likely come down to the wire.
3) Test, iterate, repeat.
Try your software in a real-world educational setting, use your observations to refine it, test it again, and so on.
Most of our iteration happens by trying things out on ourselves and talking them through with my classmates, friends, and relatives. As of this writing, the prototype isn’t robust enough to withstand classroom testing. As soon as I have a minimum viable product, I’ll put it in front of as many novice musicians as I can, in as many different settings as I can. I’ll write up my observations for thesis purposes; most of the substantial software iteration will probably happen after I graduate.
All software development is iterative. You try something, you compile, you test, you debug, you compile, you test again. The SoDaR approach is different from traditional development practice in that you’re exposing your software to outside eyes at every stage of the process, rather than waiting until you have the finished (or nearly-finished) alpha and beta versions. Brown says:
Putting early prototypes in the field provides feedback about the struggles and novel uses of the software that can reveal both user understandings and patterns of thinking, and how well the learning theories are embodied in the software activity.
SoDaR draws on Activity Theory, the idea that our intelligence is distributed, not individual. Knowledge and skills don’t live in our heads; they are enacted through the interactions between people, and between people and technology, all in social contexts. You can really only understand how our minds work through participating in social interactions.
Brown recommends collecting supporting data that shows user responses, along with the changes made to your activity plan and software, at each cycle. Data can include participant interviews, video of students engaged with the activity, bug reports, feature requests, software version histories, and the results of any tests or assessments administered before and after. You should use the data to answer these questions:
- Are the activity and software mutually reinforcing?
- What are the differences between the expected and actual behavior of the students/users?
- How can the software and its use be improved?
- Are the students achieving the desired learning outcomes?
- Does the software open up new and unintended learning possibilities, or does it restrict them?
Brown describes the SoDaR process as an improvisational one, and that appeals to me very much as a jazz player.
The researcher benefits by ‘playing’ with the possibilities presented by the malleable nature of software, in the same way that the student learns by playing within the experiences created by the software/activity combination.
In his own SoDaR research, Brown found that most of the major insights came in the first few trials and iterations. I’m finding the same thing to be true. After watching people try my prototypes, it was obvious within the first minute what made sense and what didn’t. Unpromising ideas get nipped in the bud quickly.
SoDaR presents some challenges when it comes time to present your data. You might have all kinds of rich media like video and software prototypes, but the end product is going to be PDFs and paper. Software demos and live presentations give a much better representation of this kind of research. The online version of my thesis will probably make for a far more enlightening read than the printed one.