The promises of learning analytics: Scaling up direct evidence

Posted in Open Ed

After attending the #CHEP2016 conference this week in Blacksburg, VA, and taking in the materials for week 4 of a 5-week course on Assessment of College Student Learning, I’ve been struck by how central assessment has become in higher education. It has become so central that it seems to occlude the assumed, or at least purported, purpose of higher education: to foster, guide, and enhance learning. Hutchings, Kinzie, and Kuh (2015) refer to this as a compliance model of assessment. Students and institutions alike are negatively affected by this approach: students fixate on their grade point averages (GPAs), while institutions become preoccupied with meeting standards or climbing arbitrary rankings. One way to explore this idea is to consider a fundamental concept of student learning assessment: direct and indirect evidence.

Direct evidence.

Ideally, direct evidence of student learning is “tangible, visible, self-explanatory, and compelling evidence of exactly what students have and have not learned” (Suskie, 2009, p. 20). The examples Suskie presents are written work, tests, presentations, observations, and “think-alouds” (all accompanied by thorough rubrics or assessment blueprints, of course). For Suskie, a student’s writing sample accompanied by an explicit rubric with rigorous standards is an exemplar of direct evidence of student learning. I think this framing can be questioned, since it casts student learning and its assessment in a particular way, but that’s for another post. In my experience, developing assessments and deploying them in a way that is focused on benefiting the student has become an afterthought in most college teaching. But the potential is there. Direct assessments of student learning are based on some kind of individualized response to student performance. This is their strength: they can be grounded in actual student learning. That directness pushes back against the automation that educational technology firms so consistently strive for. The data collection requires more work, is therefore more “expensive” to produce (ultimately more effort on the instructor’s part), and until recently has been difficult to summarize or consider at scale for communication and reporting purposes. For the sake of this argument, I’ll set aside the trickiness of establishing direct evidence of student learning and focus on its advantages over its counterpart, indirect evidence, which I introduce below.
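
To make the “difficult to summarize at scale” point concrete, here is a minimal sketch of what rubric-scored direct evidence might look like as structured data and how it could be summarized across many artifacts. The criteria, score scale, and record format are hypothetical illustrations, not anything drawn from Suskie or from a particular tool.

```python
from collections import defaultdict
from statistics import mean

# Each record is one rubric judgment on one student artifact:
# (student_id, rubric_criterion, score on a hypothetical 0-4 scale).
rubric_scores = [
    ("s001", "thesis", 3), ("s001", "evidence", 2), ("s001", "organization", 4),
    ("s002", "thesis", 4), ("s002", "evidence", 3), ("s002", "organization", 3),
    ("s003", "thesis", 2), ("s003", "evidence", 2), ("s003", "organization", 3),
]

# Summarize by criterion rather than collapsing everything into one grade,
# so the aggregate still says something about what students have and have not learned.
by_criterion = defaultdict(list)
for _, criterion, score in rubric_scores:
    by_criterion[criterion].append(score)

for criterion, scores in sorted(by_criterion.items()):
    print(f"{criterion}: mean {mean(scores):.2f} across {len(scores)} artifacts")
```

The point is not the code but the shape of the data: each row is tied to an identifiable piece of student work, which is what keeps the evidence direct even once it is aggregated.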


Indirect evidence.

A product of the institutionalization of education and the concomitant oversight intended to improve efficacy, indirect evidence “consists of proxy signs that students are probably learning” (Suskie, 2009, p. 20). The obvious examples are grades: measures applied to student assignments, courses, and programs, and the resulting GPAs. In practice, the phenomenon of “credentialing” is rooted in the assumptions surrounding indirect evidence of learning. Surely someone with an AA, BA, BS, MA, MSc, EdD, PhD, or MD will demonstrate a certain level of learning, right? What that entails exactly is absurdly ambiguous. For-profit higher education and elite colleges and universities alike take advantage of this ambiguity to entice students, in different ways and toward different ends.

Accreditors, oversight organizations, and internal research departments are interested in these and other forms of indirect evidence of student learning, such as completion rates, drop-out rates, years to completion, and so on. The attraction is the feasibility of data collection and the ability to aggregate and compare data across disciplines, programs, divisions, institutions, regions, and countries. Yet the signature failure of this approach is its reliance on proxies and its utter detachment from actual student learning.

Learning analytics.

I have a lot of hope for learning analytics in bridging the gap between the efficacy and exactitude of direct evidence and the pragmatism of indirect evidence of student learning.

However, my hope is tempered by the caution that learning analytics will be mobilized to serve accountability agendas first and learning a distant second. Ideally, accountability in higher education is centered on improving student learning. Yet just as indirect evidence relies on proxies for learning, assessments too often become puzzles for students to solve, ultimately demonstrating little more than how to take tests, mimic jargon, or navigate an educational institution. Further, the data collected are too often applied in highly limited ways. Assessments are often delivered without giving students an opportunity to apply the feedback to their learning (as when final projects or exams make up the bulk of a grade that magically appears on the record after the semester is over). Institutional research departments compile data into reports for a select few at an institution, and those reports then sit on a shelf collecting dust.

The potential for learning analytics to scale direct evidence of student learning is latent, and it can be actualized through the open sharing of data. Some three months after hearing from Norman Bier at the Open Education conference, I’m beginning to wrap my head around this. Whereas that session got hung up on issues of privacy, I think what might come from learning analytics is a disruption of the dominance of indirect evidence of student learning in education. OER can offer more robust mechanisms for collecting direct evidence of student learning, each of which can generate data that can be collected en masse and mobilized into far more accurate learning analytics.

I think the central issue is questioning the relationship between student learning and what we (can) do to establish direct evidence of that learning. If OER or LMSs can collect data that is grounded in actual student learning, then learning analytics has a potentially fruitful place in education. Alternatively, the opportunity to further entrench big data in support of a compliance culture in higher ed remains a very real specter.
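
As a thought experiment in what that might look like in practice, here is a minimal, hypothetical sketch of openly shared, objective-tagged OER activity data being rolled up into per-objective summaries instead of proxy metrics like GPA or completion rate. The institutions, learning objectives, and record format are invented for illustration, not taken from any actual platform.

```python
from collections import defaultdict

# Hypothetical records from openly shared OER activity logs:
# (institution, learning_objective, correct_on_first_attempt)
attempts = [
    ("inst_a", "interpret_confidence_intervals", True),
    ("inst_a", "interpret_confidence_intervals", False),
    ("inst_b", "interpret_confidence_intervals", True),
    ("inst_b", "design_controlled_experiment", False),
    ("inst_a", "design_controlled_experiment", True),
]

# Aggregate by learning objective across institutions, keeping the link
# to what was actually being learned rather than reducing it to a proxy.
totals = defaultdict(lambda: [0, 0])  # objective -> [correct, attempted]
for _, objective, correct in attempts:
    totals[objective][1] += 1
    if correct:
        totals[objective][0] += 1

for objective, (correct, attempted) in sorted(totals.items()):
    print(f"{objective}: {correct}/{attempted} correct on first attempt")
```

The design choice worth noting is that the aggregation preserves the learning objective, not the credential, as the unit of analysis.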


Hutchings, P., Kinzie, J., & Kuh, G. D. (2015). Evidence of student learning: What counts and what matters for improvement. In G. D. Kuh, S. O. Ikenberry, N. A. Jankowski, T. R. Cain, P. T. Ewell, P. Hutchings, & J. Kinzie (Eds.), Using evidence of student learning to improve higher education (pp. 27–50). San Francisco, CA: Jossey-Bass.

Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco, CA: Jossey-Bass.
