A Model Underlying Learning Analytics?
Emerging as a major trend in educational contexts, use of learner data to improve education has been the object of a significant amount of published work and internal discussion. For instance, there are conferences and communities of practice dedicated to Learning Analytics while standards bodies are trying to ensure interoperability in Learning Analytics. Some documents related to this topic are being shared privately, as standards are being developed.
In the spirit of the Requests for Comments (RFCs) which created the Internet, constructive feedback is offered on private texts from standards bodies about Learning Analytics. Since those texts have yet to be made available for public consumption, references to their content are withheld, making the current text into a more general commentary on issues relating to the use of learner data. The resulting tone may appear more obscure than needed. Hopefully, these comments can still provide food for thought to anyone interested in the use of learner data.
Several texts on Learning Analytics read as though they came from a peculiar perspective on learning, an approach which may unduly limit their applicability. This core model of learning can be described as linear, teleological, and insular. The documents in question could have a positive impact on the ways diverse stakeholders conceive of data associated with learning activities. Yet, the underlying thrust of those texts goes against some important trends in technopedagogy, including the very connectivist approach which has given so much power to Learning Analytics.
Limits of the Model
When talking about “sets”, connectivists allow for learners’ goals to remain diverse through the learning process. Granted, a course may have a specific list of “learning objectives”. But an environment conducive for learning may afford a large array of effects, which may or may not have been part of the course design. In fact, some of the most powerful learning experiences people tend to describe come from these unexpected effects.
Though meant to empower “learning activities” (described in a very specific way), those texts run the risk of deepening problems associated with a “black box” view of learning. Transforming the complexity of the learning process into a flowchart of inputs and outputs puts the focus on those aspects of education which can be controlled. This linear model misses many of the unexpected outcomes from learning as a wide phenomenon.
The learner’s brain represents one version of the “black box” in education, with echoes of what Freire called the “banking” concept of education. In such a linear process, learners may process information given to them, but they have limited agency. In other words, tracking learners in linear patterns may transform them into “products” instead of allowing for the emergence of new social dynamics among learners.
Texts on the use of learner data mention “personalisation” on occasion, especially in the context of learners with diverse needs. Learning Analytics could indeed benefit people who require specific attention. As the movement towards Universal Design for Learning reveals, most people do have “special needs” in one context or another. Taking learner diversity into account can thus open up interesting possibilities for more appropriate approaches to learning.
However, the type of personalisation described in these contexts has more to do with funnelling diversity than with ensuring that it can flourish. Authors of such documents may readily acknowledge that learning paths vary while insisting on the convergence between these paths. Learners who wander too far off the well-beaten path need to get back in line (in order to pass standardised tests, for instance).
Coming from cybernetics, feedback systems have had a large impact on Twentieth Century epistemologies. Systems meant for Learning Analytics, especially predictive ones, commonly refer to “feedback loops”. As per the linear approach to learning, outlined above, preset goals and objectives for the analytics process may restrict its potential.
Missile-guiding and other mechanisms described by Wiener or other cyberneticists can indeed be described as “purposeful”:
… interpreted as directed to the attainment of a goal (Rosenblueth, Wiener, and Bigelow 1943: 18).
In fact, taken individually, most learners’ actions can satisfy this condition. However, the aggregate effects of interpersonal actions are much closer to “purposelessness” in the same theory. The discovery potential implied by constructive learning is absent from such a goal-oriented approach to education. As an extension of the previous problem, the teleological nature of the model underlying much work in Learning Analytics threatens not only to transform the learner into a product but to transform the learning environment into a factory.
Teleology also influences the analytical process itself. The effect relates to the proverbial drunk man looking for his keys under a light despite having lost them somewhere else: searching where we can see, ignoring the darkness around. By seeking specific patterns related to academic success, Learning Analytics would run the risk of only finding what is easy to find (or, indeed, what has been created by the analytical process itself). Back in 1979, Randall Collins explained such a circular system in The Credential Society: An Historical Sociology of Education and Stratification. By selecting students based on predetermined criteria it set for itself, an educational system maintains the social stratification it uses to define success. Observing the current problems affecting educational systems worldwide, Collins’s ideas appear as relevant as ever.
This circular logic affects Learning Analytics most strongly when predictive systems are based on grades. By its very definition, a grading scale addresses a single dimension of a learning phenomenon. Even if it allows for diverse interpretations, the “degree of achievement” is unilinear. Of course, multiple scales can be used and complementary data (such as class averages) can give context to a grade. The fact remains that a grade, taken in isolation, refers to a point (or range, in the case of letter grades) along a single line. The compounded problem with the use of grades in Learning Analytics comes from the teleology of a system limited to testing itself. In a peculiar form of confirmation bias, one can easily find data to support the effect of an intervention meant to place learning at a given point on a line set by the very system which proposes those interventions.
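A minimal sketch can make this circularity concrete. In the hypothetical scenario below (all names, numbers, and thresholds are illustrative, not drawn from any real system or the standards texts discussed here), “success” is defined as a final grade above a cutoff, while the “predictive model” simply flags the same grade scale earlier in the term. The predictor appears highly accurate, yet it has only confirmed the system’s own criterion; it says nothing about learning beyond the scale.

```python
import random

random.seed(42)

# Hypothetical cohort: each student has a midterm grade on a 0-100 scale.
students = [{"midterm": random.gauss(70, 12)} for _ in range(1000)]

# Final grades track midterm grades closely (plus some noise), and
# "success" is defined as a final grade at or above 60 -- the system
# defines success on the very scale it measures.
for s in students:
    s["final"] = 0.8 * s["midterm"] + 0.2 * random.gauss(70, 12)
    s["success"] = s["final"] >= 60

# A trivial "predictive model": flag students whose midterm is below 60.
predictions = [s["midterm"] >= 60 for s in students]

# The model scores well, because the target was built from its own input.
accuracy = sum(p == s["success"] for p, s in zip(predictions, students)) / len(students)
```

Under these assumptions the accuracy is high by construction, which is precisely the point: a system limited to testing itself will readily find evidence that its interventions “work” along the one line it has chosen to measure.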
Specifically, problems having to do with labeling and the Hawthorne Effect need to be addressed. By internalising labels given to them, learners modify their behaviours to comply with social norms. Not only is it easy to spur deviant behaviour by convincing someone that they’re deviant, but the notion of an “A-Student” has a lot to do with encouraging behaviours which are easy to test. Students whose potential can only be fully realised outside of the most rigid educational contexts are like “ugly ducklings” prevented from transforming into beautiful swans.
Because learners are human beings, their behaviours are greatly affected by the observation process. As may be the case with factory workers whose productivity is being assessed while different parameters are changed, students are likely to modify their behaviours when they are part of an experiment related to their performance.
To paraphrase a well-known expression, “no learning is an island entire of itself”. Much learning happens in formal situations designed for education, training, or teaching. But learning can and does happen anywhere, with or without directed intervention.
Some texts on learner data refer specifically to schools and other educational institutions. These may represent the context in which demand for Learning Analytics is most directly felt, which might be why the overall discussion makes it sound as though learning only happens in well-circumscribed situations. Though it might be easy to imagine Learning Analytics applied to “on-the-job training”, links between schools and workplaces would likely exacerbate most of the issues identified in the field. Moving a Learning Analytics system from a university to an office context could open Pandora’s Box with issues ranging from worries over privacy to the technical hurdles behind data transfer between learning management systems and employee databases. Designed for both professional and pedagogical uses, badges and portfolios could help make this specific connection. Similarly, Learning Analytics systems might, one day, integrate extracurricular activities through a type of co-curricular record.
Trickier is the link between scholastic and broader social contexts. As could be expected, social network analysis can open up intriguing possibilities for Learning Analytics. What may be less obvious is the large impact social groups have on learning. Often defined as amicable interactions with others, socialisation is a thorough process through which we all learn such things as behaviours, norms, beliefs, and values. Since socialisation and formal education can contribute to the same learning processes, it often proves difficult to assess the relative extent to which school and society made learning possible. Chances are, in fact, that complex interactions are at stake, as we would find between drugs and their effects on health. In a truly “black box” model for learners’ brains, we would need to know all the inputs involved, including all sorts of social interactions for which data can hardly be collected.
Despite the use of “learner” instead of “student” and while calling attention to “Learning, Education, and Training”, the Learning Analytics literature focuses on the roles of people registered in a formal process of training, schooling, or studying. For instance, the “paths” mentioned in use cases come to a clear endpoint. Though it affords some granularity, the key concept in the core model behind Learning Analytics is that of a “course”, “programme”, or “module”. Related to teleology, the “learnings” involved have a well-defined finality. The concept of “lifelong learning” would appear quite foreign in such a context, even though much consideration is given to it in other contexts.
Of course, all of these ways to open the sphere of learning to other aspects would detract from the standardisation process, were they to be followed to their fullest extent. However, they all point to the specificity of the learning model underlying the standards building process. Making the scope of the document explicit could help justify its insularity.
Relatively simple solutions exist to problems described here. Much can be done by framing Learning Analytics in a slightly different manner, addressing concerns and preempting further problems. Some additional work could also be done to expand the existing documentation but the main issues might be resolved more easily with some minor modifications.
Two key concepts could help in a reframing exercise: agency and appropriation. As a capacity to act, agency is quite a powerful concept. The diversity of stakeholders requires authors to take agency into account. As a process of making something one’s own, appropriation can also resonate with the process of recontextualisation, as would be the case with artistic endeavours involving secondhand material.
Learner Agency and Appropriation
Learners form a primary scope for both appropriation and agency in the context of Learning Analytics. As a partial resolution of the key issues mentioned above, authors in this domain could frame relevant tools and technology as being not only focused on learners but driven by learners. In other words, a key suggestion is to put Learning Analytics “in the hands of learners themselves”.
Clearly, there are contexts in which it can be deemed appropriate to conceal information from learners, including information concerning other learners. The approach advocated here has less to do with data points than with the goals and ends of Learning Analytics as a whole. Instead of coming “from above” (government bodies, administrators, teachers, or even parents and future employers), the decision to enter the world of data-driven processes can come “from the base”. As Kevin Carey (2015) likes to remind us, the first university was founded in Bologna by students who pooled their resources “in order to learn” (The End of College, Chapter 2: A Sham, a Bauble, a Dodge):
in the late eleventh century a group of students in Bologna got together and decided to pool their resources—financial, intellectual, and spiritual—in order to learn.
Explicitly or implicitly, most scenarios for Learning Analytics come with a variety of privacy issues. Informed consent can serve as a litmus test for privacy concerns, as in ethical work done on or with “human subjects”. For Learning, Education, and Training, the level of understanding necessary for consent may be higher than in other contexts. In fact, the degree to which learners fully grasp the implications of analytics could, in itself, be the object of Learning Analytics. With a nod to the domain’s use cases, best practices in data usage could be submitted to analysis as part of a learning process.
In this sense, ensuring appropriation of the analysis by learners themselves does more than skirt privacy issues. It integrates Learning Analytics in the broader frame of learning.
Though tricky to fully explain, reflexivity and metacognition play important roles in learning once it is conceived outside of Freire’s “banking” model. Thinking about one’s own thoughts can be as simple as self-evaluation or as deep as a full dive into epistemology. A reflexive approach may begin with self-awareness but it can open into a “gallery of mirrors” where people’s self-perception enter complex negotiations with others’ perceptions. Similarly, learner-driven Learning Analytics can begin with a “student dashboard” and broaden into a full system of connected learning. Unfortunately, Learning Analytics systems often relegate “student dashboards” far downstream in the development process. Instead, these dashboards could serve as a cornerstone for the building of Learning Analytics systems.
At a recent conference, a Learning Analytics practitioner and advocate was presenting some advantages in opening up predictive models used for intervention with students. Asked about the potential for similar models to be used to evaluate teaching effectiveness, this expert joked with the audience that, though this form of “Teaching Analytics” represents a common request, its impact would be “too political” (laughter ensued). Further prompted to expand on the difference between teachers and learners in such a situation (“Why are Learning Analytics not political, in this context?”) this same expert shared preliminary (and unpublished) observations from a focus group. Two factors seemed to dominate students’ responses when asked about privacy issues surrounding Learning Analytics:
- They mostly take for granted the availability of data on their performance and related issues.
- They know that the process is meant to help them succeed.
Informed consent requires an adequate understanding of both the benefits and the risks associated with data gathering. Prominent revelations and breaches have made it easy to imagine negative impacts resulting from the misuse of any type of data. Trust in the good intentions of those who keep student data (such as Google) may not be misplaced, but data guardianship can be quite risky. Not only do centralised student information services and portals compound these risks by making themselves into obvious targets, but aggregation of external data opens the door to other forms of data mismanagement. Students and other learners may realise that they live in a surveillance society, yet directives from above simply leave them disenfranchised, and disenfranchisement is a lack of agency. Learners appropriating their own data would regain some agency, having the power (and responsibility) to decide matters for themselves.
Defining Success out of the Jaws of Defeat?
Students registered in formal programmes, not learners as a whole, are the key stakeholders when credentials are at stake. These people accept a process in which decisions are made through the lens of “academic success”.
Common sense would have it that academic success is both a net positive and an ultimate goal for all learners. Yet key factors mislead common sense here.
On one hand, academic success may not guarantee success in other spheres of life. The number of unemployed people with advanced degrees has been on the rise for a number of years and academic careers may appear like a pyramid scheme. The stage has been set for conversations on the value of degrees in higher education. Though these matters may appear removed from discussion of academic success at other levels, the linear nature of an education system makes it important to acknowledge where learning paths eventually lead.
From “diploma mills” to grade inflation, several signs point to the effects of credentialism and the associated decrease in the value of a given degree. Companies like Google may hire large numbers of people with doctorates, but founding Google was itself a matter of leaving a Ph.D. program. In 2011, much was said of the fact that Joi Ito was named director of the MIT Media Lab while lacking academic credentials. Amid considerable fracas, Peter Thiel created a fellowship to encourage learners not to enter post-secondary education. “Dropping out of college” may not be desirable for most students, yet the edifice which ties professional success to academic success has been eroding for close to a hundred years.
There are many people in the background when we talk about stakeholders for Learning Analytics. Eventually, all members of a community have stakes in data related to learning.
Much has been said of the roles parents play in education systems. Parents’ involvement in learning contexts may decrease over time, but the core model behind learner data gives parenthood a continuing role. For instance, North American higher education sees a deepening of the impact parents have in their children’s college or university careers. Part of the situation comes from an increase in the number of students who live with their parents which is likely related to tuition hikes. The business model for post-secondary institutions also encourages a top-down decision-making process whereby parents choose majors for their children (most of whom have legal rights as adults). In a world of deep uncertainty, however, those who have a long experience of the current system rarely predict the future with any accuracy.
Parents gaining access to their children’s data in educational contexts goes much further than getting the occasional grade report. As parents may react strongly against changes in educational systems, it would be useful to have conversations with them as to the actual learning phenomena targeted by Learning Analytics. By contrast, showing parents data without proper framing may do more harm than good.
Managing Community Expectations
In educational systems based on public funding, all taxpayers are stakeholders in Learning Analytics. The proverbial “can of worms” opened when such issues are mentioned need not stop standards drafters (and other interested parties) from using community input or encouraging broader conversations about the use of educational data.
An approach to this community engagement could go through a broadening of “learning”. Many community members would shiver at the thought of a Taylorist system used to assess the performance of school pupils and university alumni. The same people might welcome the possibility to form learning communities, where it would be possible to work together on the development of skills and construction of knowledge.
Though this may sound like a tall order, appropriate standards in Learning Analytics could make this ideal of broad learning easier to achieve.