Some visualisations from previous research projects

Stuart Palmer, April 2019 (updated November 2022)

  1. Text analytics
  2. Social network analysis
  3. Measuring active learning
  4. Repeated cross-sectional evaluation of an LMS
  5. Australian teaching and learning centres
  6. Modelling of impact of online discussion participation on final unit mark
  7. Time sequence analysis
  8. Linking quantitative and qualitative data
  9. Other stuff!



1. Text analytics

In 2014, a new subject in design and manufacturing with composite materials was offered at the Griffith School of Engineering and the Built Environment.  A previous evaluation of the initial subject offering found that the students generally perceived the subject to be valuable, and it also offered insights into potential areas for improvement.  Over the subsequent three offerings of the subject (2016, 2017 and 2018), a range of deliberate changes to the subject learning design were made, with the aim of improving student learning and engagement.  An inspection of the data from the university student evaluation instrument for the first four subject offerings showed essentially no change in mean ratings for the quantitative scale items, even though aspects of the subject learning design had been deliberately changed.

Limitations with the ability of quantitative scale item student evaluation data to reveal meaningful variation in response to changes in learning designs are described in the literature.  The university student evaluation also included the option for students to provide open-ended text comments about the subject.  An investigation was undertaken to determine if computer-based analysis of the student comments (text analytics) could identify differences in the students’ perceptions of the subject that related to the changes in the subject learning design over the first four years of offer.

Multidimensional scaling plot of ‘particularly good’ comments
Multidimensional scaling plot of ‘be improved’ comments

The text analytics analysis revealed distinct clusters of terms in the student open-ended comments by year.  The term clusters observed did capture aspects of the intentional changes to the learning design over the first four years of offer of the subject, providing some evidence that students did actually perceive these intentional subject learning design changes.
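
As a rough illustration of the general approach (not the published pipeline), the sketch below builds a term-document matrix from a handful of made-up comments with scikit-learn, computes term-to-term cosine distances, and projects the terms into two dimensions with multidimensional scaling. The example comments and column names are purely hypothetical.

```python
# Minimal sketch of a term-level multidimensional scaling (MDS) plot.
# The comments below are invented; this is not the published analysis.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import cosine_distances

comments = pd.DataFrame({
    "year": [2014, 2016, 2017, 2018],
    "comment": [
        "the laboratory sessions were particularly good",
        "group project work with composite materials was valuable",
        "online videos supported the design project well",
        "industry guest lectures made the subject engaging",
    ],
})

# Term-document matrix (rows = comments, columns = terms).
vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(comments["comment"])
terms = vectoriser.get_feature_names_out()

# Distance between terms, based on how similarly they are used across comments.
term_dist = cosine_distances(X.T.toarray())

# Project the term-term distances into two dimensions for plotting.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(term_dist)

fig, ax = plt.subplots()
ax.scatter(coords[:, 0], coords[:, 1], s=10)
for (x, y), term in zip(coords, terms):
    ax.annotate(term, (x, y), fontsize=8)
ax.set_title("MDS plot of terms from open-ended comments")
plt.show()
```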

From: Palmer, S. and Hall, W. (2019), Using qualitative student evaluation data to illuminate quantitative scale items: seeing beyond the numbers, 2019 Australasian Association for Engineering Education Conference, Brisbane.

Other text analytics investigations I have done range from visualising the large-scale, multi-year social media interactions of a national professional body (the Australian Computer Society) on Twitter, to crowdsourcing product design requirements from Amazon product reviews and benchmarking those against design requirements elicited by traditional means for similar products documented in the literature.




2. Social network analysis

My very first network visualisation was of follow-up posts in a discussion forum for an assessed activity.  Student 101 was very active, responding to many other students.  Student 66 (in the centre) made a lot of responses to student 7.

Network of student post interactions on a class LMS discussion board (unpublished)
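
For anyone wanting to produce this kind of figure, below is a minimal sketch of building and drawing a directed reply network with networkx, assuming an edge list of (responder, original poster) student IDs has already been exported from the LMS. The IDs and replies are illustrative only.

```python
# Minimal sketch of a discussion-board reply network (illustrative data).
import networkx as nx
import matplotlib.pyplot as plt

replies = [
    (101, 7), (101, 12), (101, 23), (101, 45),  # student 101 responds widely
    (66, 7), (66, 7), (66, 7),                  # student 66 responds repeatedly to student 7
    (12, 23), (45, 12),
]

# Collapse repeated replies into weighted edges of a directed graph.
G = nx.DiGraph()
for responder, poster in replies:
    if G.has_edge(responder, poster):
        G[responder][poster]["weight"] += 1
    else:
        G.add_edge(responder, poster, weight=1)

pos = nx.spring_layout(G, seed=1)
weights = [G[u][v]["weight"] for u, v in G.edges()]
nx.draw_networkx(G, pos, node_size=300, width=weights, arrows=True)
plt.title("Student reply network on a discussion board")
plt.axis("off")
plt.show()
```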

Social media data can be obtained for ‘free’ from many sources – Twitter, Facebook, etc.  The following is a visualisation of 96,000 tweets mentioning Comet ISON over a nearly 12-month period in 2013-14.  The network visualisation reveals many features of the social media communication that are not really possible to see any other way.

Network of 96k tweets related to Comet ISON

From: Palmer, S. (2015), Tracking Comet ISON through the Twittersphere: Visualizing Science Communication in Social Media, International Journal of Virtual Communities and Social Networking, v7, n4, pp. 57-72.




3. Measuring active learning

At Deakin, there was always concern that off-campus students rated aspects of their study experience lower than on-campus students.  This came into even sharper focus when Deakin participated in the Australasian Survey of Student Engagement (AUSSE) – where the mean active learning scale (AL) rating for off-campus students was significantly lower.  I participated in a project that developed four new items for the active learning scale designed to capture some highly engaging, mostly online, activities.  With the participation of the Australian Council for Educational Research, the extra items were included in one of the versions of the AUSSE used in 2010.  We received 5887 usable responses.

The mean rating for off-campus students for the expanded AL scale (AL11) was significantly higher than the rating obtained using only the original 7 items (AL7).  Based on AL11, on-campus students still had a higher mean rating, but the difference between on- and off-campus students was much reduced.  In fact, across all demographic divisions of the data set, the mean AL11 rating was significantly higher than the corresponding AL7 rating, pointing to the need to design such instruments/scales carefully and adequately.

Mean scores for AL7 and AL11 for various groups of students (with 95% confidence intervals and Cohen’s d)
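
The comparisons behind figures like this are straightforward to compute. Below is a minimal sketch of group means with 95% confidence intervals and Cohen's d for two sets of scale scores. The data are synthetic, not the AUSSE responses, and the pooled-SD (independent groups) form of Cohen's d is used for simplicity.

```python
# Minimal sketch: means, 95% confidence intervals and Cohen's d (synthetic data).
import numpy as np
from scipy import stats

def mean_ci_95(x):
    """Return the mean and 95% confidence interval half-width (t distribution)."""
    x = np.asarray(x, dtype=float)
    half_width = stats.sem(x) * stats.t.ppf(0.975, len(x) - 1)
    return x.mean(), half_width

def cohens_d(a, b):
    """Cohen's d for two groups using the pooled standard deviation."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
al7 = rng.normal(30.0, 10.0, 400)    # illustrative 7-item scale scores
al11 = rng.normal(34.0, 10.0, 400)   # illustrative 11-item scale scores

for label, scores in (("AL7", al7), ("AL11", al11)):
    mean, ci = mean_ci_95(scores)
    print(f"{label}: {mean:.1f} +/- {ci:.1f}")
print(f"Cohen's d (AL11 vs AL7): {cohens_d(al11, al7):.2f}")
```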

From: Carr, R., Palmer, S. and Hagel, P. (2015), Active learning: the importance of developing a comprehensive measure, Active Learning in Higher Education, v16, n3, pp. 173-186.




4. Repeated cross-sectional evaluation of an LMS

After a small-scale pilot in 2003, Deakin implemented the Vista (later Blackboard Vista) LMS in 2004 as the mandated LMS for every unit of study.  In 2004, 2005 and 2011 we surveyed all students and staff about their perceptions of aspects of the LMS and associated support.  We were able to track the changes in perception of importance of, and satisfaction with, a range of LMS features that did not materially change over that period.  The data set included nearly 6800 student responses.

Importance-satisfaction trajectories for student ratings (out of 5) of LMS features for 2004–2011
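
A trajectory plot of this kind is simple to produce: each LMS feature is plotted at its (mean importance, mean satisfaction) coordinates for each survey year and the points are joined in time order. The sketch below uses matplotlib with made-up feature names and ratings, not the survey data.

```python
# Minimal sketch of an importance-satisfaction trajectory plot (illustrative data).
import matplotlib.pyplot as plt

# feature -> [(mean importance, mean satisfaction) for 2004, 2005, 2011]
trajectories = {
    "Unit resources":   [(4.3, 3.8), (4.4, 3.9), (4.5, 4.1)],
    "Discussion board": [(3.6, 3.2), (3.7, 3.4), (3.9, 3.5)],
    "Online quizzes":   [(3.9, 3.5), (4.0, 3.6), (4.1, 3.8)],
}

fig, ax = plt.subplots()
for feature, points in trajectories.items():
    imp, sat = zip(*points)
    ax.plot(imp, sat, marker="o", label=feature)
ax.set_xlabel("Mean importance (out of 5)")
ax.set_ylabel("Mean satisfaction (out of 5)")
ax.set_title("Importance-satisfaction trajectories, 2004-2011")
ax.legend()
plt.show()
```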

We were able to bookend the life of the Vista LMS implementation at Deakin.  We also had similar data sets for 2012, which was the first year of use of the new Desire2Learn LMS.  Despite much rhetoric about the transformative nature of the new LMS, there was essentially no change in any ratings between 2011 and 2012 for students or staff.

From: Palmer, S. and Holt, D. (2012), Trajectories of engagement: A repeated cross-sectional investigation of student perceptions of an online learning environment, Research in Learning Technology, v20, n3, pp. 253-265.




5. Australian teaching and learning centres

As part of an Australian Learning and Teaching Council Leadership for Excellence in Learning and Teaching Program Grant project, we surveyed all directors of T&L Centres in Australia.  One aspect of the data collected was perception of importance of, and satisfaction with, a range of Centre functions.

Mean Importance and Satisfaction ratings for Teaching and Learning Centre Functions

Collectively, Centre directors thought they were doing a good job of supporting staff to apply for national awards and grants, but would’ve liked to do better in staff professional development activities.

From: Palmer, S., Holt, D. & Challis, D. (2010), Australian teaching and learning centres through the eyes of their Directors: characteristics, capacities and constraints, Journal of Higher Education Policy and Management, v32, n2, pp. 159-172.




6. Modelling of impact of online discussion participation on final unit mark

A number of characteristics of student engagement with a formally assessed activity based on an online discussion forum were investigated.  Students had to make at least five posts throughout the semester responding to a structured question, and also make five responses to posts from other students.  If there were more than five posts, the five ‘best’ posts counted for assessment.

Four broad patterns of engagement were observed:
  1. Students made the required posts as soon as possible, then left the forum (red).
  2. Students started in week one and spread their posts out over the semester, to varying degrees (yellow).
  3. Students started late and then tried to ‘catch up’, with increasing urgency depending on how late they started.
  4. Students were completely absent from the forum.

Rank ordered profile of new postings by students across the semester

For each student we had a number of demographic data items, information about their interaction with the discussion forum, and their final unit mark.  A multivariate linear regression revealed two factors positively associated with final unit mark: 1) prior academic performance (measured by weighted average mark, similar to GPA); and 2) the number of new forum posts (up to five being counted).
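
A minimal sketch of this kind of model is shown below, using an ordinary least squares regression from statsmodels on an invented per-student data set. The column names ('final_mark', 'prior_wam', 'new_posts') are hypothetical, so this illustrates the technique rather than the published model.

```python
# Minimal sketch of a multivariate linear regression on synthetic student data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
prior_wam = rng.normal(65, 10, n)        # weighted average mark (like GPA)
new_posts = rng.integers(0, 6, n)        # new forum posts made (0-5 counted)
final_mark = 5 + 0.8 * prior_wam + 2.0 * new_posts + rng.normal(0, 8, n)

students = pd.DataFrame({"final_mark": final_mark,
                         "prior_wam": prior_wam,
                         "new_posts": new_posts})

model = smf.ols("final_mark ~ prior_wam + new_posts", data=students).fit()
print(model.summary())
```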

From: Palmer, S., Holt, D. and Bray, S. (2008), Does the discussion help? The impact of a formally assessed online discussion on final student results, British Journal of Educational Technology, v39, n5, pp. 847-858.

In later work, I did a binary logistic regression of the same data looking at the predictors of student success (whether they passed or not).  The key predictors were prior academic performance (the higher, the more likely to pass) and date of first access to the LMS (the earlier, the more likely to pass).
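
A comparable sketch of the pass/fail model is shown below, again with hypothetical column names and synthetic data, using a binary logistic regression from statsmodels. Exponentiating the coefficients gives odds ratios, which make the direction of each effect easy to read.

```python
# Minimal sketch of a binary logistic regression of pass/fail (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
prior_wam = rng.normal(65, 10, n)                 # weighted average mark
first_access_day = rng.integers(1, 30, n)         # days into semester of first LMS access

# Synthetic pass/fail outcome loosely dependent on both predictors.
logit_p = 0.08 * (prior_wam - 65) - 0.10 * (first_access_day - 10)
passed = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

students = pd.DataFrame({"passed": passed,
                         "prior_wam": prior_wam,
                         "first_access_day": first_access_day})

model = smf.logit("passed ~ prior_wam + first_access_day", data=students).fit()
print(model.summary())
print(np.exp(model.params).round(3))   # odds ratios
```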




7. Time sequence analysis

During the period 2013-2016 I co-led the Research My World research crowdfunding project.  This was an Australian first, and successfully generated more than $200,000 in new research income for Deakin University staff from all Faculties.  Suspecting the importance of social media activity in project success, I collected social media data for the various projects on Twitter and Facebook.  We had a range of other project data as well.  I investigated the relationships between the time sequence of project income (pledges) and other project activity.  The chart below is from one of the RMW projects that was particularly active and had a good data set to work with.  To someone who works in signal processing, it looks to exhibit a correlation between total daily Twitter activity (tweets plus retweets plus mentions) and total daily pledges.

Timeline of pledges and total Twitter activity for one Research My World project

There are formal methods for quantifying the correlation between time series, such as the cross-correlation function (CCF).  Because there may be delays in the response of a system to an input, it is conventional to compute the CCF with one of the data sets displaced by a range of steps (lags), both positive and negative.  The CCF plot below shows that the maximum CCF occurs at zero lag, and is statistically significant, providing some evidence of the relationship between social media activity and project income.
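
A minimal sketch of a lagged cross-correlation analysis is shown below: the Pearson correlation is computed between the two daily series at each lag, and the common ±1.96/√N approximation provides rough 95% significance bounds. The data are synthetic, generated so the two series are correlated at zero lag.

```python
# Minimal sketch of a lagged cross-correlation between two daily series
# (e.g. pledges and total Twitter activity), with synthetic data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_days = 60
tweets = rng.poisson(20, n_days).astype(float)
pledges = 0.5 * tweets + rng.normal(0, 3, n_days)   # correlated at lag 0

def ccf(x, y, max_lag):
    """Pearson correlation between x and y with y shifted by each lag."""
    lags = range(-max_lag, max_lag + 1)
    values = []
    for lag in lags:
        if lag < 0:
            r = np.corrcoef(x[:lag], y[-lag:])[0, 1]
        elif lag > 0:
            r = np.corrcoef(x[lag:], y[:-lag])[0, 1]
        else:
            r = np.corrcoef(x, y)[0, 1]
        values.append(r)
    return np.array(list(lags)), np.array(values)

lags, values = ccf(pledges, tweets, max_lag=10)
bound = 1.96 / np.sqrt(n_days)   # approximate 95% significance bound

plt.stem(lags, values)
plt.axhline(bound, linestyle="--")
plt.axhline(-bound, linestyle="--")
plt.xlabel("Lag (days)")
plt.ylabel("Cross-correlation")
plt.show()
```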

Other evidence from examining the quantitative characteristics of the social media networks related to the projects provided further support for the importance of social media activity to project success.

Cross-correlation analysis of pledges and total Twitter activity for one RMW project

From: Verhoeven, D. and Palmer, S. (2015), Because it takes a village to fund the answers: Crowdfunding University Research, in Bennett, L., Chin, B. and Jones, B. (eds), Crowdfunding the Future – Media Industries, Ethics, and Digital Society, Peter Lang: New York.




8. Linking quantitative and qualitative data

Traditionally in the Course Experience Questionnaire, one of the highest-rated items is library services and one of the lowest is library resources.  Similarly, for the Deakin library, the library resources rating was historically one of the lowest items in the former student evaluation of teaching and units (SETU) survey.  All SETU items were typically strongly inter-correlated, providing no clear message about particular associations with the library resources rating that might offer a point of leverage.  The scatter plot below shows the relationship between SETU item 1 (This unit was well taught) and SETU item 6 (The library resources met my needs for this unit) for every unit at Deakin from mid-2009 to mid-2010.  A strong positive linear correlation is apparent.  Many observers concluded that well-taught units use the library well, and that’s all there is to it.

Scatter plot of mean unit SETU ratings pairs for item 1 and item 6

Previous work that I had done showed that it was possible to incorporate other easily available data and show some sources of differentiation in SETU item 6 ratings, including unit year level and unit discipline area.  A new idea was, rather than looking for exceptional ratings for item 1 and/or item 6, to take the data in the ‘unremarkable’ region of mean ratings for item 1 (the grey band above) and look more closely at the associated item 6 data.  The plot below shows the expansion of the grey band above.

Expansion around the overall mean rating for SETU item 1
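
The selection step can be expressed in a few lines of pandas, assuming a table of per-unit mean ratings with hypothetical columns 'unit', 'item1_mean' and 'item6_mean'. The band half-width below is illustrative, not the value used in the published analysis.

```python
# Minimal sketch: keep units whose item 1 mean sits in a narrow band around the
# overall mean (the 'grey band'), then rank them by item 6 to find the extremes.
import pandas as pd

units = pd.DataFrame({
    "unit":       ["U01", "U02", "U03", "U04", "U05", "U06"],
    "item1_mean": [4.02, 3.98, 4.01, 3.55, 4.00, 4.45],   # 'well taught'
    "item6_mean": [4.40, 3.10, 4.25, 3.80, 3.05, 4.10],   # 'library resources'
})

centre = units["item1_mean"].mean()
half_width = 0.10   # illustrative band half-width
band = units[(units["item1_mean"] - centre).abs() <= half_width]

print(band.sort_values("item6_mean"))
```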

For those units with ‘extreme’ (low and high) mean ratings for item 6, I examined their unit guide descriptions.  Of the high rating unit group (A-D), three of the four (A, C and D) unit descriptions explicitly identify that students will engage with ‘literature’ of various forms.  In these three units, students would presumably have to engage with the library and/or library-related resources in a context directly associated with their unit studies, and it would seem likely that this would make the relevance and value of such resources readily apparent.  Such explicit references regarding literature are absent from all of the low rating unit group (E-G) unit descriptions.

What seems clear is that those units which explicitly incorporate student interaction with information resources beyond those provided within the relatively enclosed unit environment are more likely to lead students to engage with the library in some form.


From: Palmer, S. (2012), Using quantitative and qualitative unit profiling for identifying the contribution of library resources to teaching quality, Library and Information Research, v36, n113, pp. 81-98.




9. Other stuff (not mentioned in detail above)

Working with Australian census data – so far mainly looking at graduate employment outcomes in relation to qualification/disciplines, occupation, gender, geographic location, age, etc.

Work Integrated Learning – engaging staff with WIL in the curriculum, and exploring how you would measure the impact of WIL on student employability and graduate employment.

Student evaluation of teaching – critically interrogating SET data and systems, and their use.

Characterisation of social media data – in education and other settings.

Engineering education.

I also have a whole line of technical work on using the two-dimensional discrete wavelet transform to objectively characterise material surface finish quality via computer analysis of image texture.
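
As a small taste of that work, the sketch below uses PyWavelets to decompose a (synthetic) greyscale image with a multi-level 2-D discrete wavelet transform and reports the energy of each detail subband as a simple texture feature; this is only the core transform step, not the full published method.

```python
# Minimal sketch of 2-D DWT subband energies as texture features.
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.normal(0, 1, (256, 256))   # stand-in for a greyscale surface image

# Multi-level 2-D DWT; each level yields horizontal, vertical and diagonal
# detail subbands (returned from coarsest to finest after the approximation).
coeffs = pywt.wavedec2(image, wavelet="db4", level=3)

features = {}
for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    for name, band in (("H", cH), ("V", cV), ("D", cD)):
        features[f"level{level}_{name}"] = float(np.mean(band ** 2))

for key, energy in features.items():
    print(f"{key}: {energy:.4f}")
```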

List of publications here.
