Students Selecting Higher Face-to-Face Instructional Delivery Report Higher Levels of Learning Interactions and Outcomes
Michael Seredycz*
Department of Sociology, MacEwan University, Canada
Submission: June 19, 2023; Published: June 26, 2023
*Corresponding author: Michael Seredycz, Assistant Professor, Department of Sociology, MacEwan University, 10700 104th Avenue, Edmonton, Alberta, Canada
How to cite this article: Michael S. Students Selecting Higher Face-to-Face Instructional Delivery Report Higher Levels of Learning Interactions and Outcomes. Ann Soc Sci Manage Stud. 2023; 8(5): 555749. DOI: 10.19080/ASM.2023.08.555749
Abstract
This exploratory study examined the learning interactions of 303 undergraduate students initially enrolled in seven traditional 200-level university criminal justice courses. Participants were offered the opportunity to choose among four teaching modalities, ranging from 90% face-to-face instruction (almost exclusively in the classroom) through 70% and 30% to a 10% ratio (almost exclusively online). Students who chose higher proportions of face-to-face instruction were statistically more likely to report higher levels of learner-learner (LL), learner-instructor (LI) and learner-technology (LT) interactions. Instructional modality had no statistically significant effect on learner-content (LC) interactions, suggesting that the amount and rigor of content were not affected by the teaching modality a student selected.
Keywords: Online; Blended; Hybrid; Face-to-Face; F2F; Ratio; Interval; Learner-Learner; Learner-Instructor; Learner-Content; Learner-Technology; Student Satisfaction Survey
Introduction
Blended learning appears to be the future of instruction, with the online learning market projected to reach $325 billion by 2025 [1]. As such, we should expect post-secondary institutions to market their courses accordingly for their own financial well-being. Post-COVID, the market is projected to grow by 200% between 2020 and 2025, with 21% of colleges and universities across the United States adopting some form of blended learning [1].
Students prefer blended learning. As Brooks [2] suggests, a significant majority (83%) of students preferred some form of blended instruction over traditional face-to-face (10%) or traditional online (7%) courses. This is consistent with United Kingdom research by Pandurov [3], who reports that 82% of college students prefer a blended or hybrid course over a traditional face-to-face course. According to Coleman [4], 95% of college and university students indicated being satisfied with online education and reported that these environments help them learn more productively. The Center for Applied Research reports that university students prefer digital mediums that emphasize device ownership (tablets, smartphones) because they view their technology as important to their success (2016:5). Therefore, the use of technology, student ownership, and a blended learning environment appear to shape how students select their courses. However, does this student perspective really impact their interactions and success in and out of the classroom?
While students may appreciate the need for more (rather than less) online instruction, what might be the most appropriate ratio of face-to-face to online instruction? There is substantial evidence that blended learning has a positive impact on learning effectiveness and outcomes, based on large meta-analyses including but not limited to Zhao et al. [5], Sitzmann et al. (2006), Means et al. [6], and Bernard et al. [7]. However, these studies generally do not subscribe to any specific ratio of blended learning and interaction that fits all students universally or systematically. This is likely due to each student's interest in the course content and/or expectations of a university or college education. Blended learning can be delivered in multiple ways (Drysdale et al., 2013), but it is certainly not restricted to a specific percentage or interval [8]. Several meta-analyses have concluded that students overwhelmingly prefer and rank blended course instruction over traditional face-to-face instructional delivery [9,10]. From an instructor's perspective, however, how much online delivery is too much in terms of its impact on student learning outcomes? As Pandurov [3] reports, 73% of instructors in the UK believe that blended learning increases engagement. Is this perspective accurate? This study seeks to assess this belief.
Attempting to find the perfect, universal, or systematic balance for all students has proven difficult. Students certainly want more online and blended work, while instructors want to ensure students attain the appropriate amount of content and interactions in or out of the classroom. This study explores how student choice of instructional delivery impacts learning outcomes and interactions. Within this study, university students were able to self-select their own instructional delivery, choosing between the most face-to-face instruction (90% of a class schedule) and the least (10% of a class schedule). A post-test survey was administered to gauge variables of interest and determine the impact of self-selected ratios on student learning outcomes.
Literature Review
Findings on blended learning and learning outcomes appear inconclusive and mixed when instructors consider the perfect balance of online versus traditional teaching. Morris and Lim [11] examined the influence of learner and instructional variables on learning outcomes within blended instruction. Their findings suggest that age, prior experience with distance learning, preference in delivery format, and average study time are relevant factors in satisfaction within a given course. However, a study by Coldwell et al. [12] reported that the demographic variables of age and sex made little to no significant difference in blended learning efficacy. Because these studies did not measure satisfaction and efficacy in the same manner, it becomes more difficult to ascertain what may or may not work within a course. Furthermore, learner efficacy and outcomes in blended learning may also depend upon factors not necessarily associated with the university or the course itself. As Park and Choi [13] argue, learning is often a reflection of outside factors such as a lack of family and/or peer support. Other studies have reported that factors including, but not limited to, lack of income, employment, and (in)adequate online access can also shape learning outcomes, success, and/or satisfaction with coursework [14,15]. This certainly complicates any discussion of blended learning and how instructors must adapt to each individual student to ensure higher levels of satisfaction, outcomes, or success (versus potential failure).
Moore and Kearsley [16] examined three specific student interaction domains: learner-learner (LL), learner-instructor (LI), and learner-content (LC). A student learner's interaction with others was considered two-way communication both in and out of the classroom, including discussion boards and/or email. Their framework is still considered reliable and consistent, as these online mediums of discussion boards, threads, and email/text correspondence are still in use in 2023. Beard et al. [17] suggested that many of their student learners were more likely to be successful when involved in more face-to-face learner-learner and learner-instructor interactions (rather than virtual interactions). Marriott et al.'s research in 2004 further expanded on this work, suggesting that student learners appreciated face-to-face instructor and learner interactions within a classroom environment and that online sessions (in their study) only compensated for and complemented these in-class interactions. This would suggest that students feel blended learning is integral to learning outcomes, but that the foundations of these discussions may be best generated within a face-to-face environment. As Allen and Seaman [18] suggest, interactions among student learners need to enhance participation and engagement by promoting a sense of knowledge transmission both in and out of the classroom. However, there remains a dilemma regarding the amount, quality, and design of these learner interactions and whether they can be sustained and/or effective across differing courses.
The learner-instructor (LI) interaction is a more traditional dimension that tests the interaction and connection between the student and the instructor's techniques. A study by Garrison et al. (2000) articulated how the social and cognitive processes of a student learner in the presence of an instructor were essential to the learning experience and also predictive of satisfaction. A follow-up study by Mahmood et al. (2012) reinforced this research, reporting that the role, function, and presence of the instructor play the most critical part in how students evaluate their own learning and how effective online learning is. However, as Woo and Reeves [19] point out, these student-instructor interactions alone do not always lead to effective learning outcomes. Therefore, there is still a need to capture other interactions (such as learner-learner or learner-content) to ensure some form of genuine or meaningful learning has also taken place.
Learner-content (LC) interactions focus solely on a learner's interaction with a course's subject matter. This could include a student's interaction with textbooks or other course content utilized to learn the objectives of the course (which also correspond to performance measures). While this may seem easy to apply universally, each discipline and field of study is unique, as is the pedagogy of what works versus what does not work for each instructor's content. Vrasidas [20] suggests that the most fundamental interaction is that between a learner and the content of the course, since we base educational attainment on the acquisition of information and evidence. While it is certainly clear that the interaction between a student and the content is critical within a course (Tuovinen, 2000), there have been few empirical studies ascertaining its role within blended learning and/or student success and satisfaction [21]. Therefore, there are still gaps in the scholarship of teaching and learning as we consider the type of content, the amount and depth of coverage, and instructor discretion in ascertaining its efficacy in blended learning. This study hopes to shed more light on this area, as the content was the same across all ratios of instructional delivery that students could select.
Cassidy and Eachus (2000) conceptualized and operationalized the learner-technology (LT) index of interactions as the comfort level each student has with the technology utilized within their online environments, recognizing that this comfort is subject to change. Ke and Kwak [22] further underscored the need for student technology interactions when they identified five elements of student satisfaction: technology competence along with learner relevance, active learning, authentic learning, and learner autonomy. This additional technology domain is crucial according to Hofmann (2014), who suggests that learners who find technology too difficult or sophisticated, or who have accessibility issues, may abandon learning and potentially fail the course. Therefore, basic computer literacy [23] is a critical requirement to ensure success and satisfaction in either online or blended environments.
In an effort to integrate multiple domains of student learning interaction to assess satisfaction, Strachota [24,25] developed an integrated Student Satisfaction Survey. Strachota utilized three dimensions (LL, LI, LC) initially developed by Moore and Kearsley [26] in 1996 (later revised in 2005) and a fourth dimension, developed by Cassidy and Eachus (2000), emphasizing the interaction between the learner and technology (LT). A fifth dimension was then developed by Strachota to assess student satisfaction itself. The resulting Student Satisfaction Survey (2006) was designed to encapsulate thirty-five items across these five domains. Utilizing previously validated items for reliability and validity, Strachota also applied factor analysis to further increase construct validity [24]. In a pilot study of approximately 250 online students, Strachota [25] used these 35 items within five domains to assess factor loadings and eigenvalues alongside Cronbach's alpha. With the addition of the previously utilized survey instruments (as stated above), reliability was strong within each of the five domains: (i) learner-learner (LL; .89), (ii) learner-instructor (LI; .89), (iii) learner-content (LC; .90), (iv) learner-technology (LT; .97) and (v) learner satisfaction (.90). The general rule is that a Cronbach's alpha over .70 is good; therefore, given that alpha ranges from above zero to less than one, the range of .89 to .97 is exceptional [25]. The survey items used four-point Likert scales with responses varying from strongly disagree to strongly agree. This survey appears to encapsulate well-addressed learner/student outcomes within classroom, blended, and online settings.
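To make the reliability statistic concrete, here is a minimal sketch of Cronbach's alpha for a single survey domain, assuming a respondents-by-items matrix of Likert scores; the data and function are illustrative and not drawn from Strachota's instrument.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of Likert scores.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 6 respondents answering a 4-item domain on a 0-3 Likert scale
scores = np.array([
    [3, 3, 2, 3],
    [2, 2, 2, 1],
    [3, 2, 3, 3],
    [1, 1, 0, 1],
    [2, 3, 2, 2],
    [0, 1, 1, 0],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values above .70 are conventionally acceptable
```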
We know that student learning interactions and outcomes have a positive impact on student success within a course [6,9,10,27,28]. Kuo et al. [29] further investigated the predictors that contribute to student success within online learning environments and concluded that learner-instructor interactions, learner-content interactions, and internet self-efficacy were good predictors of student satisfaction. Each of these interactions is also included within Elaine Strachota's modelling. Kuo et al. [29] also reported that interactions among students and self-regulated learning did not necessarily contribute to student success. When their study controlled for demographics (such as sex and class level) and time spent online per week, these variables were found to influence learner-learner interaction and internet self-efficacy [29]. This indicates the role of, and need for, multiple domains in determining the efficacy and/or satisfaction of the learner. The work of Kuo et al. [29] and others indicates that all of these interactions have an impact in face-to-face, blended, and online instructional delivery.
While blended or hybrid learning has become the new normal of instructional delivery within post-secondary institutions (Norberg et al., 2011; Ross and Gage, 2006), is there a tipping point of too much blended instructional delivery online? The Sloan Consortium (renamed the Online Learning Consortium in 2014) has conceptualized blended learning as the blending of face-to-face (F2F) instructional delivery with an online learning environment. Allen et al. [8] suggested that, operationally, a course is blended when an online environment replaces between 30% and 79% of course time. However, this operationalized definition has evolved since 2007. The conceptualization of blended learning has become more subjective rather than an objective approach to instructional delivery: it can depend on the course offered, the content being provided, and the freedom of each instructor in designing the course as part of their own unique pedagogy [30,31].
Twigg [32] argues that there are five course redesign models: supplemental, replacement, emporium, fully online, and buffet. The supplemental model simply retains the basic structure of a traditional university course (including the same number of face-to-face meeting times); the instructor, within this model, can add supplemental content as out-of-class activity work. Within the replacement model, some in-class time is replaced (rather than supplemented) with online or interactive engagement or learning activities. Students in the replacement model may attend the same number of class meetings, but some of these meetings would be online or blended, using technology-based materials outside of the classroom (such as digital lectures). The emporium approach [32] replaces face-to-face discussions with more online deliverables and resources, with far greater emphasis on digital lectures, to the point of remaining almost entirely online whenever a student prefers to learn. Generating more structured course resources allows learners to work at their own pace, replacing in-class learning with online learning. Similarly, a fully online redesign is attributed to an instructor's or university's decision to create an independent or almost monolithic one-off course where "web-based materials are used largely as supplemental resources rather than as substitutes for direct instruction" [32]. Within this scenario, an instructor could be overwhelmed with responding to all student interactions versus the more structured approach they might utilize in the classroom [32]. A buffet-style redesign emphasizes an assortment of interchangeable components in the process of learning [32]. Students can customize their experience with different learning opportunities, from in-class lectures (or recorded lectures), labs (or no lab presence), live or remote group work, to oral/written/visual presentations, whichever the student prefers [32]. While many research studies emphasize testing a specific ratio of options, very few studies have allowed for a buffet model that provides students the choice to select their own preferred face-to-face and/or online ratios of instructional delivery.
To follow up on the buffet-style redesign described by Twigg [32], a study by Asarta and Schmidt [33] examined student self-selection and choices of reduced seat time within a blended course (which did not have a punitive attendance policy). Student participants were able to attend class lectures in person or online. All other aspects of the course's performance measures were the same, including assignments and exams. Using what Asarta and Schmidt [33] coin a "skip rate" relative to their traditional in-class courses, they found a mean reduction of 49% to 63% in seat time chosen by students in the blended version of the course. This would suggest that learner self-selection could be very relevant and that a reduction of one to two classes per week (roughly 50%) is what students found preferable [33].
Owston et al. [34] investigated the relationship between the proportion of time spent online in a blended course and student perceptions and performance. Utilizing 20 undergraduate courses offering four different blended learning proportions, Owston et al. [34] reported that students in a medium blend (36% to 40% online) and a high blend (50% online) had more positive perceptions of blended learning, and those within the medium and high blends performed significantly better than students in lower blended learning environments. Students in the high blend rated the amount and quality of learner-learner (LL) interactions higher than other groups [34]. Furthermore, those in the medium blend rated the amount and quality of learner-instructor (LI) interactions higher than any other group. Of note, Owston et al. [34] reported that students with lower levels of blended learning (less than 35% online) reported lower levels of learner-learner (LL) and learner-instructor (LI) attitudes.
The operationalized definition of blended learning ratios or redesign models has been adopted uniquely across different studies within the scholarship of teaching and learning, allowing for mixed findings on what works, what does not work, and what is promising. This study aims to add to research on blended learning ratios within a buffet style of self-selection that provides each student learner with an individualized course, while assessing their success and/or failure. How blended learning is operationalized is the subject of this paper, with attention paid to whether ratios of blended learning impact student learning domains and satisfaction. This study seeks to respond to the call for more nuanced research on blended learning ratios and their impact on learning outcomes and satisfaction [33,34].
Methodology
The participants of the study were a convenience sample drawn from seven 200-level undergraduate criminal justice courses at a midwestern American university. There were 334 original participants/learners registered for the seven courses. However, 22 students were removed from the study after dropping or withdrawing from the course during the semester, and an additional 9 students were removed for not completing the survey instruments. Therefore, the sample size for the purpose of analysis was 303 participants. These students were not randomly selected, nor was a comparison group available at the time. As such, given the smaller sample size, this study is exploratory in nature and few inferences can likely be made from it. However, as stated above, very few studies have utilized a self-selection approach seeking a buffet style [32] of individualized student instruction and learning.
Each of the seven courses was sixteen weeks in length and divided into 34 one-hour blocks of class time (within a semester-based system). The course was predicated on a text available in both print and online versions. Microsoft PowerPoint modules were also used to ensure that additional resources were included in the course, supporting the retention of key concepts, inter-connectivity with the text, and any outside resources. Students were expected to read the required text for the course in addition to supplemental technical reports, peer-reviewed articles, and online audio-visual clips. Each course was designed to ensure consistency across performance measurements while also accounting for suitability and feasibility [31]. Performance measures included three examinations (75% of the final grade) and three assignments worth 10%, 5%, and 10% respectively. The three examinations were all proctored in class and were similar in terms of depth of questions, rigor, and expectations. The three assignments were related to course materials and a student's ability to identify other valid online sources (technical reports and peer-reviewed studies) to ensure connectivity and engagement with the text and course content.
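As a small illustration of the grading scheme just described, the sketch below encodes the component weights and computes a weighted final grade; it is a hypothetical encoding, and the equal split of the 75% across the three exams is an assumption, since the text gives only the exam total.

```python
# Hypothetical encoding of the assessment scheme described above:
# three proctored exams worth 75% in total, plus assignments at 10%, 5% and 10%.
weights = {
    "exam_1": 25.0, "exam_2": 25.0, "exam_3": 25.0,   # assumed equal split of the 75%
    "assignment_1": 10.0, "assignment_2": 5.0, "assignment_3": 10.0,
}
assert sum(weights.values()) == 100.0

def final_grade(percent_scores: dict[str, float]) -> float:
    """Weighted final grade, given each component's score as a percentage (0-100)."""
    return sum(weights[name] * score / 100.0 for name, score in percent_scores.items())

print(final_grade({"exam_1": 80, "exam_2": 70, "exam_3": 90,
                   "assignment_1": 85, "assignment_2": 95, "assignment_3": 75}))  # 80.75
```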
The study attempted to alleviate concerns that online sections would require more time to grade engagement measurements. Therefore, no additional instructional time was allocated to the online delivery system beyond what would be present in a traditional course delivery, an issue Twigg [32] outlined within fully-online course redesigns. While significant time and energy were devoted to developing the buffet-style approach, no one group of students was asked to do more rigorous work than another group. This prevents what Garrison and Vaughan call the "course and a half" [35] issue, where students find themselves doing more busy work, or simply more work, within online environments than in traditional face-to-face courses.
Student learners were asked to self-select into one of four instructional modalities: supplemental (90% in-class : 10% online), replacement (70:30), or two emporium options (30:70 or 10% in-class : 90% online). The most traditional offering was the supplemental delivery, where 90% of the course would be face-to-face (F2F) and 10% within an online environment. The 70:30 option replaced 30% of in-class seat time for lectures with online work and video-based lectures. The 30:70 option devoted 3 hours to in-class examinations and 10 hours to face-to-face instructional lectures, with the 21 hours of original lecture time replaced by 19 hours of digital Camtasia lectures and 2 hours of independent readings. The 10:90 instructional delivery reserved 3 instructional hours for examinations and 3 hours for face-to-face discussions pertaining mostly to assignments and examinations, while 28 hours of instruction were delivered online. Digital Camtasia lectures and tutorials replaced all face-to-face lectures, while discussion boards and threads were also utilized as forms of engagement (but were not graded). Upon completion of the first exam (one month, or 8 classes, into the course), students could re-select an option they had not initially chosen. This offered each student more flexibility if they felt the instructional delivery they first selected was inappropriate or inconsistent with their wants, needs, or expectations. Upon completion of the course, students were asked to complete several surveys, which included Elaine Strachota's Student Satisfaction Survey (2006), to ascertain student learning interactions and outcomes.
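The hour allocations described above can be summarized in a short sketch that checks each option against the 34 one-hour blocks and reports its actual face-to-face share; the 30:70 and 10:90 figures come from the text, while the 90:10 and 70:30 breakdowns are assumptions inferred from their nominal ratios.

```python
# A sketch of the four modality allocations over the 34 one-hour blocks described above.
modalities = {
    #            (face-to-face hours, online/independent hours)
    "90:10 supplemental": (31, 3),    # assumed split (~90% F2F)
    "70:30 replacement":  (24, 10),   # assumed split (~70% F2F)
    "30:70 emporium":     (13, 21),   # 3 exam + 10 lecture hours F2F; 19 Camtasia + 2 readings
    "10:90 emporium":     (6, 28),    # 3 exam + 3 discussion hours F2F; 28 hours online
}

for name, (f2f, online) in modalities.items():
    total = f2f + online
    assert total == 34, name  # every option fills the same 34 blocks
    print(f"{name}: {f2f / total:.0%} face-to-face, {online / total:.0%} online")
```

Note that the realized face-to-face shares (for example, 13/34, roughly 38%, in the 30:70 option) deviate slightly from the nominal labels, which describe the design intent rather than exact proportions.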
Findings
As explained previously, the study sample began with 334 eligible students enrolled in seven 200-level criminology/criminal justice courses at a liberal arts university in the midwestern United States. Thirty-one students were removed from the study for (i) having dropped or withdrawn from the course or (ii) not completing their self-administered surveys. Therefore, 303 students were used in the analysis for this study.
Table 1 highlights students' self-selection and/or re-selection of instructional delivery. As seen below, a plurality of the student learners (48%) selected the 70% face-to-face (F2F) : 30% online replacement instructional modality. The remainder of the student learners selected either the 90:10 supplemental course (25%), the 30:70 emporium option (18%), or the 10:90 modality (9%). However, it is clear that while most students wanted a blended learning modality, those who revised their schedule did so to attain more (not less) face-to-face instruction.
[Table 1: Student self-selection and re-selection of instructional delivery by modality.]
This could also be due to the fact that learner-learner interactions could occur in face-to-face lectures, not simply within the online Desire2Learn (D2L) platform. Not surprisingly, 76% of students reported not attaining a timely response or feedback from other students (within a 48-hour period), which may leave students frustrated with other students in the course and/or the discussion boards altogether. This finding suggests that faculty need to consider that students may not utilize the discussion boards, and that when they do, responses may be delayed and feedback not provided seamlessly, resulting in potential frustration. Despite some of the concerns raised by student responses, 86% of respondents reported the course did encourage students to discuss ideas and concepts with other students (which, again, was voluntary).
The findings from Table 2 suggest that the instructor was generally inactive in the online student discussions yet present and available for instruction and assistance in both face-to-face and online environments. This was by design: as explained previously (in the methodological approach to redesign), no additional instructional workload would be present within the online environment; therefore, engagement there was voluntary (similar to any face-to-face experience). Three-quarters of students disagreed or strongly disagreed that the instructor (LI) was an active member of the discussion group offering direction on posted comments. However, questions of feedback became particularly relevant. When asked whether they had received timely feedback (within 24-48 hours) from their instructor, 95% of students agreed or strongly agreed. Furthermore, on a reverse-coded question, only 13% of students reported some level of frustration with a lack of instructor feedback. This suggests that timely feedback and constructive criticism should be taken under advisement when developing courses (as the literature indicates). Nine in ten students (91%) felt that the course was individualized to them specifically so they could attain the appropriate level of attention.
Additionally, students across all interval levels of blended learning felt that communication was encouraged (95%), and despite lower ratios of face-to-face coursework in a traditional setting, 95% of students reported that they could feel the presence of the instructor throughout the course.
[Table 2: Student Satisfaction Survey item responses by dimension (asterisks denote reverse-coded items).]
Within the seven items corresponding to the learner-content (LC) dimension, some findings point to the need for additional research into the rubrics and content associated with learning outcomes and performance measurements. Students generally agreed or strongly agreed that preparatory materials corresponding to exams (87%), course documents (86%), assignments (83%), and website usage (74%) facilitated their learning in both face-to-face and online environments. However, students reported lower levels of problem solving (77%) and critical thinking (74%) in the online supplemental activities, which were designed simply to assist them in preparing for exams and assignments. While still a majority, only 65% of students reported that they had received enough feedback to improve their written skills.
The learner-technology (LT) dimension offers a glimpse into respondents' comfort and self-efficacy in using hardware and software within blended coursework. Initial findings suggest that students agree or strongly agree that they can deal with computer difficulties (93%), are confident in their ability with technology (96%), find working with computers and technology very easy (97%), and enjoy working with technology (99%), which makes them more productive (99%). These percentages strongly concur with Brooks [2] in that students want to work with online technology, and the digitization of courses is something this student population felt very comfortable with.
However, in stark contrast, despite 98% of students reporting that computers and technology aid learning, only 68% of students reported that the Desire2Learn (D2L) platform they utilized facilitated learning. This would suggest that platforms designed for students may in fact hinder their learning rather than ease it.
To gain more insight into self-selection of blended learning instructional delivery, the following table was generated to examine how each group of students in the varied modalities scored on each of the learner interactions/dimensions associated with Strachota's [25] Student Satisfaction Survey. Within each of the four dimensions, Likert-scale responses were coded from strongly disagree (0) to strongly agree (3). This allowed for the generation of larger indices of scores: (i) learner-learner (LL; 7 items ranging from 0 to 21), (ii) learner-instructor (LI; 6 items ranging from 0 to 18), (iii) learner-content (LC; 7 items ranging from 0 to 21), and (iv) learner-technology (LT; 9 items ranging from 0 to 27). While each dimension of engagement is unique, a composite overall score was also generated to differentiate the groups, where the previously coded 29 variables (each with a range of 0-3) were aggregated into an overall score range of 0-87. Multiple scaled indices and a composite scaled index allow for a potential ranking of which blended learning group reported higher or lower average scores on each of the four learner dimensions. It should be noted that some questions (denoted with an asterisk in Table 2) required reverse coding for the further analysis in Table 3.
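As a minimal sketch of the scoring procedure just described, the indices could be constructed as follows; the ordering of items by dimension and the positions of the reverse-coded items are assumptions, since the asterisked items in Table 2 are not listed here.

```python
import numpy as np

LIKERT_MAX = 3  # strongly disagree (0) ... strongly agree (3)

# Item counts per dimension, as described above (29 items in total).
DOMAINS = {"LL": 7, "LI": 6, "LC": 7, "LT": 9}

def reverse_code(responses: np.ndarray, reversed_idx: list[int]) -> np.ndarray:
    """Flip negatively worded items so that higher always means more interaction."""
    out = responses.copy()
    out[:, reversed_idx] = LIKERT_MAX - out[:, reversed_idx]
    return out

def domain_scores(responses: np.ndarray, reversed_idx: list[int]) -> dict[str, np.ndarray]:
    """Sum recoded items into the four domain indices plus the 0-87 composite."""
    recoded = reverse_code(responses, reversed_idx)
    scores, start = {}, 0
    for name, n_items in DOMAINS.items():  # items assumed ordered LL, LI, LC, LT
        scores[name] = recoded[:, start:start + n_items].sum(axis=1)
        start += n_items
    scores["composite"] = recoded.sum(axis=1)  # range 0-87 (29 items x 3)
    return scores

# Hypothetical usage: 303 respondents x 29 items, with (assumed) items 5 and 12 reverse-coded
rng = np.random.default_rng(0)
responses = rng.integers(0, LIKERT_MAX + 1, size=(303, 29))
print({k: v.mean().round(1) for k, v in domain_scores(responses, [5, 12]).items()})
```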
[Table 3: Mean scaled index scores (LL, LI, LC, LT and composite) by instructional delivery modality.]
Averaging the scores on each dimension for each interval/ratio of face-to-face : online learning modality, students appeared to have the most genuine learning and engagement within the 70:30 replacement option. Using the overall scoring index, students who chose the 70:30 modality had the highest scores on all learning dimensions. Interestingly, students who selected the least traditional emporium options of instructional delivery were the least likely to report good learner interactions and engagement.
The rank order of scaled indices suggests that students who selected the 30:70 option (where 30% of the course would be face-to-face) scored the lowest mean averages of learning interactions/outcomes, even lower than those in the nearly fully online emporium option. Previous research is mixed, but many studies have concluded that the more blending the better; these findings do not support that conclusion. This could be due to several issues, not least that there were fewer rubrics and measurements to engage students within the online environment (based on the design of these courses). It is clear that some students who selected the 10:90 option (an almost entirely online class) had better average learner interactions and overall outcomes than students who selected the 30:70 instructional delivery. This might suggest that students who enrolled in an almost fully online environment knew the expectations, while those within the 30:70 blended instruction did not.
The research question of this study was to ascertain the impact of student self-selection of instructional delivery on learning interactions. Four OLS linear regression models were used to generate findings for each of the student-reported learning dimensions associated with how Strachota [25] measures satisfaction. Instructional delivery was used as a ratio interval variable for the purposes of the analysis, as face-to-face instruction decreased with each ratio chosen. All four linear regression models were statistically significant at the 95% confidence level (p < .05). Findings are reported in Table 4 below.
[Table 4: OLS regression models predicting LL, LI, LC and LT interaction scores.]
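Before turning to the individual models, the following is a minimal sketch of one of the four OLS models under stated assumptions: the variable names are hypothetical, the data are simulated, and instructional delivery is coded as the selected face-to-face proportion (.90, .70, .30, .10), consistent with its treatment as a ratio interval variable.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame mirroring the predictors discussed below; the column
# names are assumptions, and "f2f_ratio" codes the selected delivery modality.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "f2f_ratio": rng.choice([0.90, 0.70, 0.30, 0.10], size=303),
    "flexibility_need": rng.integers(0, 4, size=303),
    "female": rng.integers(0, 2, size=303),
    "white": rng.integers(0, 2, size=303),
    "class_level": rng.integers(1, 5, size=303),
})
# Simulated outcome for illustration only (not the study's data)
df["LL_index"] = 5 * df["f2f_ratio"] + 2 * df["flexibility_need"] + rng.normal(0, 3, 303)

# One OLS model per interaction index (LL shown; LI, LC and LT would follow the same form)
X = sm.add_constant(df[["f2f_ratio", "flexibility_need", "female", "white", "class_level"]])
model = sm.OLS(df["LL_index"], X).fit()
print(model.summary())  # coefficients, R-squared and p-values at alpha = .05;
                        # Beta values would come from z-scoring the variables first
```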
It appears only two variables predicted the learner-learner (LL) interactions of students. Students' reported need for flexibility and convenience was the most significant predictor within the model, based on the Beta values. Student self-selection of instruction was the next best predictor of LL interactions. The data suggest that students who chose a higher amount of face-to-face instructional delivery (rather than lesser amounts) were more likely to attain higher aggregate LL interaction scores. This would make sense, as there would be a higher likelihood of peer interactions within classroom settings than in an online setting. What was somewhat surprising was that this model explained 37% of the variance in LL interactions even though the design of the course did not include any graded measures of student interaction. It should be noted that six cases were removed from the analysis to control for multicollinearity (where the Variance Inflation Factor exceeded 4, i.e., tolerance fell below .25).
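The multicollinearity screen mentioned here can be illustrated with a short sketch computing the Variance Inflation Factor for each predictor; the predictor names are the same hypothetical ones used in the regression sketch above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(predictors: pd.DataFrame) -> pd.Series:
    """VIF per predictor: VIF = 1 / (1 - R^2) from regressing it on the others.

    A common screening rule flags VIF > 4 (equivalently, tolerance = 1/VIF < .25).
    """
    X = sm.add_constant(predictors).to_numpy()
    return pd.Series(
        [variance_inflation_factor(X, i) for i in range(1, X.shape[1])],  # skip the constant
        index=predictors.columns, name="VIF",
    )

# Hypothetical usage with the assumed predictor names from the sketch above
rng = np.random.default_rng(2)
preds = pd.DataFrame(rng.normal(size=(303, 3)),
                     columns=["f2f_ratio", "flexibility_need", "class_level"])
print(vif_table(preds))
```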
Learner-instructor (LI) interactions were predicted by only one of the five variables within the model. A student's self-selection of instructional delivery remained a significant predictor, and in combination with the other four variables it explained 34% of LI interactions. The data suggest that students who selected higher ratios of online blended learning were more likely to report less engagement with the instructor. This accentuates the need for students to remain in contact with instructors, which in turn likely impacts success. To reiterate: the higher the ratio of F2F instructional delivery chosen by the student, the higher the likelihood they would report greater LI interaction. It should be noted that five cases were removed from the analysis to control for multicollinearity (using the same VIF and tolerance thresholds).
Based on the Nagelkerke R square, a much lower 29% of the variance in learner-content (LC) interactions was explained by the five variables. Only one variable, the need for flexibility, was found to be a significant predictor of LC: the more students required flexibility, the higher they reported learner-content interaction. However, as stated above, instructional delivery did not appear to have any statistically significant impact on learner-content interaction. This is somewhat surprising, as it would be expected that the amount of independent time allocated for learning would be associated with students' reported agreement or disagreement about their interactions with assignments and exams. However, the data could also indicate that because the content remained consistent across all types of instructional delivery, it played less of a role in determining student learning. It should be noted that within this model, five cases were removed from the analysis to control for multicollinearity.
Of the five variables within the linear regression model, three were found to be statistically related to a learner's interaction with technology (LT). Interestingly, some demographic variables were significant predictors. Students who self-identified as White were more likely to report higher levels of LT interaction, and women were more likely than men to report higher levels of technology interaction. This is somewhat concerning, as interactions with technology are likely a barrier to students who are visible minorities and to men, which can certainly impact their success in the course. Additionally, students reporting the need for more flexibility were also more likely to score higher on LT interactions and outcomes. Instructional delivery also remained a predictor, albeit one of greater importance based on the Beta values in Model 4. Students who chose lower amounts of online blended delivery (more F2F interaction) were also more likely to report higher learner-technology interaction across the scaled dimension. While this model explained 41% of the variance in LT interactions, four cases were removed from the analysis to control for multicollinearity.
These findings imply that even though students may appear to want or select higher levels of online blended instruction, they are also more likely (in this analysis) to score lower on the LL, LI, and LT interactions developed by Strachota [25]. This could correspond to a student's potential success and satisfaction within the course. Perhaps students who chose higher intervals of online delivery had higher expectations of what the online environment would provide for them. Results from Table 1 suggest that many of these students chose to change their instructional delivery mode to ensure more success (versus the failure they may have felt). This is an important finding that conveys that each student may have very different expectations of what an online environment should or could be, which may directly impact their success in the course [36-40].
Implications
As an exploratory case study, the findings suggest that there are statistically significant differences between student selection of interval blended environments and their impact on student learning interactions. It appears that the lower the blend of online instruction (either as a supplemental or replacement model, per Twigg [32]), the higher the reported level of learner interactions with other learners (LL), the instructor (LI), and technology (LT). Conversely, students who chose higher intervals and ratios of online instructional delivery had the undesired effect of lowered learning interactions. The mean distribution of interaction scores provides a glimpse that there is likely a tipping point in how much online blended instruction students want before it impacts learning outcomes. These findings remain tentative, as the study took place within one particular discipline at a traditional American four-year degree-granting university. This study also used a convenience sample of students who were offered an opportunity to choose their instructional delivery. Given the modest sample size, findings should be taken with a measure of caution. Additionally, there was no matched comparison group, experimental design, or probability-based sampling. Any consistency or lack of consistency with other studies of blended learning may have more to do with the uniqueness of the subject matter, instructional design decisions, and the level of facilitation required for both online and face-to-face learning contexts [31].
As such, intervals of blended learning appear to have both positive and negative effects (perhaps a curvilinear effect) on learning interactions and/or outcomes that are likely correlated with success. This poses the question: is there a tipping point of too much blended instruction? Notably, students who selected an almost fully online course (10:90) reported higher learning interactions (on average) than students who chose the 30:70 instructional delivery. Was this due to their expectations of how the course would be delivered rather than higher expectations of learning outcomes? Is there a tipping point where students disengage from the content and/or interactions simply to complete the course, treating learning or success as mattering less than in other courses they register in? Does significantly lessening face-to-face instructional delivery hinder success? These questions require more answers (across multiple disciplines). This research adds to the scholarship of teaching and learning beyond simply comparing the learning outcomes of face-to-face versus online courses. Blended courses appear to be the future of learning within post-secondary institutions [2], so determining effectiveness to assist each individual learner (through a buffet-style approach [32]) is likely part of an instructor's process when designing or re-designing a course.
References
- Runga K (2023) 100+ must know online learning statistics in 2023.
- Brooks D (2016) ECAR study of undergraduate students and information technology. Louisville, CO: ECAR.
- Pandurov M (2021) 35 exciting, blended learning statistics.
- Coleman H (2021) How did the COVID-19 pandemic change the education industry forever?
- Zhao Y, Jing Lei, Bo Yan, Chun Lai, Hueyshan Sophia Tan (2005) What makes the difference? A practical analysis of research on the effectiveness of distance education. Teachers College Record 107(8): 1836-1884.
- Means B (2013) The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record 115(3): 1-47.
- Bernard R, Eugene Borokhovski, Richard F Schmid, Rana M Tamim, Philip C Abrami (2014) A meta-analysis of blended learning and technology use in higher education: From the general to the applied. Journal of Computing in Higher Education 26: 87-122.
- Allen I, Jeff Seaman, Richard Garrett (2007) Blending In: The extent and promise of blended education in the United States. Sloan Consortium.
- Spanjers I, Karen D Könings, Jimmie Leppink, Daniëlle ML Verstegen, Nynke de Jong, et al. (2015) The promised land of blended learning: Quizzes as a moderator. Educational Research Review 15: 59-74.
- Moskal P, et al. (2006) Peer Review: Emerging trends and key debates in undergraduate education. Learning and Technology, p. 8.
- Lim D, Morris M (2009) Learner and instructional factors influencing learner outcomes within a blended learning environment. Educational Technology and Society 12(4): 282-293.
- Coldwell J, Craig A, Paterson T and Mustard J (2008) Online students: Relationships between participation, demographics and academic performance. The Electronic Journal of e-learning 6(1): 19-30.
- Park J, Choi H (2009) Factors influencing adult learners’ decision to drop out or persist in online learning. Educational Technology and Society 12(4): 207-217.
- Cohen K (2012) Persistence of master’s students in the United States: Developing and testing of a conceptual model. PhD Dissertation, New York University, USA.
- Thompson E (2004) Distance education drop-out: What can we do? In: Pospisil R & Willcoxson L (Eds.), Learning Through Teaching (Proceedings of the 6th Annual Teaching Learning Forum (324-332). Perth, Murdoch University, Australia.
- Moore M, Kearsley G (1996) Distance education: A system view. Belmont, CA: Thomson-Wadsworth.
- Beard L, Cynthia Harper, Gena Riley (2004) Online versus on-campus instruction: student attitudes and perceptions. Tech Trends 48(6): 29-31.
- Allen I, Seaman J (2014) Grade change: Tracking online education in the United States. Babson Survey Research Group.
- Woo Y, Reeves T (2007) Meaningful interaction in web-based learning: A social constructivist interpretation. The Internet and Higher Education 10(1): 15-25.
- Vrasidas C (2000) Constructivism versus objectivism: Implications for interaction, course design, and evaluation in distance education. International Journal of Educational Telecommunications 6: 339-362.
- Zimmerman T (2012) Exploring learner to content interaction as a success factor in online courses. The International Review of Research in Open and Distributed Learning 13(4): 152-165.
- Kwak D, Flavio M Menezes, Carl Sherwood (2013) Assessing the impact of blended learning on student performance. Educational Technology & Society 15: 127-136.
- Rovai A (2003) In search of higher persistence rates in distance education online programs. Computers and Education 6(1): 1-16.
- Strachota E (2003) Student satisfaction in online courses: An analysis of the impact of learner-content, learner-instructor, learner-learner and learner-technology interaction. Doctoral dissertation, University of Wisconsin-Milwaukee. Ann Arbor, Michigan, UMI Publishing, United States.
- Strachota E (2006) The use of survey research to measure student satisfaction in online courses. Presented at the Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education, University of Missouri-St. Louis, MO, p. 4-6.
- Moore M, Kearsley G (2005) Distance education: A system view. Belmont, CA: Thomson-Wadsworth.
- Dziuban C, et al. (2005) Higher education, blended learning and the generations: Knowledge is power - No more. In: Bourne J and Moore J (eds.), Elements of quality online education: Engaging communities (85-100). Needham, MA: Sloan Center for Online Education.
- Dziuban C, et al. (2011) Blended courses as drivers of institutional transformation. In: Kitchenham A (eds.), Blended learning across disciplines: Models for implementation (17-37). Hershey, PA: IGI Global, USA.
- Kuo Y, Andrew E. Walker, Brian R. Belland, Kerstin E. E. Schroder (2013) A predictive study of student satisfaction in online education programs. International Review of Research in Open and Distributed Learning 14: 16-39.
- Alammary A, Angela Carbone, Judy Sheard (2015) Identifying criteria that should be considered when deciding the proportion of online to face-to-face components of a blended course. 48th Hawaii international conference on system sciences, pp. 72-80.
- McGee P, A Reis (2012) Blended course design: A synthesis of best practices. Journal of Asynchronous Learning Networks 16(4): 7-22.
- Twigg C (2003) Improving learning and reducing costs: New models for online learning. Educause Review 38(5): 28-38.
- Asarta C, Schmidt J (2015) The choice of reduced seat time in a blended course. The Internet and Higher Education 27: 24-31.
- Owston R, York D (2018) The nagging question when designing blended courses: Does the proportion of time devoted to online activities matter? The Internet and Higher Education 36: 22-32.
- Garrison D, Vaughan N (2008) Blended learning in higher education: Framework, principles, and guidelines. San Francisco, CA: Jossey-Bass.
- Garrison D, Kanuka H (2004) Blended learning: Uncovering its transformative potential in higher education. Internet and Higher Education 7(2): 95-105.
- Lim D, et al. (2006) Online versus blended learning: Differences in instructional outcomes and learner satisfaction. Journal of Asynchronous Learning Networks 11: 27-42.
- Marriot N, Pru Marriott, Neil Selwyn (2004) Accounting undergraduates’ changing use of ICT and their views on using the internet in higher education-A Research note. Accounting Education 13: 117-130.
- Packham G, Paul Jones, Christopher Miller, Brychan Thomas (2004) E-learning and retention key factors influencing student withdrawal. Education and Training 46: 335-342.
- Vaughan N (2007) Perspectives on blended learning in higher education. International Journal on E-Learning 6(1): 81-94.