Program Development

In order to compare learning output variables in web-based instruction, two WBI (Web-Based Instruction) programs were developed as college-level courses. One course (called structured instruction in this paper) was developed and implemented mainly as a highly structured, resource-based self-learning mode; the other (called interactive instruction in this paper) was developed and implemented to carry mainly interpersonal interaction with the least possible course/contents/interface structure. The elements and design/management factors of each instructional program, such as elements for course/contents/interface structuralization or management tips for the interactive course, are shown in Table 1. The programs were designed and developed according to the traditional ISD (Instructional Systems Development) model, considering these elements. Three WBI experts and practitioners verified the two programs throughout the development process. The subject matter of both courses was the same: general social science. The development process for the two instructional programs was as follows: analysis and planning (learning contents, students, and learning environments were analyzed first, and the research design was planned on the basis of these results) → first instructional design and draft storyboarding → validation by three experts (Ph.D.s in the field) and the developers → second instructional design and storyboarding → production of digital materials → pilot implementation of the course → revision and redevelopment of the program → validation by three experts and completion of the final instructional programs. (The programs were also validated by students during the process of implementation and data collection; see Results.)

Participants and setting

Sixty-seven and fifty-six juniors at a cyber university in Korea, aged from their 20s to their 50s, were randomly assigned to the two courses respectively and required to take them for one semester. A pre-test assessed prior knowledge, and the results showed no significant difference between the two groups (Table 2). The same instructor and the same contents were assigned to both classes. In terms of student characteristics that might have influenced the results of this research, perceived learner characteristics were analyzed a week prior to each program's implementation. Students used a five-point Likert scale to rate whether they were extroverted vs. introverted, social (or outgoing) vs. rather individual, and active vs. passive in class. As a result, students in the highly structured instruction were more extroverted (p < 0.05) and more active in class (p < 0.01) than the students in the interactive instruction. However, this student factor does not appear to have been influential in this context, because the students' characteristics were reversed in the earlier pilot implementation, which nevertheless showed the same results as this paper. That is, in the pilot implementation, students in the interactive instruction were more extroverted (p < 0.05) and more active in class (p < 0.05) than the students in the structured instruction, yet the results of the pilot showed no significant difference from the results in this paper. Therefore, student characteristics were not as critical a factor as instructional method in this research.

Implementation

The two classes were homogeneous in terms of instructor and subject matter, since the same instructor and the same subject matter were assigned to both. The main difference was the instructional design and management approach (Table 3). In the highly structured instruction, students were required to study highly structured web-based material and to complete assignments given every week, usually alone. The instructor's feedback was provided as little as possible; instead, most of the feedback the instructor could have given was structured into the web materials. For example, the materials were written in natural oral language, like a simulated conversation, and a teaching-assistant cartoon character accompanied the learning process throughout the material. In addition, the various structuring elements in Table 1 were reflected in the program and the web-based material. The instructor tried neither to encourage nor to discourage interactions among students. Meanwhile, in the highly interactive instruction, students were required to complete assignments and to participate in team discussions assigned throughout the course. The instructor provided immediate feedback on assignments and questions, although well-structured materials were not provided. All interactions were recorded on an asynchronous web bulletin board. Table 3 shows some examples of the course structure differences between the two classes.

Procedure

This research was conducted between 2002 and 2004, from program development to data collection and analysis. Design and development of the two instructional programs began in early 2002. The treatment was first administered with these programs as a pilot implementation in the fall semester of 2002. After some revision, the final instructional programs were implemented again in 2003, and interviews and analysis of the collected data were conducted through 2004. The results were found to be credible enough to be replicable, since there was no difference between the two implementations.

The students’ cognitive achievement data were collected in two different ways. One concerned receptive learning, which consisted of declarative knowledge, information, concepts, or theories that are mostly accepted without criticism (e.g., introducing the concepts and definitions of terminology related to the class). In receptive learning, students do not have to criticize what they learn; what they do is simply understand, memorize, and recollect — reception in itself. The other concerned critical thinking learning, which contained controversial issues requiring argument, criticism, and discussion (e.g., “Does distance education using high technology reproduce social classes, making the rich richer and the poor poorer?” or “Is distance education cost-effective?”).

For evaluation reliability, the three evaluators’ gradings were correlated (Pearson r = 0.84, p < 0.01). To measure satisfaction level, a satisfaction instrument developed and validated by Kim and Ryu (2000) was used after modification. Twenty students were interviewed at the end of each course to verify all quantitative data and to provide more detailed information on the factors behind the learning outputs. In addition, questionnaires, achievement scores, satisfaction ratings, online messages, and interview data were collected and analyzed. Various statistical methods, such as correlation analysis, content analysis, t-tests, and frequency analysis, were applied to the data.

Results and Interpretations

Cognitive achievement

Students’ cognitive achievement, shown in Table 4, was evaluated in terms of receptive learning and critical thinking learning. In receptive learning, the well-structured course was not inferior to the course based on the teacher’s interaction. However, in critical thinking learning, students in the interactive instruction showed higher achievement than those in the structured instruction. In receptive learning, a well-structured instructional program was provided for the structured-course students, while simple text scripts and the instructor’s feedback were provided for the interactive-course students. In critical thinking learning, well-structured contents, including pro/con arguments on the issues, were provided for the structured-course students, while live discussions and arguments (mainly student-student rather than instructor-student interaction) occurred on the board in the interactive class.
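The achievement and pre-test comparisons above rest on independent-samples t-tests between the two groups. As a minimal illustration only — the scores below are hypothetical, not the study’s data — Welch’s t statistic can be computed as follows:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom
    for two independent samples with possibly unequal variances."""
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical critical-thinking scores for the two groups
structured  = [72, 68, 75, 70, 66, 74, 69, 71]
interactive = [80, 77, 83, 79, 76, 82, 78, 81]
t, df = welch_t(interactive, structured)
```

The resulting t is then compared against the t distribution with df degrees of freedom; in practice a statistics package reports the p-value directly.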

A 26-item survey was used to measure participants’ satisfaction with their instructional program. This survey, modified from Kim and Ryu’s (2000) instrument, consisted of three categories: structure (course, contents, interface), interpersonal interaction (instructor-student, student-student), and overall attractiveness of the instruction. Respondents used a five-point Likert scale (5 = very satisfied, 1 = very dissatisfied) to rate their satisfaction. The alpha reliability coefficient of the survey was .93. The t-test results in Table 6 show that students in the structured instruction were more satisfied with the structure factor of the program, whereas students in the interactive instruction were more satisfied with the interpersonal interaction factor. There was no significant difference in the attractiveness of the program.
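The reported alpha reliability coefficient is Cronbach’s alpha, computed from the item variances and the variance of respondents’ total scores. A minimal sketch (the item responses below are hypothetical, not the survey’s data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-response columns
    (one list of respondent scores per item)."""
    k = len(items)                                   # number of items
    totals = [sum(resp) for resp in zip(*items)]     # each respondent's total score
    item_var = sum(pvariance(col) for col in items)  # sum of per-item variances
    total_var = pvariance(totals)                    # variance of the total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical five-point Likert responses: 3 items x 5 respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 3, 4, 1],
]
alpha = cronbach_alpha(items)
```

Because alpha is a ratio of variances, the choice of population or sample variance does not matter as long as it is applied consistently.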

Additionally, students were asked to identify their perception of the critical satisfaction factors by checking the most critical factor and the second most critical factor. Students in the structured instruction considered the design of the web-based material the most critical satisfaction factor, whereas students in the interactive instruction named the instructor’s feedback. Both groups considered the contents of the program a secondary satisfaction factor (see Table 7). These findings indicate that the instructional programs in this research were developed appropriately, in line with the design intentions. Students were satisfied with their programs at above-average levels, so bias arising from program dissatisfaction was avoided. Table 8 also supports the conclusion that the two instructional programs were appropriately developed, since students in the structured course perceived their course as highly structured while students in the interactive course perceived theirs as highly interactive. The instructional programs developed in this research were thus validated again by these results.

Discussion and Conclusion

The purpose of this study was to analyze the influence of instructional design and management style on student achievement and satisfaction in a web-based distance learning environment. The results indicate that program structure can, as early distance educators expected, partly replace interpersonal interaction, especially when the contents have receptive characteristics such as understanding, memorizing, and recollecting facts, concepts, and principles without criticism. This is empirical evidence contradicting the general belief that the essence of learning derives only from interpersonal interaction between teacher and student, whatever the learning contents are. This research suggests that a well-structured instructional program can serve as a substitute for the teacher’s interaction in receptive learning.

On the other hand, in critical thinking learning, which pursues improvement of critical thinking ability, we need to focus more on the design of interpersonal interaction than on structuring the program. The results show that cognitive achievement in the interactive instruction was significantly higher than in the structured instruction when the learning contents aimed at critical thinking. But the interpersonal interaction does not always need to be teacher-student interaction: student-student interaction was effective enough in this research. This carries an important implication. One reason that teachers’ interaction with students may not be implemented well in practice is that teachers feel overloaded by interacting with students (Lao, 2002). Since such interaction is labor-intensive and therefore expensive, distance universities may not be able to finance this type of instruction for all students. If it comes to be understood that the effect of student-student interaction is not inferior to that of student-teacher interaction, a substantial budget for hiring instructors or tutors could be saved. In this regard, the findings of this research suggest that interactions among students need to be designed elaborately, both to relieve the teacher’s overload and to maximize learning effects.

Meanwhile, the instructional method factor seems to have a greater influence on student achievement than the learner characteristics factor. Perceived learner characteristics were analyzed a week prior to each program’s implementation, in this research as well as in the pilot implementation, and the results showed no difference attributable to learner characteristics. This implies that a student’s learning is influenced more by instructional methods or teacher factors than by learner characteristics; that is, the between-group variable (instructional method) was more critically influential on student learning than the within-group variable (learner characteristics). Learning is, of course, influenced by the learner’s cognitive, emotional, and social characteristics, but the teaching and learning situation needs to be understood comprehensively. In any case, it is impossible to instruct a perfectly homogeneous group, as individual differences among students always exist. Despite those individual differences, some classes’ average GPAs do go up, and popular instructors and successful programs exist. This indicates that certain teaching methods are effective in a given context, and it underlines the importance of instructional design (ID) for effective teaching.

In conclusion, this research offers a significant implication for the theoretical framework of distance education. As reviewed in the Theoretical Background, the contrary yet complementary relationship between structure and interaction has been pervasive in academia. The idea that when structure increases, interaction decreases, and vice versa, implies that one can substitute for the other. This research provides empirical evidence that structure can indeed be such a substitute in certain cases, although not always. With regard to the pedagogic optimum between the two extremes — an extremely structured course at one end and an extremely interactive course at the other — this paper suggests that the optimum lies toward the structure side in receptive learning, whereas it shifts far toward the interaction side in critical thinking learning. The practical implication is that, in web-based distance learning program development, we need to consider the characteristics of the learning contents first, such as whether they call for receptive learning or critical thinking learning, because the learning outputs are influenced more by these characteristics — especially when deciding the priority between a self-study mode with a highly structured program and an interactive mode with rich interpersonal interaction, or how to combine the two modes.

Finally, recommendations for further research are suggested as follows. First, this research did not consider each structuring element’s effect or influence on learning, although many structuring elements from the previous literature were applied to the structured instructional development; there could be many different design types among structured instructional programs. Analysis and comparison of each element’s influence on learning would be a good theme for further research. Second, more objective analysis of learner characteristics with validated measuring tools, including special characteristics such as learner autonomy, aptitude, or self-regulation ability, would yield more articulated implications. Third, further research might study the structuring of interaction. Stein et al. (2005) reported that the combination of learner-initiated interaction with instructor-initiated interaction built into the course explained more about satisfaction with perceived knowledge gained than satisfaction with structure alone; research on the structuring of interaction would thus enhance our flexible understanding of these two extreme concepts in distance education. Fourth, although the basic principles of pedagogy do not change easily, further study could explore whether the results differ with the evolution of IT, which may influence the ways contents are developed and interaction occurs. Finally, further research providing quantitative data from more cases with various learning contents could verify and generalize the findings of this research, and research considering emotional or social evaluation as well as cognitive evaluation across various learning contents is recommended for a more comprehensive understanding of web-based distance education.