Do the tutorials and review programs cost the user anything?

Many of them are free to the public. As a user makes progress, performance records accumulate at the site, and the website managers use them to refine the content where necessary. However, a user’s records cannot be openly accessed for verification by an instructor because of technical and privacy constraints.

How can course or workshop instructors assign the tutorials or review programs and ensure that students complete them at a minimum level of proficiency?

One way is for an instructor to point students to a free, publicly accessible tutorial or review program. Students can take a screenshot of the final frame of each tutorial or review program; this frame shows the percent-correct score, and the screenshot can be emailed to the instructor as a record of completion.

Another way adds a step to the one above: the instructor composes a posttest, administers it during a class or workshop meeting, and then scores the tests. This, however, can require substantial instructor time and effort, as well as consume valuable instructional time.

An easier, more secure, and inexpensive solution is to arrange for a website manager to email periodic records of student performance directly to the instructor. However, this requires the website manager to customize some programming for that instructor’s workshop or course. While instructors would not have direct access to data on the website, they can ask for student performance records to be emailed to them in alignment with assignment deadlines (e.g., weekly or monthly). If such customization and feedback are desired, each student or workshop member will be required to pay a nominal fee (e.g., $5 through PayPal); the instructor will not receive records of students who fail to pay it. To begin setting up a custom course for your students or participants, send an email message to a website manager.
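
As a rough illustration of this arrangement, the sketch below shows how such a periodic report might be assembled. Everything in it (the record fields, the fee flag, the roster check) is an assumption made for illustration, not the site’s actual implementation.

```python
# Hypothetical sketch: assembling a periodic performance report for one
# instructor's course. Field names, the fee check, and the report format
# are all invented for illustration.

from dataclasses import dataclass
from datetime import date


@dataclass
class Record:
    student_email: str
    program: str          # tutorial or review program name
    percent_correct: int  # score shown on the final frame
    completed_on: date
    fee_paid: bool        # only students who paid the nominal fee are reported


def weekly_report(records: list[Record], course_roster: set[str]) -> str:
    """Compose the body of a report email for one instructor's course."""
    lines = ["Student performance report", "=" * 26]
    for r in records:
        # Skip students outside this course or who have not paid the fee.
        if r.student_email not in course_roster or not r.fee_paid:
            continue
        lines.append(
            f"{r.student_email} | {r.program} | {r.percent_correct}% "
            f"| completed {r.completed_on.isoformat()}"
        )
    return "\n".join(lines)


if __name__ == "__main__":
    roster = {"student@example.edu", "other@example.edu"}
    records = [
        Record("student@example.edu", "Basic Concepts Tutorial", 92,
               date(2024, 1, 15), fee_paid=True),
        Record("other@example.edu", "Basic Concepts Tutorial", 88,
               date(2024, 1, 16), fee_paid=False),  # unpaid: not reported
    ]
    print(weekly_report(records, roster))
```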

Why is this form of instruction more effective?

The technology of automated interactive instruction carefully employs the differential reinforcement of developing behavior. To be reinforced, a behavior must first be emitted. To some extent we privately “emit” behavior as we passively read and listen; in a sense, we “echo” it when we are already inclined to say it. During instruction, however, the strengthening of this emitted behavior is not assured without external confirmation and reinforcement. For example, one can passively “read” or “listen” while actually thinking about something else for some moments, and in doing so miss the concurrence of the concepts being presented. Automated interactive delivery prevents this problem: concepts are deliberately paired during instruction, and their concurrence is assured because the user MUST OVERTLY emit the required components in order to move forward. Thus, differential reinforcement and confirmation of learning can be precisely arranged in sequential steps. Experimental research has clearly demonstrated that requiring overt emission of behavior during instruction more firmly establishes the behavior and, further, that the behavior more readily generalizes to application elsewhere.

See Kritch and Bostow, 1998.
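
To make the contingency concrete, here is a minimal sketch of an interactive frame loop that advances only after the learner overtly emits the required response. The frame content is invented for illustration; this is not code from this site.

```python
# Sketch of the contingency described above: the learner must overtly emit
# the required response before the program advances to the next frame.
# Frame content here is invented for illustration.

frames = [
    {"prompt": "A behavior must first be _______ before it can be reinforced.",
     "required": "emitted"},
    {"prompt": "Reinforcing some responses and not others is called "
               "_______ reinforcement.",
     "required": "differential"},
]


def run(frames) -> float:
    """Present each frame; advance only on the required overt response."""
    correct_first_try = 0
    for frame in frames:
        first_attempt = True
        while True:
            answer = input(frame["prompt"] + "\n> ").strip().lower()
            if answer == frame["required"]:
                print("Correct.")  # immediate confirmation and reinforcement
                if first_attempt:
                    correct_first_try += 1
                break              # advance only after the overt emission
            # A simple prompt: the required term is shown, and the learner
            # must still type it before moving forward.
            print(f"The required term is '{frame['required']}'. Type it to continue.")
            first_attempt = False
    return 100 * correct_first_try / len(frames)


if __name__ == "__main__":
    score = run(frames)
    # Analogous to the percent-correct score on a tutorial's final frame.
    print(f"Percent correct on first attempt: {score:.0f}%")
```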

How do the tutorials at this site differ from most on-line instruction?

Those familiar with operant conditioning argue that “learning” is best defined as a change in behavior, not a mental process. Even when one “does math in one’s head,” these chains of behavior are almost always established overtly before they recede to the covert level. Unfortunately, most on-line instruction today fails to employ differential reinforcement of active, overt behavior. Instead, much of it is little more than a textbook with links to pages and video. Few creators of on-line instruction are familiar with operant conditioning techniques, particularly those called priming and prompting. In many cases, on-line instruction does not build sequentially from basic concepts to complex ones. In contrast, well-designed programmed instruction requires the construction of a matrix of concepts, the sequential ordering of those concepts, and deliberately placed frames that interrelate the developing concepts. In well-programmed instruction some frames present rules and then examples, others examples and then rules. Review frames are carefully inserted, and generalization to new situations is carefully structured. Most importantly, trial testing of new programs identifies inadequate instructional sequences: user PERFORMANCE is the linchpin for refinement of the program design. This recognizes that the student is always “right.” The solution to poor student performance is program revision, not claims of student inadequacy. The ultimate test of good instruction is, of course, whether the learner engages more effectively with the world at large.
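
As a rough sketch of those design artifacts (with invented concepts and an invented revision threshold; this is not how the site’s programs are actually stored), the following shows a concept matrix, a frame sequence mixing the frame types named above, and a trial-testing check that flags frames for revision rather than blaming the student:

```python
# Sketch of programmed-instruction design artifacts, with invented content:
# a concept matrix, an ordered frame sequence, and a trial-testing check.

from dataclasses import dataclass
from enum import Enum


class FrameKind(Enum):
    RULE_THEN_EXAMPLE = "rule, then example"
    EXAMPLE_THEN_RULE = "example, then rule"
    REVIEW = "review"
    GENERALIZATION = "generalization"


@dataclass
class Frame:
    kind: FrameKind
    concepts: tuple[str, ...]  # the developing concepts this frame interrelates
    text: str


# Concept matrix: each concept lists the later concepts it must be paired with.
concept_matrix = {
    "reinforcement": ["differential reinforcement"],
    "differential reinforcement": ["shaping"],
    "shaping": [],
}

# Sequential frame ordering, building from basic concepts to complex ones.
sequence = [
    Frame(FrameKind.RULE_THEN_EXAMPLE, ("reinforcement",), "..."),
    Frame(FrameKind.EXAMPLE_THEN_RULE, ("differential reinforcement",), "..."),
    Frame(FrameKind.REVIEW, ("reinforcement", "differential reinforcement"), "..."),
    Frame(FrameKind.GENERALIZATION, ("shaping",), "..."),
]


def frames_needing_revision(error_rates: dict[int, float]) -> list[int]:
    """Trial testing: flag frame indices whose error rate exceeds a threshold.
    Poor performance points to program revision, not student inadequacy."""
    return [i for i, rate in error_rates.items() if rate > 0.10]


# Example: frame 2 was missed by 25% of trial users, so revise frame 2.
print(frames_needing_revision({0: 0.02, 1: 0.05, 2: 0.25, 3: 0.04}))
```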

To ask questions about any of the above, please send a message here.