PARALLELISM IN WORKING-, LEARNING- AND DO-ENVIRONMENTS
The Parallel Instruction Theory for Coaching in Open Learning Environments for Simulation
By Rik Min
Also published (as a short paper) under the title: Shortcomings of monitors; the problem of linear presentation media in learning situations; the importance of parallelism in open learning and working environments. In: Proceedings of EuroMedia 96: Telematics in a multimedia environment, Dec. 19-21, 1996; a publication of the Society for Computer Simulation International (SCS) (Eds. A. Verbraeck & P. Geril) (published on the Web).
Key words: design, design theory, user interface, HCI, environments, learning tools technology, simulation, simulation technology, instruction, instruction technology, instrumentation technology, interactivity, simultaneous processes, parallelism, parallel instruction theory and applied educational theory.
Abstract / Summary
This article is about research in the field of human-computer interaction (HCI) and learning tools technology. Patterns and experiences found in a large number of different working-, learning- and do-environments have been linked. The concepts 'parallelism' and 'interactive work environments' are discussed, as well as a design theory for such do-environments. The design theory and the ideas around the concept of parallelism are meant to offer a solution for a number of recurring problems in the organisation of an educationally sound, interactive learning environment. Min's 'PI-theory', which is primarily intended as a theoretical framework for the organisation of learning environments for simulation, also has consequences for other kinds of interactive working environments. The concepts 'parallel' and 'simultaneous' are dealt with. Parallel data flows and simultaneous processes occur everywhere in society. The focus is on how people handle several processes which are forced upon them simultaneously. The article ends with a few results of this philosophy and advice on how work environments should be organised. If the particularities and patterns which have been discovered and are described here are not taken into account in the organisation of a work environment, that environment must be considered a failure.
Parallel and Simultaneous
In daily life people gather impressions continually. They are also expected to act all the time. This applies to working in an interactive learning environment as well.
In an interactive computer-driven working environment there are continually all sorts of simultaneous processes providing different types of visual and auditive information side by side, mixed and on top of each other. The attention here is mainly on electronic products or interactive working environments: working environments based on a computer or a monitor, in contrast to (paper) working environments. The word 'interactive' here means 'on line': working environments that are computer-based, with software running either locally or 'at a distance'.
A person receives relevant and less relevant information from all sides. When working in an electronic environment there is a continuous process of supply and demand. People have learned to handle this fairly efficiently. A mechanism has been developed enabling a person to select and use only that which he needs. However, in some interactive working environments this is not possible. The question is whether this is due to the transmitter or the receiver: the equipment or the user?
The impressions concerned are auditive or visual: image or sound; but wind, storm, heat, cold and rain somehow influence the human being too. The brain is fed with information through all channels of perception. How does man select and remember what he needs in those continually changing circumstances?
Eyes and ears are the main channels of perception. We are trained to select the right image and the right dosage from vast masses of information. As the eye is able to focus immediately on the one correct spot out of an immense amount of parallel impressions, so the ear can focus on the one source it wants to hear among a quantity of different sources. From a series of voices, man is able to pick out the story he wants to hear.
With traditional media, this continuous and almost unconscious selection consumes a lot of an untrained person's energy. Some educationalists say that information offered to inexperienced people should be free from all kinds of disturbing information. However, other scientists say that an instruction or learning environment may well be exciting, challenging or tempting. In the literature there are numerous directions on how a perfect transfer-, learning- or working-environment should look. But one should not follow these directions literally. Depending on the circumstances, the target group and the field, a design condition stating why something should or should not be applied may well be interpreted differently.
Figure 1. Two screens in practice: children like it...
Within educational science, and in particular in applied educational science, there are instruction technologists who want an instruction to be clear, stripped of all trimmings, the reason being that trimmings may distract the receiver. But there are learning psychologists who say the reverse. Some researchers say that a learning environment may well be full of information, or even that it has to be like that. The question is of course: who is right?
Figure 2. A journal is an example of parallelism in real life. A reader can skip very quickly through unimportant data to a message he likes and wants to read.
Figure 2b. Some leaflets are an example of parallelism in real life.
The above remarks also refer to situations involving the common media such as television, video recorders and interactive tape-slide series with remote control. Human beings are quite capable of seeing and hearing selectively and handling simultaneous processes. However, for computer-based working environments, and in particular interactive open working environments, other laws apply. Linear situations are much less frequent there. Relatively linear media are traditionally a world for instruction technologists. Really interactive media should be approached differently.
Learning versus Instruction
Instruction theories differ from learning theories. Experts in the field of learning ('learning technologists') have meanwhile developed well-structured ideas which are often at right angles to ideas from the traditional world of instruction ('instruction technologists'). This is of course due to the fact that instruction is something being offered (the 'supply side'), whereas learning is more the gathering of knowledge by the user; this could be called the 'demand side' in education. 'Asking for certain information' and 'offering certain information' are two entirely different fields of science, both traditionally and ideologically. Just compare Gagné's and Romiszowski's ideas with those of Van Parreren and Papert. Van Parreren and Papert have approached the subject from the learning angle: student-controlled learning. Their points of departure, approach and targets in their solutions for educational problems are quite different.
In the world of educational instrumentation technology ('educational technology') there are two different fields: the world of 'learning technology' and that of 'instruction technology'. In other words: there is the world of the designers of learning tools and the world of the designers of instruction. For the analyses below it is essential to distinguish clearly between 'learning tools' and 'means of instruction'. In principle, learning tools contain no knowledge, or very little, whereas means of instruction do contain knowledge. If learning tools do contain knowledge, then they are really a combination of learning and instruction tools; otherwise we would have to call the underlying models or systems knowledge. However, that is not the issue here. In practice one often comes across a combination of the two, but for the benefit of this discussion the two functions should be considered and analysed separately.
An example of this fundamental difference between instruction-rich and instruction-poor environments is a chemistry lab with nothing but a set-up for a practical: apart from reading manuals, there is no knowledge transfer. On the other hand there is a course on television, where knowledge is transferred, dosed in a certain way, with very little interaction. These are two opposites: learning tool versus means of instruction.
Computer-based learning environments have to meet different requirements than computer-based instruction programs. With learning tools nothing happens until the user asks for it; with instruction programs the user has to do exactly what the designer intended.
In that type of courseware a target cannot be compared with targets in open learning environments. The main target of instruction tools is of course knowledge transfer. This is different for learning tools, because there insight is trained or tested, or knowledge that is already present is deepened. In general, knowledge has already been acquired elsewhere (let's call that knowledge 'static') and somehow a deeper understanding should be achieved by means of another learning tool. 'Static knowledge' will then become 'dynamic knowledge' and all that has been learned will be better applicable.
Plain simulation programs and tools usually do not contain knowledge. Knowledge about the domain or subject of the simulation has already been touched upon in a traditional way, as part of the curriculum or a course. In general, curriculums or courses have a differentiated offer of learning tools. Simulation could be one of them; lessons, tutorials, study materials, laboratory sessions, instruction books or an excursion are possible other tools. Each form has its pros and cons. Instruction materials play a part both in one-way and in two-way tools. In the latter case, instruction plays a secondary but essential part. The question in this investigation is what type of instruction plays a part in creating a proper learning environment. A number of forms have been investigated, viz.:
In 1990 Van Schaick Zillesen and Gmelich Meijling investigated certain simulations and the role of paper materials versus electronic instruction materials beside or on a monitor (Min, 1995). In the years 1992-1994 that research, and the pattern in its results, really showed us what the pros and cons of sequential situations are and what the order of parallelism is, no matter which medium is used.
Paper
Simulation programs have always been provided with paper support or instruction materials. Simulations with this form of coaching, with all sorts of paper materials, proved the most successful in practice. This in itself was an indication that parallelism plays a certain part here. However, we were not aware at the time of how important parallelism was to become for our research as research topic and subject variable. The electronic instruction method - at least the linear form - proved unsuccessful for simulation environments. Apparently the medium or the software design methods were not yet sufficiently developed; at the time multitasking was not yet used for learning tools. This was also shown in our research. Instruction via the monitor did not work at crucial moments, and paper materials were not given sufficient credit by teachers and users. Often the paper material was not read at all and carefully prepared cases were left unused. Instead people tended to play with a simulation, including the teachers themselves. Simulations which were used without instruction or coaching often disappeared without a trace, however well-designed.
Research showed that simulations which were successful had achieved this through carefully composed work sheets or booklets with set tasks and cases. This was practically always paper instruction material. Paper therefore had a big advantage. But nobody was aware of the main reason for this: parallelism. People were looking for modern solutions, and paper had a specific disadvantage: it does not prosper in a monitor culture. It is also difficult to distribute and relatively difficult to copy. The arrival of tele-learning, the download possibility, on-line working and delivery-on-demand methods make it necessary to continue searching for suitable electronic possibilities that can become successful. Computers prove to have disadvantages of their own, which will be discussed below.
Electronic materials
Many researchers have for various reasons tried to solve coaching and instruction work in open work and learning environments electronically. Those efforts failed in the harsh reality of learning tools practice. Modern learning tools are used for a while and then they disappear again for reasons unknown. There are numerous examples to illustrate this.
Designers often try to solve this problem by using one half of the screen for instruction and the other for the actual learning environment. These solutions, called viewport solutions in our research, indicate the awareness of the designer of the user's need to put information side by side in order to be able to compare things. It proves that users need to have things parallel. This is defined as parallelism of the first order. The problem with this is that the designer needs a larger screen in the end than that provided on the standard PC.
Monitor
Space on the monitor of a standard computer is simply extremely limited. Our research clearly indicates that monitors are actually an extremely deficient and clumsy part of today's PCs. Many users and designers are not aware of this. What is the problem of the monitor? The monitor is derived from the television medium. The television set was designed for the presentation of linear programs. It is true that image and sound were linked, but that form of parallelism provides no problems. The present problem of multimedia computers is parallelism within one channel of perception. Images appear and at the same time other images disappear. This requires the use of one's memory, which may well be a problem in certain situations. The same happens in the audio field, but we are better equipped to handle that. A watcher's memory is used differently in linear processes than in learning processes.
With simulations someone has to prove his insight; his (passive) knowledge 'comes alive'. Designers of interactive software want to achieve completely different things with people than film or video designers. A television screen is a one-way medium. With a film or a discussion on television the continuous disappearance of images is no problem: in a well-designed linear program there is always plenty of redundant information to get the message across.
With courseware and interactive software in general, but in particular with learning environments for simulations, this implicit restriction of the monitor indeed plays a part. One is not always aware that many problems with simulations are related to this handicap of the small monitor. In a teaching program, for instance, certain information has to be used in a different part of the lesson as well. In other words: information from an earlier moment should also be available later. The student would like to see that information without having to do anything. If a designer has not anticipated this, his product will stay in the cupboard, never to be used again. This has happened to a lot of MS-DOS products. The Macintosh and desktop publishing with windows pointed in the right direction. Meanwhile MacOS System 7 is the most suitable operating system for this type of experiment, as it allows multitasking in a very natural manner. Windows 3.1 and Windows 95 are a step in the right direction. Most possibilities of tested prototypes designed on the Mac can be transferred to the present Intel computers. However, as yet not all techniques operate smoothly on that level.
Windows
Learning environments imply streamlined two-way traffic, which should be realised in a natural way. Information presented on a monitor can usually not be manipulated directly; this is done indirectly with the help of a mouse. This adds two response-time problems together, for we want both a quick presentation and the possibility of fast intervention. But windows that are easily moved and called up again quickly, as well as a fast and accurate mouse, make heavy claims on hardware. It is not clear yet what is really essential for a good and effective learning environment. Much of what we want is technically still imperfect.
A designer really wants a larger effective screen surface, even if only virtually. This cannot be achieved without windows. But they have to move easily and quickly, without losing their unique contents, and it must be possible to call them up again, since they offer insufficient advantages otherwise.
It is still difficult to retrieve earlier presented and selected information onto the screen. It may well be more practical to print the information on the screen so that it can be put beside the monitor for further consultation, rather than to find the same information again and again and organise it on a screen. The fact that paper materials beside the monitor are quite practical goes to prove that users are more familiar with working in parallel than we think. Before windows became popular on computers, many people tried to solve this problem in various ways. Below you will find a list of efforts from the previous decade:
However, few of these 'old' solutions really work. They usually require a certain skill from the user, but in education every obstacle is one too many. Designers have come up with all kinds of solutions, e.g. by cramming a screen with all kinds of information. This is 1st order parallelism, but it has all kinds of ergonomic disadvantages: text that is too compact, poor syntax due to statements that are really too brief, an excess of information, letters on the screen that are too small, and so on.
Figure 3. The AKZO project: the BRINE PURIFICATION simulation program for training on the job (van Schaick Zillesen & Gmelich Meijling & Min, 1992). In this training environment there are 5 parallel processes: 3 'active screens' (the monitors) and 2 'passive screens' (the pages of the paper materials). (Demonstration of the project on the Corporate Training Conference; In: Kluwer Acad. Publ., 1995, page 209-226: Eds. M. Mulder & W. Nijhof)
The desktop philosophy, together with the arrival of windows around 1982 and later multitasking (around 1990), was not only a technical breakthrough but proved to be a big step forward for users as well. This was ideal for the beginner who only wanted to be able to work with his program and nothing else. Working with windows was quite like working with loose sheets, books and tools on a desk. The fun of working and learning behind a desk was not unlike working and learning with information sources through windows. Many professional software producers, in particular the traditional informaticians, did not see the use of this childishly simple user interface. The value of the ideas of Adele Goldberg and the Xerox Palo Alto Research Center on Smalltalk and the like (1980) was not recognised. But people in education understood the approach; just think of Papert (1980) and the breakthrough of Apple Macintosh computers in schools in the USA (1985-1995). This type of parallelism with windows is defined as parallelism of the second order. The effective screen surface became larger than 100% due to the arrival of multi-windowing applications, which is no small advantage. Pull-down menus also rapidly became popular all over the world, not least because there is a quite distinct form of parallelism, of the second order, in pull-down menus: you can take a quick peep elsewhere without messing up everything on the screen. This is implicit proof of the use of parallel thinking during the design stage. People who do not want to or cannot remember useless things (such as commands) immediately recognised the revolutionary aspect of a fast window technology, in particular for education. The computer world, and in particular the MS-DOS and Intel people, did not understand the impact of pull-down menus, mouse and windows. They realised far too late why this concept was a major step forward compared to scrolling screens and command-based software.
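The claim that multi-windowing makes the effective screen surface larger than 100% can be illustrated with a small sketch. The window rectangles and the 640x480 screen size below are hypothetical, chosen only for illustration:

```python
# A minimal sketch of 2nd order parallelism: partly overlapping windows
# give a virtual work surface larger than the physical screen.
# All window rectangles and the 640x480 screen size are made up.

SCREEN_W, SCREEN_H = 640, 480
SCREEN_AREA = SCREEN_W * SCREEN_H

# (x, y, width, height) of three partly overlapping windows
windows = [
    (0,   0,   500, 400),  # simulation model window
    (300, 200, 340, 280),  # instruction window
    (100, 100, 400, 300),  # graph window
]

# Each window keeps its full contents even when partly hidden, so the
# total virtual surface is simply the sum of all window areas.
virtual_area = sum(w * h for (_, _, w, h) in windows)
effective_pct = 100 * virtual_area / SCREEN_AREA

print(f"effective screen surface: {effective_pct:.0f}% of the physical screen")
```

For these made-up numbers the effective surface comes to about 135% of the physical screen, which is exactly the advantage of bringing an underlying window forward instead of enlarging the monitor.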
Traditional computer scientists who did not believe in this type of event-driven windowing system counted on the sheer force of a processor, and on the user's good memory for commands, rather than looking for a perfect concept that would enable everyone to use software. Primitive copies of the Macintosh concept on 286 and 386 PCs also induced people to be blind to the advanced event-driven programs. For mouse and window mechanisms need to work 100% of the time, and response times need to be fast. This is still insufficient on ordinary PCs, and particularly on the PCs used at schools.
Mouse
The use of windows has disadvantages too. Using a mouse is a little complicated at first, and programming event-driven applications with really good windows and pull-down menus is troublesome.
Mouse and windows are still far from perfect; this applies to all manufacturers. They certainly do not run well on Windows 3.1 equipment. Mouse movements are influenced by the difficulty of the job. There are good and bad windowing computers. On most computers the response time of the mouse movement depends too much on the status of the primary process: if an extensive read or write operation from or to the disk is active, the mouse moves across the screen in a hopelessly irregular manner. This is apart from the problem of the mouse with two or three buttons. A person needs only one mouse button. It has been shown that a mouse with one button is a lot pleasanter for most people: they need not think about whether to press the right or the left one.
Macintosh System 7 - and Motorola-based computers in general - are way ahead of the rest, in particular in respect of user-friendliness, for the developers of these applications have moved with the times. This is also due to the fact that there are software directives for these computers, and that the hardware of Motorola chips is more systematic. In Motorola-based computers the mouse has a small, separate, independent processor, which results in fast response times under all kinds of circumstances, no matter what else is going on inside the computer.
SOLUTIONS
What are sensible and practical design solutions if we assume that computers and monitors will continue to have some deficiencies for the time being? The solution in the design of the working environment and user interface is to observe the concept of parallelism and its consequences (among others, the PI-theory). For a start, there are three types of parallelism, which all have a different effect on the behaviour of the user, viz.:
Most of the examples here are forms of purely side-by-side elements or sources of information. That is defined here as 1st order parallelism. Elements on carriers such as paper, windows etc. which are partly overlapping are counted as 2nd order parallelism. The last type is 1st order parallelism combined with something else, usually a relatively linear coaching element; this is defined as 3rd order parallelism. First something about parallelism in general.
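The three orders just defined can be summarised in a small sketch. The class and the example mapping are only illustrative; the classifications themselves simply restate examples given in this article:

```python
from enum import Enum

class Parallelism(Enum):
    FIRST = 1   # purely side-by-side elements or information sources
    SECOND = 2  # partly overlapping carriers (loose sheets, windows)
    THIRD = 3   # side-by-side sources plus a relatively linear coaching element

# Examples taken from this article; the mapping is only illustrative.
EXAMPLES = {
    "newspaper page": Parallelism.FIRST,
    "museum wall": Parallelism.FIRST,
    "overlapping windows": Parallelism.SECOND,
    "loose sheets on a desk": Parallelism.SECOND,
    "museum tour with walkman": Parallelism.THIRD,
    "supermarket with preferred route": Parallelism.THIRD,
}
```

The point of the taxonomy is that each order makes a different demand on the user: the 1st leaves selection entirely free, the 2nd adds manipulation, and the 3rd adds a linear coaching flow on top.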
Parallelism as a concept
Parallelism occurs everywhere. It is something that people can handle very well as a rule. Sometimes parallelism can be distracting. Therefore the equipment of the environment is essential when taking pros and cons into account.
The phenomenon of parallelism covers a wide area, wider than one would think initially. It is present in our daily lives. Information reaches us through various channels. We have to select which information reaches us through which perception channel before we can process it. There are several perception channels through which information can and does reach us.
Figure 4. A museum or exhibition: an example of parallelism in practice. Everything for the public is in view.
A good example of the use of more perception channels and several media sources is the use of a walkman with an audio tape on which a step-by-step explanation is given and exercises are evaluated. This method of instruction is insufficiently appreciated, yet it is often applied with difficult software programs. The student can instruct himself at his own speed. This is a clear case of two parallel processes that are not connected: two simultaneous, a-synchronic processes, in other words. There is no technical connection, therefore the term a-synchronic is used. Besides, these are two entirely different applications, which are also designed separately: viz. software in the computer and instruction on tape. This division between the two media has considerable advantages for the design technique. However, this is beyond the scope of this article.
Some more examples from daily life to illustrate the important role parallelism plays and some examples of parallelism in computer systems without window techniques:
In all these examples information is put side by side or parallel. The eye or ear determines what is read or seen first and what happens next.
A newspaper is also a special form of parallelism. Actually the newspaper has been the same for 150 years, and even today we cannot imagine life without newspapers. So it must be a very practical medium. Why is it so practical? Because when you open a page you can decide for yourself what you want to read and what you want to skip. This is a problem with a sequential source of information such as a film or a book. By 1995 the user has become too impatient for an ordinary linear medium. As a result he enjoys zapping.
In a museum or supermarket one can see at a glance where one is going to start looking or buying and where one will go after that. Museums are set up according to 1st order parallelism, with an optimum amount of freedom for the user. Although certain people (teachers or museum directors) would like to limit that freedom, it is essential, because it makes discovering, and learning through discovery, fun.
Figure 5. Idem: a museum or exhibition hall: an example of parallelism in practice; everything for the public is in view.
There are many more examples of pure 1st order parallelism; loose overlapping sheets of paper and windows are examples of 2nd order parallelism, which we will discuss later (on this subject see earlier articles by Min: 1992, 1994 and 1995). An underlying window can be brought to the surface, including its contents, with a simple click of the mouse. This is a fairly natural and automatic action, so that even when a window contains rather unique information which was hard to find, we do not hesitate to open another window, for we know that the information will not be lost. In a well-functioning window system, information components can easily be compared to information placed elsewhere in another window. This is essential for a good working environment in our present information society. This manipulable 2nd order parallelism is just as important for the ultimate success of learning tools.
A museum or an exhibition can also be visited in a different way: one can wear a walkman with a tape recorder, which then becomes a sort of electronic 'guided tour'. This can sometimes be highly recommended for certain people. Visiting a museum with a simple walkman on your head then becomes a good example of 3rd order parallelism. This method, coaching with a walkman, links a specific advantage of linear instruction to the advantage of another concept. The instruction on tape is linear, but the instruction tool is parallel to what is on the wall or on the panels. Every target group chooses its own form of coaching, whether in a museum, at an exhibition or on a tourist route: one can have a set route, a catalogue, a live guide or a guided tour with a walkman and earphones.
Another example of 3rd order parallelism is the set-up of a supermarket. By means of a sort of implicit but dominant route of preference, indicated by signs and paths, the customer is distracted from his own preferences and coached along the shelves in a certain order. The aim, of course, is that the customer will buy more because he sees more - a theory founded on scientific research.
The terms 'parallel' and 'linear' should be compared. An example is a speech accompanied by sheets or slides. Loose sheets can easily be used in conformity with the concept of parallelism. PowerPoint presentations, however - electronic sheets shown by means of a wide-angle presentation platform - are actually almost linear. The advantages of PowerPoint presentations are obvious, but they lack the one advantage of loose sheets, as does the slide presentation. Modern tools therefore may have disadvantages as compared to ordinary sheets. When using ordinary loose sheets, the speaker is able to glance through his speech, just before or during his talk; he may even decide to choose a different order. This overview is quickly lost when the same speaker uses the electronic sheets of PowerPoint for the same speech, for he cannot see in advance which picture will appear, etc. But solutions have been found for this as well - parallel solutions, mind you.
If we compare these ordinary examples to examples with monitors and software programs, we see a lot of similarities. Windows on a screen provide a host of new possibilities, and the concept of parallelism gets a new dimension. Although windows are not a 100% form of parallelism, they are still very practical. In windowing software one has all the advantages of 1st order parallelism, certainly with larger monitors. A number of examples in present computer practice where this type of parallelism plays an important part are:
The advantages of data offered parallel and data offered quasi-parallel, as is the case in 2nd order parallelism with windows, are obvious. But in certain sectors of society these techniques were only recognised much later. Apple Macintosh's desktop concept is based on parallelism. Yet these concepts are not fully appreciated in learning psychology, instruction technology and instrumentation technology, probably because drills and linear instruction are still rated very highly. Windows is a sophisticated concept, but more from the concept angle than from the angle of our research, viz. designing and setting up a learning environment for simulation with an instruction or coaching problem.
A lot of educational software still looks terribly linear, in spite of the designation 'interactive' and the fact that everything is interactive multimedia and hypermedia nowadays. This may be due to the conservative outlook of informaticians and computer manufacturers on the one hand and the specific 'one-way' thinking of AV specialists on the other. The development of the portable PowerPC, Nintendo games, pocket computers and things like disposable Nintendo pocket games will soon bring the Intel world face to face with reality. For a user wants 'natural' and preferably 'dedicated' learning tools that are 'turn-key' and portable, and therefore come in one piece and weigh and cost next to nothing.
Simultaneous processes
Now we should further define simultaneous processes. When is simultaneous also parallel, and when and why is it important for a designer to know or realise this?
We all know that in any presentation the use of extra tools is unavoidable. Everyone who speaks live uses a tool, even if it is only an overhead projector. Such tools are used not merely because a lot of information can be presented in a relatively short time and because one can prepare these things at home, but also because they are retrievable. All these tools, and loose sheets in particular, can be quickly retrieved during the lecture when there are questions about an earlier part.
So besides the speaker there is a parallel flow of information in the form of sheets, slides or electronic sheets, which, besides being handy, clearly enables listeners to remember the message better than if the speaker had merely told his story. This, then, is the third advantage of tools. One would be inclined to think that even one flow of information - the speaker - would be hard to follow, let alone a second or third. But this is not true at all, as will be demonstrated below in the experiments with 'talking heads' as a means of instruction with simulation.
These psychological matters are rather complicated, for the reverse would be more logical. Somehow the message is more easily remembered. This may be due to redundancy and to the fact that listeners are capable of more parallel actions than scientists think. Another factor might be that a speaker tends to radiate a certain ease or confidence. This should also be achieved in computer-based instruction. We assume that parallel events - within the scope of the story - may under certain conditions reinforce the main story. Information that supports the main flow of information but in a different form, will create a better understanding in the listener and therefore the information is better remembered. There is of course an ideal balance between too many and too few simultaneous processes and parallel presentations.
The point is whether matters may run in parallel in every perception channel or whether this should be avoided. Another question is which channel of perception is the most critical in this respect: the eye or the ear? We must carefully consider the conditions under which something may or may not be applied. One should never say never. A pattern that is present in one situation does not necessarily occur elsewhere, even when situations appear similar. However, people in educational science and in the courseware and media world often violate this principle.
Parallelism in the computer
Window techniques and multitasking systems turn out to be good solutions for the lack of screen space in do-environments. They enable things which earlier on were inadequately or clumsily instrumented. The concept of Min's MacTHESIS philosophy (1990) is such a multi-windowing and multitasking system; for a description of this philosophy, please consult the articles by Min from 1992 and 1994. The MacTHESIS philosophy and the matching MacTHESIS system proved very productive as a sort of directive for dimensioning bare simulation environments. It was also applicable on the instruction-supply side. Finally it offered a theoretical grip - through the PI-theory - on the success of paper instruction methods as well as of certain electronic instructions (Van Schaick Zillesen, 1990; Min, 1992; Min, 1994). Showing the contents of a window by means of a simple click is the most practical way to increase the effective surface of a screen. The window in view disappears into the background; the underlying window opens up and becomes the active window. Sometimes processes still continue in the other, underlying windows, unless the designer blocks this possibility.
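The window behaviour described above can be caricatured in a few lines of Python. This is a minimal sketch under stated assumptions: the class names and the tick-based scheduler are illustrative inventions, not part of MacTHESIS. Clicking brings a window to the foreground, while a process in an underlying window continues only if the designer has not blocked it.

```python
class SimWindow:
    """A window whose process may keep running while it lies underneath."""
    def __init__(self, name, runs_in_background):
        self.name = name
        self.runs_in_background = runs_in_background  # the designer may block this
        self.ticks = 0
        self.active = False

    def step(self):
        # A window only advances if it is in view, or if the designer
        # allowed its process to continue in the background.
        if self.active or self.runs_in_background:
            self.ticks += 1

class WindowStack:
    """Overlapping windows: one active window, the rest underneath."""
    def __init__(self, windows):
        self.windows = windows
        self.activate(windows[0])

    def activate(self, window):
        # A simple click brings one window to the foreground.
        for w in self.windows:
            w.active = (w is window)

    def run(self, steps):
        for _ in range(steps):
            for w in self.windows:
                w.step()

sim = SimWindow("simulation", runs_in_background=True)
help_win = SimWindow("instruction", runs_in_background=False)
stack = WindowStack([sim, help_win])

stack.activate(help_win)   # the user clicks the instruction window
stack.run(10)              # the simulation keeps running underneath
stack.activate(sim)        # a click brings the simulation back in view
stack.run(5)               # now only the simulation advances
print(sim.ticks, help_win.ticks)  # → 15 10
```

The point of the sketch is the `runs_in_background` flag: it is a design decision, not a technical given, whether an underlying simulation process continues.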
Initially, electronic methods for instruction and help systems were not as easy to handle, nor as cheap in use, as paper instruction materials. But copying software was obviously cheap and practical. The inclination of software designers to solve all problems by adapting the software proved more difficult in practice than anticipated. It appeared that the instruction component of software could also be produced electronically (in particular thanks to the arrival of new techniques). However, this is not always simple for the user: he gets lost in hyperspace or is distracted during work.
In simulation, the best ergonomic presentation form for (electronic) instruction proved to be a separate window. Within that window, however, there are design problems of a sequential nature. The idea of two separate windows - usually resulting in two different applications - is in principle quite feasible. As the instruction program should be seen as separate from the simulation program, and as they have to be usable apart from each other, this yields specific advantages for both user and designer. The two different functionalities may well be two separate applications as a whole. This even makes the design and realisation of those applications easier and therefore usually cheaper: every application can be made with its own specific tools, and no feats are required to link the software. Min (1992) used the MacTHESIS system for the bare simulations and HyperCard for instruction. HyperCard is easy to use for teachers; it is, in particular, a perfect 'poor man's tool' for writing all kinds of instruction materials, cases and even testing systems.
Figure 6. 1st order parallelism: a few viewports of some simultaneous parallel processes (Zwart, Remrhev & Min, 1994). The problem with 1st order parallelism is the screen surface; the designers of the PI-theory prefer 2nd order parallelism.
Figure 7. 2nd order parallelism: a series of overlapping windows, multitasking, interactive and with three a-synchronic simultaneous processes. In view are (1) a 'talking head' (on the left) as an instructor, (2) a HyperCard stack as a coach, (3) intelligent feedback as output (desktop video fragments) (on the right) and (4) two windows for the simulation process (CARDIO, version MacTHESIS-5.0x, Min & Reimerink, 1995).
Researchers call this side-by-side use of these two simultaneous processes an 'a-synchronic' user situation. This word indicates that 'open learning environment' refers not only to the simulation environment, but also to the more or less noncommittal use of the instruction program: every user can decide for himself whether he wants to use a certain component and to what extent. In our philosophy, instruction is only effective when the student realises its value and asks for it. Our experiments have shown that in interactive open learning environments, users indeed have a - usually unconscious - need to see things side by side, to put them side by side, and if necessary to move them temporarily in order to study underlying information. In other words, parallelism plays an important part in the user interface of educative software, solving problems that face designers of screen solutions nowadays. The experiments mentioned here and further down are not described in detail; they can be studied elsewhere (Van Schaick Zillesen, 1990; Min, 1992 and 1994). Traditional user interfaces with a strongly serial nature, whereby something that can be seen on the monitor disappears again through the next action, often rely too heavily on a person's short-term memory. A proper interactive learning tool should be a two-way medium with optimal reminders for the user. It should not have the disadvantages of one-way media such as linear programs, in which the viewer has often completely forgotten details supplied in the beginning by the time he really needs them.
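The burden that a serial interface places on short-term memory, as opposed to a parallel one, can be sketched in a few lines of Python. This is a deliberately minimal caricature; the class names and screen contents are hypothetical, not taken from the software discussed here.

```python
class SerialInterface:
    """Each new screen replaces the last: earlier details must be remembered."""
    def __init__(self):
        self.visible = None

    def show(self, item):
        self.visible = item           # the previous content is wiped

class ParallelInterface:
    """Each item opens in its own window: everything stays retrievable."""
    def __init__(self):
        self.visible = []

    def show(self, item):
        self.visible.append(item)     # earlier windows remain on screen

serial, parallel = SerialInterface(), ParallelInterface()
for screen in ["instruction", "parameters", "graph"]:
    serial.show(screen)
    parallel.show(screen)

print(serial.visible)    # → graph
print(parallel.visible)  # → ['instruction', 'parameters', 'graph']
```

In the serial case only the last screen survives: the instruction the user read first is gone exactly when he needs it during the simulation. In the parallel case it is still in view, which is the "optimal reminder" argued for above.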
The Parallel Instruction Theory
The 'parallel instruction theory' for simulation environments, in short the 'PI-theory', was discovered, developed and published by Min around 1992. In this theory researchers suppose that a user in an open learning environment can only work and learn if the environment has been designed in such a way that all information relevant to taking decisions is 'visible' or can be immediately 'retrieved'. On studying various kinds and large numbers of examples, it turned out time and again that traditionally presented instruction methods yielded better results than solutions provided in a modern way. Experiments with simulations realised within instruction-writing systems, and experiments with instruction-writing systems plus simulation systems, showed that neither was a long-term solution. Simulations with paper instruction materials turned out to function much better at the time, although the reasons were not very clear. In the end we discovered the above-mentioned deficiencies of monitors and of the software technique used at the time, when carrying out a small but not very successful experiment for AKZO in Hengelo (Van Schaick Zillesen, Gmelich Meijling et al., 1995). Learning environments with a malfunctioning instruction component proved, on closer study, to have been too sequential in design and too rigidly linked in general. In other words, the instruction was too well 'synchronised with the simulation' and too rigid; also, the instruction text disappeared completely from the screen while one was working with the simulation. It was surprising to see that such a design did not work, for the reverse seemed so much more obvious.
It is clear that users of educative software like to have all information well ordered and at hand. They want to be able to see the connection or coherence of things. An example will illustrate this. Large screens are favourites in work situations, which can be compared to learning situations. This is proved by the huge success of SUN workstations: they sell well to people who can afford them, although the price is still a problem. The fact that large screens are so popular partly confirms the ideas found on parallelism. Users want to be able to put relevant information somewhere, preferably on a large screen, so that they can consult it without having to move other relevant information off the screen. Users of screen-oriented workstations have an implicit need to put many information sources or tools side by side as reference points, so that comparisons are possible. This is not merely laziness of the brain but a result of learning being largely a question of making comparisons and trying to see the coherence.
The PI-theory in four points: so far it is assumed that the usefulness of parallelism in open do-environments is due to:
1. the user's limited short-term memory as regards details or loose components, because the monitor always wipes out the image contents partly or entirely when the next image is shown;
2. the user wants to, must and can compare, by physically comparing things from the past with things of the present;
3. the user wants to gain insight into cause-and-effect relations, through repeated verification and comparison;
4. the user wants to create his own frame of reference and should be able to do so, by putting things that pass by on the screen side by side (by means of windows) and comparing them.
Further research will have to reveal which psychological variables and design variables play a part for the user and in the software, and why and under which conditions the user develops best in such an 'environment'. At the main branch of Randstad in Diemen they faced a similar problem; visits were paid on both sides, and the solution was really quite obvious. This project is discussed in more detail below because it is so specific.
All this is no hard evidence in favour of the parallel instruction theory for simulations or do-environments, but our findings are such that further empirical research will show whether our hypotheses are correct. In spite of this we published the idea, in concept and as theory, in 1992, also because we developed a large number of prototypes with many different design variables, which anyone can apply to experiment with. They will find that the concept, provided it is used under the conditions discovered by us, really works. One way to guarantee a wide distribution of our experimental products was to publish the lot in one go on CD.ROM, with a large number of software products, articles, manuals and figures, and a book on parallelism and simulation technology (Min, 1993; Min, 1995).
METHOD OF RESEARCH
The statements in this article are based on empirical research into the effects of specific prototypes, with the focus on the optimal design of open interactive learning environments with simulated phenomena of the type about which a lot of knowledge has recently been gathered. See Min's reference articles from 1992 and 1994. Modern methods and techniques which enable an adequate and dynamic design with high performance were tested in particular. This research has always had the characteristics of a specific R & D project: every hypothesis about a design variable believed to be important is tested by means of an experiment with a single specific value or tuning of design variables, captured in a suitable multimedia prototype.
The research question is always which aspects of design, structure and communication are important in well-functioning learning environments, in relation to the learning targets that teachers demand.
Those design variables, parameters, preconditions or system determinants were investigated with a view to the usefulness of a tool or program for the learner, teacher and/or designer as the decisive factor.
New techniques which computer science offered in instrumentation technology were studied, in particular the benefits of techniques such as windowing, multitasking and digital video.
The research on which this study is based has always been design-oriented, i.e. 'prototype-driven', research. This prototype-driven approach, in the sense of R & D, is the best guarantee that research in the field of instrumentation technology will actually yield verifiable and relevant insights, viz. better functionality, sufficient performance (with emphasis on strong and fast), usability, a stimulating character and verifiable effectiveness as regards achieving the set learning targets. Relevant aspects were tested empirically with test subjects by means of formative evaluation techniques, among others with video-recorded observations in the university studio. Some aspects were considered longitudinally and evaluated summatively at test schools. The aim is to develop theories in the field of the methodology (design) of educative simulations in particular and of multimedia courseware in general. Some experiments have been carried out with digital video. This multimedia element has tremendous potential in the field of learning tools, in particular as a linear instruction method: a speaking person - the face of the instructor - holds the attention of the audience, and the auditive information is highly effective, much more so than written text on a screen. There were also experiments with video as feedback in this type of environment. This feedback is essential for keeping the learning process, started by the instruction, going. Therefore video as instruction is defined in this research - in this process - as 'input', and feedback as 'output'. The whole system of cause and effect keeps the learning process and the learner going.
Figure 8. A working environment with 2nd order parallelism: a few overlapping windows of three simultaneous parallel processes, viz. a PC, an e-mail package and a DTP system. The proposition is that cutting and pasting parts of text (and other actions) from and to other windows is a lot easier when the user can put a good - visual - reference frame on the screen himself.
RESULTS / EFFECT
The article has discussed some specific solutions, as developed at the University of Twente, in the field of design for the realisation of open learning environments. A theoretical concept called 'parallelism' or 'parallel instruction' was created for specific, usually educative, simulation environments. The various aspects were: the presentation of these simulation programs, the underlying philosophy, the design of the instruction, and the results achieved by means of these methods and techniques.
Book and CD.ROM
A book was published on parallelism, and all prototypes used have been published on CD.ROM (Min, 1992; Min, 1994). These two products have had considerable influence on the thinking about this neglected subject and have led to several follow-up projects. One case, the Randstad case, will be discussed in more detail: what problems we encountered and how these were solved.
The Randstad project
The Strategic Planning Division of Randstad Holding NV in Diemen (main office) has to train all Randstad branches in Europe and the USA and help them plan their business strategy. A very complex model was developed for this purpose and implemented in a spreadsheet. There are some 1500 variables in this model, which is therefore huge: the 'calculation sheet' - another word for spreadsheet, but perhaps clearer in this context - covers an area of roughly 50 times the size of the screen.
Navigating it and comparing input with output from elsewhere is a very demanding job. Min proposed some ideas from the PI-theory to solve this problem. The result was a pilot project, concluded at the end of 1994 and early in 1995, which yielded two overlapping prototypes developed by the University of Twente with the rapid-prototyping method: one with a special Pascal procedure library to consolidate and conserve the model, and one with the MacTHESIS system to show the obvious advantages of parallelism to the client.
When parallelism proved able to solve the Randstad group's problem, a project was started in which a young enterprise (Koopal & Gritter multimedia) produced a second series of prototypes. The university, the faculty and in particular the faculty lab simply had no room, facilities or know-how to do this. The department did, but the person involved could not possibly stop his other daily activities, and other scientific staff members in the field of modelling (for that is what it was at that stage) were not available.
Late in August 1995, Koopal & Gritter multimedia made a working prototype in HyperCard. When Randstad showed great satisfaction with this prototype, a second stage was entered, with the HyperCard prototype as the starting point for the development of a definitive strategic planning system on the Windows platform. Here Toolbook Multimedia 3.0 is applied. Meanwhile a number of steps of the strategic planning process have been elaborated in Toolbook, and the definitive program will be finished by the end of January 1996.
The following steps of the strategic planning process have been elaborated: introduction of start-up data; data entry for the current year; points of departure for strategic planning, viz. a unit planning and a branch planning; and a summary of the results. Between the steps in which the user enters data there are, in most cases, links of windows. Every link consists of an input window and an output window that are shown in parallel on the screen. This way input/output relations can be pictured more clearly, so that the user is better able to compare data. Besides, the program uses visual presentation of data in the form of line diagrams, histograms and pie diagrams.
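Such a linked input/output window pair can be sketched in a few lines of Python. This is an illustrative sketch only: the model function and its parameters are hypothetical stand-ins for the real 1500-variable planning model. The essential point is that whenever the input window changes, the output window is recomputed, so both stay visible and consistent side by side.

```python
def unit_plan(staff, revenue_per_head):
    """A toy stand-in for one step of the (much larger) planning model."""
    return staff * revenue_per_head

class LinkedWindows:
    """An input window and an output window, shown in parallel on screen."""
    def __init__(self, model):
        self.model = model
        self.input = {}        # contents of the input window
        self.output = None     # contents of the output window

    def set_input(self, **values):
        self.input.update(values)
        # The output window is recomputed immediately, so the user can
        # always compare input and result side by side.
        self.output = self.model(**self.input)

link = LinkedWindows(unit_plan)
link.set_input(staff=12, revenue_per_head=80_000)
print(link.input, link.output)  # → {'staff': 12, 'revenue_per_head': 80000} 960000
```

Because input and output are held in one link object rather than in two disconnected screens, the comparison the PI-theory asks for never depends on the user's memory of a screen that has already disappeared.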
Parallelism has thus finally contributed to the solution of a practical problem. We hope that, after a number of further tests have been carried out, we will be able to say that all potential users, and in particular inexperienced software users, can handle the planning system after a relatively short period of time.
Raising interest and stimulating
Learning tools as investigated by us are primarily intended to motivate pupils to further study of whatever subject is being studied. When there is no further stimulation or motivation, the teacher will get into trouble. Learning tools like simulations, animations or adventures are not intended to teach or instruct; they are first and foremost designed to motivate people into further study and to raise interest in the matter in general, just as a practical is meant to mobilise loose but already present knowledge, to learn to connect things, and to make the lessons that follow easier to understand. This specific educational target of relatively open learning environments demands a special presentation and a special design. The presentation and user interface of a stand-alone simulation of this type is completely different from that of an instruction program or tutorial courseware, in which little is left to the imaginative powers of the pupils. Parallelism and the PI-theory can contribute to a better concept.
Learning by doing
Formerly, teachers thought too much in terms of 'learning by doing'; as a result, paper instruction materials were not considered very important. They were even found annoying by learners: the booklet and/or work sheets supplied were often not used at all. But there has been a change of mind in this respect. Initially, electronic instruction was not perfect either, and then most learning tools end up in the cupboard. Electronic instruction can, provided it is well dimensioned and takes the above-mentioned philosophy and the principles of parallelism into account, achieve a change for the better in that type of situation.
Further research will reveal which conditions need to be distinguished and how the design variables here described should be applied, so that the user can work and learn as well as possible in this type of 'environment'. Open environments which are not open for a user and where there is too much control, may induce situations that a teacher does not want and which might well have been prevented by the designer.
Figure 9. Schematic idea about 'parallelism' and the 'PI-theory' of Min on the input side and on the output side of a 'learning process'. On the input side of a learning environment a lot of instruction types and/or formats are possible and on the output side a lot of model-driven feedback forms are possible.
For all designers the following rule applies: the a-synchronic, simultaneous processes on the 'input side' and on the 'output side' of this learning environment must all be in view for the user and must be presented in parallel!
References
Ambron, S. and K. Hooper (Eds.) (1990)
Learning with Interactive Multimedia; Developing and Using Multimedia Tools in Education. Microsoft Press, Redmond, Washington. ISBN 1-55615-282-5.
Ball, G., (1993)
Gritter, H. (1993)
Gritter, H., W. Koopal and F.B.M. Min, (1994)
Min, F.B.M., (1994)
Min, F.B.M., (1992)
Min, F.B.M., (1994)
Min, F.B.M., (1992)
Van Schaick Zillesen, P.G. (1990) Methods and techniques for the design of educational computer simulation programs and their validation by means of empirical research. PhD. thesis, University of Twente, Enschede, Holland (promotors: E. Warries and F.B.M. Min).
Schaick Zillesen, P.G. van, F.B.M. Min, M.R. Gmelich Meijling and B. Reimerink (1995).