Memory - Task 1
Evaluate the usefulness of the three models of memory (multi-store model, working memory model and the levels of processing model) and discuss practical implications of memory research.

Atkinson and Shiffrin's Multi-Store Model of Memory (1968) hypothesises that there are three stores for memory: sensory memory, short-term memory (STM) and long-term memory (LTM). The theory states that a memory passes through each of the stores, and that the importance of the memory determines which store it is kept in and thus how long it is retained. This relatively simple model is supported by evidence from free recall experiments and from studies of brain-damaged patients such as 'HM'. An example of a free recall experiment is Murdock (1962), in which participants were shown lists of words, of differing lengths, each word shown for one second, and were then asked to recall as many words as possible. Murdock found that words at the beginning (the primacy effect) and end (the recency effect) of a list were recalled more accurately than those in the middle. This serial position effect suggests that the words were recalled from two separate stores, which supports the multi-store model's distinct short-term and long-term stores. Studies of brain-damaged patients like HM (Milner et al., 1978), who suffered anterograde amnesia after both of his hippocampi were removed in an operation, also support the multi-store model's claim that STM and LTM are separate stores. HM was able to recall memories from eleven years before the operation, yet could not remember who was president of the United States and forgot who he was talking to as soon as he turned away (Ogden, 2012). It is argued, however, that the multi-store model oversimplifies the roles of the STM and LTM.
Atkinson and Shiffrin also failed to acknowledge the interaction between stores, something Baddeley and Hitch (1974) addressed in their Working Memory Model. The idea that simple rehearsal accounts for the transfer of a memory from STM to LTM has also been criticised, as it does not account for important events, for example a car accident, which seem to be stored in LTM immediately; this is addressed by another model, Craik and Lockhart's (1972) Levels of Processing. The multi-store model also fails to account for memory improvement techniques, for example the 'method of loci', which uses points on an imaginary journey as a way of memorising particular items.
Despite accurately theorising that LTM and STM are separate stores, the multi-store model fails to account for many instances in which the flow of memory does not follow the three-store route, and although influential at the time, research has since outdated it. In contrast, Baddeley and Hitch's (1974) Working Memory Model states that a 'Central Executive' allocates resources and decides how attention is directed; although it has no storage capacity of its own, it holds information long enough to direct attention. An 'Episodic Buffer' then integrates the information and acts as a general storage space for acoustic and visual material. The 'Phonological Loop' holds and rehearses words, and the 'Visuo-Spatial Sketchpad' allows temporary holding of visual images. The intuitive Working Memory Model is far more complex than either the Multi-Store Model or the Levels of Processing Model and is high in face validity. The validity of each component is supported by various experiments, such as Baddeley and Lewis's (1981) study of the articulatory loop. Participants were asked whether or not a sentence was meaningful, for example 'the cow ate the grass' or 'the bone ate the dog', both under normal conditions and while repeating something meaningless to prevent them from using the articulatory loop. This articulatory suppression seriously reduced performance, which, Baddeley and Lewis claimed, supported the presence of an articulatory loop. In a study by Baddeley et al. (1975), the phonological loop was shown to be capable of holding about two seconds of information. Baddeley et al. asked participants to recall five words in the correct order, and the experiment demonstrated that it was the length of a word, not its distinctiveness, that determined recall. However, further studies have suggested that the determining factor is the time it takes to pronounce a word rather than its length in letters. James W.
Stigler (1986) found, in a study of digit memory in Chinese- and English-speaking children (children were used because they were less likely to know memory improvement techniques), that the Chinese children were more efficient at remembering numbers than their English-speaking counterparts. This, Stigler theorises, is because Chinese number words are shorter to pronounce. The visuo-spatial sketchpad, sometimes referred to as the 'inner eye', is what allows someone to remember shapes such as letters and to visualise something that is not there, for example whether or not your car would fit into a particular parking space. It also allows us to hold static images and manipulate them, so that we can appreciate the layout of a particular area and find our way around (Psychology Resources, 2000). Logie (1995) argues that the visuo-spatial sketchpad is in fact far more complex and refers to it as visuo-spatial working memory. This is supported by Baddeley and Lieberman (1980), whose pattern of results suggests that visuo-spatial working memory comprises a system involved in visuo-spatial retention, visuo-spatial perception and motor control. The Working Memory Model is thus reinforced by a range of experiments that clearly demonstrate its different components.
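The word-length and pronunciation-time findings above lend themselves to a toy calculation. A minimal sketch, assuming (per Baddeley et al., 1975) that the phonological loop holds roughly two seconds of speech, so that predicted span is simply the number of items whose articulation fits in that window; the per-item articulation times below are hypothetical illustrations, not measured data from Stigler's study:

```python
# Toy model of the phonological loop's ~2-second capacity:
# predicted span = how many items can be articulated within the window.
LOOP_CAPACITY_S = 2.0  # approximate capacity reported by Baddeley et al. (1975)

def predicted_span(seconds_per_item: float) -> int:
    """Items that fit in the loop, given mean articulation time per item."""
    return int(LOOP_CAPACITY_S / seconds_per_item)

# Hypothetical articulation times for a single digit:
english_digit = 0.33   # English digits take longer to say
chinese_digit = 0.25   # Chinese digits are typically shorter to pronounce

print(predicted_span(english_digit))
print(predicted_span(chinese_digit))
```

On these illustrative figures the shorter Chinese digits yield a larger predicted span, mirroring the direction of Stigler's result.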
It also takes into account things the other models do not: for example, the Working Memory Model allows for memory to pass directly into LTM without rehearsal, something the Multi-Store and Levels of Processing models do not; events that are traumatic or massively significant do not require rehearsal, and this is something the Working Memory Model addresses. Baddeley and Hitch's model is also highly applicable to real-world situations: for example, Baddeley et al. (1975) showed the word length effect, whose real-world counterpart would be attempting to remember a shopping list. The chief criticism of the Working Memory Model is the lack of information about the Central Executive; further investigation is required to discover what its capacity actually is. It is theorised that many accidents are caused by its limited capacity, and information on its exact capacity may have serious implications for accident control (Psychology Resources, 2000). Craik and Lockhart's (1972) Levels of Processing Model focuses on the processes involved in memory instead of the stores or structures (McLeod, 2007). Craik and Lockhart took a non-structured approach, partly in response to the criticism garnered by Atkinson and Shiffrin's earlier Multi-Store Model.
The chief idea behind the Levels of Processing Model is that memory is a by-product of processing information and that the durability of a memory is determined by the depth at which the information is processed, referred to as depth of processing. Craik defined depth as "the meaningfulness extracted from the stimulus rather than in terms of the number of analyses performed upon it."
Orthographic processing is the simplest way in which we process information and thus, according to the model, the one which produces the weakest memory; it is also referred to as shallow processing. This is when we consider the physical features of, for example, a word. Craik and Tulving (1975) demonstrated this by asking 'is this word in capital letters?', which requires you only to process the word physically. As in the Multi-Store Model, maintenance rehearsal is required in order to retain memories the Levels of Processing Model deems shallow.
Phonological processing, sometimes called medium processing, is the second level of processing. This, Craik and Lockhart hypothesise, takes into account the acoustic value of a word; it is deeper than orthographic processing, they argue, because it requires you to process a sound rather than an image. Craik and Tulving (1975) again demonstrated this by asking 'does this word rhyme with ……?', which activates your phonological processing because you have to process the word acoustically (Psychology Resource, 2000).
The deepest level of processing is what Craik and Lockhart describe as semantic processing. Semantic processing is described by Robert Gallo as allowing "subjects to encode more unique features from each word relative to surface processing . . . additional conceptual or semantic features help to differentiate the studied items from each other, making these memories less susceptible to interference . . .". Semantic processing requires the use of meaning to process a word: Craik and Tulving (1975) asked 'would this word fit into a sentence?', for example 'Jane bought some ……… for dinner'; in answering the question you have to process the word's meaning.
The basis of the model is quite simple: the more deeply you process a memory, the more likely you are to remember it. However, the model fails to explain why this is the case, making it descriptive rather than explanatory. It was the first model to suggest that the processes used have an effect on memory, and it has had implications for memory improvement techniques: the method of loci, for example, is the process of placing things at various points of an imagined journey in order to make them more memorable, a form of semantic processing. What is lacking is a way in which processing can be measured. Baddeley (1978) points out that well-remembered events are attributed as 'deeply processed'; because of this, the argument that deep processing equals better recall is circular and thus cannot be tested (Psychology Resource, 2000).
Memory research is crucial in many areas, and none more so than eyewitness testimony. This is a legal term that is often used, and respected, in a court of law. A jury can find eyewitness testimony a reliable source of information; however, research into memory has shown that it may not be as reliable as previously thought. Elizabeth Loftus theorises that memory under any circumstances is subject to inaccuracy and that there are many wrongful convictions every year as a result of faulty eyewitness testimony. Loftus et al. (1987) showed subjects slides of a customer in a restaurant. In one version the customer was holding a gun; in the other, the same person held a chequebook. Those who saw the gun version tended to focus on the gun and were therefore less likely than those who had seen the chequebook version to identify the customer in a line-up. Clifford and Scott (1978) also support Loftus's theory: after showing participants a film of a violent attack, they found that recall was poorer than in a control group who saw something less stressful and were able to recall 40 items of information. However, this 'weapon focus' theory has been challenged by Yuille and Cutshall (1986), who found that witnesses to a real shooting in Canada had 'remarkably accurate' memories of the event. This contradicts both Loftus et al. (1987) and Clifford and Scott (1978), and leads us to believe that memory recall in stressful situations can produce very different outcomes. Where Loftus et al. (1987) show clearly that people focus on potentially the most 'shocking' item, Yuille and Cutshall (1986) challenge this by using a real event to show that some witnesses can recall accurately. This only goes to highlight the radical variability seen in eyewitness testimony.
Ogden, J. (2012). HM, the Man with No Memory.