Chapter 6
The Acquisition of Memories and
the Working-Memory System
Acquisition, Storage, and Retrieval
How does new information-whether it's a friend's phone number or a fact you hope to memorize for the bio exam-become established in memory? Are there ways to learn that are particularly effective? Then, once information is in storage, how do you locate it and "reactivate" it later? And why does search through memory sometimes fail-so that, for example, you forget the name of that great restaurant downtown (but then remember the name when you're midway through a mediocre dinner someplace else)?
In tackling these questions, there's a logical way to organize our inquiry. Before there can be a memory, you need to gain, or "acquire," some new information. Therefore, acquisition-the process of gaining information and placing it into memory-should be our first topic. Then, once you've acquired this information, you need to hold it in memory until the information is needed. We refer to this as the storage phase. Finally, you remember. In other words, you somehow locate the information in the vast warehouse that is memory and you bring it into active use; this is called retrieval. This organization seems logical; it fits, for example, with the way most "electronic memories" (e.g., computers) work. Information ("input") is provided to a computer (the acquisition phase). The information then resides in some dormant form, generally on the hard drive or perhaps in the cloud (the storage phase). Finally, the information can be brought back from this dormant form, often via a search process that hunts through the disk (the retrieval phase). And there's nothing special about the computer comparison here; "low-tech" information storage works the same way. Think about a file drawer-information is acquired (i.e., filed), rests in this or that folder, and then is retrieved.
Guided by this framework, we'll begin our inquiry by focusing on the acquisition of new memories, leaving discussion of storage and retrieval for later. As it turns out, though, we'll soon find reasons for challenging this overall approach to memory. In discussing acquisition, for example, we might wish to ask: What is good learning? What guarantees that material is firmly recorded in memory? As we'll see, evidence indicates that what counts as "good learning" depends on how the memory is to be used later on, so that good preparation for one kind of use may be poor preparation for a different kind of use. Claims about acquisition, therefore, must be interwoven with claims about retrieval. These interconnections between acquisition and retrieval will be the central theme of Chapter 7. In the same way, we can't separate claims about memory acquisition from claims about memory storage. This is because how you learn (acquisition) depends on what you already know (information in storage). We'll explore this important relationship in both this chapter and Chapter 8.
We begin, though, in this chapter, by describing the acquisition process. Our approach will be roughly historical. We'll start with a simple model, emphasizing data collected largely in the 1970s. We'll then use this as the framework for examining more recent research, adding refinements to the model as we proceed.
Demonstration 6.1: Primacy and Recency Effects
The text describes a theoretical model in which working memory and long-term memory are distinct from each other, each governed by its own principles. But what's the evidence for this distinction? Much of the evidence comes from an easily demonstrated data pattern.
Read the following list of 25 words out loud, at a speed of roughly one second per word. (Before you begin, you might start tapping your foot at roughly one tap per second, and then keep tapping your foot as you read the list; that will help you keep up the right rhythm.)
1. Tree
2. Work
3. Face
4. Music
5. Test
6. Nail
7. Window
8. Kitten
9. View
10. Light
11. Page
12. Truck
13. Lunch
14. Shirt
15. Strap
16. Bed
17. Wheel
18. Paper
19. Candle
20. Farm
21. Ankle
22. Bell
23. View
24. Seat
25. Rope
Now, close the list so you can't see it anymore, and write down as many words from the list as you can remember, in any order.
Open the list, and compare your recall with the actual list. How many words did you remember? Which words did you remember?
· Chances are good that you remembered the first three or four words on the list. Did you? The textbook chapter explains why this is likely.
· Chances are also good that you remembered the final three or four words on the list. Did you? Again, the textbook chapter explains why this is likely.
· Even though you were free to write down the list in any order you chose, it's very likely that you started out by writing the words you'd just read-that is, the first words you wrote were probably the last words you read on the list. Is that correct?
The chapter doesn't explain this last point, but the reason is straightforward. At the end of the list, the last few words you'd read were still in your working memory, simply because you'd just been thinking about these words, and nothing else had come along yet to bump these items out of working memory. The minute you think about something else, though, that "something else" will occupy working memory and will displace these just-heard words. With that base, imagine what would happen if, at the very start of your recall, you tried to remember, say, the first words on the list. This effort will likely bring those words into your thoughts, and so now these words are in working memory-bumping out the words that were there and potentially causing you to lose track of those now-displaced words. To avoid this problem, you probably started your recall by "dumping" your working memory's current contents (the last few words you read) onto the recall sheet. Then, with the words preserved in this way, it didn't matter if you displaced them from working memory, and you were freed to go to work on the other words from the list.
· Finally, it's likely that one or two of the words on the list really "stuck" in your memory, even though the words were neither early in the list (and so didn't benefit from primacy) nor late on the list (and so didn't benefit from recency). Which words (if any) stuck in your memory in this way? Why do you think this is? Does this fit with the theory in the text?
The Route into Memory
For many years, theorizing in cognitive psychology focused on the process through which information was perceived and then moved into memory storage-that is, on the process of information acquisition. One early proposal was offered by Waugh and Norman (1965). Later refinements were added by Atkinson and Shiffrin (1968), and their version of the proposal came to be known as the modal model. Figure 6.1 provides a simplified depiction of this model.
Updating the Modal Model
According to the modal model, when information first arrives, it is stored briefly in sensory memory. This form of memory holds on to the input in "raw" sensory form-an iconic memory for visual inputs and an echoic memory for auditory inputs. A process of selection and interpretation then moves the information into short-term memory-the place where you hold information while you're working on it. Some of the information is then transferred into long-term memory, a much larger and more permanent storage place. This proposal captures some important truths, but it needs to be updated in several ways. First, the idea of "sensory memory" plays a much smaller role in modern theorizing, so modern discussions of perception (like our discussion in Chapters 2 and 3) often make no mention of this memory. (For a recent assessment of visual sensory memory, though, see Cappiello & Zhang, 2016.) Second, modern proposals use the term working memory rather than "short-term memory," to emphasize the function of this memory. Ideas or thoughts in this memory are currently activated, currently being thought about, and so they're the ideas you're currently working on. Long-term memory (LTM), in contrast, is the vast repository that contains all of your knowledge and all of your beliefs-most of which you aren't thinking about (i.e., they're ideas you aren't working on) at this moment.
The modal model also needs updating in another way. Pictures like the one in Figure 6.1 suggest that working memory is a storage place, sometimes described as the "loading dock" just outside of the long-term memory "warehouse." The idea is that information has to "pass through" working memory on the way into longer-term storage. Likewise, the picture implies that memory retrieval involves the "movement" of information out of storage and back into working memory.
In contrast, contemporary theorists don't think of working memory as a "place" at all. Instead, working memory is (as we will see) simply the name we give to a status. Therefore, when we say that ideas are "in working memory," we simply mean that these ideas are currently activated and being worked on by a specific set of operations.
We'll have more to say about this modern perspective before we're through. It's important to emphasize, though, that contemporary thinking also preserves some key ideas from the modal model, including its claims about how working memory and long-term memory differ from each other. Let's identify those differences. First, working memory is limited in size; long-term memory is enormous. In fact, long-term memory has to be enormous, because it contains all of your knowledge-including specific knowledge (e.g., how many siblings you have) and more general themes (e.g., that water is wet, that Dublin is in Ireland, that unicorns don't exist). Long-term memory also contains all of your "episodic" knowledge-that is, your knowledge about events, including events early in your life as well as more recent experiences.
Second, getting information into working memory is easy. If you think about a particular idea or some other piece of content, then you're "working on" that idea or content, and so this information-by definition-is now in your working memory. In contrast, we'll see later in the chapter that getting information into long-term memory often involves some work. Third, getting information out of working memory is also easy. Since (by definition) this memory holds the ideas you're thinking about right now, the information is already available to you. Finding information in long-term memory, in contrast, can sometimes be difficult and slow-and in some settings can fail completely.
Fourth, the contents of working memory are quite fragile. Working memory, we emphasize, contains the ideas you're thinking about right now. If your thoughts shift to a new topic, therefore, the new ideas will enter working memory, pushing out what was there a moment ago. Long-term memory, in contrast, isn't linked to your current thoughts, so it's much less fragile-information remains in storage whether you're thinking about it right now or not.
We can make all these claims more concrete by looking at some classic research findings. These findings come from a task that's quite artificial (i.e., not the sort of memorizing you do every day) but also quite informative.
Working Memory and Long-Term Memory: One Memory or Two?
In many studies, researchers have asked participants to listen to a series of words, such as "bicycle, artichoke, radio, chair, palace." In a typical experiment, the list might contain 30 words and be presented at a rate of one word per second. Immediately after the last word is read, the participants must repeat back as many words as they can. They are free to report the words in any order they choose, which is why this task is called a free recall procedure. People usually remember 12 to 15 words in this test, in a consistent pattern. They're very likely to remember the first few words on the list, something known as the primacy effect, and they're also likely to remember the last few words on the list, a recency effect. The resulting pattern is a U-shaped curve describing the relation between positions within the series-or serial position-and the likelihood of recall (see Figure 6.2; Baddeley & Hitch, 1977; Deese & Kaufman, 1957; Glanzer & Cunitz, 1966; Murdock, 1962; Postman & Phillips, 1965).
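To make the serial-position analysis concrete, here is a minimal sketch in Python of how recall data from this kind of free-recall task could be tallied into a serial-position curve. The word list, the recall protocols, and the function name are invented for illustration; they are not the materials or code from the cited studies.

```python
# Sketch: computing a serial-position curve from free-recall data.
# The lists below are invented for illustration; real experiments use
# many participants and longer lists.

study_list = ["bicycle", "artichoke", "radio", "chair", "palace",
              "lantern", "pencil", "harbor", "velvet", "anchor"]

# Each inner list is one (hypothetical) participant's free-recall output.
recalls = [
    ["bicycle", "artichoke", "anchor", "velvet", "harbor"],
    ["bicycle", "radio", "velvet", "anchor", "harbor", "chair"],
    ["artichoke", "bicycle", "anchor", "velvet"],
]

def serial_position_curve(study_list, recalls):
    """Proportion of participants recalling the word at each list position."""
    counts = [0] * len(study_list)
    for recall in recalls:
        recalled = set(recall)
        for position, word in enumerate(study_list):
            if word in recalled:
                counts[position] += 1
    return [count / len(recalls) for count in counts]

curve = serial_position_curve(study_list, recalls)
for position, proportion in enumerate(curve, start=1):
    print(f"Position {position:2d}: {proportion:.2f}")
```

With real data, plotting these proportions against position produces the U-shaped curve shown in Figure 6.2.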
Explaining the Recency Effect
What produces this pattern? We've already said that working memory contains the material someone is working on at just that moment. In other words, this memory contains whatever the person is currently thinking about; and during the list presentation, the participants are thinking about the words they're hearing. Therefore, it's these words that are in working memory. This memory, however, is limited in size, capable of holding only five or six words. Consequently, as participants try to keep up with the list presentation, they'll be placing the words just heard into working memory, and this action will bump the previous words out of working memory. As a result, as participants proceed through the list, their working memories will, at each moment, contain only the half dozen words that arrived most recently. Any words that arrived earlier than these will have been pushed out by later arrivals.
Of course, the last few words on the list don't get bumped out of working memory, because no further input arrives to displace them. Therefore, when the list presentation ends, those last few words stay in place. Moreover, our hypothesis is that materials in working memory are readily available-easily and quickly retrieved. When the time comes for recall, then, working memory's contents (the list's last few words) are accurately and completely recalled.
The key idea, then, is that the list's last few words are still in working memory when the list ends (because nothing has arrived to push out these items), and we know that working memory's contents are easy to retrieve. This is the source of the recency effect.
Explaining the Primacy Effect
The primacy effect has a different source. We've suggested that it takes some work to get information into long-term memory (LTM), and it seems likely that this work requires some time and attention. So let's examine how participants allocate their attention to the list items. As participants hear the list, they do their best to be good memorizers, and so when they hear the first word, they repeat it over and over to themselves ("bicycle, bicycle, bicycle")-a process known as memory rehearsal. When the second word arrives, they rehearse it, too ("bicycle, artichoke, bicycle, artichoke"). Likewise for the third ("bicycle, artichoke, radio, bicycle, artichoke, radio"), and so on through the list. Note, though, that the first few items on the list are privileged. For a brief moment, "bicycle" is the only word participants have to worry about, so it has 100% of their attention; no other word receives this privilege. When "artichoke" arrives a moment later, participants divide their attention between the first two words, so "artichoke" gets only 50% of their attention-less than "bicycle" got, but still a large share of the participants' efforts. When "radio" arrives, it has to compete with "bicycle" and "artichoke" for the participants' time, and so it receives only 33% of their attention. Words arriving later in the list receive even less attention. Once six or seven words have been presented, the participants need to divide their attention among all these words, which means that each one receives only a small fraction of the participants' focus. As a result, words later in the list are rehearsed fewer times than words early in the list-a fact that can be confirmed simply by asking participants to rehearse out loud (Rundus, 1971).
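The attention-sharing arithmetic just described (100%, 50%, 33%, and so on) can be sketched in a few lines of code. The sketch below is our own simplification, assuming that attention is split evenly among the words currently being rehearsed and that only the most recent half-dozen words stay in the rehearsal set; the numbers it prints are illustrative, not data from Rundus (1971).

```python
# Sketch of the attention-sharing account of primacy.
# Assumption (ours, for illustration): when a new word arrives, rehearsal is
# divided evenly among the words currently being rehearsed, and only the most
# recent REHEARSAL_SET words remain in that set.

LIST_LENGTH = 15
REHEARSAL_SET = 6  # roughly the number of items people can keep rehearsing

rehearsal_share = [0.0] * LIST_LENGTH
active = []  # positions of words currently being rehearsed

for position in range(LIST_LENGTH):
    active.append(position)
    if len(active) > REHEARSAL_SET:
        active.pop(0)          # earliest item drops out of the rehearsal set
    share = 1.0 / len(active)  # attention is split evenly among active items
    for item in active:
        rehearsal_share[item] += share

for position, total in enumerate(rehearsal_share, start=1):
    print(f"Word {position:2d}: cumulative rehearsal share = {total:.2f}")
```

Running this toy model shows the first word accumulating far more rehearsal than any later word, which is the pattern the primacy account relies on.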
This view of things leads immediately to our explanation of the primacy effect-that is, the observed memory advantage for the early list items. These early words didn't have to share attention with other words (because the other words hadn't arrived yet), so more time and more rehearsal were devoted to them than to any others. This means that the early words have a greater chance of being transferred into LTM-and so a greater chance of being recalled after a delay. That's what shows up in these classic data as the primacy effect.
Testing Claims about Primacy and Recency
This account of the serial-position curve leads to many predictions. First, we're claiming the recency portion of the curve is coming from working memory, while other items on the list are being recalled from LTM. Therefore, manipulations of working memory should affect recall of the recency items but not items earlier in the list. To see how this works, consider a modification of our procedure. In the standard setup, we allow participants to recite what they remember immediately after the list's end. But instead, we can delay recall by asking participants to perform some other task before they report the list items-for example, we can ask them to count backward by threes, starting from 201. They do this for just 30 seconds, and then they try to recall the list.
We've hypothesized that at the end of the list working memory still contains the last few items heard from the list. But the task of counting backward will itself require working memory (e.g., to keep track of where you are in the counting sequence). Therefore, this chore will displace working memory's current contents; that is, it will bump the last few list items out of working memory. As a result, these items won't benefit from the swift and easy retrieval that working memory allows, and, of course, that retrieval was the presumed source of the recency effect. On this basis, the simple chore of counting backward, even if only for a few seconds, will eliminate the recency effect. In contrast, the counting backward should have no impact on recall of the items earlier in the list: These items are (by hypothesis) being recalled from long-term memory, not working memory, and there's no reason to think the counting task will interfere with LTM. (That's because LTM, unlike working memory, isn't dependent on current activity.) Figure 6.3 shows that these predictions are correct. An activity interpolated, or inserted, between the list and recall essentially eliminates the recency effect, but it has no influence elsewhere in the list (Baddeley & Hitch, 1977; Glanzer & Cunitz, 1966; Postman & Phillips, 1965). In contrast, merely delaying the recall for a few seconds after the list's end, with no interpolated activity, has no impact. In this case, participants can continue rehearsing the last few items during the delay and so can maintain them in working memory. With no new materials coming in, nothing pushes the recency items out of working memory, and so, even with a delay, a normal recency effect is observed.
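As a rough illustration of these predictions, the sketch below encodes the two parts of the account in toy form: the last half-dozen items are recalled from working memory unless a distractor task displaces them, while recall from LTM depends only on how much rehearsal an item received. The specific probabilities are invented placeholders, not values fitted to the data in Figure 6.3.

```python
# Toy illustration (ours) of the modal-model prediction that a distractor
# task eliminates recency but leaves the rest of the curve untouched.

WM_CAPACITY = 6
LIST_LENGTH = 15

def recall_probability(position, distractor_task):
    # Crude stand-in for LTM strength: earlier items were rehearsed more,
    # so their (toy) recall probability is higher.
    p_ltm = max(0.20, 0.70 - 0.05 * position)
    # The last few items are still in working memory at test, unless the
    # counting task has displaced them.
    in_wm = position >= LIST_LENGTH - WM_CAPACITY and not distractor_task
    return 1.0 if in_wm else p_ltm

for label, distractor in [("immediate recall", False),
                          ("after counting backward", True)]:
    curve = [recall_probability(p, distractor) for p in range(LIST_LENGTH)]
    print(label + ":", " ".join(f"{p:.2f}" for p in curve))
```

The printout shows the same pre-recency values in both conditions, with the boost for the final items present only when there is no distractor task.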
We'd expect a different outcome, though, if we manipulate long-term memory rather than working memory. In this case, the manipulation should affect all performance except for recency (which, again, is dependent on working memory, not LTM). For example, what happens if we slow down the presentation of the list? Now, participants will have more time to spend on all of the list items, increasing the likelihood of transfer into more permanent storage. This should improve recall for all items coming from LTM. Working memory, in contrast, is limited by its size, not by ease of entry or ease of access. Therefore, the slower list presentation should have no influence on working-memory performance. Research results confirm these claims: Slowing the list presentation improves retention of all the pre-recency items but does not improve the recency effect (see Figure 6.4).
Other variables that influence long-term memory have similar effects. Using more familiar or more common words, for example, would be expected to ease entry into long-term memory and does improve pre-recency retention, but it has no effect on recency (Sumby, 1963).
It seems, therefore, that the recency and pre-recency portions of the curve are influenced by distinct sets of factors and obey different principles. Apparently, then, these two portions of the curve are the products of different mechanisms, just as our theory proposed. In addition, fMRI scans suggest that memory for early items on a list depends on brain areas (in and around the hippocampus) that are associated with long-term memory; memory for later items on the list does not show this pattern (Talmi, Grady, Goshen-Gottstein, & Moscovitch, 2005; also Eichenbaum, 2017; see Figure 6.5). This provides further confirmation for our memory model.
A Closer Look at Working Memory
Earlier, we counted four fundamental differences between working memory and LTM-the size of these two stores, the ease of entry, the ease of retrieval, and the fact that working memory is dependent on current activity (and therefore fragile) while LTM is not. These are all points proposed by the modal model and preserved in current thinking. As we've said, though, investigators' understanding of working memory has developed over the years. Let's examine the newer conception in more detail.
The Function of Working Memory
Virtually all mental activities require the coordination of several pieces of information. Sometimes the relevant bits come into view one by one, so that you need to hold on to the early arrivers until the rest of the information is available, and only then weave all the bits together. Alternatively, sometimes the relevant bits are all in view at the same time-but you still need to hold on to them together, so that you can think about the relations and combinations. In either case, you'll end up with multiple ideas in your thoughts, all activated simultaneously, and thus several bits of information in the status we describe as "in working memory." (For more on how you manage to focus on these various bits, see Oberauer & Hein, 2012.) Framing things in this way makes it clear how important working memory is: You use it whenever you have multiple ideas in your mind, multiple elements that you're trying to combine or compare. Let's now add that people differ in the "holding capacity" of their working memories. Some people are able to hold on to (and work with) more elements, and some with fewer. How does this matter? To find out, we first need a way of measuring working memory's capacity, to determine if your memory capacity is above average, below, or somewhere in between. The procedure for obtaining this measurement, however, has changed over the years; looking at this change will help clarify what working memory is, and what working memory is for.
Digit Span
For many years, the holding capacity of working memory was measured with a digit-span task. In this task, research participants hear a series of digits read to them (e.g., "8, 3, 4") and must immediately repeat them back. If they do so successfully, they're given a slightly longer list (e.g., "9, 2, 4, 0"). If they can repeat this one without error, they're given a still longer list ("3, 1, 2, 8, 5"), and so on. The procedure continues until the participant starts to make errors-something that usually happens when the list contains more than seven or eight items. The number of digits the person can echo back without errors is referred to as that person's digit span.
Procedures such as this imply that working memory's capacity is typically around seven items-at least five and probably not more than nine. These estimates have traditionally been summarized by the statement that this memory holds "7 plus-or-minus 2" items (Chi, 1976; Dempster, 1981; Miller, 1956; Watkins, 1977).
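For concreteness, here is a short sketch of the digit-span procedure itself, using a simulated participant whose capacity is fixed at seven digits. The "participant" function is, of course, a stand-in we made up; real spans are measured with real people and show more graded errors.

```python
import random

# Sketch of the classic digit-span procedure (our simplified version).
# A simulated participant repeats back lists of increasing length; the span
# is the longest list reproduced without error.

def simulated_participant(digits, capacity=7):
    """Pretend participant: perfect up to `capacity` digits, then fails."""
    if len(digits) <= capacity:
        return list(digits)
    # Beyond capacity, later digits are lost (a crude stand-in for errors).
    return list(digits[:capacity])

def measure_digit_span(max_length=12):
    span = 0
    for length in range(3, max_length + 1):
        digits = [random.randint(0, 9) for _ in range(length)]
        response = simulated_participant(digits)
        if response == digits:
            span = length          # repeated back correctly; try a longer list
        else:
            break                  # first error ends the procedure
    return span

print("Measured digit span:", measure_digit_span())
```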
However, we immediately need a refinement of these measurements. If working memory can hold 7 plus-or-minus 2 items, what exactly is an "item"? Can people remember seven sentences as easily as seven words? Seven letters as easily as seven equations? In a classic paper, George Miller (one of the founders of the field of cognitive psychology) proposed that working memory holds 7 plus-or-minus 2 chunks (Miller, 1956). The term "chunk" doesn't sound scientific or technical, and that's useful because this informal terminology reminds us that a chunk doesn't hold a fixed quantity of information. Instead, Miller proposed, working memory holds 7 plus-or-minus 2 packages, and what those packages contain is largely up to the individual person. The flexibility in how people "chunk" input can easily be seen in the span test. Imagine that we test someone's "letter span" rather than their "digit span," using the procedure already described. So the person might hear "R, L" and have to repeat this sequence back, and then "F, C, H," and so on. Eventually, let's imagine that the person hears a much longer list, perhaps one starting "H, O, P, T, R, A, S, L, U . . ." If the person thinks of these as individual letters, she'll only remember 7 of them, more or less. But she might reorganize the list into "chunks" and, in particular, think of the letters as forming syllables ("HOP, TRA, SLU, . . ."). In this case, she'll still remember 7 plus-or-minus 2 items, but the items are syllables, and by remembering the syllables she'll be able to report back at least a dozen letters and probably more.
How far can this process be extended? Chase and Ericsson (1982; Ericsson, 2003) studied a remarkable individual who happens to be a fan of track events. When he hears numbers, he thinks of them as finishing times for races. The sequence "3, 4, 9, 2," for example, becomes "3 minutes and 49.2 seconds, near world-record mile time." In this way, four digits become one chunk of information. This person can then retain 7 finishing times (7 chunks) in memory, and this can involve 20 or 30 digits! Better still, these chunks can be grouped into larger chunks, and these into even larger chunks. For example, finishing times for individual racers can be chunked together into heats within a track meet, so that, now, 4 or 5 finishing times (more than a dozen digits) become one chunk. With strategies like this and a lot of practice, this person has increased his apparent memory span from the "normal" 7 digits to 79 digits. However, let's be clear that what has changed through practice is merely this person's chunking strategy, not the capacity of working memory itself. This is evident in the fact that when tested with sequences of letters, rather than numbers, so that he can't use his chunking strategy, this individual's memory span is a normal size-just 6 consonants. Thus, the 7-chunk limit is still in place for this man, even though (with numbers) he's able to make extraordinary use of these 7 slots.
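A few lines of code can illustrate the arithmetic of chunking: if working memory holds roughly seven chunks no matter how large each chunk is, then packing more letters into each chunk multiplies the number of letters retained. The letter stream and the chunk sizes below are our own invented example.

```python
# Sketch of how chunking stretches a fixed span (our illustration).
# Assumption: working memory holds about CHUNK_LIMIT chunks, regardless of
# how many letters each chunk happens to contain.

CHUNK_LIMIT = 7

def letters_retained(letter_stream, chunk_size):
    """Group the stream into chunks and keep only the first CHUNK_LIMIT."""
    chunks = [letter_stream[i:i + chunk_size]
              for i in range(0, len(letter_stream), chunk_size)]
    retained = chunks[:CHUNK_LIMIT]
    return "".join(retained)

stream = "HOPTRASLUBINKEMDORFAW"
for size in (1, 3):  # single letters vs. three-letter "syllables"
    kept = letters_retained(stream, size)
    print(f"chunk size {size}: {len(kept)} letters retained -> {kept}")
```

With single letters, only 7 letters survive; with three-letter "syllables," the same 7-chunk limit now covers 21 letters.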
Operation Span
Chunking provides one complication in our measurement of working memory's capacity. Another-and deeper-complication grows out of the very nature of working memory. Early theorizing about working memory, as we said, was guided by the modal model, and this model implies that working memory is something like a box in which information is stored or a location in which information can be displayed. The traditional digit-span test fits well with this idea. If working memory is like a box, then it's sensible to ask how much "space" there is in the box: How many slots, or spaces, are there in it? This is precisely what the digit span measures, on the idea that each digit (or each chunk) is placed in its own slot.
We've suggested, though, that the modern conception of working memory is more dynamic-so that working memory is best thought of as a status (something like "currently activated") rather than a place. (See, e.g., Christophel, Klink, Spitzer, Roelfsema, & Haynes, 2017; also Figure 6.6.) On this basis, perhaps we need to rethink how we measure this memory's capacity-seeking a measure that reflects working memory's active operation.
Modern researchers therefore measure this memory's capacity in terms of operation span, a measure of working memory when it is "working." There are several ways to measure operation span, with the types differing in what "operation" they use (e.g., Bleckley, Foster, & Engle, 2015; Chow & Conway, 2015). One type is reading span. To measure this span, a research participant might be asked to read aloud a series of sentences, like these:
Due to his gross inadequacies, his position as director was terminated abruptly.
It is possible, of course, that life did not arise on Earth at all.
Immediately after reading the sentences, the participant is asked to recall each sentence's final word-in this case, "abruptly" and "all." If she can do this with these two sentences, she's asked to do the same task with a group of three sentences, and then with four, and so on, until the limit on her performance is located. This limit defines the person's working-memory capacity, or WMC. (However, there are other ways to measure operation span-see Figure 6.7.)
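To show the logic of the task, here is a minimal sketch of how one reading-span trial might be represented and scored. The scoring rule (all final words recalled, in order) and the helper functions are our own simplification, not the exact procedure used in published operation-span tests.

```python
# Sketch of one way a reading-span (operation-span) trial might be scored.
# The sentences and the scoring rule are an illustration only.

trials = [
    # Each trial: a set of sentences read aloud; the participant must then
    # recall every sentence-final word, in order.
    ["Due to his gross inadequacies, his position as director was terminated abruptly.",
     "It is possible, of course, that life did not arise on Earth at all."],
]

def final_words(sentences):
    return [s.rstrip(".!?").split()[-1].lower() for s in sentences]

def score_trial(sentences, recalled_words):
    """A trial counts as passed only if every final word is recalled in order."""
    return [w.lower() for w in recalled_words] == final_words(sentences)

# Hypothetical participant response for the two-sentence set:
print(score_trial(trials[0], ["abruptly", "all"]))  # True -> try a larger set
```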
Let's think about what this task involves: storing materials (the ending words) for later use in the recall test, while simultaneously working with other materials (the full sentences). This juggling of processes, as the participant moves from one part of the task to the next, is exactly what working memory must do in day-to-day life. Therefore, performance in this test is likely to reflect the efficiency with which working memory will operate in more natural settings.
Is operation span a valid measure-that is, does it measure what it's supposed to? Our hypothesis is that someone with a higher operation span has a larger working memory. If this is right, then someone with a higher span should have an advantage in tasks that make heavy use of this memory. Which tasks are these? They're tasks that require you to keep multiple ideas active at the same time, so that you can coordinate and integrate various bits of information. So here's our prediction: People with a larger span (i.e., a greater WMC) should do better in tasks that require the coordination of different pieces of information. Consistent with this claim, people with a greater WMC do have an advantage in many settings-in tests of reasoning, assessments of reading comprehension, standardized academic tests (including the verbal SAT), tasks that require multitasking, and more. (See, e.g., Ackerman, Beier, & Boyle, 2002; Butler, Arrington, & Weywadt, 2011; Daneman & Hannon, 2001; Engle & Kane, 2004; Gathercole & Pickering, 2000; Gray, Chabris, & Braver, 2003; Redick et al., 2016; Salthouse & Pink, 2008. For some complications, see Chow & Conway, 2015; Harrison, Shipstead, & Engle, 2015; Kanerva & Kalakoski, 2016; Mella, Fagot, Lecert, & de Ribaupierre, 2015.)
These results convey several messages. First, the correlations between WMC and performance provide indications about when it's helpful to have a larger working memory, which in turn helps us understand when and how working memory is used. Second, the link between WMC and measures of intellectual performance provides an intriguing hint about what we're measuring with tests (like the SAT) that seek to measure "intelligence." We'll return to this issue in Chapter 13 when we discuss the nature of intelligence. Third, it's important that the various correlations are observed with the more active measure of working memory (operation span) but not with the more traditional (and more static) span measure. This point confirms the advantage of the more dynamic measures and strengthens the idea that we're now thinking about working memory in the right way: not as a passive storage box, but instead as a highly active information processor.
The Rehearsal Loop
Working memory's active nature is also evident in another way: in the actual structure of this memory. The key here is that working memory is not a single entity but is, instead, a system built of several components (Baddeley, 1986, 1992, 2012; Baddeley & Hitch, 1974; also see Logie & Cowan, 2015). At the center of the working-memory system is a set of processes we discussed in Chapter 5: the executive control processes that govern the selection and sequence of thoughts. In discussions of working memory, these processes have been playfully called the "central executive," as if there were a tiny agent embedded in your mind, running your mental operations. Of course, there is no agent, and the central executive is just a name we give to the set of mechanisms that do run the show.
The central executive is needed for the "work" in working memory; if you have to plan a response or make a decision, these steps require the executive. But in many settings, you need less than this from working memory. Specifically, there are settings in which you need to keep ideas in mind, not because you're analyzing them right now but because you're likely to need them soon. In this case you don't need the executive. Instead, you can rely on the executive's "helpers," leaving the executive free to work on more difficult matters. Let's focus on one of working memory's most important helpers, the articulatory rehearsal loop. To see how the loop functions, try reading the next few sentences while holding on to these numbers: "1, 4, 6, 3." Got them? Now read on. You're probably repeating the numbers over and over to yourself, rehearsing them with your inner voice. But this takes very little effort, so you can continue reading while doing this rehearsal. Nonetheless, the moment you need to recall the numbers (what were they?), they're available to you.
In this setting, the four numbers were maintained by working memory's rehearsal loop, and with the numbers thus out of the way, the central executive could focus on the processes needed for reading. That is the advantage of this system: With mere storage handled by the helpers, the executive is available for other, more demanding tasks.
To describe this sequence of events, researchers would say that you used subvocalization-silent speech-to launch the rehearsal loop. This production by the "inner voice" produced a representation of the target numbers in the phonological buffer, a passive storage system used for holding a representation (essentially an "internal echo") of recently heard or self-produced sounds. In other words, you created an auditory image in the "inner ear." This image started to fade away after a second or two, but you then subvocalized the numbers once again to create a new image, sustaining the material in this buffer. (For a glimpse of the biological basis for the "inner voice" and "inner ear," see Figure 6.8.)
Many lines of evidence confirm this proposal. For example, when people are storing information in working memory, they often make "sound-alike" errors: Having heard "F," they'll report back "S." When trying to remember the name "Tina," they'll slip and recall "Deena." The problem isn't that people mis-hear the inputs at the start; similar sound-alike confusions emerge if the inputs are presented visually. So, having seen "F," people are likely to report back "S"; they aren't likely in this situation to report back the similar-looking "E."
What produces this pattern? The cause lies in the fact that for this task people are relying on the rehearsal loop, which involves a mechanism (the "inner ear") that stores the memory items as (internal representations of) sounds. It's no surprise, therefore, that errors, when they occur, are shaped by this mode of storage.
As a test of this claim, we can ask people to take the span test while simultaneously saying "Tah-Tah-Tah" over and over, out loud. This concurrent articulation task obviously requires the mechanisms for speech production. Therefore, those mechanisms are not available for other use, including subvocalization. (If you're directing your lips and tongue to produce the "Tah-Tah-Tah" sequence, you can't at the same time direct them to produce the sequence needed for the subvocalized materials.) How does this constraint matter? First, note that our original span test measured the combined capacities of the central executive and the loop. That is, when people take a standard span test (as opposed to the more modern measure of operation span), they store some of the to-be-remembered items in the loop and other items via the central executive. (This is a poor use of the executive, underutilizing its talents, but that's okay here because the standard span task doesn't require anything beyond mere storage.)
With concurrent articulation, though, the loop isn't available for use, so we're now measuring the capacity of working memory without the rehearsal loop. We should predict, therefore, that concurrent articulation, even though it's extremely easy, should cut memory span drastically. This prediction turns out to be correct. Span is ordinarily about seven items; with concurrent articulation, it drops by roughly a third-to four or five items (Chincotta & Underwood, 1997; see Figure 6.9).
Second, with visually presented items, concurrent articulation should eliminate the sound-alike errors. Repeatedly saying "Tah-Tah-Tah" blocks use of the articulatory loop, and it's in this loop, we've proposed, that the sound-alike errors arise. This prediction, too, is correct: With concurrent articulation and visual presentation of the items, sound-alike errors are largely eliminated.
The Working-Memory System
As we have mentioned, your working memory contains the thoughts and ideas you're working on right now, and often this means you're trying to keep multiple ideas in working memory all at the same time. That can cause difficulties, because working memory only has a small capacity. That's why working memory's helpers are so important, because they substantially increase working memory's capacity. Against this backdrop, it's not surprising that the working-memory system relies on other helpers in addition to the rehearsal loop. For example, the system also relies on the visuospatial buffer, used for storing visual materials such as mental images, in much the same way that the rehearsal loop stores speech-based materials. (We'll have more to say about mental images in Chapter 11.) Baddeley (the researcher who launched the idea of a working-memory system) has also proposed another component of the system: the episodic buffer. This component is proposed as a mechanism that helps the executive organize information into a chronological sequence-so that, for example, you can keep track of a story you've just heard or a film clip you've just seen (e.g., Baddeley, 2000, 2012; Baddeley & Wilson, 2002; Baddeley, Eysenck, & Anderson, 2009). The role of this component is evident in patients with profound amnesia who seem unable to put new information into long-term storage, but who still can recall the flow of narrative in a story they just heard. This short-term recall, it seems, relies on the episodic buffer-an aspect of working memory that's unaffected by the amnesia. In addition, other helpers can be documented in some groups of people. Consider people who have been deaf since birth and communicate via sign language. We wouldn't expect these individuals to rely on an "inner voice" and an "inner ear"-and they don't. People who have been deaf since birth rely on a different helper for working memory: They use an "inner hand" (and covert sign language) rather than an "inner voice" (and covert speech). As a result, they are disrupted if they're asked to wiggle their fingers during a memory task (similar to a hearing person saying "Tah-Tah-Tah"), and they also tend to make "same hand-shape" errors in working memory (similar to the sound-alike errors made by the hearing population).
The Central Executive
What can we say about the main player within the working-memory system-the central executive? In our discussion of attention (in Chapter 5), we argued that executive control processes are needed to govern the sequence of thoughts and actions; these processes enable you to set goals, make plans for reaching those goals, and select the steps needed for implementing those plans. Executive control also helps whenever you want to rise above habit or routine, in order to "tune" your words or deeds to the current circumstances.
For purposes of the current chapter, though, let's emphasize that the same processes control the selection of ideas that are active at any moment in time. And, of course, these active ideas (again, by definition) constitute the contents of working memory. It's inevitable, then, that we would link executive control with this type of memory. With all these points in view, we're ready to move on. We've now updated the modal model (Figure 6.1) in important ways, and in particular we've abandoned the notion of a relatively passive short-term memory serving largely as a storage container. We've shifted to a dynamic conception of working memory, with the proposal that this term is merely the name for an organized set of activities-especially the complex activities of the central executive together with its various helpers.
But let's also emphasize that in this modern conception, just as in the modal model, working memory is quite fragile. Each shift in attention brings new information into working memory, and the newly arriving material displaces earlier items. Storage in this memory, therefore, is temporary. Obviously, then, we also need some sort of enduring memory storage, so that we can remember things that happened an hour, or a day, or even years ago. Let's turn, therefore, to the functioning of long-term memory.
Demonstration 6.2: Chunking
The text mentions the benefits of chunking, and these benefits are easy to demonstrate. First, let's measure your memory span in the normal way: Cover the list of letters below with your hand or a piece of paper. Now, slide your hand or paper down, to reveal the first row of letters. Read the row silently, pausing briefly after you read each letter. Then, close your eyes, and repeat the row aloud. Open your eyes. Did you get it right? If so, do the same with the next row, and keep going until you hit a row that's too long-that is, a row for which you make errors. Count the items in that row. This count is your letter span.
CA
GTY
RBOS
PSYRL
RBDPNF
YHAREIG
RSOIUTCA
ERSLJTEGF
SDOEUVMKVG
Now, we'll do the exercise again, but this time, with rows containing letter pairs, not letters. Using the same procedure, at what row do you start to make errors?
BI AN
EL ZA IN
ET LO JA RE
CA OM DO IG FU
AT YE OR CA VI TA
EB ET PI NU ES RA SU
RI NA FO ET HI ER WU AG
UR KA TE PO AG UF WO SA KI
SO HU JA IT WO FU CE YO FI UT
It's likely that your span measured with single letters was 6 or 7, or perhaps 8. It's likely that your span measured with letter pairs was a tiny bit smaller, perhaps 5 or 6 pairs-but that means you're now remembering 10 or 12 letters. If we focus on the letter count, therefore, your memory span seems to have increased from the first test to the second. But that's the wrong way to think about this. Instead, your memory span is constant (or close to it). What's changing is how you use that span-that is, how many letters you cram into each "chunk."
Now, one more step: Read the next sentence to yourself, then close your eyes, and try repeating the sentence back.