Author: Vert1
Joined: Aug 28 2011
Posts: 537
 
The book is called Mind At Play: The Psychology of Video Games by Geoffrey R. Loftus & Elizabeth F. Loftus (Basic Books, Inc., Publishers, New York).
 Preface
 [SPOILER:a5d5496eb9] PREFACE
 Our aim is to shed light on the intriguing phenomenon of video games.
Along the way, we'll introduce some of the most far-ranging ideas in
 modern psychology and also provide an entrée to the world of
 computers, for the appeal of the games is largely psychological and
 video games owe their very existence to the computer revolution.
 
 When we set out to write a book on the psychology of video games, we
 tried to adopt a relatively neutral stance. We read everything we
 could about the games. We went to video arcades to play the games and
 to talk to the owners, players, and onlookers. We talked to parents
 and critics. At the same time, we explored the contributions that
 research in psychology might provide for an understanding of the
 games.
 
 After we had poked around for a while, some themes began to emerge. A
 major one was the computer theme: video games are fundamentally
 different from all other games in history because of the computer
 technology that underlies them. The marriage of games and computers
 has produced both costs and benefits. It enables, for example, the
 design of games that are extremely compelling to play. Critics would
 call the games addictive. Proponents would call them great fun.
 
 A second theme involves ability. Playing a video game requires
 intricately tuned skills. How are these skills acquired? What are the
 mental components that go into them?
 
 A final theme revolves around education. We believe that the games
 combine two ingredients—intrinsic motivation and computer-based
 interaction—that make them potentially the most powerful educational
 tools ever invented. We have discovered, much to our delight, a number
 of research projects that are striving to harness this educational
 power. Some are succeeding. More will succeed in the coming years.
 
 While writing this book, we've had help from a variety of people who
 deserve special thanks. Craig Raglund provided a number of perceptive
 suggestions about the potential uses of video games in education. Hank
 Samson and Jim Diaz, who are much better players than we've yet
 become, engaged us in lively discussions about reinforcement. Ellen
 Markman, Delia Gerhardt, and Brian Wandell read and provided useful
 comments on early versions of several chapters. And, finally, there's
 no way to adequately thank Judy Greissman, our editor at Basic Books,
 who initiated the whole idea and who did a magnificent job shepherding
 it through all stages from start to finish.[/SPOILER:a5d5496eb9]
 Chapter 1: Videomania
 [SPOILER:a5d5496eb9] CHAPTER 1: VIDEOMANIA
 Venturing into a video arcade, you find a decidedly mixed crowd. To be
 sure, most players are "typical teenagers," who play the video games
 for at least a few hours every week. 1 But a not uncommon sight is the
 corporation executive, the housewife, the construction worker.
 According to one survey, about half the game players (in arcades and
 elsewhere) are over the age of twenty-six. 2
 
 The economics of the video game craze are staggering. Each year more
 than $5 billion is spent in the video arcades alone. 3 And while the
 video parlor operators are busily collecting their quarters,
 microcomputer manufacturers are expected to make similarly large sums
 selling both home computers and the software to go with them.
 Advertisements for home computers in traditional publications describe
 the virtues of keeping the checkbook balanced, maintaining Christmas
 card lists, and teaching the children to program. However, by far the
 major use of home computers is for video games, and indeed the
 potential home video game market provided a major incentive for the
 development of many home computers in the first place. Six or seven
 years ago hardly any video games existed. But today arcade and home
 video games comprise an industry that has reached over $7 billion.
 
 While questioning people in the course of preparing this book, we
 uncovered a wide range of feelings about the games, most of them quite
 passionate. A twenty-three-year-old computer engineer, David, was
playing portable video games non-stop on a flight we shared with him.
 "What do you like about these games?" we asked. His answer was quite
 definite: "I think they're entertaining. They fascinate me. I can't
 believe I can hold something almost as small as a credit card that can
 play a game I haven't mastered. They're a challenge. What's most
 intriguing is that I know because of my work that there is a pattern
 to these games. And I haven't yet figured it out. But I keep getting
 closer. I keep getting better."
 
 On the other hand, Glen, a twenty-five-year-old property manager,
 hates video games. He says just as definitely: "I get no satisfaction
out of beating a machine!" And Jane, a thirty-eight-year-old
management consultant, sees them as a soporific for teenagers and an
aesthetic nightmare, and is adamant that they are no good at all for
 anything whatsoever. The opinions of public figures reflect this
 controversy. The U.S. Surgeon General, Everett Koop, decries video
 games, while Isaac Asimov, one of the most respected science writers
in the United States, extols their educational benefits. 4 Given
these extreme differences of opinion, we find the job of trying to
understand the video game explosion even more challenging.

Figure 1.1 shows the "family tree" of video games. Their immediate
parents were the digital computer and the arcade game. The computer
side of this parentage will be traced in chapter 6; the arcade side,
in chapter 4.

Most games involve competition of one sort or another. But somewhere
along the line, solitary games evolved, in which competition, if it
even exists, is with yourself (for example, trying to top your
previous best score) or with some abstract entity such as a deck of
cards or a machine. Most video games are, or can be, solitary games.
You play chiefly against the machine.

Three conceptual ingredients enter into the immediate background of
video games:
 1.      Sound and fury. Flashing lights, bizarre noises, and continuously
 displayed, astronomical scores were incorporated in pinball machines.
Often associated with sleazy bars and arcades and thought to be
controlled by organized crime, pinball machines nonetheless managed to
build up a mystique. They were colorful and gaudy. Presumably in an
 effort to give the illusion of variety, different games in an arcade
 represented an enormous variety of concepts, ranging from the Vietnam
 War to the Indianapolis 500 to the Playboy penthouse. However, all
 these games were virtually identical in terms of how they were played
 and what the goals were.
 
FIGURE 1.1 The family tree of video games.
 
 2.      Death and destruction. In the 1960s a new kind of game began to
 compete with pinball for arcade space. These games were usually
 automated in some fairly sophisticated way and usually involved
 violence of one sort or another. In Bomber Pilot, for example, the
 player, after inserting a quarter, was seemingly placed at the
 controls of a bomb-laden jet plane and presented with varying terrain
 passing below. The goal was to drop bombs on targets that would appear
 for a few seconds beneath the aircraft and then vanish. Points were
awarded for successful hits, with the highest numbers of points
 being awarded for the destruction of high-density population areas,
 such as large cities, and strategic targets, such as enemy missile
 bases. The player was constantly under threat of enemy antiaircraft
 fire and therefore had to worry about taking evasive action as well as
 aiming the bombs. Like pinball, these arcade games were supplemented
 by exotic flashing lights, violent noises, and rapidly increasing
 scores, which were prominently displayed.
 3.      Computer control. In the 1970s another game arrived, unobtrusively,
 on the scene. This newcomer, Pong, differed from its predecessors in
 several ways. First, and most important, it was entirely under the
 control of a computer, and except for the player's joysticks, there
 were no moving parts. Everything was electronic. In a major way, Pong
 heralded the dawn of a new era.
 Pong's second distinction was that it somehow acquired an immediate,
 broad social acceptance. It suddenly appeared in all sorts of
 places—in cocktail lounges, train stations, airliners— where no one
 would dream of putting either pinball or the death and destruction
 games. Although the reasons for this broad social acceptance are not
 entirely clear, it is interesting to speculate. First, size doubtless
 played a part. The older games, which used heavy mechanical parts,
 were large and difficult to transport (and were certainly not welcome
 in places like airplanes where size and weight are at a premium).
 Pong, with a computer at its heart, was much more mobile. Second, in
 the years following its introduction, Pong's price—along with the
 prices of all other computer-based goods—fell rapidly, and thus the
 game became widely available. In fact, in the mid-1970s versions of
 Pong—primitive by today's standards, but revolutionary then—began to
 find their way into individual households. And, finally, Pong's
 central theme was not the violence and kitsch of the previous arcade
 games. Instead, it mimicked the then-genteel racquet games such as
 tennis and squash. This feature may well have provided the lubrication
 necessary to ease the game into polite society.
 
 For whatever reasons, Pong managed to escape from the smoky, seedy
 atmosphere of its pinball arcade predecessors, and it set the stage
 for the widespread status currently enjoyed by today's video games. As
 we have indicated, the computer basis of Pong, with its attendant
 implications for cost and mobility, was a critical ingredient of this
 transition. In chapter 6 we shall summarize the computer revolution
 and its critical role in the psychology of video games.
 
 Throughout this book, we are going to take the theories and
 experiments of psychologists and use them to understand the video game
 phenomenon that has sent many children into video arcades and many
 parents into fits of nervousness. When the surgeon general marches
 through the country crying, in essence, "Warning. Video games may be
 hazardous to your children's health," should we believe him? Dr. Koop
 has argued that there is nothing constructive about the games and that
 in fact they may be teaching children to kill and destroy since that's
 what most of the games are about. In this book, however, we'll take
 the position that his fear may be completely unwarranted. Video games,
 at least in some form, are going to be with us for quite some time,
 and it is important to analyze dispassionately their psychological
 costs and their benefits. We should not ban video games without a deep
 and thoughtful analysis, any more than we should ban hopscotch or
 Monopoly.
 
 When people ask "What good are these games, anyhow?" the suggestion is
 often heard that they have a direct benefit of increasing some skill
 like eye-hand coordination. But so do many activities, such as
 baseball and sewing. What are not usually considered are the indirect
 benefits that video games can and do yield. These can be quite
 unexpected and enormously powerful. We refer to such benefits as the
 creation of an intense interest in computers, which has led many of
 the game players of the early 1980s to jobs as computer programmers
 with major corporations. We interviewed one such man, Greg, who at the
 age of twenty-three had landed a programming job with a growing
 software company just south of San Francisco. Greg spends his days
 writing computer programs and claims he's happier than he has ever
 been in his life. Five years earlier, Greg's parents worried that he
 was spending too much time playing video games. They thought he might
 be "addicted" to the games the way other kids seemed to become
 addicted to drugs or alcohol. Now—five years later—they take
 tremendous pride in their son's work. They have come to realize that
 the games were the start of his intense interest in computers that led
 to his career.
 
 What was it about video games that Greg found so appealing? Why was he
 willing to forgo sporting events and trips to the beach to spend time
 in video arcades? To address such questions, we now draw upon the
 field of psychology.[/SPOILER:a5d5496eb9]
 
 Chapter 2: Why Video Games Are Fun
[SPOILER:a5d5496eb9] CHAPTER 2: WHY VIDEO GAMES ARE FUN
 Syndicated columnist Ellen Goodman has described her own initiation
 into video games. One cloudy day she was waiting for an airplane in
 the Detroit airport. She had time on her hands and thought she would
 try a quick game of Pac-Man. Before she knew it, she was hooked;
 Pac-Man took her for every last quarter she had. It began innocently
 enough. She put in her first quarter but, not yet having a feel for
 the game, she shoved Pac-Man into the arms of the nearest monster.
 Undiscouraged by defeat, she tried again. She did a little better this
 time. Something about the game made her think she could win. So she
 kept at it. Fortunately for Goodman, she was able to break the habit
before it broke her. Once she gained some distance—away from the
clutches of Pac-Man—she thought about it clearly. "Pac-Man hooks only
those people who confuse victory with slow defeat." 1
 
 Why do people find the games so compelling? In this chapter we will
 illustrate how the psychological concepts of reinforcement, cognitive
 dissonance, and regret help explain the process of video game
 addiction. Although very few psychological studies deal directly with
 the issue of why video games are fun, we found one that does. At the
 end of this chapter, we'll describe it.
 
 Pac-Man, by Way of Example
 In describing how experimental psychologists might view and explain
 video game behavior, it's useful to have one example to rely on
 throughout. Because it has been one of the most popular games, we've
chosen Ellen Goodman's nemesis, Pac-Man. For the benefit of any
 readers who have been living somewhere besides Earth for the past few
 years, we'll provide a brief description of Pac-Man here.
 
 The game Pac-Man gets its name from the Japanese term paku paku, which
 means "gobble gobble." The character Pac‐ Man is a little yellow
 creature who looks like he's smiling. He is set in a somewhat complex
 maze that's initially filled with yellow dots. The player controls
 Pac-Man's movements with a four-directional joystick or control knob,
 thereby allowing him to move left, right, up, or down. His jaw faces
 the direction in which he's moving.
 
 As he glides around through the maze, Pac-Man gobbles up the dots.
 Simultaneously, however, he's vigorously pursued by four monsters
 (each a different color) named Inky, Blinky, Pinky, and Clyde. If one
of the monsters catches him, Pac-Man slowly folds up and wilts away
 while the machine provides sympathetic noises. Pac-Man can be eaten
 only three times before the game ends, the score reverts to zero, and
 another quarter is required for further play.
 
 Pac-Man has a variety of ways of combatting and outwitting the
 monsters. First, by adept maneuvering, he can keep away from them.
 Second, at each of the four corners of the board is a glowing,
 extra-strength dot called an energizer. Whenever Pac-Man eats an
 energizer, some, or all, of the monsters turn blue. When a monster is
 blue, contact between it and Pac-Man results in the monster's demise
 rather than in Pac-Man's. However, monsters remain blue only for brief
 periods following Pac-Man's consumption of an energizer; moreover, a
 monster destroyed by Pac-Man doesn't stay destroyed, but instead
 returns to action following a short interlude in the penalty box.
 
 If Pac-Man eats all the dots in the maze, the player has completed a
 "board" and a new maze with fresh dots appears. This provision of new
 boards can continue indefinitely; however, with each successive board,
 things become more difficult. The monsters move faster and are blue
 for shorter periods of time; Pac-Man moves slower; and so on. In
 between each board, a short, amusing skit occurs on the screen.
 
 When playing Pac-Man, the player is rewarded in many ways. For
 example, points are awarded for gobbling up the dots and for
 destroying monsters. Additionally, there is some symbol in the center
 of the board. What the symbol is depends on how many boards the player
 has managed to get through. For instance, a cherry appears on the
 first board, a strawberry on the second, until finally, a key appears
 on the twelfth board and all subsequent boards. Naturally, Pac-Man
 aficionados have memorized the sequence of symbols, and the symbols
 that signify that many boards have been accomplished are the most
 prestigious. The symbols themselves can be eaten by Pac-Man for
 progressively increasing numbers of points. And finally, additional
 sources of reinforcement are the amusing skits that occur between
 boards.
 
 The first time you drop a quarter into a Pac-Man game, you might get a
 score of 1,000, if you're lucky. Less than a minute will pass, and
 your Pac-Man will be eaten three times in disconcertingly rapid
 succession. But chances are you'll play again. And again. And again.
 You might get through a couple of boards, and your score might get up
 to 5,000 or so. But what will puzzle you most is this: the top score
 for the day will be posted on the machine. It might be 56,000. Or
 102,000. It could be over 500,000.
 
 How could anyone get such a score? In the next chapter we'll focus on
 the mental and physical skills that go into video game facility. For
 the moment, however, let's concern ourselves with the question of how
 people become so motivated that they'll play video games hour after
 hour, day after day. To do so, we'll consider the phenomenon of
 reinforcement.
 
 Mechanics of Reinforcement
 On a clear Vermont night, a young boy sits methodically scanning the
 sky in search of shooting stars. In a sleazy Las Vegas casino, a
 glassy-eyed old woman mechanically deposits nickel after nickel in a
 slot machine. And in a posh Chicago suburb, a teenager spends hours
playing Galaxian at the video parlor every afternoon when school lets
 out.
 
 These three situations have a common element: in each one, behavior is
 dictated by what psychologists refer to as the partial reinforcement
 effect. To understand this effect—which is a critical psychological
 ingredient of video game addiction—it will be useful to provide a
 thumbnail sketch of reinforcement itself and the role it plays in
 shaping behavior. Video games are designed to take your money, and
 they have an uncanny way of doing so. They play on the ordinary
 person's weaknesses for reinforcement.
 
 BASIC CONCEPTS
Reinforcement is the provision of something that you like. In
 each of the three preceding examples, reinforcement of some sort is
 involved. For the aspiring astronomer, seeing a shooting star is a
 reinforcement. For the Vegas gambler, the slot machine's payoff is a
 reinforcement. And for the video game player, beating a previous high
 score or winning a free game or shooting down enemy spaceships is a
 reinforcement. There are a variety of psychological theories designed
 to explain the role of reinforcement in behavior. Central to all of
 them, however, is the idea that any behavior that is followed by
 reinforcement will increase in frequency. In short, video games that
 do something to make a player feel good will be played again and
 again.
 
 Certain elementary kinds of human behavior can be analyzed nicely by
 referring to well-studied principles of reinforcement. To get the
 complete picture, of course, we will need to go beyond reinforcement
 and examine other important topics such as motivation and social
 pressure. For the moment, though, we consider only the subject of
 rewards.
 
 From observations made with rats, pigeons, monkeys, and other
 organisms—and some studies with humans—we have come to know a great
 deal about how reinforcement hooks people and gets them to behave in
 certain predictable ways.
 
 Experiments with rats are the easiest to do, and certain laws of
 reinforcement emerge in their simplest form in a rat study.
 
 A typical experiment to investigate reinforcement involves a white
 laboratory rat placed in what is called a "Skinner box," named after
 Harvard psychologist B. F. Skinner, who invented it. A Skinner box is
 a cage containing a protruding lever that the rat can push and a small
 container into which food can be dispensed by the experimenter. When a
 novice rat is initially placed into the box, it wanders around,
 performing the sorts of behaviors that rats typically perform:
 exploring, sniffing, grooming. Eventually—probably out of sheer
 boredom—it presses the lever, whereupon a rat pellet appears in the
 container. This is a reinforcing event, which undoubtedly makes the
 rat very happy. The event, like any reinforcement, leads to an
 increase in the behavior that just preceded it—in this case,
 lever-pressing. Eventually the rat will be pressing the lever at a
 rapid clip and eating rat pellets to its heart's content. Already you
 may be sensing a correspondence between the rat in the Skinner box and
 the human in front of the "video parlor box." Both are continually
 performing actions that lead to reinforcement. The rat gets crunchy
 food, while the video game player gets higher scores and free games.
While food would also be reinforcing for the video gamer, there are
undoubtedly times when, given the choice between (1) a slice of pizza
and (2) a chance to play Space Invaders and perhaps achieve the high
score for the day, the game would be the more powerful reinforcer.
 
 SCHEDULES OF REINFORCEMENT
 Game designers confront many decisions when trying to create a game
 that people will like. One question is: How often should a player be
 reinforced? Is it a good idea to make sure that players never leave
 their first game without some form of reinforcement? Or should the
 games be created so that they are sufficiently difficult and several
 plays are necessary before a single rewarding event occurs? As we
 shall see, game designers have apparently stumbled on the optimal
 strategy for reinforcing people so they (like the rat that keeps
 pressing) will continue dropping quarters at a rapid clip.
 
 To see this, let's return to the rat. In the scenario we have just
 sketched, the rat was reinforced after each lever press. This is
 called continuous reinforcement—the rat gets a reward each time it
 presses the lever. This schedule is usually necessary to get the rat
 started.
 
 However, continuous reinforcement is just one of many possible
 schedules of reinforcement. Suppose that after the rat starts pressing
 away, we stop providing food pellets after each press and instead
 provide them only now and then. What will the rat do? It will continue
 pressing, at least for a while. At some point, however, the rat will
 again be reinforced, since eventually another pellet will follow a
 lever press. This schedule of reinforcement is called partial
 reinforcement—reinforcement is intermittent rather than continuous.
 Partial reinforcement is a powerful way of hooking both rats and
 people. They keep responding in the absence of reinforcement because
 they are hoping that another reward is just around the corner. The
 gambler keeps pulling the slot machine lever even though he has lost
 ten times in a row because he hopes that the next time around he'll
 win big. This would never happen if reinforcement had been continuous.
 If it were, then as soon as the money stopped, the gambler would
 quickly decide that the machine was no good anymore and turn his
 attention elsewhere.
 
 EXTINCTION
 We knew a woman we'll call Ruth who went to Las Vegas at least twice a
 month. Each time she stayed for three days and spent most of her
 waking hours diligently pumping nickels into a slot machine. If you
 happened to wake up as the sun was rising, you'd find Ruth glued to
 the machines. Her case was classic. When she was in her early
 twenties, Ruth went to Las Vegas for the first time to celebrate her
best friend's twenty-first birthday. While there, she experienced a
 "big win" of $850, which was more than her monthly salary. This
 experience made a lasting impression on her and from then on she was
 hooked.
 
 What if reinforcement were cut off altogether? As you might expect,
 Ruth and the other players would not stop playing immediately. Rather,
 they would doggedly continue to play for some time before giving up in
 exasperation. This decline and eventual cessation of behavior (lever
 pumping) in the absence of reinforcement (money) is referred to as
 extinction, and the length of time it takes for the behavior to cease,
 or extinguish, is referred to as the extinction period.
 
 How long will it take Ruth to stop playing these machines? This
 depends heavily on the schedule of reinforcement that she had been
 exposed to earlier. If Ruth was used to winning each time she dropped
 a coin, then she would extinguish very quickly. However, this
 continuous schedule of reinforcement would have been very unlikely in
 Vegas. It is far more likely that she had been reinforced
 intermittently—that she had been on a partial reinforcement
 schedule—and in this case the extinction period is considerably
 longer.
 
 But which partial reinforcement schedule leads to the longest
 extinction periods? It turns out that the "variable" schedules (either
 variable ratio—say, one in five turns, on the average—or variable
 interval—say, every minute, on the average) are the most powerful ones
 because they lead to the longest extinction periods. More precisely, a
 variable schedule with moderately long intervals between
 reinforcements is a good idea for game designers since it leads people
 to continue to play the longest in the face of nonreward. However, if
 the variable schedule is too long, a person might actually extinguish
 when the game designer had not meant this to happen. With a very long
 interval between reinforcements, the person has no way of being sure
 that the reinforcement drought is ever going to end. He or she may
 actually give up playing rather than drop the next quarter that would
 lead to a reward.
 
 In sum, we know a great deal about reinforcement and how it affects
 people. A partial reinforcement schedule leads to behavior that (1)
occurs more rapidly and (2) is more resilient to extinction than
behavior under a continuous reinforcement schedule. The dependence of extinction on
 the prior schedule of reinforcement has been dubbed the partial
 reinforcement effect. Taken together, these two effects of partial
 reinforcement produce what looks very much like "addictive behavior."
 
 The question of why the partial reinforcement effect occurs has long
 been of interest to psychologists, and a variety of experiments have
 been carried out to test various theories. For our purposes, however,
 the important thing is that the phenomenon happens at all. Knowing how
 rates of responding and resilience to extinction are affected by
 reinforcement schedules should—in principle, anyway—allow us to
 account for the seemingly addictive behavior engendered by video
 games. Furthermore, we can explain why some video games are more
 addictive than others.
 
 With these concepts in hand, it is easy to see the underlying
 principle that governs behavior not only in the case of Ruth, the
 compulsive gambler, but also in the case of the young, aspiring
 astronomer and the teenager in the video parlor. In all cases, the
 person is under a partial reinforcement schedule: slot-machine payoffs
 occur only infrequently, as do shooting stars and video game wins.
 Moreover, the schedules are of the variable sort (slot-machine
 payoffs, shooting stars, and video game wins do not occur in a fixed,
 systematic way) and therefore produce the most powerful resistance to
 extinction. According to the principles of partial reinforcement,
 therefore, the behaviors involved should be highly resistant to
 extinction, which indeed they are. In all cases, the person is willing
 to pursue the behavior for lengthy periods of time, even in the
 absence of reinforcement.
 
 REINFORCEMENT AND VIDEO GAME DESIGN
 Given an understanding of these principles, the task of a person who
 designs and manufactures video games is more focused. The designer's
 goal, of course, is to make money on the game. This goal is achieved
 by ensuring that the eventual players will insert quarters into the
 game as rapidly as possible. It's to the designer's advantage to
 design a game that reinforces the player on the most addictive
 schedule possible. And this usually turns out to be a variable-ratio
 or a variable-interval schedule.
 
 What this means in the world of video games is that reinforcement will
 be somewhat unpredictable. A reward might come on the average of once
 every ten times a player plays. For example, the player might achieve
three complete boards in Pac-Man only once every ten times or so
that he or she plays. This is an example of a variable-ratio schedule.
 If a reinforcement came on average once every ten minutes, with the
 actual times ranging randomly from once every ten seconds to once
 every half hour, our player would be on a variable-interval schedule.
So, for example, if the mother ship in Space Invaders appeared and was
 destroyed on this sort of schedule, we would say that the player was
 on a variable-interval schedule. These irregular schedules of
 reinforcement are, in part, what cause video games to be so compelling
 and irresistible.
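
As an aside (this is not from the book): a minimal Python sketch of the
two schedules just described might look like the following; the
probability and timing numbers are purely illustrative.

[code]
import random

def variable_ratio_reward(p=0.1):
    # Reward a given play with probability p, i.e. about once
    # every 1/p plays on average (a variable-ratio schedule).
    return random.random() < p

def next_reward_delay(mean_seconds=600.0):
    # Draw the wait until the next reward becomes available so that
    # it averages mean_seconds (a variable-interval schedule).
    return random.expovariate(1.0 / mean_seconds)

plays = 100
wins = sum(variable_ratio_reward() for _ in range(plays))
print(f"{wins} rewarding plays out of {plays} (roughly 1 in 10)")
print(f"next timed event available in about {next_reward_delay():.0f} s")
[/code]

The difference is simply what the reward is tied to: on a
variable-ratio schedule it is the number of plays, on a
variable-interval schedule it is elapsed time, which is why the mother
ship example above is an interval schedule.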
 
 In trying to implement these reinforcement schedules, an interesting
 problem arises for the video game designer. To understand the nature
 of this problem, it is useful to consider not a video game but,
 rather, pinball. As you probably know, pinball is played with a steel
ball that is initially ejected into the playing area via a spring mechanism.
 While in the playing area, the ball can strike various knobs, springs,
 and other assorted paraphernalia, all of which cause the score to
 increase. If, however, the ball rolls down to where it originated,
 that ball is eliminated and a new ball must be ejected. The player has
 three kinds of control over what is going on. First, the initial
 ejection of the ball can range from soft to powerful, depending on how
 far back the player draws the spring-loaded plunger. Second, near the
 player is a small gate, partially guarded by flippers that the player
 can manipulate. These flippers, if used properly, eject the ball back
 into the field before it can roll out of play. And finally, the
 player, by using his or her whole body, can tilt the entire machine
 ever so slightly, in order to influence the path of the ball. The tilt
 can't be too much, however, or "tilt" will register on the scoreboard
 and the game will end.
 
 Any game—pinball included—can't be too easy, or it will provide
 continuous reinforcement for practiced players, which, as we've seen,
 doesn't really lead to much of an addiction to the game. On the other
hand, the game can't be too difficult—that is, reinforcement can't be
 too intermittent—because then most novice players will never get
 enough reinforcement to become addicted to the game in the first
 place. Just as the rat in the Skinner box never really begins pressing
 the lever unless initial reinforcement is more or less continuous, so
 the would-be game addict needs some early reinforcement in order to
 get interested in the game. This means the pinball game designer is
 forced to an intermediate stance—the game is made moderately
 difficult. This solution has two difficulties. First, and probably
 most serious, many potential players won't ever start playing pinball,
 because they don't get reinforced enough when they first start playing
 and can't play very well. Second, a really expert player will be
 reinforced continuously, which, as we've seen, doesn't produce much
 addiction. A colleague of ours named Graham, who teaches at the
 University of Aberdeen, tried a game one evening and got a score of
 zero. He said he never wanted to play again. A half hour later,
another player—fifteen-year-old Dennis—tried the same game and, much
 to Graham's dismay, gave up after ten minutes because it was "much too
 easy."
 
 Enter video games. The feature that sets them apart from all other
 games is the extremely flexible nature of the digital computer that
 controls them. We'll talk more about computers in chapter 6. For now,
 it's sufficient to realize that the computer can be programmed to make
 the games easy to begin with and progressively more difficult. A good
 example of this is seen in Pac-Man. The major determinants of
 difficulty in that game are such things as the speed of Pac-Man
 himself, the speed of the monsters, the period of time that the
 monsters remain edible by Pac-Man, and so on. These factors change
 from board to board such that the game becomes progressively more
 difficult as play continues. A novice player is usually able to get
through one complete board after only a few tries. However, only a very
 few experts—who have played literally thousands of games—are able to
 make it up to the highest level of difficulty. The same sort of
 strategy is seen in the design of other popular games such as Space
 Invaders, where the invaders move faster, shelters disappear, and the
 player's life becomes generally more difficult and harrowing as the
 game progresses.
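
The book only describes this progressive-difficulty idea in words. As a
rough sketch of the kind of rule involved (the parameter names and
numbers below are made up for illustration and are not the actual
Pac-Man values), a designer might simply scale a few settings with the
board number:

[code]
def board_settings(board):
    # Illustrative difficulty curve: each successive board speeds up
    # the monsters, slows the player slightly, and shortens the time
    # the monsters stay blue. All numbers are invented for illustration.
    return {
        "monster_speed": 1.0 + 0.05 * board,
        "player_speed": max(0.7, 1.0 - 0.02 * board),
        "blue_seconds": max(0.0, 6.0 - 0.5 * board),
    }

for board in (1, 5, 12):
    print(board, board_settings(board))
[/code]

Because settings like these live in software rather than in mechanical
hardware, retuning them later—the point made in the next
paragraph—amounts to editing a few constants.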
 
 There is another advantage of having the computer control the
 reinforcement schedules. Suppose we turn into a nation of Pac-Man
 experts. Suppose, that is, virtually everyone practiced Pac-Man enough
 to be able to play it perfectly. Wouldn't reinforcement then become
 continuous, with the resultant lack of addiction? No problem. Another
 salient feature of a computer program is that it is very easy to
 modify. It would be a trivial job to make the monsters go even faster
or make Pac-Man go even slower. Less trivial, but still not
especially difficult, would be the insertion of features that are
altogether new—for example, a new monster could be created that is
 even more adept at devouring Pac-Man and generally creating havoc than
 are the current ones. In contrast, the capability of easily changing
 the rules of the game is not present with a precomputer game such as
 pinball, where all such changes would involve difficult-to-modify
 mechanical devices rather than simple-to-modify computer programs.
 
 Other Aspects of Reinforcement
 Knowing about the partial reinforcement effect gives any video game
 designer an edge in designing a particularly appealing game. But there
 is more that the designer needs to know. For example, Pac-Man gobbles
 yellow dots that are worth 10 points each. Why 10 points? Is this the
 best number of points to award a player for each dot devoured? These
 questions raise the important issue of how big the reward ought to be.
 Another principle of reinforcement is necessary for understanding
 behavior, and that concerns the size, or magnitude, of reinforcement.
 
 MAGNITUDE OF REINFORCEMENT
 There is no question that behavior is related to the size of the
 reinforcing event. Rats, for example, will run faster and more
 frequently if they are rewarded with more food rather than less food.
 People will work harder and play longer on a slot machine if they have
 a chance of winning $1,000 than if they have a chance of winning only
 $100. Intuitively, of course, this doesn't seem surprising.
 
 When it comes to video games, however, the issue of reinforcement
 magnitude becomes a bit more interesting. You may have noticed that
 the number of points one accumulates in video games always seems to be
 very large, even if the player is just a novice. For example, in
 Pac-Man, a player acquires 10 points for devouring each dot, 200 to
 1,600 points for devouring the monsters, and so on. Thus even on a
 very first Pac-Man effort, a player can generally score in excess of a
 few hundred points.
 
 Why is this? Why not, for example, just one point per dot and 20 to
 160 points for the monsters? At the very least, there'd be less space
 needed on the screen to display the score. When we look at things in
 terms of reinforcement principles, the reason is clear: large rewards
 lead to faster responding and greater resistance to extinction—in
short, to more addiction—than do smaller rewards. From the point of
 view of the video game manufacturer, of course, points are free—the
 cost of manufacturing and programming the game is the same whether
 small or large numbers of points are awarded to the game's eventual
 players.
 
 Given these considerations, you might ask why the game designers
 stopped where they did in terms of point magnitudes. Why only 10
 points per dot in Pac-Man? Why not 100 or 1,000? The answer is
 twofold. First, the point magnitudes, after all, have to be something,
 and whatever they're made to be they could always be higher. So the
 actual values chosen by the designer are somewhat arbitrary. Second,
 however, at some point people stop having an intuitive grasp of what
 some magnitude means—that is, above some magnitude, any amount is
 psychologically pretty much equal to any other similarly high amount.
 To get a feeling for this phenomenon, imagine that you are a
 participant in a TV game show and you are given the following choice:
 (1) you can either have $1 for sure, or (2) a coin will be tossed, and
 you will receive $10 if the coin comes up heads but nothing if the
 coin comes up tails. Almost invariably people choose the latter
 alternative, figuring that $10 is worth so much more than $1 that the
 chance of winning the $10 is worth the risk of losing the coin toss
 and forsaking everything. But now imagine a new version of the choice:
 either you get a sure $1 million or a coin is tossed and you get $10
 million if the coin comes up heads but nothing if the coin comes up
 tails. Now we find that people almost invariably choose the first
 alternative. For most people, $1 million and $10 million are
 psychologically pretty much the same thing—they're both "very large
 amounts of money." Thus it makes perfect sense, psychologically, to
 opt for the sure thing—the million dollars—rather than the choice that
 involves a 50 percent chance of getting nothing.
 
Bearing this example in mind, we see why video game scores—despite
 principles of reinforcement—can't be too large. In Pac-Man, for
 example, the designer wants the accomplishment of eating a monster to
 be psychologically much greater than the accomplishment of eating a
 dot. This is done by awarding differential numbers of points, but, as
 in the money example, the absolute value of the points can't be too
 large. A million points for eating a dot would, psychologically, be
 not very dissimilar from 20 million points for eating a monster; they
 are both just "huge numbers of points."
 
 DELAY OF REINFORCEMENT
 We have pointed out that any behavior will increase in frequency if
 that behavior is followed by reinforcement. It turns out that the
 delay between the behavior and the reinforcement is, in most cases,
 very important: the shorter the delay, the quicker will the behavior
 increase in frequency. In other words, short delays lead to more
 powerful reinforcement effects.
 
 In many real-life situations, delay of reinforcement is very long. For
 example, if we save money in a savings account, it's a long time
 before we begin to see the interest accumulate or are able to withdraw
 the saved money in order to make some large purchase. Because of this
 delay of reinforcement, the behavior of saving money isn't as frequent
 as it otherwise would be—in everyday language, we say that saving
 money is difficult. So, many people don't save money; instead they
 spend it as soon as they get it, and reinforcement is immediate.
 
 In the case of video games, however, at least some sort of
 reinforcement is always provided immediately. In most cases, a score
 of some sort is prominently posted somewhere in the display, and the
 score changes the instant we shoot down an enemy ship or eat a
 monster. It is, in part, this instant reinforcement that makes the
 behavior of playing video games so satisfying and therefore so
 prevalent.
 
 MULTIPLE REINFORCEMENTS
 One aspect of video games that sets them apart from most other games
 is that they can be, and usually are, much more complicated than
 arcade-type games have traditionally been. Pac-Man, for example, has
 many different ways to reinforce you. You can eat dots; you can eat
 monsters; you can avoid monsters; you can eat the symbols; you can get
 through boards; you can see between-board skits; hear music; and so
 on. Other games, invented more recently than Pac-Man, are even more
 complicated.
 
 From the standpoint of what makes games fun, these multiple
 reinforcements are important because different people enjoy different
 things. By using a "kitchen-sink" approach— that is, by inserting into
 a game a wide variety of things that might be reinforcing—the designer
 winds up with a game that appeals to a wide variety of people and
 will, accordingly, be widely played. This flexibility of video games
 can be contrasted with that of pinball. Pinball, for all its bells and
 whistles, really provides only limited types of reinforcement—you see
 the ball move, you hear the sound effects, you see your score
 increasing, and that's about it. The reason for this contrast, once
 again, is that the computer program that underlies a video game is
 itself infinitely flexible, whereas pinball, being mechanical, has to
 be kept relatively simple or else it will become prohibitively costly.
 
 If video games are reinforcing in a variety of ways, at least some of
 the reinforcement is no doubt extrinsic, taking the form of praise and
 admiration from peers and other onlookers. But it's perfectly possible
 to play a video game by yourself and feel gratified when you do well
 or when you improve your performance. This kind of reinforcement is
 called intrinsic reinforcement. Video games can provide very powerful
 intrinsic reinforcement, which is probably a very important reason for
 their success. The fundamental source of intrinsic reinforcement—the
 very person receiving the reinforcement—is perpetually present.
 
 Cognitive Dissonance
 As we have seen, video games have a variety of ways of reinforcing
 players. But, at least for the games played in video arcades, there is
 another side of the picture: you have to pay for them. You might
 expect that the reinforcement obtained from the games themselves might
 in some sense be countered by the punishment that stems from having to
 insert quarter after quarter. Interestingly enough, however, a large
 body of social psychological research suggests that the opposite may
 be true: games may be more reinforcing, not less, if you have to pay
 for them.
 
 During the 1950s and 60s, a group of psychologists (led by Leon
 Festinger and his colleagues) developed a theory called cognitive
 dissonance to account for some seemingly paradoxical types of
 behavior. The paradox is that people sometimes seem to enjoy things
 that are less reinforcing over other things that are more reinforcing.
 Consider, for example, an experiment reported by Festinger and
 Carlsmith. 2 In this experiment, a group of people performed a
 repetitious, tedious, and thoroughly boring task. After completing the
 task, the group was asked by the experimenter to lie to a new group of
 people —to tell them that the task was more fun than it actually was.
 One group was offered $20 to lie, whereas the other group was offered
 only $1. Finally, after the lies had been told, the people were asked
 to rate how much they enjoyed the original task. It turned out that,
 contrary to what you might expect on the basis of reinforcement
 effects, the $1 group claimed to like the task much better than did
 the $20 group.
 
 Why did this happen? Cognitive dissonance theory assumes that when a
 person performs acts or holds beliefs that are in conflict with one
 another, the person will act so as to reduce the conflict. In the
 $1/$20 experiment, the conflict was between the people's knowledge
 that they were performing a boring task and their knowledge that they
 had told someone else that the task was fun. Why did they lie? People
 who were paid $20 had adequate justification—they were hired guns,
 paid to lie. The $1 group didn't have this handy justification, and
 their only recourse was to change their attitude about the task. By
 believing that the task was more interesting, they created a
 justification for the positive report that they made about it.
 
 We can recast this sort of finding into a statement about extrinsic
 versus intrinsic reinforcement. Given that a person was performing
 some act to begin with, it's necessary to have some kind of
 reinforcement to account for it. If there's extrinsic reinforcement,
 as there was for the $20 group, that's fine. But if there's
 insufficient extrinsic reinforcement, as was true for the $1 group,
 then intrinsic reinforcement had to be generated—the subjects had to
decide that the task was more intrinsically fulfilling. This sort of
 effect can be seen more directly in an experiment by Lepper, Greene,
 and Nisbett. 3 Here, nursery school children were given the choice of
 playing or not playing with marking pens. One group of children was
 given a reward for playing with the pens, whereas another group was
 given no reward. It turned out that the reward group played with the
 pens less than did the no-reward group. Again, this appears to be the
 opposite of what you would expect from reinforcement theory; and
 again, it can be explained if you assume that, in the absence of
 external reward, the pens developed a powerful, intrinsically
 reinforcing quality of their own. As an aside, it's interesting to
 note that this experiment has close analogues in real life, since
 parents will often pay their children for academic success. This
 practice is probably unwise, since such extrinsic reward may remove
 the intrinsic motivation that produces optimal and most satisfying
 academic performance.
 
 WHAT IF VIDEO GAMES WERE FREE?
 Video games have the interesting quality that they take your money but
 provide you with no tangible, extrinsic rewards that you can put in
 your pocket and take home. By anyone's definition, having your money
 taken away from you is not reinforcing —on the contrary, it's
 punishing. This means that video games must develop qualities that
 provide powerful intrinsic reward. Since people are standing there
 having their money taken away, they must develop the attitude that
 whatever they're doing is a lot of fun. In other words, if games were
 free, people would probably like them less. To our knowledge, no
 research has ever been done that has appropriately compared free games
 with money-devouring ones in terms of how enjoyable the games are
 perceived to be. But a large body of psychological research indicates
 that games requiring at least some minimal amount of money (like a
 quarter) would be perceived as more enjoyable than free games.
 
 Does this mean that arcade operators should be advised to make the
 games more expensive? Not necessarily. It's important to keep track of
 the distinction between how much a person enjoys the game on the one
 hand and how much the person plays the game on the other. A game that
 costs a dollar might be perceived as more enjoyable than a game that
 costs a quarter. But there comes a point at which, enjoyable or not,
 cost would be prohibitive and the game wouldn't be played. Thus
 there's a tradeoff between enjoyment and cost, which implies that some
 intermediate game cost is the optimal one. Any scientist would shriek
 in agony at this unjustifiable conclusion.
 
 Regret and Alternative Worlds
 So far we've talked about reinforcement in terms of being rewarded for
 things that you have done—like getting high scores. There's another
 side of this motivational coin, however, which is regret over things
 that you haven't managed to accomplish. In most situations regret is
 something that you just have to live with. But that's not true with
 video games. Often when playing a video game, the game ends because
 you've made a mistake, and you immediately know exactly what you've
 done wrong. "If only I hadn't eaten the energizer in this game before
 trying to grab that cherry," you say to yourself. "I knew it was the
 wrong thing to do, and I did it anyway." But now you don't have to
 just sit there being annoyed and frustrated. Instead you can play the
 game again and correct that mistake. So in goes another quarter. But
 in the process of playing again, you make another mistake. And spend
 another quarter to correct it. And so it goes.
 
 Two psychologists, Daniel Kahneman and Amos Tversky, 4 have recently
 been studying the phenomenon of regret. To give you a flavor for the
 kind of things they've discovered, consider the following question:
 
 Mr. Smith and Mr. Jones both have to catch planes. They're on
 different flights, but since both flights leave at 9:00 A.M., they
 decide to take a cab together. Owing to a combination of unfortunate
 circumstances, the cab is late and doesn't arrive at the airport until
 9:30. On consulting with the airline agent, the two men discover that,
 whereas Mr. Smith's flight left on time at 9:00, Mr. Jones's flight
 was delayed and left at 9:28, only two minutes ago. Who is more upset,
 Mr. Smith or Mr. Jones?
 
 Given this question, people invariably and immediately report that Mr.
 Jones was more upset. Why is this? After all, both men missed their
 flights and, objectively, they're both in equal difficulty. Kahneman
 and Tversky offer an explanation in terms of alternative worlds that
 can be constructed in the mind. They propose that, when some
 unfortunate event occurs, the victim constructs an alternate reality
 in which the unfortunate event didn't occur. The less this alternative
 world differs from reality, the worse the victim then feels. In the
 example at hand, it's very easy for Mr. Jones to construct an
 alternative world in which he caught his flight. "If we had just gone
 through that yellow light instead of stopping for it," he might say to
 himself, "then I would have made my flight." Mr. Smith, on the other
 hand, would have a much more difficult time constructing an
 appropriate alternative world. In order for him to have made his
 flight, they would have had to have gone through the yellow light and
have not gotten stuck behind that trailer truck, and have not had to
 wait so long for the cab to pick them up in the first place. Since Mr.
 Smith's alternative world would differ so substantially from the real
 world, he would wind up with a lot less regret than would Mr. Jones.
 
 Regret in video game play fits quite nicely into this framework. The
 mistake that (in general) ends the game is the last thing, or close to
 the last thing, you did prior to the game ending (eating the energizer
 before trying to get to the other side of the maze in the example we
 gave before). Therefore, the alternate world in which the mistake was
 not made is extremely close to the real world in which the mistake was
 made—and that's just the situation that produces maximal regret.
 Naturally, given the opportunity to make that alternate world a
 reality and eliminate all your regret, you'll avail yourself of the
 opportunity. You play again.
 
 An even more striking example of the alternative world phenomenon as
 it operates in computer games is seen in Adventure. Adventure games
 are very similar to the precomputer game of Dungeons and Dragons. In
 them, the player is placed into some kind of hypothetical mazelike
 environment, where both danger and excitement abound. In one of them,
 for example, you (the player) are in a nuclear power plant and your
 mission is to find and defuse a time bomb that has been placed
 somewhere in the plant by a wicked saboteur. Carrying out this mission
 requires a complex series of actions, not the easiest of which is
 figuring out your way around the plant to begin with: learning not
 only where various rooms, nooks, and crannies are relative to one
 another, but also which actions—complying with the automatic security
 locks and so on—are necessary in order to cross from one place to
 another.
 
 You actually play this game by issuing a sequence of instructions to
 the computer. In return, the instructions are carried out to the best
 of the program's ability; and, also, brief descriptions are provided
 of objects that can be seen and of actions that occur.
 
 A couple of key features heighten the game's interest. First, there
 are lots of ways that you can go wrong and kill yourself. For
 instance, you can accidentally blow up the bomb, you can fall off a
 ledge, you can die of radiation poisoning, and so on. But fortunately,
 you can save the game at any stage, so if you do make a mistake, you
 can go back to the point at which the game had been saved—that is,
 prior to when the mistake had been made. Again we see a classic case
 of an alternative world. "If only I had put on the radiation suit,"
 you say to yourself, "I wouldn't have died that horrible death in the
 radiation chamber." And since the alternative world in which you put
 on the radiation suit is very close to the "actual" world in which you
 didn't, regret is very high. But since you saved the game, you can go
 back and create that alternative world, thereby eliminating the
 regret. So you do. Computer games provide the ultimate chance to
 eliminate regret; all alternative worlds are available.
 
 Research on Video Games
 
 Since video games are a relatively new phenomenon, psychologists have,
 thus far anyway, performed relatively little research on the games
 themselves. One piece of research that has been done, however, is a
Stanford University Ph.D. dissertation by Thomas Malone. 5 Malone was
 primarily concerned with educational techniques, and his dissertation
 was aimed at finding ways of making classroom learning more fun.
 Noting the mass appeal of video games and the intrinsic reinforcement
 that they provide, he ventured to suggest that these games may prove
 to be superb teaching devices. This educational theme recurs
 throughout Malone's dissertation; however, the research that he
 reports was concerned primarily with the features of certain games
 that made them fun to play.
 
 Malone studied school children (kindergarten kids through eighth
 graders). All the children had been playing with computer games in a
 weekly class when the study began in 1979. The survey involved
 relatively nonstandard games, but it is still highly suggestive.
 
 Malone asked the children to rank a variety of computer games they had
 played on a simple four-point scale. He then analyzed the features of
 the games and concluded that incorporation of a specific goal was the
 single most important feature in making a game enjoyable. Other
popular features were score-keeping; audio and visual effects; the
 degree to which players had to react quickly; and randomness
 (unpredictable games are preferred). The games ranked highest
 incorporate these features; those ranked lowest don't.
 
 Malone also asked children what they liked about the games. Almost 40
 percent of the reasons the students gave dealt with fantasy. For
 example: "I like it because it's just like Star Wars." "I like it
 because it has bombs." "I like blasting holes in the other snake." Of
 course, children also provided other reasons. Some liked a game for
 its challenge, others because it was easy to do well. One student
 liked Petball because "you can get a high score very fast.... It's
 easy to get bonuses. I like to win; I'm a sore loser." Interestingly,
 while the children talked a lot about fantasy, the games they rated
 most favorably were notably low in fantasy: perhaps children are no
 better than adults at reporting their own motivations.
 
 The results of Malone's survey provide interesting but inconclusive
 suggestions about what makes the games fun to play. His next step was
 to create new versions of particular games in which some of the
 hypothesized important features were missing. He investigated two
 different games, Breakout and Darts. Breakout requires primarily
 muscle skills, whereas Darts requires primarily thinking, or
 intellectual, skills.
 
 Breakout is a derivative of Pong, the first video game, which was a
 simulation of table tennis. Figure 2.1 shows a typical screen display
 in Breakout. In this game, the player manipulates a knob that controls
 the vertical motion of the paddle, shown on the left side of the
 screen. The paddle is used to hit a ball that bounces against the wall
 on the right-hand side of the screen. The wall is made up of multiple
 layers of bricks, and each time the ball strikes the wall, it knocks
 out one brick. The ultimate goal is to knock out all the bricks from
 the wall. In addition, however, points are awarded for each brick that
 is knocked out, and the score is continuously displayed on the screen.
 The player is allowed a total of three balls and uses up one ball each
 time he misses the ball with the paddle.
 
 Based on the results of his survey, Malone generated a list of
 features that, he hypothesized, contributed to Breakout's popularity.
 There are clear goals: increasing the score by knocking out bricks and
 ultimately tearing down the whole wall. The game keeps score. It has
 audio effects—tones sound when the ball bounces off a wall—and visual
 effects—the ball moves, the bricks break from the wall. It has
 fantasy—the player destroys the wall, perhaps imagining himself
 escaping from prison or rescuing a hostage.
 
 Malone then proceeded to create several versions of Breakout and to
compare them with the original. To gauge the importance of the
challenge of getting a higher score, he created some variations in
which the computer did not keep track of the score. To gauge the
importance of the visual stimulation of watching the bricks break out,
he created some
 variations in which the bricks did not actually break away—the ball
 just bounced back and forth against the wall, with a point being
 awarded for each bounce. To answer other questions, he created other
 variations. Malone's subjects—who, by the way, were Stanford students
 this time, rather than the younger children used in his initial
 survey—played the various games and indicated which ones they liked
 and did not like.
 
 FIGURE 2.1 Breakout display. A player manipulates a knob that controls
 the vertical motion of the paddle shown on the very left of the
 screen. The paddle is used to hit a ball, which bounces against the
 wall on the right-hand side of the screen—a wall made up of eight
 layers of bricks. The score refers to the number of bricks in the wall
 that the player has successfully knocked out. A ball is used up
 whenever the player misses the ball with the paddle, and the number of
 balls left before the game ends is shown underneath the score. From T.
 W. Malone, What Makes Things Fun to Learn? A Study of Intrinsically
 Motivating Computer Games (Palo Alto, CA: Xerox, 1980), p. 24. Used by
 permission of the author and the publisher. Subsequently reproduced in
 T. W. Malone, "Toward a Theory of Intrinsically Motivating
 Instruction," Cognitive Science 4 (Ablex, 1981): 345, and used also
 with permission of Ablex Publishing Co.
 
 The results were clear. The most important feature in determining how
 much the game was liked was the breaking out of a brick when the brick
 was hit by the ball. The versions in which the wall remained intact
 when struck by the ball were not liked nearly as well (even though the
 score increased just as it had before). Two other features—the
 computer's keeping score and the ball's bouncing off the paddle (as
 opposed to being simply ejected from the paddle)—were also important,
 but less so than the gradually deteriorating wall. Why is the breaking
 out of the bricks so appealing? Although the experiment doesn't permit
 a definite answer, there are various possibilities. Watching a
 deteriorating wall of bricks provides visually compelling
 entertainment, it provides a cumulative scorekeeping device, and it
 shows you how far you are from reaching the ultimate goal of
 destroying the wall. Any one or a combination of these effects could
 be responsible.
 
 Malone showed clearly that when both the score and the brick
 destruction were removed from the game, people didn't like it at all.
 Without these features, the game had very little purpose. In this
 degenerate version of Breakout, the players might try to keep the ball
 moving as long as possible but they have no easy way of knowing how
 well they are doing. Without the goal the game is no fun.
 
 In Breakout, people learn a sensorimotor skill—they learn how to move
 the paddle in such a way as to successfully maneuver the ball. But
playing Breakout doesn't require any higher-level skills such as
 thinking, remembering, or problem solving. For this reason, Malone
 next turned his attention to a new game—Darts—that did teach a bona
 fide academic skill, that of estimating magnitudes on a number line
 and expressing them as mixed numbers. (A mixed number is an integer
 plus a fraction, such as 1 3/8.) This experiment (in which fifth
 graders were used as subjects) revealed, among other things, some
 intriguing differences between the types of games preferred by boys
 and girls.
 
 In the game of Darts, a number line is presented with specified
 numbers defining the ends of the line, as shown in figure 2.2. There
 are three "balloons" protruding from the line, and the player's job is
 to decide which numbers correspond to the positions of the balloons.
 The player types a guess, and a dart (shown on the right) is moved to
 the position indicated by the player and fired. If the number
 corresponds to a balloon's position on the line, the balloon is burst.
 In the example shown in figure 2.2, if the player were to type 3 3/16,
 the lowest balloon would burst. If the number doesn't correspond to a
 balloon's position, the dart remains stuck in the line and the
 incorrect number that had been typed in is indicated, as shown in
 figure 2.2. Here the player incorrectly typed 3 7/8. A total of three
 darts is provided; thus a perfect player can burst all three balloons.
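
As a rough illustration (not taken from the book), the guess-checking
step of a Darts-style game can be sketched in a few lines of Python.
The balloon positions, the exact-match rule, and all the names below
are assumptions made purely for illustration, not a description of how
Malone's actual program worked:

    from fractions import Fraction

    # Assumed balloon positions on the number line: 3 1/8, 3 1/2, and 3 3/4.
    balloons = [Fraction(25, 8), Fraction(7, 2), Fraction(15, 4)]

    def parse_mixed(text):
        # Convert a typed mixed number such as "3 7/8" into an exact Fraction.
        parts = text.split()
        return Fraction(parts[0]) + (Fraction(parts[1]) if len(parts) == 2 else 0)

    def fire_dart(guess_text):
        # A balloon bursts only if the guess matches its position exactly.
        guess = parse_mixed(guess_text)
        if guess in balloons:
            balloons.remove(guess)
            return "Pop! The balloon at " + guess_text + " bursts."
        return "The dart sticks in the line at " + guess_text + "."

    print(fire_dart("3 7/8"))  # miss: the dart stays on the line
    print(fire_dart("3 1/2"))  # hit: the middle balloon bursts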
 
 FIGURE 2.2 Darts display. Thomas Malone describes the set-up this way:
 "Three balloons appear at random places on a number line on the
 screen, and players try to guess the positions of the balloons. They
 guess by typing in mixed numbers (whole numbers and/or fractions), and
 after each guess an arrow shoots across the screen to the position
 specified. If the guess is right, the arrow pops the balloon. If
 wrong, the arrow remains on the screen, and the player gets to keep
 shooting until all the balloons are popped" (p. 31). In this example,
the player has previously made an incorrect guess that 3 7/8 is the
position of a balloon. But he's right this time by typing in 3 1/2, and
 the dart will move across the screen to that position, bursting the
 middle balloon. From T. W. Malone, What Makes Things Fun to Learn? A
 Study of Intrinsically Motivating Computer Games (Palo Alto, CA:
 Xerox, 1980), p. 32. Used by permission of the author and the
 publisher. Subsequently reproduced in T. W. Malone, "Toward a Theory
 of Intrinsically Motivating Instruction," Cognitive Science 4 (Ablex,
 1981): 349, and used also with permission of Ablex Publishing Co.
 
 In addition to the obvious visual effects, there are abundant auditory
effects in this game. For example, circus music begins the game, and,
to reward the player who pops all three balloons, a short song is
played.
 
 Malone again tried to find out what it was about the game that made it
 fun and whether any new variations would make it more enjoyable. He
 created a version in which, after each incorrect try, the player was
 told in which direction and by how much the answer was wrong. In other
 words, the player was given "constructive feedback" such as being told
 "A little too high" or "Way too low." In other variations, the
 balloons were broken, but not by the darts; rather, when the correct
 position was typed in, a balloon over on the right side of the display
 burst. Thus the visual display of bursting balloons was more or less
 the same in the two versions; however, in one the player could
 fantasize that the dart itself was bursting the balloon, whereas this
 fantasy wasn't possible in the other version. Finally, some versions
 of the game had the original music, whereas other versions had no
 music.[/SPOILER:a5d5496eb9]
				Chapter 2 Continued
[SPOILER:db06e850b3] Different players played the different versions of the game and
 then indicated how much they liked it.
 
 The most intriguing result to emerge from the experiment was that boys
 and girls differed substantially in terms of which features determined
 their preferences. For every feature that was examined, girls and boys
reacted in opposite directions—if boys liked a particular feature,
 girls disliked it, and vice versa. Some of the most striking
 differences were the following: Girls liked music, whereas boys
 disliked it. Girls liked (and boys disliked) being told (verbally) how
 they were doing, whereas boys liked (and girls were relatively
 indifferent to) having a visual, or graphic, representation of how
 they were doing. Finally, boys liked having bursting balloons and
 especially liked the version in which the balloons appeared to be
 burst directly by the darts. Girls disliked both of these balloon
 representations, and especially the latter.
 
 In addition to these game characteristics, Malone also found that
 certain characteristics of the group he studied influenced how well
 the students liked the game. For example, those students who
 considered themselves to be good in math liked the game better than
 those who considered themselves to be poor in math. Students who
 thought they did well in the game liked it better than those who
 thought they did poorly in the game—although preference was unrelated
 to how well students actually did, providing further evidence for a
 distinction between what students think and what they do.
 
 Malone worried that in reporting his results he risked perpetuating
 stereotypes of human beings based upon their gender. There is ample
 reason to believe, he urged, that the game preferences primarily
 reflect differences in the ways boys and girls are socialized in our
 culture. But whatever the basis for these preferences, it is important
 to understand them. For example, if a mathematical game like Darts
 happens to be designed in a way that appeals more to boys than to
girls, then a sex difference in attitudes toward mathematics may be
unwittingly created.
 
 The boys in this study liked the arrows and balloons fantasy, while
 the girls did not. Why? One possibility is that destroying balloons
 with arrows is aggressive, and the aggressiveness underlies the
 difference in preference.
 
 A major reason given for why video games are fun is that they are
 responsive. In a world in which people are often too wrapped up in
 themselves to give you the time of day, the games are just the
 opposite. As a player, you get feedback all the time. The experiment
 with Darts showed that fantasy was more important than feedback, but
 as Malone has pointed out, the fantasy in these games is a unique form
 of responsive fantasy. The fantasy is feedback.
 
 When we watch a movie or read a book, we passively observe the
 fantasies. When we play a computer game, we actively participate in
 the fantasy world created by the game. For this reason alone, the
computer game might be an ideal vehicle for learning. We'll return to
 this educational theme in chapter 5, but for the moment, it's worth
 noting that Malone felt he had the beginnings of a learning theory
 that was "intrinsically motivating." By this, he meant a kind of
 learning in which reinforcement comes from within the person rather
 than from the outside world. For Malone, there are three major
 ingredients inherent in the student-computer game experience that make
 the games such ideal vehicles for learning. These three ingredients
 are: challenge, fantasy, and curiosity.
 
 Challenge comes into play because the games provide a goal to be
 reached and an uncertain outcome. The ideal game, if it is to provide
 the challenge necessary for true intrinsic motivation, undoubtedly
 includes an element of chance—or at least something that seems like
 chance to the learner. In card games, the cards are typically dealt
 randomly to players, and uncertainty is thereby introduced. Similarly,
 in a game like Darts the particular problem to be solved at any
 moment, whether it is the location of 3 1/2 or 2 5/8, is more or less
 randomly determined. Challenge is also achieved by having a "variable
 difficulty level": as the player gets better at the game, the game
 gets harder. The effort, skill, or knowledge required to reach some
 subgoal may increase. This keeps the player from getting bored,
 providing the requisite challenge.
 
 Fantasies are the second ingredient for making a learning environment
 more interesting and more educational. For Malone, fantasy-inducing
 environments are those that evoke mental images—images of physical
 objects such as balloons or images of social situations such as being
 the ruler of a kingdom. Long before Malone's work, theorists such as
 child psychologist Jean Piaget assigned a central role to make-believe
 play in the development of skills in children. So Malone's idea about
 the importance of fantasy is not completely new; his contribution,
 rather, is to identify the important role it plays in making video
 games an ideal vehicle for learning.
 
 The final ingredient for making a learning environment more
 interesting is the evocation of the learner's curiosity. For this, one
 needs to provide an optimal level of informational complexity. By
 optimal level, Malone means that the environment should be neither too
 complicated nor too simple with respect to how much the learner
 already knows. The world of learning should be novel and surprising;
 at the same time, it cannot be incomprehensible. Finding the optimally
 complex environment constitutes the fundamental challenge for the
 designers of the educational video games of tomorrow.[/SPOILER:db06e850b3]
 
 Chapter 3: Games And The Cognitive System
 [SPOILER:db06e850b3]CHAPTER 3: GAMES AND THE COGNITIVE SYSTEM
 Ability is the focus of this chapter. What aspects of mind figure in
 the performance of an act requiring complex skills, such as playing
 video games? Psychologists refer to the mind as the "cognitive system"
 because the "mind" isn't really a unitary entity. Rather it is an
 elegant system of delicately intertwined and finely tuned components.
 The means by which these components are combined into an ability—such
 as the ability to play a video game—is called a strategy. In the pages
 to come, we'll describe both the components themselves and the ways in
 which they can be combined into strategies.
 
 A major theme of the chapter is that quite different strategies can be
 used for accomplishing the same mental goal—the goal, for example, of
 being good at a particular video game. Which strategy is appropriate
 for a particular person depends on which of his or her mental
 components are good. Some people have very fast reaction times, while
 others are good at memorizing things.
 
 A second, related theme is that of time and how long it takes to do
 things. A typical person has a reaction time of about a fifth of a
 second. We'll see that the quality of many mental components is
 measured in terms of the amount of time that a component takes to do
 something. Most video games are designed so that if you're faster than
 the game at something, you win; if you're slower, you lose.
 
 We have referred to the cognitive "system" and its "components." To
 illustrate these concepts, we'll use a familiar example: a stereo
 system. A sophisticated system might consist of a turntable/cartridge;
 amplifier; tuner; reel-to-reel and cassette tape deck; several pairs
 of speakers, any combination of which can be in operation at any given
 time; and two sets of stereo headphones. To evaluate the system, we
 would have to consider the quality of each component and then the
 degree to which the user is adept at combining the components to make
the system capable of carrying out a variety of functions—producing
 music for a party, providing a soothing background, masking the sound
 of outside traffic, making tapes for the car stereo, and so forth.
 Since the system is so complex, it's capable of doing each of these
 things in a variety of ways. It's the user's job to figure out which
 is the best way for any given task and to configure the system
 accordingly.
 
 The Mind as a System
 The cognitive system, too, can be conceptualized as consisting of
 components, and a particular combination of these mental components,
 designed to accomplish some particular goal,
 is referred to as a strategy. Later we shall discuss how specific
 strategies are appropriate for specific people playing specific video
 games. But first, let us introduce the mental components themselves. 1
 
 SENSORY MEMORY
 At any given moment, our five senses are being bombarded by a
 tremendous amount of incoming information from the environment. When,
 for example, you're standing in a video game parlor playing Donkey
 Kong, visual information originating from the game screen, as well as
 from much of the rest of the video parlor, is entering the cognitive
 system via your eyes. Auditory information in the form of honks and
 beeps from your game and others, along with the cries, whispers, and
 conversation of the denizens of the parlor, is entering the cognitive
 system through your ears. You're receiving tactile information from
 the feel of the buttons and levers of the game through the skin of
 your fingers, olfactory information about the hot dog being consumed
 by the person standing next to you, and gustatory information about
 the soft drink that you're sipping in between button pushes.
 
 All information that enters the system through the sense organs is
 initially placed into a sensory memory. One sensory memory corresponds
to each sensory modality; thus there are five sensory memories in all.
 Each sensory memory has a very large capacity for holding
 information—indeed, experimental evidence suggests that a sensory
 memory may hold all the information that initially enters the system
 from the environment. But information in it doesn't stay around very
 long. In the case of the visual modality, for example, information
 remains in the sensory buffer for only about a quarter of a second (or
 250 milliseconds). Within this short time it is transferred to the
 next storage area of the cognitive system or decays away and is lost
 forever.
 
 ATTENTION
 If, as in our example, you were at the video parlor playing Donkey
 Kong, you would need some but by no means all of the information
 entering the system through your eyes, only a very small amount of the
 information entering through your ears and skin, and probably none of
 the information entering through your nose or tongue. Not only do you
not need this excess information, but if you were to hold onto it, it
 would probably hinder you in your attempt to play the game
 efficiently.
 
 However, some portion of the incoming information is critical for you.
 You have to be able to see what barrels are rolling toward you, for
 example, or you're most certainly going to be hit by them. So the
 question is: How do we filter out the information we don't need, while
 at the same time retaining the information that we do need?
 
 This filtering process is what is referred to as attention (or
 selective attention), and people generally filter information very
 efficiently. Psychologists have shown this efficiency in studies of
 the "cocktail party phenomenon." 2 Imagine that you're sitting on a
 couch at a crowded cocktail party in which a number of conversations
 are occurring simultaneously. Bill and Jim are talking on your left
 and, at the same time, Sue and Jane are talking on your right. If you
 attend to Bill and Jim's conversation, then you'll find that you're
 completely unaware of Sue and Jane's conversation. However, it's
 perfectly possible to switch your attention to the right-hand
 conversation, at which point you'll stop being aware of the left-hand
conversation. This switch of attention doesn't require moving a
muscle—it is something that occurs completely within your mind. All the
 conversations from the entire party, including the two in question,
 have been entering your sensory memory, but you have been attending
 to, and thus have been aware of, only one conversation at any given
 time. All the others have been eliminated—filtered out of sensory
 memory and quickly lost from the cognitive system.
 
 The cocktail party example involved sound—information coming in
 through the auditory modality. There are analogous instances of such
 attentional effects in the visual modality. Suppose, for example, that
 you're playing the game of Sabotage. Sabotage works as follows: you,
 the player, are in charge of a large cannon that sits on the ground.
 As the game progresses, you're attacked by a variety of flying
 objects, chiefly helicopters, and paratroopers that are dropped by the
 helicopters. You can use your cannon to shoot down both the
 helicopters and the paratroopers. You are charged one point per shot,
 but you earn various numbers of points for everything that you shoot
 down. More points are awarded for destroying helicopters than for
 destroying paratroopers. However, if four paratroopers manage to land
 unscathed, they will team up to sabotage you, thereby resulting in the
 destruction of your cannon and the termination of the game.
 
 In this game you tend to concentrate on destroying helicopters until a
 disturbing number of paratroopers are in the air, at which point you
 concentrate on the paratroopers. Thus you attend to different sets of
 incoming information. While attending to the helicopters, for example,
 you're quite unaware of the paratroopers—indeed, you have to
 periodically shift attention away from the helicopters just to make
 sure that no paratroopers have slipped in unnoticed. Likewise, while
 concentrating on shooting down the paratroopers, you almost completely
 lose track of the helicopters. Again we see that all environmental
 stimuli—both the helicopters and the paratroopers—are perpetually
 registered by the cognitive system in the sense that they all enter
 the sensory memory. 3 However, your attentional abilities allow you to
 attend to only one set of stimuli or the other.
 
 Since performance in video games depends, in large part, on the speed
 at which you're able to do things, the question of how fast you can
 shift your attention from one set of information to another is quite
 important. In Sabotage, for example, if you waited too long to notice
 the paratroopers, your game would quickly be over.
 
 Some shifts of attention involve eye movements. What are eye
 movements? An explanation requires a short digression here. The entire
 area that we can see at all is called the total visual field. The area
 that's directly in the center of the visual field is called the
 central field and the rest is called the visual periphery.
 
 Because of the way our eyes are built, there's much of the visual
 field that we can't see very well at any given instant. Rather, we can
make out fine details only in the central field, which is quite
 small—less than 1 percent of the total visual field. To demonstrate
 this, try focusing your eyes on one word of text in this book. If you
 keep your eyes steady, you'll find that only about one word is really
 readable. Words on either side of the one you're focusing on—as well
 as words above and below it—are fuzzy and indistinct.
 
 What about the periphery? We can see objects in the visual periphery,
 but we can't see them very well. Nonetheless, the visual periphery is
 very useful. For instance, we're able to detect when something new
 appears in the periphery, or when something moves or changes color.
 Detection of such changes in the periphery is often a sign that
 something interesting or important is happening there. Some event in
 the visual periphery often signifies that you should shift your gaze
 to the area where the event is occurring, in order to assess what's
 happening. Thus we need to make eye movements in order to keep
 ourselves updated on what's happening in the world. When playing
 Sabotage, a quick eye movement to that fuzzy object in the periphery
 can tell you that a bomber is on its way and that immediate action is
 necessary.
 
 Not all eye movements are alike. One common type is called a saccade
 (French for "jerk" or "jolt") which is a quick jump of the eye from
 one place to another. In between saccades are periods during which the
 eye is relatively stationary; these are called fixations. It is during
 these fixations that information gets into the mind; nothing gets in
 while the eye is making a saccade. 4 When doing something like video
 game playing, where things are happening at a rapid clip, making
 saccadic eye movements turns out to be time-consuming. Saccades
 themselves take place quite rapidly—most take less than a thirtieth of
 a second to complete. But a bottleneck arises because once the eye
 arrives somewhere, it is forced to stay there for a minimum of about a
 fifth of a second before it can move again. That is, fixations last a
 minimum of about 200 milliseconds.
 
 This can cause problems if, for example, you move your eye to some
 particular place, quickly assess what's going on there, and then
 notice via your peripheral vision that something else important is
 happening elsewhere. Because of the inherent physiology of our visual
 system, you're stuck where you are for about a fifth of a second
 before you can switch your gaze to investigate this new development. A
 fifth of a second may not seem like much in the grand scheme of
things, but in a video game events are taking place so fast that being
able to do something in, say, a tenth of a second instead of a fifth
can make a big difference.
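
A quick back-of-the-envelope calculation (our own arithmetic, not the
authors') shows how tight this constraint is. If each saccade takes
roughly a thirtieth of a second and each fixation must last at least a
fifth of a second, the eye can sample only a handful of separate screen
locations per second:

    # Rough ceiling on how many places the eye can inspect per second,
    # using the approximate timing figures quoted in the text.
    saccade_s = 1 / 30    # time spent jumping to a new location
    fixation_s = 0.200    # minimum dwell time once the eye arrives

    locations_per_second = 1 / (saccade_s + fixation_s)
    print(round(locations_per_second, 1))  # about 4.3 locations per second at best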
 
 In addition to switching attention via eye movements, it's also
 possible to switch attention without moving our eyes if we're
 switching attention between things that are very close together.
 Again, it's easy to demonstrate this to yourself. Stare again at a
 word of text; don't move your eyes. Notice that you can switch
 attention back and forth between two adjacent letters in the word.
 This type of attention shift takes about a twentieth of a second (50
 milliseconds) to carry out.
 
 SHORT-TERM MEMORY
 Via the practice of selective attention, only certain information from
 sensory memory actually gets noticed. But what actually happens to the
 objects that we attend to? Attended information is transferred—that
 is, copied from—sensory memory to a new component of the cognitive
 system referred to as short-term memory.
 
 Short-term memory has several salient characteristics. First, it is
 generally identified with consciousness. That is, whatever we're
 currently aware of, or conscious of, is exactly that information
 currently in our short-term memory. Second, short-term memory has a
 relatively small capacity. In contrast to sensory memory, which
 appears to be of virtually unlimited capacity, short-term memory can
 hold only about seven items—it's large enough to hold a seven-digit
 telephone number, for example. We can access the contents of
short-term memory very quickly—if, for example, you're holding a
 string of digits (such as a telephone number) in your short-term
 memory, you can scan through them at the rate of about thirty digits a
 second (roughly one digit every 33 milliseconds). We lose information
 from short-term memory moderately quickly; information in it will
 generally be forgotten after fifteen to twenty seconds. So if you had
 just looked up a telephone number and someone interrupted you to ask a
 question, the number would probably be forgotten. However, this
 forgetting process can be prevented by rehearsal: by repeating the
 contents of short-term memory over and over to ourselves, forgetting
 will be prevented. By rehearsing information, we can keep it in
 short-term memory indefinitely. Finally, short-term memory is also our
 "working memory." It's where information is manipulated when we plan
 things, figure things out, and so on. This is important, because if
 we're maintaining a lot of information in short-term memory via
 rehearsal, we'll have less short-term memory capacity left to do other
 things, such as planning strategies and focusing attention. Suppose
 that you're playing Defender. While you're playing, you have a good
 deal of planning to do. You have to be constantly thinking about where
 you'll be aiming, whether you might want to escape into hyperspace,
 and so on. In order to carry out all of this, it is important to have
 your short-term memory clear. Short-term memory is like the amplifier
 in the stereo system; it's the heart of the system, and it's important
 to learn to use it as efficiently as possible.
 
 LONG-TERM MEMORY
The next major component of the cognitive system is long-term
 memory—our repository of general knowledge. It contains such things as
 our name, our ability to speak the language, things that we've learned
 at work or in school, and so on.
 
 The storage capacity of long-term memory is virtually unlimited.
 Further, while information can be forgotten, such forgetting is
 relatively slow. Whereas information is lost from sensory memory in
 less than a second, and from short-term memory in less than a minute,
 information will remain in long-term memory for days, months, years,
 or even decades. How long it will remain depends on how well it was
 originally stored there. Since information makes its way into
 long-term memory via short-term memory, it's necessary to keep
information in short-term memory for some period of time in order to
 get it into long-term memory. This makes intuitive sense. If you're
 told a person's name and don't attend to it at all—or even if you
 attend to it but then immediately forget it—you'll be unable to
 remember the name later on.
 
 In general, you'll find that if you just maintain information in
 short-term memory by rehearsing it, then the longer you maintain it,
 the better it will be entered into long-term memory. The efficiency of
 entering information into long-term memory can be improved by
 so-called elaboration methods. They include such tricks as forming
 mental images of whatever it is you're trying to remember, associating
 the to-be-remembered information to things that you already know, or
making up rhymes such as "Thirty days hath September . . ."
 
 When you learn a new video game, you have to remember many things
 about how to play the game and what the consequences are of various
 actions. Under what circumstances is it useful to turn tail and run
 instead of taking an offensive stance? How long will your armored
 shield last before becoming useless? How many points does it cost you
 for each shot? And so on.
 
 In playing video games, speed is of the essence—particularly the speed
with which you can retrieve information from long-term memory.
 Psychological experiments have revealed that when you're confronted
 with a very familiar symbol, such as a letter, it takes you about a
 tenth of a second to retrieve, or to recognize, the name of that
 symbol. 5 This fact is important in a game such as Asteroids, in which
 various types of objects (for example, large asteroids, small
 asteroids, UFOs, and so on) appear at random times and in random
 places, and it's your job to identify them as soon as possible so you
 can take appropriate action. Since it takes about a tenth of a second
 to determine what each one is, a limit is placed on how fast you can
 deal with these objects as they appear. A game designer could thwart
 the efforts of most people to play a game by designing the game so
 that players are required to recognize objects in only a twentieth of
 a second.
 
 When you sit down to play a new video game, you will find that the
 games you played earlier in the day can influence how well you do on
 the new game. The earlier games can actually interfere with your
 ability to learn the new one. Interference more generally is an
 important characteristic of long-term memory; it refers to the problem
 you have remembering one thing as a result of learning some other,
 related thing.
 
 Interference can work in two directions—forward and backward. So,
 games you learned earlier can influence a new game you are currently
 learning. But the game you are currently learning can also influence
 the ease with which you will learn future games. This is especially
true if the games are similar to one another. Once you know the
problems that interference can create, some choices of which games to
play in succession are wiser than others.
 
 Suppose you've learned to play Asteroids and you then become intrigued
 with Defender, which is similar to Asteroids but also has some
 important differences. You may concentrate on Defender for a while and
 become quite good at it. However, if you then go back to playing
 Asteroids, you may discover that your game has deteriorated and that
 you're now making responses that are appropriate to Defender—the game
 you just learned—but not appropriate to Asteroids, the game you
 originally learned. Learning Defender would have created retroactive,
 or backward, interference with respect to playing Asteroids.
 
 Similarly, suppose that you have learned a whole series of
 "shoot-'em-down" type games such as Astro Blaster, Space Invaders,
Galaxian, and so on. Now you're getting a little bored and want to
 learn a new game. If the new game is another shoot-'em-down type—for
 example, Phoenix—you'll find that it will be hard to learn;
 interference from the games you already know will cause inappropriate
 responses. This would be an instance of proactive, or forward,
 interference. Chances are that you would have an easier time learning
 an entirely new kind of game, such as Pac-Man or Donkey Kong.
 
 One final aspect of long-term memory is pertinent to the learning of
 video games. You may find that when you start learning a new game,
 you'll play continuously for hours and hours. Not only will this tend
 to deplete your supply of quarters, but, it turns out, it's not the
 optimal way to learn. For obvious reasons, this kind of learning
 strategy is referred to as massed practice. Massed practice has been
 found to be inefficient relative to spaced practice, in which you take
 numerous breaks between games. You may have noticed that if you play
 many games in a short period of time, you eventually seem to be
 getting worse rather than better. Moreover, if you take a break and
 return the next day, let's say, then on your very first try you may do
 the best you've ever done. This is known as reminiscence. It's
 probably the most dramatic example of the advantages of spaced
 practice.
 
 EXPECTANCY
 Suppose the game you are playing requires you to press a button the
 moment you notice that an enemy saucer has materialized out of
 hyperspace onto your screen. This is an example of one of the most
 fundamental tasks the cognitive system has to do—it has to respond as
 soon as possible after some event occurs in the visual field. Earlier
 we mentioned that it takes about a fifth of a second to react to such
 a stimulus. However, this figure is highly dependent on the degree to
 which you expect the event to occur. If you're not expecting
 something, it takes longer to react; if you are expecting something,
 it takes less time to react.
 
 When you're learning to play a video game, therefore, it's important
 to know as accurately as possible when things are likely to occur so
 that you can anticipate them and react as quickly as possible. The
 difference between a reaction time of a fourth of a second (250
 milliseconds) and a fifth of a second (200 milliseconds) can easily be
 the difference between shooting down the enemy and getting shot down
 yourself. In any event, being able to anticipate is a matter of
 learning contingencies among various events. In other words, given
 that some particular event has occurred—say, the appearance of an
 enemy ship in Space Invaders—what is most likely to happen next?
 
 In fact, a variety of things could happen next, and what will happen
 depends on the goals of the people who originally designed the game.
 If they wanted to make things very difficult for you, they could
 design things to happen completely randomly, in which case no event
 will be predictive of any other event and you will never be able to
 put expectancy to use. However, most games (and real life) do not work
 this way. Usually, the occurrence of a particular event provides you
 with information about what will happen in the immediate future: some
 events have an increased probability of occurring, whereas others have
 a decreased probability of occurring. It is an important task of
 long-term memory to store these event dependencies, and good players
 concentrate on doing just that. In other words, when learning to play
 a game, they concentrate on what events are likely to follow—or not to
 follow—what other events. This way, they are able to use this
 information in the future and set up appropriate expectancies for what
 is about to happen. The major benefit of this strategy is that these
 players are able to respond faster, and that is one of the major
 reasons that they are good players.
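
One concrete way to picture "storing event dependencies" is as a table
of conditional counts: given that some event has just occurred, how
often has each other event followed it? The short Python sketch below
is purely our own illustration of that idea, with made-up event names;
it is not a claim about how any real game or player works:

    from collections import defaultdict

    # For each event, count how often each other event has followed it.
    followers = defaultdict(lambda: defaultdict(int))

    def observe(sequence):
        # Record the event-to-next-event transitions seen in one play session.
        for current, nxt in zip(sequence, sequence[1:]):
            followers[current][nxt] += 1

    def most_expected(event):
        # The follower seen most often after this event -- the thing to anticipate.
        if not followers[event]:
            return None
        return max(followers[event], key=followers[event].get)

    observe(["ship appears", "ship fires", "ship dives", "ship appears", "ship fires"])
    print(most_expected("ship appears"))  # -> "ship fires"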
 
 THE VERBAL/VISUAL DISTINCTION
 The graphic designs, the funny bleeping sounds, and the brief verbal
 messages are some of the most enticing qualities of video games.
Occasionally the mind is strained when it is forced to deal with all
 of this incoming information at once. Coping with visual information
 (such as the designs), auditory information (such as the bleeps), and
 verbal information (such as the messages) simultaneously can, however,
 be a lot easier for a person than coping with multiple visual,
 multiple auditory, or multiple verbal inputs at one time.
 
 To simplify the discussion, let's consider the visual versus verbal
 comparison. It's fairly clear that we have two separate mental
 subsystems to handle these two separate types of inputs. Further, it
appears that the two mechanisms can operate independently of one
 another. To see what we mean by this, let's return to the stereo
 system example. Suppose you wanted to record from a record and from a
 radio at the same time. You would be able to do this by recording from
 the record on your cassette recorder at the same time that you
 recorded from the radio on your reel-to-reel recorder. Like the
 handling of verbal and visual information by the cognitive system,
 these two operations could be carried out simultaneously and
 independently by the stereo system.
 
 To get a feeling for the presence of both your verbal and visual
 subsystems, try the following demonstration. First, imagine the block
 letter E. Now imagine yourself going around the letter, identifying
 each corner as an "in" corner or an "out" corner. The speed at which
 you can perform this task depends very strongly on the manner in which
 you make the actual identification of each corner. Try it in two
 different ways. First, just say (out loud) either "in" or "out" as you
 mentally arrive at each corner. Now try it again, but this time, point
 to either your left or your right to signify "in" or "out." You'll
 find that the pointing method will take you much longer than the
 speaking method. It's usually a very powerful and dramatic effect. 6
 Why does this effect occur? The reason is that the task of imagining
 the block letter and determining whether a particular corner is an in
 or an out corner is a visual task. Pointing is another visual task,
 whereas speaking is a verbal task. Thus, when you're imagining the
 corners and pointing at the same time, you're doing two visual tasks
 at the same time, which overloads the visual mechanism. However, when
 you're imagining the corners and speaking at the same time, you're
 doing one visual task and one verbal task. Your visual and verbal
 mechanisms don't interfere with one another; they have no trouble
 operating at the same time.
 
 This independence of visual and verbal mechanisms manifests itself in
 a variety of ways when video games are being played. Practiced video
 players are perfectly capable, for example, of holding a
 conversation—with colleagues, with themselves, or with the machine
 itself—without impairing their ability to execute the visual/motor
 activities needed to play the game. Such players can also execute
 these abilities at the same time as they are verbally working out a
 strategy for the seconds to come ("Let's see, I'll pick up that
 energizer in the upper left-hand corner, then zoom to the middle of
 the board for the cherry, then get all the dots in the lower right,
 but leave the energizer intact . . ." a player might say to herself as
 she deftly gobbles up the dots and avoids the monsters). However, the
same player would be ill-advised to imagine one path of Pac-Man—a
 visual activity—while at the same time engaging in the other visual
 activity of actually guiding Pac-Man around the maze.
 
 From the standpoint of video game playing, one important facility that
 is associated with the visual subsystem is that of mental
 transformations. In general, a mental transformation is the process of
 taking some visual stimulus and imagining it to be in some physical
 state other than the one it's in. For example, you could look at an
 object in the room, such as a chair, and mentally shrink it or expand
 it, or place it somewhere else in the room, or rotate it to another
 position. To get a feeling for what a mental transformation is,
 suppose you are driving a car, headed south. Suppose also that you
 must make a complex series of turns to get where you are going and you
 must consult a map. But a problem arises: if you hold the map in its
 normal way, with north facing upward, since you are driving south, the
 directions on the map won't correspond to the directions in which you
 must go. There are two common solutions to this problem. Some people
 will keep rotating the map, so that "up" on the map will always be the
 same as the direction in which the car is traveling. Other people,
 however, have the ability to "mentally rotate" the map so that they
 can always imagine it as being oriented in the same direction as the
 car. This latter solution involves a particular kind of mental
 transformation known as mental rotation.
 
 It's easy to see how an ability to perform mental rotations could help
 your video game playing. In Asteroids, for example, you must mentally
 move a target asteroid to where it's going to be in a few seconds and,
 at the same time, mentally rotate your cannon to see if you're going
 to be in the correct position to shoot it down. Likewise, when objects
 move off the screen, you must be able to mentally calculate where
 they're going to reappear if you're going to keep an edge on the game.
 
 The map-reading example illustrates that people differ in their
 ability to perform mental transformations. Some people are able to
 rotate the map mentally, whereas others must rotate it physically in
 order to understand where they're going. Using an ingenious procedure
 developed by Roger Shepard 7 of Stanford University in which people
 are timed while they mentally rotate objects, it has been found that
 people who are good at visualizing things are faster to mentally
 rotate objects than are people who are poor at visualizing. Moreover,
 children and elderly adults are slower to mentally rotate than are
 middle-age adults. Men are occasionally faster than women but
 sometimes the sexes perform equally quickly. This observation enables
 us to explain why a person can perform exceptionally well on one video
 game but not on another. If the second game requires especially fast
 mental rotation and the person happens not to be an especially fast
mental rotator, he or she may never be able to master it.
 
 People also differ quite substantially in their ability to process
 information visually versus verbally. For example, males tend to do
 better than females on those spatial tasks that require the
 visualization or manipulation of objects in space. However, the
 advantage that males have over females is rather slight, and most
 probably arises from different learning experiences rather than from
 any innate sex distinction. More generally, it is clear that some
individuals—regardless of their sex—do better at one thing relative
 to another.
 
 We have already suggested that good visualizers have an edge in video
 game playing relative to poor visualizers. The reason for this, of
 course, is that video games, by their very nature, require visual
 thinking. The visually represented objects on the screen are
 constantly changing, and a person who is able to mentally track these
 changes, and who can imagine what the configuration of objects will be
 several seconds hence, is in a better position to plot the appropriate
 actions than is the person who doesn't have these abilities. We can
 speculate that these individual differences in proclivity to use
 visual versus verbal strategies are, in part anyway, what makes some
 people seem inherently good at playing video games whereas others seem
 inherently not so good.
 
 We have already described how different strategies may be used to
 accomplish the same goal. The distinction between visual and verbal
 thinking provides an apt example of how different strategies may be
 put to use. As we have mentioned, most video games emphasize the use
 of visual skills. Where does this leave a person who isn't so good at
 visual thinking? Probably the best solution for playing the games is
 to work out novel strategies that emphasize verbal skills instead. In
Pac-Man, for example, progress can be made in various ways. One way
 is to just rely on your instincts, judging which way the monsters and
 you are going to be headed and trying generally to aim Pac-Man so that
 he and they won't converge when they're not blue but will converge
 when they are. This kind of "seat of the pants" strategy basically
 makes use of visual skills.
 
 But you could also use more rational, logical, verbal strategies. For
 example, you could memorize and plan out various routes that you have
 established as being relatively safe. Or you could devise an
 intermediate strategy of, say, planning the order in which you're
 going to eat the energizers and plan to avoid the monsters as best you
 can in between.
 
 How do you tell whether or not you are a good visualizer? If someone
 looks at a watch and tells you that it is 8:37, can you easily conjure
 up a mental picture of a clock reading 8:37? Or do you have to
 struggle to mentally create this image, slowly picturing the small
 hand set at 8 and then, while trying to keep the small hand glued to
 where it belongs, picturing the large hand pointing to the lower
left-hand corner? There are several psychological tests designed to
measure how good at visualizing a person is, some of which have been
used by Canadian psychologist Allan Paivio. 8 For example,
 in one test Paivio asked subjects to think of a cube of a certain size
 and color that is sliced up into many smaller cubes. Next subjects
 were asked how many of the smaller cubes have two colored surfaces,
 how many have three colored surfaces, and so on. Based on the results
 of this test, as well as others, subjects could be characterized as
 being good or poor at visualization.
 
 There is another test that can assist you in determining whether you
 are a good visualizer. In Figure 3.1 you will see a list of pairs of
 states with their shapes shown in the right-hand column. Look only at
 the names on the lefthand side (covering the shapes on the right), and
 place the six pairs in order so that the pair whose shapes are most
similar is at the top of the list and the pair that is least similar
 is at the bottom. Now repeat the process while looking only at the
 shapes.
 
 FIGURE 3.1 How good a visualizer are you? See the text for instructions.
 
 From M. Matlin, Cognition (New York: Holt, Rinehart & Winston, 1983),
p. 106, based on R. N. Shepard and S. Chipman, "Second-order
 Isomorphism of Internal Representations: Shapes of States," Cognitive
 Psychology 1, no. 1 (1970): 1-17. Redrawn and used by permission of
 Holt, Rinehart & Winston, Inc., Academic Press, Inc., M. Matlin, R. N.
 Shepard, and S. Chipman.
 
 Are your two lists similar to each other? If you put pair B
 (Colorado—Oregon) near the top of both lists and pair C (Oregon—West
 Virginia) near the bottom, you may have fairly good visual imagery.
 Note that this is not a good test for distinguishing exceptional
 visualizers, since most people looking only at the names make
 judgments that are fairly similar to their judgments when looking only
 at the shapes. One exception to this consistency in judgment is
 Nevada; many people from the eastern United States are under the
 erroneous impression that most of the Western states are square, and
 this distorts their ability to judge the similarity of shapes when
 given only the names.
 
 Tests such as these can be used to identify people who have a facility
 with visualization and consequently those who are likely to be good at
 video games that require a visualization skill.
 
 Motor Performance
 We have concentrated so far on how information is gotten from the
 environment and is then manipulated within the cognitive system.
 Operating somewhat independently of the cognitive system is the motor
 system, the part of the mind responsible for initiating muscle
 movements. The sort of skilled movement required for video games is
 called motor performance.
 
 SKILL
 A skill is a precise, finely tuned sequence of muscle movements,
 usually designed to achieve a very specific goal. In general, a skill
 is carried out in conjunction with feedback from the sensory system.
 For example, a golf pro would never have learned his or her skill
 without being informed where the ball landed after each stroke.
 Similarly, to become an expert at playing a video game, you need not
 only to develop the correct muscle patterns but also to coordinate the
appropriate sequences with the appropriate input from the screen—that
 is, you need to develop what is referred to as eye-hand coordination.
 While playing Pac-Man, for instance, you need to be able to
 appropriately manipulate the joystick (a muscle skill) in a way that
 is dependent on such things as where Pac-Man is in the maze, where he
 is relative to the monsters, and so on.
 
 PRACTICE
 By any measure of performance quality that we use—time to carry out
 the response, correctness of the response, or whatever—performance
 will get better the more practice you've had. Most of the improvement
 occurs when you're just starting. Even if you're very poor when you
 begin learning a game, you'll almost certainly improve rapidly—at
least at first. Then your "improvement curve" (figure 3.2) begins to
 flatten out: the longer you play, the slower your subsequent
 improvement will be. The curve is (at least roughly) logarithmic:
 every doubling of the number of practices leads to an equal increment
 in performance. Thus the second practice will produce the same
 improvement as the first. However, to then get the same increment
 again requires two more practices for a total of four. To get it yet
 again, you need four more for a total of eight. Then you need to
 double your practices to sixteen, then to thirty-two, and so on. Small
 wonder it is time-consuming to become a really expert player.
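
Written as a formula (our own reading of the doubling rule, not the
authors' notation), performance after n practice sessions grows roughly
as a constant plus a fixed gain per doubling of n. A tiny Python
illustration, with made-up baseline and gain values:

    import math

    # Illustrative log-law of practice: each doubling of practice adds the same gain.
    def performance(n_practices, baseline=10.0, gain_per_doubling=5.0):
        return baseline + gain_per_doubling * math.log2(n_practices)

    for n in [1, 2, 4, 8, 16]:
        print(n, performance(n))  # 10, 15, 20, 25, 30 -- equal steps per doubling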
 
 Actually, this logarithmic rule is pertinent to improvement in almost
 anything. A speaker system that costs, say, $1,000 certainly doesn't
 seem ten times as good as one that costs $100, because quality of the
 speaker system is logarithmically related to the effort of making it.
 You keep having to double the effort that goes into the system in
 order to obtain each additional unit increase in the system's quality.
 Since you pay according to the effort, this means that price will rise
 much faster than quality.
 
 FIGURE 3.2 The improvement curve.
 
 The amount of effort (practice) that you have to put into a skill
 will, by the same reasoning, increase faster than the quality of the
 skill. But notice another feature of this curve—it keeps going up
 forever. No matter how much you practice, you'll always keep getting
 better.
 
 Various experiments have demonstrated this assertion. In a study
 completed over twenty years ago, workers in a Cuban
 cigar-manufacturing company who had rolled as many as 10 million
cigars continued to increase their speed of rolling cigars. Here,
however, as in virtually all cases, the rate of improvement decreased. 9
 
 That we can continue to improve is a feature of the human motor system
 that is particularly felicitous when you are becoming skilled at a
 video game since, as pointed out earlier, most video games are
 programmed to keep getting harder and harder as you keep getting
 better and better.
 
 With practice, many motor skills become increasingly automatic. You
 can drive a car and carry on a conversation at the same time because
 both of these skills are highly practiced. When a motor skill becomes
 automatic, it means that it can be done with a minimum of conscious
 control. Since conscious control is not needed anymore for completion
 of the motor skill, it can be used to concentrate on other features of
 the environment. A skilled pianist, for example, can forget about the
 specific motor movements and concentrate instead on interpreting the
 mood of a concerto, and an ace tennis player can play a decent game of
tennis while carrying on a conversation. A similar "automaticity"
 occurs with truly experienced video gamers. We watched a skilled
 Pac-Man player effortlessly control the joystick while simultaneously
 talking with a friend and periodically reaching with her other hand to
 take a sip of beer. Clearly her impressive performance had reached a
 level of smooth, autonomous mastery.
 
 MOTOR/COGNITIVE INDEPENDENCE
 This autonomy is one consequence of an independence that develops
 between the cognitive and motor systems. Many years ago, one of the
 authors (GL) broke a finger and couldn't drive his sports car.
Strapped into the passenger seat, he found himself unable to tell the
 substitute driver where reverse was in the gearshift configurations.
 Instead he had to actually move into the driver's seat and (somewhat
 painfully) go through the motions of putting the car into reverse. His
 motor system knew perfectly well where reverse was, but his cognitive
 system apparently didn't have a clue. (And the motor system wasn't
 about to reveal the whereabouts of reverse to its cognitive
 colleague.) In this episode the cognitive and motor systems apparently
 functioned relatively independently.
 
 When you learn a motor skill, it is not under control of the motor
 system from the start. At first you spend a lot of time thinking about
 what you're doing. As learning progresses, it gets taken over to a
 greater and greater degree by the motor system. This phenomenon is
 nicely illustrated when you learn to touch-type. After you have
 memorized the keyboard and fingering, you still have to take cognitive
 (conscious) steps of going to long-term memory to retrieve information
 about both the location of the key you want to strike and the finger
 responsible for it. Only then is the motor system summoned to perform
 the action. Gradually control is transferred from the cognitive to the
 motor system. In fact, the expert typist, unlike the beginner, is
 typically unable to quickly and accurately reproduce the keyboard any
 more. What the fingers have learned, the mind has forgotten.
 
 In recent years, a pair of books entitled Inner Tennis and Inner
 Skiing have appeared. 10 Their major message is that, when you're
 trying to learn a motor skill such as tennis or skiing, it's highly
 detrimental to think about what you're doing. Instead you should just
 turn control over to the motor system and let it go. The authors of
 these books depict the cognitive and motor systems as two "selves,"
 the cognitive system being "Self 1" and the motor system being "Self
 2." Indeed, a good strategy for something like skiing—which is almost
 entirely a motor skill—would be to think about something else (do
 arithmetic problems in your head, for example), thus disabling the
 cognitive system and rendering it unable to do its mischief.
 
 MORE ON EYE-HAND COORDINATION
 Eye-hand coordination is essentially the ability to perform an
 appropriate sequence of motor skills in response to a particular
 sequence of information entering the visual system from the
 environment. It isn't exactly that some particular pattern of muscle
 movements gets connected to some specific sequence of visual input.
 Rather, the relationship is mediated by some intervening, higher-level
 goal. Suppose, for example, that you are driving down a highway. Your
 hands are on the top of the steering wheel at the two o'clock and ten
 o'clock positions. The connection between visual input and motor
 action seems quite straightforward—if the road curves left, your hands
 "automatically" move left. Road right means hands right. Suppose,
 though, that you shift your hands to the bottom of the steering
 wheel—to the five o'clock and seven o'clock positions. The appropriate
 muscle movements for a particular visual input are now the exact
 opposite of what they were when your hands were on top of the wheel.
 Now when the road curves right, you must move your hands left, and
 vice versa. It's not just that you've learned two visual/motor
 associations, one for "hands on top of wheel" and the other for "hands
 on the bottom of the wheel"; you perform the appropriate muscle
 sequence effortlessly no matter where on the wheel your hands happen
 to be. Thus the appropriate connection can't be between a particular
 visual input and a particular muscle sequence. Rather, the connection
 must be between visual input, the muscle sequence, and some
 higher-level goal (in this case, keeping the car on the road). This is
 an elegant and extremely efficient—but not very well understood—manner
 of designing a system of eye-hand coordination.
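
To make the idea concrete, here is a minimal sketch (ours, not the
authors') of goal-mediated control. The function names and the simple
proportional-correction rule are illustrative assumptions; the point is
only that the visual-input-to-goal mapping stays fixed while the
goal-to-muscle mapping flips with hand position.

# A minimal sketch (ours, not the authors') of goal-mediated eye-hand control.
# The goal (keep the car centered) is fixed; only the goal-to-muscle mapping
# depends on where the hands happen to be on the wheel.

def steering_correction(road_offset):
    # Higher-level goal: reduce the car's offset from the road center.
    # A positive offset (drifted right) calls for a correction to the left.
    return -road_offset

def hand_movement(correction, hands_on_top):
    # With hands at ten and two, moving the hands left turns the wheel left;
    # with hands at five and seven, the same turn needs the opposite motion.
    return correction if hands_on_top else -correction

# The same visual input yields opposite hand movements, yet both serve
# the single higher-level goal of staying on the road.
for hands_on_top in (True, False):
    print(hands_on_top, hand_movement(steering_correction(0.3), hands_on_top))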
 
 The reliance of motor skills on higher-level goals is obviously
 beneficial when video games are being played because it means that
 once a particular skill has been learned, it will transfer to slightly
 different physical configurations of the same game. We knew an expert
 Pac-Man player, for example, who had learned the game at the video
 arcades. She was introduced to a homecomputer version of the game in
 which Pac-Man was directed not by a joystick but by certain keys on
 the computer keyboard. It took her very little time to become just as
 expert at this game as she had been at the original arcade version. In
 this instance, the actual motor response—pressing the appropriate
 configuration of keys—was entirely different from the original
 response of manipulating the joystick. But the higher-level
 goals—guiding Pac-Man to the correct areas of the board, avoiding the
 monsters, and so on—had not changed, and it was these higher-level
 goals at which she had become an expert.
 
 Strategies
 So far we have been primarily concerned with each component of the
 cognitive system as it applies to video games. Thus we have seen how
 focused attention can be useful or detrimental, how various types of
 interference can cause deterioration of video game performance, and so
 on. Now we want to talk a little more about how all the components
 work in concert. A particular choice of which cognitive components
 will be used and how they will get put together is termed a strategy.
 Earlier we discussed how a stereo system's performance depended both
 on the workings of the individual components (a factor over which one
has only limited control) and on strategy: how the user chooses to
arrange the components. For example, if you wanted to provide a
 classical music background while you were working in your basement
 workshop, you might set the FM tuner to a classical station and switch
 on the speakers you've set up in the workshop. But if you wanted to
 fill the house with your favorite rock 'n roll music, you might use
 the record player rather than the tuner and use all the sets of
 speakers.
 
 STRATEGIES FOR PLAYING VIDEO GAMES
 There are also appropriate cognitive strategies for playing video
 games, from simple and obvious to complex and subtle. Consider, for
 example, the simple game of Breakout, in which, you will recall, there
 is a brick wall against which you hit a ball using a paddle. Each time
 the ball hits the wall, a brick disappears and you gain some number of
 points. Your goal is to eventually knock out all the bricks in the
 wall.
 
 This game is quite simple and thus requires a fairly simple cognitive
 strategy. In large part, the game calls for focusing visual attention
 on where the ball is relative to where the paddle is. There are
 virtually no memory requirements. But consider, in contrast, a much
 more complex game such as Pac-Man. You need to focus attention on
 where you are, where the nearest escape route is, where the monsters
 are, and whether they're blue or not (recall that a blue monster can
 be eaten by Pac-Man rather than vice versa). You have to use your
 short-term memory to remember such things as what board you're on, how
 many Pac-Men you have left, how many energizers you've consumed, how
 long it's been since the monsters have been blue, and so on. You need
 to use your long-term memory in order to remember the configuration of
 the maze, where the escape tunnels are, and the behavior of the
 monsters in certain situations.
 
 Given this complexity, there are various appropriate cognitive
strategies. For example, you could rely primarily on long-term memory
 and memorize routes that work well in a variety of situations. But
 such a strategy would have several costs. First, you would have to
 memorize the strategies in the first place, which would require a lot
 of time (and a lot of quarters). Second, you would have to devote some
 of your processing capability to remembering where you are in a given
 route and where the appropriate place to go next is. This, of course,
 means less processing capability for such things as focusing and
 switching attention. Another disadvantage is that video game makers
 can easily change the routes of the monsters, thereby rendering your
 carefully learned routes obsolete. Finally, one wrong turn causes your
 route to become fouled up. A different strategy might be to forgo the
 specific routes and concentrate instead on trying as hard as possible
 to avoid the monsters, while still staying in the general vicinity of
 the uneaten dots. This way, use of memory would be kept to a minimum.
 You wouldn't care about exactly where you were in the maze at any
 given time. The general idea would be that if you could keep avoiding
 the monsters, you would eventually get all the dots. This strategy is
 somewhat inelegant, as you keep fussing around, apparently aimlessly,
 for quite some time. However, it avoids the pitfalls of the
 memorization strategy.
 
 Given that there are at least two (and probably more) appropriate
 cognitive strategies to use, which one should you use? One expert, Ken
 Uston, 11 is partial to the route strategy and, in fact, devotes most
 of his book to describing and developing very sophisticated and
complex routes. Prior to writing his Pac-Man book, Uston had already
 achieved a good deal of fame for his development and popularization of
 gambling strategies, notably for the game of blackjack. 12 Like his
 Pac-Man strategies, his gambling strategies are based on very complex
 memorization strategies. Uston became an expert at these schemes and
 used them to make huge amounts of money in Las Vegas, Reno, and other
 international gambling spots. Thus it is clear that Uston, by his
 nature or through a great deal of practice, is an expert at memorizing
 and, for him, mastering a new strategy based on memorization would be
 natural and easy.
 
 But if you are a poor memorizer, you might want to develop a strategy
 that requires a minimum of memorization. If you are slow at retrieving
 information from long-term memory, you'll want a strategy that
 minimizes such retrieval, and so on. 13
 
 More generally, video game players may be concerned with a "strategy
 for developing strategies." Most current video games are complex,
 requiring complex strategies. One way of combating this complexity,
 which actually applies to problem solving in general, is to break the
 required actions down into constituent parts. In figure 3.3 we have
 done this for the game of Sabotage. This breakdown yields a
 hierarchical, or treelike, structure, where points on the tree are
 goals and subgoals that we wish to accomplish. At the top is our
 overall goal of making as many points as possible. Two major subgoals
 are used to achieve this overall goal—keeping from being bombed and
 keeping paratroopers from accumulating on the ground. Each of the
 subgoals is itself achieved by one or more subgoals that are nested
 underneath it.
 
 FIGURE 3.3 Sabotage: the hierarchy of its goals.
 
 When you break things down this way, it becomes much easier to see
 exactly what has to be accomplished. Notice also that at the bottom of
 the tree are relatively simple motor skills that have to be learned.
 Once you have identified these bottom-level goals—these specific
skills—you should, if at all possible, practice each one in isolation
 since that will optimally provide you with the action/feedback
 sequences necessary for learning the skill.
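
As an illustration of this kind of breakdown, here is a small sketch
(ours, not from the book; figure 3.3 is not reproduced here, so the
subgoal names below are guesses) of a goal tree whose leaves are the
specific skills to practice in isolation.

# A goal tree for Sabotage, with bottom-level goals as the motor skills
# to be practiced separately.

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    subgoals: list = field(default_factory=list)

    def leaves(self):
        # Bottom-level goals have no subgoals of their own.
        if not self.subgoals:
            return [self]
        found = []
        for sub in self.subgoals:
            found.extend(sub.leaves())
        return found

sabotage = Goal("Make as many points as possible", [
    Goal("Keep from being bombed", [
        Goal("Shoot attackers before they can drop bombs"),
    ]),
    Goal("Keep paratroopers from accumulating on the ground", [
        Goal("Shoot helicopters as soon as they appear"),
        Goal("Pick off paratroopers while they fall"),
    ]),
])

for skill in sabotage.leaves():
    print("Practice in isolation:", skill.name)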
 
 To see the utility of using a hierarchical strategy, let us take an
 example of not using it. Recall that the predominant action in
 Sabotage is the appearance of helicopters that, if you don't destroy
 them with your gun, drop paratroopers. Helicopters (being large) are
 both easier to hit and worth more points than paratroopers (which are
 smaller). Moreover, if you concentrate on shooting down each
 helicopter as soon as it appears, it won't get much of a chance to
 drop paratroopers anyway. Thus focusing attention on helicopters and
 trying to hit each one as soon as it appears seems to be a reasonable
 strategy.
 
 However, this strategy works only up to a certain point, because
 eventually helicopters start appearing and dropping paratroopers at
 such a frenetic pace that the sky soon becomes filled with
 paratroopers, and you're reduced to firing blindly and continually.
 Since each shot costs a point, you start losing points faster than you
 gain them. It's hopeless anyway, since even this desperate behavior
 soon becomes inadequate to catch all the paratroopers. So you watch
 helplessly as you're surrounded and eventually blown away.
 
 The strategic error in this line of play is in not developing the
 subskill of shooting down paratroopers during the early phases of the
 game when they appear only infrequently and you can practice at a
 leisurely pace. Realizing this error, you would change strategies and,
 in the early phase of the game, deliberately allow the helicopters to
 survive and to drop their paratroopers. Using this gambit, you could
 concentrate fully on picking off the paratroopers. When you finally
 develop some proficiency at this skill, you can return to the
 strategy—overall, more efficient—of shooting down the helicopters as
 rapidly as possible. But at the same time, you will be secure in your
 knowledge that when the number of paratroopers starts to increase, you
 can accurately and calmly hit each one with a single shot.
 
 Video Games as Problem Solving
 Before we can master a game, we have to learn to play it. The process
 of moving from novice to highly proficient player can be viewed as
 problem solving, a process that has been studied extensively by
 psychologists.
 
There are three major aspects of a problem-solving situation: (1) the
original state, (2) the goal state, and (3) the rules. For example,
imagine your goal is to become proficient at Pac-Man. The original
 state, or the situation at the beginning of the problem-solving
 process, might be, "I have played games before but never a video
 game." The goal state, reached when the problem is solved, is to
 become proficient at Pac-Man. You might even make a more specific goal
 for yourself, such as beating the previous high score on the machine
 at the video parlor on Saturday, April 18. The rules refer essentially
 to restrictions that must be followed as you go from the original
 state to the goal state. They might include your wanting to become an
 expert on the arcade rather than the home version of Pac-Man, or to
 accomplish your goal without consulting any of the books on the game.
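
In programming terms, these three aspects might be recorded something
like the following sketch (our illustration, using the Pac-Man example
from the text; the field names are our own):

pacman_problem = {
    "original_state": "Has played games before, but never a video game",
    "goal_state": "Beat the machine's previous high score on Saturday, April 18",
    "rules": [
        "Become expert on the arcade version, not the home version",
        "Do not consult any books about the game",
    ],
}

def is_solved(current_state, problem):
    # The problem is solved when the current state matches the goal state.
    return current_state == problem["goal_state"]

print(is_solved(pacman_problem["original_state"], pacman_problem))  # False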
 
 Good problem solvers seldom strike out randomly. Rather, they plan.
 Often they break the problem into smaller subproblems, then
 concentrate on solving those subproblems. In our observations of
 players in video parlors, we have occasionally come upon a person who
 sits down at a new game, inserts a quarter, and begins moving the
 joystick around without any idea whatsoever about the goal of the
 game. This approach typically gets the person nowhere.
 
 The video player who mindlessly plunges in is failing to appreciate
the importance of the first phase of problem solving—understanding
 the problem. To understand a problem, you must pay attention to the
 important aspects of it and ignore the rest. In Pac-Man, it is
 obviously important to pay attention to Pac-Man, the monsters, and the
 energizers. The elapsed time since an energizer was consumed is
 important. Whether the monsters are blue or not is vital. However,
 whether a monster happens to be green or pink is unimportant and
 irrelevant.
 
 Once you, the problem solver, figure out what is essential, the next
 step is to try one or more different strategies for solving the
 problem. If your initial goal in Pac-Man is to eat all the dots on a
 board and win a new one, you might begin by randomly moving the
 joystick in one of the four possible directions. If you did this
 enough times, you might eventually stumble on a method for achieving
 your goal. However, this random approach would be inefficient and
 unsophisticated. A more creative approach would be to try selective
 routes, routes that would be more likely to lead to the goal. These
 selective approaches are called heuristics, and they are far more
 efficient than random methods. The novice Pac-Man player might try
 running Pac-Man around the edge of the screen and then attempting to
 eat the dots in the middle areas. This particular heuristic might not
 lead to the desired goal immediately, but it could lead to the
 postulation of other heuristics that might then be tried and might
 ultimately succeed. One problem with the "edge-then-middle" heuristic
 is that all of the energizers would be consumed early in the game, and
 none would be available later, when Pac-Man desperately needed them.
 
 Clearly, a heuristic that does not deplete the supply of energizers is
 called for.
 
 During the course of solving the Pac-Man problem, new strategies will
 be discovered. When the authors first started playing Pac-Man, we used
 to immediately consume an energizer whenever one of the four monsters
 was pursuing. We soon learned that it was far better to wait until the
 monster was close at hand before consuming the energizer. This
 increased our opportunity to contact and destroy the monster while
 still in an energized state. When learning to play a video game, good
 players use many heuristics of this sort.
 
 The process of dividing a problem into a number of subproblems, or
 smaller problems, is called means-ends analysis and is a feature of
 many successful problem-solving strategies. The process has received
 its name because it involves figuring out the "ends," or goals, that
 you want to attain and then devising certain "means," or strategies,
 for reaching them. In general, as you solve the subproblems, you
 continually reduce the difference between your original state and your
 goal state. Suppose your larger goal in playing Pac-Man is to beat the
 previous high score. Rather than going for this largish goal all at
 once, it would make much more sense to break up the problem into a
 number of smaller problems. These might include (1) reaching the first
 energizer and consuming a monster, then (2) completing a board and
 receiving a new maze with fresh dots, then (3) capturing the
 strawberry symbol. Mastering each of the subproblems gets you closer
 and closer to your ultimate goal. Put another way, as you complete
 each subproblem, you continue to reduce the difference between your
 original state (being a novice at Pac-Man) and your goal state
 (beating the previous score). You have used a means-ends analysis.
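
A rough sketch of the difference-reduction idea behind means-ends
analysis (ours, not the book's; the greedy loop and the three Pac-Man
operators are illustrative assumptions):

# Means-ends analysis as greedy difference reduction: at each step, apply
# whichever "means" (operator) most reduces the remaining difference
# between the current state and the goal state.

def means_ends(start, goal, operators):
    state, plan = set(start), []
    while goal - state:                                   # differences remain
        name, apply_op = max(
            operators,
            key=lambda op: len(goal - state) - len(goal - op[1](state)))
        if apply_op(state) == state:                      # nothing helps; stop
            break
        state = apply_op(state)
        plan.append(name)
    return plan

# Hypothetical Pac-Man subproblems from the text, each modeled as an
# operator that adds one achieved subgoal to the state.
operators = [
    ("reach the first energizer and eat a monster", lambda s: s | {"energizer"}),
    ("clear a board and get a fresh maze",          lambda s: s | {"board"}),
    ("capture the strawberry symbol",               lambda s: s | {"strawberry"}),
]
print(means_ends(set(), {"energizer", "board", "strawberry"}, operators))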
 
 In everyday life we use means-ends analyses so often that we often
take them for granted. [/SPOILER:db06e850]
| 
| Vert1 
	Joined: Aug 28 2011
	Posts: 537
| 
				Chapter 3 Continued
[SPOILER:0e72f55b18] If you must pick up a friend who is arriving
 tomorrow at the airport and your car has broken down, you might divide
 your problem into two subproblems: (1) borrowing a car and (2) getting
 yourself to the airport. Once the first subproblem is solved, the
 difference between your original state and your goal state is
 substantially reduced. Again, you have used a means-ends analysis.
 
 In some instances, means-ends analysis may not be the best approach,
 or can even mislead you. This occurs when the solution to a problem
 depends on temporarily increasing the difference between the original
 state and the goal state. For example, assume that you have identified
 the consuming of an energizer as a first step in your ultimate plan of
 eventually beating the high score for the day. The consuming of an
 energizer is your first subgoal, and it is natural to think that you
 would want to move the joystick in the direction of the nearest
 energizer. But it sometimes makes sense to first move the joystick
 away from the energizer rather than toward it (for example, if there
 is a monster between Pac-Man and the energizer). In this instance you
 temporarily increase the difference between your original state and
 the goal state. With some games this process actually enhances your
 chances of winning. Discovering that a move away from the goal will
 actually lead you to your ultimate goal involves truly creative
 problem solving.
 
 Expert Learning
 What distinguishes an expert at some skill from a novice? We've seen
 some of the components that go into becoming an expert: you have to be
 a problem solver to figure out the rules of the game, you have to
 devise strategies to optimally accomplish a variety of goals, and you
 have to attach appropriate motor responses to stimuli. In addition,
 particularly when the skill of interest is video game playing, you
 must also learn to perceive game situations as complete units.
 
 What's meant by this? Psychologists talk of perceiving and processing
 "chunks." A chunk is anything stored in long-term memory as a unitary
 whole. For instance, the letter string MGAE is perceived as four
 separate letters—four chunks. But the same letters presented as GAME
are perceived as one word—one chunk. The fewer the chunks you have to
 process in order to accomplish some task, the more efficiently the
 task can be done.
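
As a toy illustration (ours, under the simplifying assumption that
long-term memory can be modeled as a set of familiar patterns stored as
wholes), the difference between GAME and MGAE might be sketched like
this:

KNOWN_PATTERNS = {"GAME"}   # patterns already stored in long-term memory

def count_chunks(letters):
    # A familiar pattern is processed as one chunk; an unfamiliar letter
    # string falls back to one chunk per letter.
    return 1 if letters in KNOWN_PATTERNS else len(letters)

print(count_chunks("GAME"))  # 1 chunk
print(count_chunks("MGAE"))  # 4 chunks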
 
 Various studies have linked the acquisition of expertise in game
 playing to the fusing of many small chunks into fewer larger ones.
 Consider chess. In one experiment, various board positions were shown
 either to chess experts or to chess novices. The board positions were
 either random configurations of the chess pieces or they derived from
 actual games. Later the subjects had to reproduce the board positions
 they had seen. Neither the novices nor the experts could reproduce the
 random board configurations very well. The novices couldn't reproduce
 the actual game configurations very well either, but the experts
 could.
 
 The boards involved perhaps twenty pieces. Apparently, however, the
 experts saw the game configuration boards as a small number of chunks,
 because any configuration resulting from an actual game was bound to
 be very similar to some configuration that the experts had seen many
 times before. This wasn't true for the novices; hence for them twenty
 pieces constituted about twenty separate chunks. The random board
 configurations were unfamiliar to everyone and were thus perceived by
 all as many chunks. The general principle is: The fewer the number of
 chunks in some stimulus, the easier is that stimulus to deal with.
 
 The same principle can be applied to the learning of a video game.
 Take a complex game like Defender. The novice is overwhelmed by what
 takes place on the screen. Each component—each humanoid, each mutant,
each stretch of terrain—constitutes a separate chunk. It takes a good
 deal of time to analyze all these chunks, and thus the novice is slow
 to respond and is quickly defeated. As you become expert, however, you
 begin to fuse all these chunks into fewer, bigger chunks, and begin to
 see not individual objects but all the objects together as situations.
 Since a situation is only one chunk, it's easy to analyze, and you can
 respond to it rapidly and easily.
 
 Designing New Games
Most video game futurists do not seem to be taking the human cognitive
system into account. Instead they emphasize which characters are
likely to have the appeal of Pac-Man. In an article in Psychology
Today, writer Dan Gutman speculated on what it was about Pac-Man that
made him the biggest cultural hero between John Lennon and E.T. 14 He
suggested that the cute, cuddly character, who was involved in an
 essentially nonviolent game, was especially appealing to women, and he
 easily chewed up millions of their quarters. Although there may be
 relatively fewer women at the arcades, when they do go, they seem to
enjoy Pac-Man. As for candidates for a future Pac-Man, Gutman
 proposed "Q*Bert," a mangy-looking, noselike creature who hops on a
 pyramid made of cubes, trying to make them all the same color. Every
 time Q*Bert hops on a new cube it changes color. Or, if not Q*Bert,
 then "Domino Man," who weaves his trail of dominoes through the
 congested shopping-center parking lot. His major role in life is to
 protect the trail from the Bumbling Bag Lady and the Reckless Little
 Boy and his hot-rod shopping cart. Or, if not Q*Bert or Domino Man,
 then perhaps "Millipede," who must defend its homeland from hordes of
 marauding insects. Perhaps the cute, cuddly characters are what draw
 some people to the games, but it seems likely that other
 considerations would be far more important.
 
 Just as video game players may well wish to think about which
 cognitive abilities are required when devising a strategy, video game
 designers may wish to consider what cognitive strategies will be
 involved in any new game they might consider designing. Most probably
 they would consider an existing game and realize that there's some
 cognitive component—or set of components—that the playing of this game
 doesn't really require. Thus, if some modification of the game could
 be designed that would require those missing components, the game
 would be that much more complicated and that much more interesting.
 Let's look at an example of how this might work.
 
 GROUND-LEVEL PAC-MAN
Certain of the games—Pac-Man and Defender, for example—are especially
 exciting and fast-moving, requiring fast reactions and fine-tuned
 eye-hand coordination. Others—for example, some of the "adventure,"
 maze-running games—are slower-moving but more intellectually
 appealing, making much greater demands on memory (remembering where
 you are, where various rooms are, where you've left various objects,
 how to go about achieving some goal, and so on). Further, some of the
 adventure games stretch the imagination, allowing you to fantasize
 yourself into a "real-life" situation. Imagine a game in which you
 were in the Pac-Man maze, instead of looking down at it. You would be
 swept down the corridors gobbling up dots wherever you found them,
 evading the monsters, and, in general, doing what Pac-Man usually does
 in a Pac-Man game. From your point of view, of course, many things
 would have changed relative to the normal Pac-Man situation. Lacking
 the bird's-eye view of the maze usually enjoyed by Pac-Man players,
 you wouldn't know where the monsters were unless they happened to
 appear in the corridor; thus monsters would unexpectedly leap out from
 behind a corner, or would be lying in wait at the next turn. Moreover,
 you would forget pretty quickly where you were in the maze since you
 couldn't see yourself from the outside. As might be expected, this
 uncertainty would lead to problems—for instance, once you had eaten a
 row (or, as it appeared to you, a corridor) of dots, you wouldn't
 quite remember where the rest of the unconsumed dots were. You
 wouldn't have the traditional luxury of being able to glance around
 and see where the energizers were and how many were left. Finally, you
 rather than your little surrogate face would be the one in danger of
 being obliterated at any moment.
 
 This hypothetical invention, "Ground-level Pac-Man," might become a
 reality; someone will take the concept and program it because,
 technically, such three-dimensionality is entirely feasible. In fact,
 someone has more or less thought of this idea. In Disney's box-office
 hit Tron, the central character is a man called Flynn, who is an
 expert computer programmer as well as a world-class video game player.
 During most of the movie, Flynn is trapped inside a video game trying
 to get out. As he zips through corridors, enemies continually try to
 attack him. In the end—of course—he frees himself.[/SPOILER:0e72f55b18]
 
 Psychology Terms Used In Book:
 [SPOILER:0e72f55b18]Reinforcement-the provision for you of something that you like.
 Schedule of reinforcement (aka partial reinforcement)—reinforcement is intermittent rather than continuous.
 Extinction-decline and eventual cessation of behavior in the absence of reinforcement
 Extinction period-the length of time it takes for the behavior to cease or extinguish.
 Magnitude of Reinforcement-reward
 Delay of Reinforcement
 Multiple Reinforcements
 Intrinsic Reinforcement
 Cognitive Dissonance-paradoxical types of behavior.
 Cognitive Dissonance Theory-theory assumes that when a person performs acts or holds beliefs that are in conflict with one another, the person will act so as to reduce the conflict.
 Regret
 Challenge
 Fantasy
 Curiosity
 Sensory Memory
 Attention
 Filtering Process
Saccade-(French for "jerk" or "jolt") a quick jump of the eye from one place to another.
 Fixations-periods in between saccades during which the eye is relatively stationary.
 Short-term Memory
 Long-term Memory
 Expectancy
 Verbal/Visual Distinction
 Motor Performance-the motor system, the part of the mind responsible for initiating muscle movements. The sort of skilled movement required for video games is called motor performance.
 Skill-a precise, finely tuned sequence of muscle movements, usually designed to achieve a very specific goal. In general, a skill is carried out in conjunction with feedback from the sensory system.
 Practice
 Eye-hand coordination-the ability to perform an appropriate sequence of motor skills in response to a particular sequence of information entering the visual system from the environment.
 Strategies
Problem-solving situation-there are three major aspects: (1) the original state, (2) the goal state, and (3) the rules.
 Chunk-anything stored in long-term memory as a unitary whole. For instance, the letter string MGAE is perceived as four separate letters—four chunks. But the same letters presented as GAME are perceived as one word —one chunk. The fewer the chunks you have to process in order to accomplish some task, the more efficiently the task can be done.[/SPOILER:0e72f55b18]
| 
| Douche McCallister 
	Moderator
	Title: DOO-SHAY
	Joined: Jan 26 2007
	Location: Private Areas
	Posts: 5672
| 
I had high hopes that I would be reading this "book" passionately, but it seems this "book" was written in the early '80s, which hardly pertains to the game systems and the nature of the whole gamer experience at all.
 I would compare it to an article written about our psychological addiction to our phones, only to find out that by phones they mean rotary landline phones.
 |  
				|  |  |  