Friday 26 September 2014

ENVIRONMENT AND ARCHITECTURE

What LEED Is


 LEED, or Leadership in Energy and Environmental Design, is redefining the way we think about the places where we live, work and learn. As an internationally recognized mark of excellence, LEED provides building owners and operators with a framework for identifying and implementing practical and measurable green building design, construction, operations and maintenance solutions.
With nearly 9 billion square feet of building space participating in the suite of rating systems and 1.6 million square feet certifying per day around the world, LEED is transforming the way built environments are designed, constructed, and operated, from individual buildings and homes to entire neighborhoods and communities. Comprehensive and flexible, LEED works throughout a building's life cycle.
LEED certification provides independent, third-party verification that a building, home or community was designed and built using strategies aimed at achieving high performance in key areas of human and environmental health: sustainable site development, water savings, energy efficiency, materials selection and indoor environmental quality.
First released by the U.S. Green Building Council (USGBC) in 2000, the LEED rating systems are developed through an open, consensus-based process led by LEED committees. The next update of the LEED rating system, dubbed LEED 2012, is the next step in the continuous improvement process and ongoing development cycle of LEED.

Architect Scott M. Kemp designed the house in Ladner, British Columbia, Canada. The house is designed to be as small as possible while allowing for maximum flexibility, including adaptation to the needs of an aging population and the reduced mobility that comes with it. The design combines a range of sustainable features, and the house has achieved a Platinum LEED rating from the Canada Green Building Council. The house is envisioned as a simple shelter: it provides protection from the elements while maximizing the connection to the natural environment of the river by breaking down the barriers between indoor and outdoor space. Transparency through the building resolves the conflict between the view to the north and the desire for maximum sunlight.


LEED promotes a whole-building approach to sustainability by recognizing performance in key areas:
Sustainable Sites
Site selection and development are important components of a building’s sustainability. The Sustainable Sites category discourages development on previously undeveloped land; seeks to minimize a building's impact on ecosystems and waterways; encourages regionally appropriate landscaping; rewards smart transportation choices; controls stormwater runoff; and promotes reduction of erosion, light pollution, heat island effect and construction-related pollution.

Water Efficiency
Buildings are major users of our potable water supply. The goal of the Water Efficiency category is to encourage smarter use of water, inside and out. Water reduction is typically achieved through more efficient appliances, fixtures and fittings inside and water-conscious landscaping outside.

Energy & Atmosphere
According to the U.S. Department of Energy, buildings use 39% of the energy and 74% of the electricity produced each year in the United States. The Energy & Atmosphere category encourages a wide variety of energy-wise strategies: commissioning; energy use monitoring; efficient design and construction; efficient appliances, systems and lighting; the use of renewable and clean sources of energy, generated on-site or off-site; and other innovative measures.

Materials & Resources
During both the construction and operations phases, buildings generate a lot of waste and use large quantities of materials and resources. The Materials & Resources category encourages the selection of sustainably grown, harvested, produced and transported products and materials. It promotes waste reduction as well as reuse and recycling, and it particularly rewards the reduction of waste at a product’s source.

Indoor Environmental Quality
The U.S. Environmental Protection Agency estimates that Americans spend about 90% of their day indoors, where the air quality can be significantly worse than outside. The Indoor Environmental Quality category promotes strategies that improve indoor air as well as those that provide access to natural daylight and views and improve acoustics.

Locations & Linkages
The LEED for Homes rating system recognizes that much of a home's impact on the environment comes from where it is located and how it fits into its community. The Locations & Linkages category encourages building on previously developed or infill sites and away from environmentally sensitive areas. Credits reward homes that are built near already-existing infrastructure, community resources and transit – in locations that promote access to open space for walking, physical activity and time outdoors.

Awareness & Education
The LEED for Homes rating system acknowledges that a home is only truly green if the people who live in it use its green features to maximum effect. The Awareness & Education category encourages home builders and real estate professionals to provide homeowners, tenants and building managers with the education and tools they need to understand what makes their home green and how to make the most of those features.

Innovation in Design
The Innovation in Design category provides bonus points for projects that use innovative technologies and strategies to improve a building’s performance well beyond what is required by other LEED credits, or to account for green building considerations that are not specifically addressed elsewhere in LEED. This category also rewards projects for including a LEED Accredited Professional on the team to ensure a holistic, integrated approach to the design and construction process.

Regional Priority
USGBC’s regional councils, chapters and affiliates have identified the most important local environmental concerns, and six LEED credits addressing these local priorities have been selected for each region of the country. A project that earns a regional priority credit will earn one bonus point in addition to any points awarded for that credit. Up to four extra points can be earned in this way.

Evaluating the World Wide Web: A Global Study of Commercial Sites

Abstract

While commercial applications of the Internet proliferate, particularly in the form of business sites on the World Wide Web, on-line business is still relatively insignificant. One reason is that truly compelling applications have yet to be devised to penetrate the mass market. To help identify approaches that may eventually be successful, one must address the question of what value is being created on the Web. As a first step, this paper proposes a framework to evaluate Web sites from a customer's perspective of value-added. A global study covering 1,800 sites, with representative samples from diverse industries and localities worldwide, is conducted to give a profile of commercial use of the World Wide Web in 1996.

Introduction

By mid-1996, there were over 250,000 World Wide Web (WWW or Web in short) sites on the Internet, up from 15,000 in 1994 [e-land, 1997a]. Business enterprises–from multinational conglomerates to solo entrepreneurs–are staking their presence on the Internet, all poised to become pioneers in what promises to be the frontier of electronic commerce [Kalakota and Whinston, 1996]. Yet, in spite of estimates ranging from 14.1 million WWW users 16 years of age or older in the US alone [Hoffman, Kalsbeek, and Novak, 1996] to 37.4 million in the US and Canada [CommerceNet, 1997], on-line business is still relatively insignificant. Net merchants were estimated to sell some $750 million worth of goods by the end of 1996, compared to $1.7 trillion for the retail industry and $57 billion for the home shopping industry [e-land, 1997b]. Apart from the obvious difficulties with bandwidth and security [Alpar, 1996], technical issues that can no doubt be resolved eventually, there is the more probing question of what value is being created by information technology in general [Ho, 1994], and on the Web in particular. Certainly, one cannot expect real progress if it is simply the digital replacement of conventional channels such as newspaper ads, TV commercials, phones, and fax [Ho, 1996a].
Since Web-based business models are still in the nascent stage, there are no obvious criteria to evaluate the effectiveness of commercial Web sites. Indeed, the earliest attempts are in the purely subjective form of individual preferences, which are themselves recorded as pages of “Cool Links,” “Top Lists,” and “Hot Sites” (e.g. [USA Today, 1996, June 11]). More organized efforts have since appeared as Web reviews [The Web Magazine, 1997] or popularity polls [IntelliQuest Technology Panel, 1997]. Academic studies are still scarce, with the few examples covering either generic functions of commercial sites [Hoffman, Novak, and Chatterjee, 1995], or applications in specific industries (e.g. hotels [Murphy, Forrest, Wotring, and Brymer, 1996] and art galleries [Smith and McLaughlin, 1996]).
This paper proposes a general framework to evaluate Web sites from a customer's perspective of value added. A global study of commercial sites, conducted from May through September 1996, provides a snapshot of the development of this new medium for business. First, representative samples in North America (US and Canada) from 40 industries, totaling 1,000 sites, are evaluated. The results are presented and discussed by industry. Next, eight other localities worldwide are considered: Australia, France, Germany, Hong Kong, Italy, Singapore, Taiwan, and the United Kingdom. A sample of 100 sites from 20 industries is studied for each locality. Comparative results are presented in three groupings. Interpretation of the aggregate sample is then given, concluding with implications and suggestions for future directions of this approach to evaluation.

Monday 15 September 2014

Sony Xperia Z1

The Sony Xperia Z1 is a high-end Android smartphone produced by Sony. The Z1, known at that point by the project code name "Honami", was unveiled at a press conference at IFA 2013 on 4 September 2013. The phone was released in China on 15 September 2013, in the UK on 20 September 2013, and entered more markets in October 2013. On 13 January 2014, the Sony Xperia Z1s, a modified version of the Sony Xperia Z1 exclusive to T-Mobile US, was released in the United States.
Like its predecessor, the Sony Xperia Z, the Xperia Z1 is waterproof and dustproof, with IP ratings of IP55 and IP58. The key highlight of the Z1 is its 20.7-megapixel camera, paired with Sony's in-house G Lens and its image processing algorithm, BIONZ. The phone also comes with Sony's new camera user interface and a dedicated shutter button, and it has an aluminium and glass unibody design.

Specifications


Hardware

The Sony Xperia Z1's design, which Sony calls "Omni-Balance", is focused on creating balance and symmetry in all directions. The Xperia Z1 has subtly rounded edges and smooth, reflective surfaces on all sides, held together by a skeleton frame made from aluminium. The front and back are covered in Sony's own tempered glass, which the company claims is even tougher than Gorilla Glass, and both panes carry a shatterproof film. The aluminium power button is placed on the right side of the device, and a dedicated hardware shutter key, positioned on the lower right side for easier operation, gives quick access to the camera. The metallic look and positioning of the power button are inspired by the crown of a luxury watch. The external memory card and SIM card slots are easily accessible, and the SIM card can be removed with bare hands. The phone is available in three colours: black, white, and purple.

The Xperia Z1 is thicker (8.5 mm), heavier (169 g)[1] and has thicker screen bezels than the Xperia Z, even though the two phones share the same screen size. Sony said that the frame had to be enlarged to accommodate the larger than average camera sensor.[2] The camera sensor measures 1/2.3", the same size commonly used in bridge cameras.[3] The phone is certified waterproof to 1.5 m for up to 30 minutes[4] and is dust resistant, with IP ratings of IP55 and IP58. Unlike the Xperia Z, the Xperia Z1 does not have a flap covering its headphone jack yet maintains its waterproofing, a move welcomed by many because the waterproofing warranty on the Sony Xperia Z was reliant on all ports being sealed. Additional, less obvious connectivity includes support for USB OTG, allowing the connection of external USB devices,[5][6] as well as support for MHL output.[7] The Xperia Z1 comes with 2 GB of RAM and Qualcomm's quad-core Snapdragon 800 processor clocked at 2.2 GHz. It also has a 5.0-inch Sony Triluminos display with the X-Reality engine for better image and video viewing, and a 3000 mAh battery.[4]

Software

The Xperia Z1 initially shipped with Android 4.2 (Jelly Bean) with Sony's custom launcher on top. Notable additions to the software include Sony's media applications: Walkman, Album and Videos. NFC is also a core feature of the device, allowing 'one touch' mirroring of what is on the smartphone to compatible TVs or playing music on an NFC wireless speaker. Additionally, the device includes a battery stamina mode which increases the phone's standby time by up to four times. Several Google applications (such as Google Chrome, Google Play, Google Search (with voice), Google Maps and Google Talk) come preloaded. Sony also radically changed its camera user interface, adding new features such as TimeShift and AR effects.
As of firmware update .290, the bootloader can officially be unlocked without losing camera functionality.
On 28 January 2014, Sony began rolling out firmware update .136; in addition to bug fixes, it included a White Balance feature that allows users to customize the white balance of the display.
On 7 November 2013, Sony Mobile announced via their blog that the Xperia Z1 would receive the Android 4.3 (Jelly Bean) update in December. It also announced that the Android 4.4 update would eventually be released for the Xperia Z1.[8]
On 19 March 2014, the Xperia Z1 received the Android 4.4.2 (KitKat) update.[9]
On 27 June 2014, the Xperia Z1 received the Android 4.4.4 (KitKat) update, which corrected various bugs introduced in the previous Android 4.4.2 update.[10]

Features

Xperia Z1 camera modes
With a focus on camera, Xperia Z1 also introduces many new camera apps.[11]
  • Social live, developed in cooperation with Bambuser, allows users to broadcast their video live via Facebook and get comments from their friends in real time.
  • Info-eye instantly gives information about the objects captured by Xperia Z1's camera.
  • Timeshift-burst captures 61 frames within 2 seconds, starting before the shutter button is pressed, allowing users to select the best picture; shots are, however, captured at 1080p resolution (see the sketch after this list).
  • AR Effect switches camera to AR Effect mode and adds some fun animations to pictures.
  • Creative Effect gives options for various photographic toning effects.
  • Sweep Panorama captures panoramic views spanning almost 270 degrees.
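
The pre-shutter behaviour of Timeshift-burst can be pictured as a rolling buffer: the camera keeps the most recent preview frames in memory, so that when the shutter fires, frames captured before the press are already available alongside the ones captured afterwards. The short Python sketch below is only a hypothetical illustration of that idea; Sony has not published its implementation, and the frame split, method names, and timing here are assumptions chosen to match the "61 frames in 2 seconds" description.

from collections import deque

class TimeShiftBuffer:
    """Illustrative ring buffer for a pre-shutter burst (hypothetical sketch)."""

    def __init__(self, pre_frames=30, post_frames=31):
        self.pre = deque(maxlen=pre_frames)   # rolling window of frames before the press
        self.post_frames = post_frames        # frames still to capture after the press

    def on_preview_frame(self, frame):
        # Called for every preview frame while the camera is live.
        self.pre.append(frame)

    def on_shutter(self, capture_frame):
        # Keep the pre-press window, then grab the remaining frames.
        burst = list(self.pre)
        for _ in range(self.post_frames):
            burst.append(capture_frame())
        return burst  # 30 + 31 = 61 frames spanning roughly two seconds

# Usage sketch: feed preview frames continuously, then trigger the shutter.
buf = TimeShiftBuffer()
for i in range(100):
    buf.on_preview_frame(f"preview-{i}")
frames = buf.on_shutter(lambda: "post-frame")
print(len(frames))  # 61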

Variants

Each variant is listed below with its FCC ID, carriers/regions, and supported GSM, UMTS and LTE bands.

  • C6902/L39h (FCC ID PY7PM-0500): Worldwide; quad-band GSM; penta-band UMTS; no LTE.[12]
  • C6906 (FCC ID PY7PM-0460): North America; quad-band GSM; penta-band UMTS; LTE bands 1, 2, 4, 5, 7, 8, 17.[13]
  • C6916 (Z1s) (FCC ID PY7PM-0590): T-Mobile US (USA); quad-band GSM; penta-band UMTS; LTE bands 4, 17.[14]
  • C6903 (FCC ID PY7PM-0450): Worldwide; quad-band GSM; penta-band UMTS; LTE bands 1, 2, 3, 4, 5, 7, 8, 20.[15]
  • C6943 (FCC ID PY7PM-0650): Brazil; quad-band GSM; penta-band UMTS; LTE bands 1, 2, 3, 4, 5, 7, 8, 20. Identical to the C6903 but also supports ISDB-T in Brazil.[16]
  • SOL23 (FCC ID PY7PM-0470): au by KDDI (Japan); quad-band GSM; UMTS bands 1, 2, 4, 5; LTE bands 1, 3, 11, 18.[17][18]
  • SO-01F (FCC ID PY7PM-0440): NTT DoCoMo (Japan); quad-band GSM; UMTS bands 1, 5, 6, 19; LTE bands 1, 3, 19, 21.[19]
  • L39t: China; quad-band GSM; UMTS for roaming only on bands 1, 2, 5, plus domestic TD-SCDMA bands 34, 39; LTE domestic bands 38, 39, 40, 41 and roaming-only bands 3, 7.[20]
All variants support four 2G GSM bands (850/900/1800/1900) and five 3G UMTS bands (850/900/1700/1900/2100), except for the SO-01F model.


Wednesday 20 August 2014

The Internet as Mass Medium

The Internet has become impossible to ignore in the past two years. Even people who do not own a computer and have no opportunity to “surf the net” could not have missed the news stories about the Internet, many of which speculate about its effects on the ever-increasing number of people who are on line. Why, then, have communications researchers, historically concerned with exploring the effects of mass media, nearly ignored the Internet? With 25 million people estimated to be communicating on the Internet, should communication researchers now consider this network of networks[1] a mass medium? Until recently, mass communications researchers have overlooked not only the Internet but the entire field of computer-mediated communication, staying instead with the traditional forms of broadcast and print media that fit much more conveniently into models for appropriate research topics and theories of mass communication.
However, this paper argues that if mass communications researchers continue to largely disregard the research potential of the Internet, their theories about communication will become less useful. Not only will the discipline be left behind, it will also miss an opportunity to explore and rethink answers to some of the central questions of mass communications research, questions that go to the heart of the model of source-message-receiver with which the field has struggled. This paper proposes a conceptualization of the Internet as a mass medium, based on revised ideas of what constitutes a mass audience and a mediating technology. The computer as a new communication technology opens a space for scholars to rethink assumptions and categories, and perhaps even to find new insights into traditional communication technologies.
This paper looks at the Internet, rather than computer-mediated communication as a whole, in order to place the new medium within the context of other mass media. Mass media researchers have traditionally organized themselves around a specific communications medium. The newspaper, for instance, is a more precisely defined area of interest than printing-press-mediated communication, which embraces more specialized areas, such as company brochures or wedding invitations. Of course, there is far more than a semantic difference between conceptualizing a new communication technology by its communicative form and conceptualizing it by the technology itself. The tradition of mass communication research has accepted newspapers, radio, and television as its objects of study for social, political, and economic reasons. As technology changes and media converge, those research categories must become flexible.

Constraints on Internet Research
Mass communications researchers have overlooked the potential of the Internet for several reasons. The Internet was developed in bits and pieces by hobbyists, students, and academics (Rheingold, 1994). It didn't fit researchers' ideas about mass media, locked, as they have been, into models of print and broadcast media. Computer-mediated communication (CMC) at first resembled interpersonal communication and was relegated to the domain of other fields, such as education, management information science, and library science. These fields, in fact, have been doing research into CMC for nearly 20 years (Dennis & Gallupe, 1993; O'Shea & Self, 1983), and many of their ideas about CMC have proven useful in looking at the phenomenon as a mass medium. Both education and business researchers have seen the computer as a technology through which communication was mediated, and both lines of research have been concerned with the effects of this new medium.
Disciplinary lines have long kept researchers from seeing the whole picture of the communication process. Cathcart and Gumpert (1983) recognized this problem when they noted how speech communication definitions “have minimized the role of media and channel in the communication process” (p. 267), even as mass communication definitions disregarded the ways media function in interpersonal communication: “We are quite convinced that the traditional division of communication study into interpersonal, group and public, and mass communication is inadequate because it ignores the pervasiveness of media” (p. 268).
The major constraint on doing mass communication research into the Internet, however, has been theoretical. In searching for theories to apply to group software systems, researchers in MIS have recognized that communication studies needed new theoretical models: “The emergence of new technologies such as GSS (Group Support Systems, software that allows group decision-making), which combine aspects of both interpersonal interaction and mass media, presents something of a challenge to communication theory. With new technologies, the line between the various contexts begins to blur, and it is unclear that models based on mass media or face-to-face contexts are adequate” (Poole & Jackson, 1993, p. 282).
Not only have theoretical models constrained research, but the most basic assumptions behind researchers' theories of mass media effects have kept them from being able to see the Internet as a new mass medium. DeFleur and Ball-Rokeach's attitude toward computers in the fifth edition of their Theories of Mass Communication (1989) is typical. They compare computers to telephones, dismissing the idea of computer communication as mass communication: “Even if computer literacy were to become universal, and even if every household had a personal computer equipped with a modem, it is difficult to see how a new system of mass communication could develop from this base alone” (pp. 335-336). The fact that DeFleur and Ball-Rokeach find it difficult to envision this development may well be a result of their own constrained perspective. Taking the telephone analogy a step further, Lana Rakow (1992) points out that the lack of research on the telephone was due in part to researchers' inability to see it as a mass medium. The telephone also became linked to women, who embraced the medium as a way to overcome social isolation.[2]

Rethinking Definitions

However, a new communication technology can throw the facades of the old into sharp relief. Marshall McLuhan (1960) recognized this when, speaking of the computer, he wrote, “The advent of a new medium often reveals the lineaments and assumptions, as it were, of an old medium” (p. 567). In effect, a new communication technology may perform an almost postmodern function of making the unpresentable perceptible, as Lyotard (1983) might put it. In creating new configurations of sources, messages, and receivers, new communication technologies force researchers to examine their old definitions. What is a mass audience? What is a communication medium? How are messages mediated?
Daniel Bell (1960) recognized the slippery nature of the term mass society and how its many definitions lacked a sense of reality: “What strikes one about these varied uses of the concept of mass society is how little they reflect or relate to the complex, richly striated social relations of the real world” (p. 25). Similarly, the term mass media, with its roots in ideas of mass society, has always been difficult to define. There is much at stake in hanging on to traditional definitions of mass media, as shown in the considerable anxiety in recent years over the loss of the mass audience and its implications for the liberal pluralist state. The convergence of communication technologies, as represented by the computer, has set off this fear of demassification, as audiences become more and more fragmented. The political and social implications of mass audiences and mass media go beyond the scope of this paper, but the current uneasiness and discussion over the terms themselves seem to indicate that the old idea of the mass media has reached its limit (Schudson, 1992; Warner, 1992).
Critical researchers have long questioned the assumptions implicit in traditional media effects definitions, looking instead to the social, economic, and historical contexts that gave rise to institutional conceptions of media. Such analysis, Fejes (1984) notes, can lead to another unquestioning set of assumptions about the media's ability to affect audiences. As Ang (1991) has pointed out, abandoning the idea of the mass media and their audiences impedes an investigation of media institutions' power to create messages that are consumed by real people. If the category of mass medium becomes too fuzzy to define, traditional effects researchers will be left without dependent variables, and critical scholars will have no means of discussing issues of social and political power.
A new communication technology such as the Internet allows scholars to rethink, rather than abandon, definitions and categories. When the Internet is conceptualized as a mass medium, what becomes clear is that neither mass nor medium can be precisely defined for all situations, but instead must be continually rearticulated depending on the situation. The Internet is a multifaceted mass medium, that is, it contains many different configurations of communication. Its varied forms show the connection between interpersonal and mass communication that has been an object of study since the two-step flow associated the two (Lazarsfeld, Berelson, & Gaudet, 1944). Chaffee and Mutz (1988) have called for an exploration of this relationship that begins “with a theory that spells out what effects are of interest, and what aspects of communication might produce them” (p. 39). The Internet offers a chance to develop and to refine that theory.
How does it do this? Through its very nature. The Internet plays with the source-message-receiver features of the traditional mass communication model, sometimes putting them into traditional patterns, sometimes putting them into entirely new configurations. Internet communication takes many forms, from World Wide Web pages operated by major news organizations to Usenet groups discussing folk music to E-mail messages among colleagues and friends. The Internet's communication forms can be understood as a continuum. Each point in the traditional model of the communication process can, in fact, vary from one to a few to many on the Internet. Sources of the messages can range from one person in E-mail communication, to a social group in a Listserv or Usenet group, to a group of professional journalists in a World Wide Web page. The messages themselves can be traditional journalistic news stories created by a reporter and editor, stories created over a long period of time by many people, or simply conversations, such as in an Internet Relay Chat group. The receivers, or audiences, of these messages can also number from one to potentially millions, and may or may not move fluidly from their role as audience members to producers of messages.

Applying Theories to CMC

In an overview of research on computers in education, O'Shea and Self (1983) note that the learner-as-bucket theory had dominated. In this view, knowledge is like a liquid that is poured into the student, a metaphor similar to mass communication's magic-bullet theory. This brings up another aspect to consider in looking at mass communication research into CMC: the applicability of established theories and methodologies to the new medium. As new communication technologies are developed, researchers seem to use the patterns of research established for existing technologies to explain the uses and effects of the new media. Research in group communication, for example, has been used to examine the group uses of E-mail networks (Sproull & Kiesler, 1991). Researchers have studied concepts of status, decision-making quality, social presence, social control, and group norms as they have been affected by a technology that permitted certain changes in group communication.
This kind of transfer of research patterns from one communication technology to another is not unusual. Wartella and Reeves (1985) studied the history of American mass communication research in the area of children and the media. With each new medium, the effects of content on children were discussed as a social problem in public debate. As Wartella and Reeves note, researchers responded to the public controversy over the adoption of a new media technology in American life.
In approaching the study of the Internet as a mass medium, the following established concepts seem to be useful starting points. Some of these have originated in the study of interpersonal or small group communication; others have been used to examine mass media. Some relate to the nature of the medium, while others focus on the audience for the medium.

Critical mass

This conceptual framework has been adopted from economists, physicists, and sociologists by organizational communication and diffusion of innovation scholars to better understand the size of the audience needed for a new technology to be considered successful and the nature of collective action as applied to electronic media use (Markus, 1991; Oliver et al., 1985). For any medium to be considered a mass medium, and therefore economically viable to advertisers, a critical mass of adopters must be achieved. Interactive media only become useful as more and more people adopt them, or as Rogers (1986) states, “the usefulness of a new communication system increases for all adopters with each additional adopter” (p. 120). Initially, the critical mass notion works against adoption, since a number of other users must already be on the system before adopting appears advantageous. For example, the telephone or an E-mail system was not particularly useful to the first adopters because most people were unable to receive their messages or converse with them. Valente (1995) notes that the critical mass is achieved when about 10 to 20 percent of the population has adopted the innovation. When this level has been reached, the innovation can be spread to the rest of the social system. Adoption of computers in U.S. households has well surpassed this figure, but the modem connections needed for Internet access lag somewhat behind.
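
As a concrete illustration of the threshold dynamic Valente describes, the toy simulation below (a hedged sketch in Python, not drawn from any of the studies cited here) models a population in which each non-adopter's chance of adopting grows with the share of adopters already in the system. Adoption crawls while the share is small and accelerates once it passes roughly the 10 to 20 percent mark; the population size, probabilities, and update rule are all assumptions chosen purely for illustration.

import random

random.seed(1)

POPULATION = 1000
BASE_RATE = 0.002   # chance of adopting with no social influence (assumed)
INFLUENCE = 0.08    # extra chance proportional to the current adoption share (assumed)

adopters = 5        # a handful of initial hobbyist users
history = []

for step in range(200):
    share = adopters / POPULATION
    new = 0
    for _ in range(POPULATION - adopters):
        if random.random() < BASE_RATE + INFLUENCE * share:
            new += 1
    adopters += new
    history.append(share)

# Growth is slow while the adoption share sits below the critical mass and
# speeds up markedly once that threshold is passed.
for step, share in enumerate(history):
    if step % 25 == 0:
        print(f"step {step:3d}: {share:.1%} adopted")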
Because a collection of communication services (electronic bulletin boards, Usenet groups, E-mail, Internet Relay Chats, home pages, gophers, and so forth) makes up the Internet, the concept of critical mass on the Internet could be looked upon as a variable, rather than a fixed percentage of adopters. Fewer people are required for sustaining an Internet Relay Chat conference or a Multi-User Dungeon than may be required for an electronic bulletin board or another type of discussion group. As already pointed out, a relatively large number of E-mail users are required for any two people to engage in conversation, yet only those two people constitute the critical mass for any given conversation. For a bulletin board to be viable, its content must have depth and variety. If the audience, who also serve as the source of information for the BBS, is too small, the bulletin board cannot survive for lack of content. A much larger critical mass will be needed for such a group to maintain itself, perhaps as many as 100 or more. The discretionary data base, as defined by Connolly and Thorn (1991), is a “shared pool of data to which several participants may, if they choose, separately contribute information” (p. 221). If no one contributes, the data base cannot exist. It requires a critical mass of participants to carry the free riders in the system, thus supplying this public good to all members, participants, or free riders. Though applied to organizations, this refinement of the critical mass theory is a useful way of thinking about Listservs, electronic bulletin boards, Usenet groups, and other Internet services, where participants must hold up their end of the process through written contributions.
Each of these specific Internet services can be viewed as we do specific television stations, small town newspapers, or special interest magazines. None of these may reach a strictly mass audience, but in conjunction with all the other stations, newspapers, and magazines distributed in the country, they constitute mass media categories. So the Internet itself would be considered the mass medium, while the individual sites and services are the components of which this medium is comprised.

Uses and Gratifications

Though research of mass media use from a uses-and-gratifications perspective has not been prevalent in the communication literature in recent years, it may help provide a useful framework from which to begin the work on Internet communication. Both Walther (1992b) and Rafaeli (1986) concur in this conclusion. The logic of the uses-and-gratifications approach, based in functional analysis, is derived from “(1) the social and psychological origins of (2) needs, which generate (3) expectations of (4) the mass media and other sources, which lead to (5) differential patterns of media exposure (or engagement in other activities), resulting in (6) need gratifications and (7) other consequences, perhaps mostly unintended ones” (Blumler and Katz, 1974).
Rosengren (1974) modified the original approach in one way by noting that the “needs” in the original model had to be perceived as problems and some potential solution to those problems needed to be perceived by the audience. Rafaeli (1986) regards the move away from effects research to a uses-and-gratifications approach as essential to the study of electronic bulletin boards (one aspect of the Internet medium). He is predisposed to examine electronic bulletin boards in the context of play or Ludenic theory, an extension of the uses-and-gratifications approach, which is clearly a purpose that drives much of Internet use by a wide spectrum of the population. Rafaeli summarizes the importance of this paradigm for electronic communication by noting uses-and-gratifications' comprehensive nature in a media environment where computers have not only home and business applications, but also work and play functions.
Additionally, the uses-and-gratifications approach presupposes a degree of audience activity, whether instrumental or ritualized. The concept of audience activity should be included in the study of Internet communication, and it already has been incorporated in one examination of the Cleveland Freenet (Swift, 1989).

Social presence and media richness theory

These approaches have been applied to CMC use by organizational communication researchers to account for interpersonal effects. But social presence theory stems from an attempt to determine the differential properties of various communication media, including mass media, in the degree of social cues inherent in the technology. In general, CMC, with its lack of visual and other nonverbal cues, is said to be extremely low in social presence in comparison to face-to-face communication (Walther, 1992a).
Media richness theory differentiates between lean and rich media by the bandwidth or number of cue systems within each medium. This approach (Walther, 1992a) suggests that because CMC is a lean channel, it is useful for simple or unequivocal messages, and also that it is more efficient “because shadow functions and coordinated interaction efforts are unnecessary. For receivers to understand clearly more equivocal information, information that is ambiguous, emphatic, or emotional, however, a richer medium should be used” (p. 57).
Unfortunately, much of the research on media richness and social presence has been one-shot experiments or field studies. Given the ambiguous results of such studies in business and education (Dennis & Gallupe, 1993), it can be expected that over a longer time period, people who communicate on Usenets and bulletin boards will restore some of those social cues and thus make the medium richer than its technological parameters would lead us to expect. As Walther (1992a) argues: “It appears that the conclusion that CMC is less socioemotional or personal than face-to-face communication is based on incomplete measurement of the latter form, and it may not be true whatsoever, even in restricted laboratory settings” (p. 63). Further, he notes that though researchers recognize that nonverbal social context cues convey formality and status inequality, “they have reached their conclusion about CMC/face-to-face differences without actually observing the very non-verbal cues through which these effects are most likely to be performed” (p. 63).
Clearly, there is room for more work on the social presence and media richness of Internet communication. It could turn out that the Internet contains a very high degree of media richness relative to other mass media, to which it has insufficiently been compared and studied. Ideas about social presence also tend to disguise the subtle kinds of social control that go on on the Net through language, such as flaming.

Network Approaches


Grant (1993) has suggested that researchers approach new communication technologies through network analysis, to better address the issues of social influence and critical mass. Conceptualizing Internet communities as networks might be a very useful approach. As discussed earlier, old concepts of senders and receivers are inappropriate to the study of the Internet. Studying the network of users of any given Internet service can incorporate the concept of interactivity and the interchangeability of message producers and receivers. The computer allows a more efficient analysis of network communication, but researchers will need to address the ethical issues related to studying people's communication without their permission.
These are just a few of the core concepts and theoretical frameworks that should be applied to a mass communication perspective on Internet communication. Reconceptualizing the Internet from this perspective will allow researchers both to continue to use the structures of traditional media studies and to develop new ways of thinking about those structures. It is, finally, a question of taxonomy. Thomas Kuhn (1974) has noted the ways in which similarity and resemblance are important in creating scientific paradigms. As Kuhn points out, scientists facing something new “can often agree on the particular symbolic expression appropriate to it, even though none of them has seen that particular expression before” (p. 466). The problem becomes a taxonomic one: how to categorize, or, more importantly, how to avoid categorizing in a rigid, structured way so that researchers may see the slippery nature of ideas such as mass media, audiences, and communication itself.

Cultural Framing of Computer/Video Games

Since their inception, computer and video games have both fascinated and provoked great fear among politicians, educators, academics, and the public at large. In the United States, this fear and fascination go back to the early 1980s, when Ronald Reagan extolled the virtues of games for creating a generation of highly skilled cold war warriors, while U.S. Surgeon General C. Everett Koop proclaimed games among the top health risks facing Americans. To be sure, such extreme cultural reactions to technological and cultural innovations are hardly new; mid-twentieth-century critics feared that television watchers would become addicted to television, never leaving their homes, and critics before them feared that film would pervert viewers.
In educational and social science discourse, the reactions to new technologies, including digital gaming technologies, have been equally excessive. Some advocates of digital game-based learning imply that developing educational games is a moral imperative, as kids of the "videogame generation" do not respond to traditional instruction (see Katz, 2000; Prensky, 2001). Other educators, such as Eugene Provenzo (1991; 1992), worry that games are inculcating children with hyper-competitive or warped sexual values. Looking at the range of values and powers that educators ascribe to games, games begin to look a bit like a Rorschach test of educators' attitudes toward modern social, technological, and media change, rather than an emerging and maturing entertainment medium. Indeed, similar statements were made about the potential for radio, film, television, and desktop computers to revolutionize learning, yet the overhead projector continues to be the most pervasive piece of technology in most classrooms (Cuban, 1986).
The recent enthusiasm for educational gaming directs researchers, politicians, game developers, and the public toward some important, overlooked issues. What are people learning about academic subjects from playing games such as SimCity, Civilization, Tropico, or SimEarth? Might games be used in formal learning environments? This essay argues that these are critical questions for game studies and for educational studies, particularly work in the learning sciences, and it offers some important practical and theoretical traditions that game studies can draw upon as it matures as a field.

Pawns of the Game: The Current State of Games-Based Social Science Research

In the United States, and increasingly in Europe, games such as Doom or Quake have garnered a disproportionate share of attention in the press, as they have become pawns in a culture war waged by cultural conservatives. As many gamers, critics, media scholars, and social researchers agree, this discussion has been devoid of any serious study of games. For example, in 2001, U.S. Attorney General John Ashcroft cited the game Dope Wars as an example of "the culture of violence" that may have contributed to a spate of recent deadly school shootings (Reuters News, April 4, 2001). How a simple, text-based game (based on a nearly 20-year-old DOS game) that is downloaded over the Internet, played on Palm Pilots, and features no graphical imagery contributes to increased violence among teens, given the amount of violence already in American culture, is questionable. As this example reveals, much of the rhetoric in this culture war has much less to do with any real knowledge of games than with fears about violence in American culture.
It is difficult for many to make sense of this contentious and politicized cultural debate because, to date, there has been very little disciplined study of gaming. Some social science researchers have compared "violent" games like Doom to "non-violent" games like Myst or compared the rates of aggressive and violent behavior between gamers and non-gamers. Unfortunately, this research suffers from many problematic conceptualizations: violent acts are removed from the narrative contexts in which they are situated (Jenkins, 1998); researchers use invalid comparison techniques, studying games from different genres that differ along multiple variables, such as comparing Myst, a slow-paced puzzle adventure game, to Castle Wolfenstein, a fast-paced 3D action shooter (Anderson & Dill, 2000). These studies generally lack any real-world evidence linking game-playing to acts of violence; they ignore broad trends that show inverse correlations between game-playing and violent behavior; finally, they make wild logical leaps in linking very constrained behaviors in laboratories to violent acts where people really get hurt. Anderson and Dill (2000) found that players who lost a round of Wolfenstein 3D "punished" opposing players with a noise blast that lasted 6.81 seconds, compared to Myst players, who blasted opponents for 6.65 seconds, a .16-second difference (there was no difference between players who won their round of Castle Wolfenstein and Myst players). To suggest that a .16-second increase in the duration of a noise blast is qualitatively the same as committing mass murder is not only an illogical leap but a disservice to the worthwhile enterprise of studying the root causes of tragic events like school shootings or youth violence. Fortunately, a handful of social science researchers such as Jonathon Freedman (2001) and Jeanne Funk (2001) have begun to call for more rigorous research and are taking a much more disciplined look at the impact of gaming on people's lives. Hopefully, other social science researchers will follow suit; as a generation of game players moves into academic positions, perhaps such poorly defined research studies will be challenged and a more rigorous body of research will evolve.
What's missing from contemporary debate on gaming and culture is any naturalistic study of what game-playing experiences are like, how gaming fits into people's lives, and the kinds of practices people are engaged in while gaming. Few, if any, researchers have studied how and why people play games, and what gaming environments are like. The few times researchers have asked these questions, they have found surprising results. In 1985, Mitchell gave Atari 2600 consoles to twenty families and found that most families used the game systems as a shared play activity. Instead of leading to poor school performance, increased family violence, or strained family interactions, video games were a positive force on family interactions, "reminiscent of days of Monopoly, checkers, card games, and jigsaw puzzles" (Mitchell, 1985, p. 134). This study suggests that investigators might benefit by acknowledging the cultural contexts of gaming and studying game-playing as a cultural practice. If nothing else, it highlights the importance of putting aside preconceptions and examining gamers on their own terms.

Rethinking the role of Educational and Social Science Research in Digital Gaming

Underlying this unease about video game violence research is a growing disconnect between anti-gaming rhetoric and people's actual experiences playing games (see Herz, 1996; Poole, 2000). The first generation of game players is now in its 30s. Despite (and perhaps because of) the hundreds of hours I've spent playing war games, I'm pretty much a pacifist. I love Return to Castle Wolfenstein, yet I'd never own a gun. The successes of such books as Joystick Nation and Trigger Happy suggest there is a maturing generation of gamers that feels the same way: games are integral parts of our lives, yet they've largely gone unexamined.
So far, concerns about the effects of "violent" video games have drawn our attention away from the broader social roles and cultural contexts of gaming. There is some evidence that this trend could be changing: in the past six months, humanities researchers have turned more attention to games. Art museums in both the United States and the United Kingdom have developed or are planning substantial game exhibits in 2000-2002 (see Barbican, 2002). Panels at conferences are almost ready to give up on the "Are games art?" question and begin asking "What kinds of art are they?" or exploring how and why they work (Jenkins, in press; Jenkins & Squire, 2002). Other humanities researchers are examining games to see what they might teach us about the future of interactive narrative (Murray, 1997).
Despite this increasing attention as a maturing medium, the pedagogical potential of games and the social contexts of gaming have been woefully unexamined. Already, entertainment games allow learners to interact with systems in increasingly complex ways. Digital game players can relive historical eras (as in Pirates!), investigate complex systems like the Earth's chemical & life cycles (SimEarth), govern island nations (Tropico), manage complex industrial empires (Railroad Tycoon), or, indeed, run an entire civilization (Civilization series). Or they might travel in time to Ancient Greece (Caesar I, II, & III), Rome (Age of Empires I and II), or North America (Colonization), or manage an ant colony, farm, hospital, skyscraper, theme park, zoo, airport, or fast food chain. Anecdotal evidence from teachers suggests that the impact of gaming on millions of gamers who grew up playing best-selling games such as SimCity, Pirates!, or Civilization is starting to be felt.
Still, little is known about what players are learning through playing SimCity. Is it deepening their appreciation for geography, helping them develop more robust understandings of their environment, or perhaps promoting misconceptions about civic planning? How does a game such as Civilization III work as a cultural simulation? Does it impact players' conceptions of politics or diplomacy? Is there any way to reappropriate Civilization for use in history classes? Given the immense influence of SimCity and Civilization on present game design, what innovations might be sparked by games built around science, engineering, literature, or architecture subjects? How might these innovations have an impact on the rest of game design?
These questions suggest at least three fruitful contributions from an educational or social science perspective: (1) Studying the role that games like SimCity and Civilization play in people's lives and how it mediates their understandings of other phenomena; (2) Examining how such games can be used to support learning in formal and informal learning contexts; (3) Creating and examining new modes of gameplay through games that draw metaphors from other domains. Although there has been woefully little research in this area, there are several research traditions in education and social science outside of media effects research tradition that offer useful models for thinking about gameplay.

Studying the Impact of Gaming

With SimCity more than a decade old, a generation of youth has grown up with edutainment. Yet, we know very little about what they are learning playing these games (if anything). Are sim games, civilization-building games, or war games having any impact on how students perceive social studies? Games such as SimCity depict social bodies as complex dynamic systems and embody concepts like positive feedback loops that are central to systems thinking. Are students developing intuitions about systems as a result of playing these games? Do players think they are learning anything about history or urban planning through these games? Are the perceived educational benefits part of the attraction of these games?
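
To make the systems-thinking point concrete, the sketch below models a deliberately tiny, hypothetical positive feedback loop of the kind such games embody: more residents generate more tax revenue, revenue improves services, and better services attract more residents. This is not SimCity's actual model; the variables and every coefficient are assumptions chosen only to show how a reinforcing loop compounds over time.

# Minimal illustration of a reinforcing (positive) feedback loop, of the kind
# city-building games embody. NOT SimCity's actual model; all numbers are assumed.

population = 1_000.0
services = 1.0                               # abstract "quality of services" index

for year in range(1, 11):
    tax_revenue = population * 0.05          # assumed flat per-capita tax
    services += tax_revenue / 5_000.0        # revenue improves services
    growth_rate = 0.01 * services            # better services attract residents
    population *= 1.0 + growth_rate
    print(f"year {year:2d}: population={population:8.0f}  services={services:5.2f}")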
The study of games and learning might begin with qualitative study of game players and game-playing communities. Although there have been a few survey or experimental studies of game players (see Malone, 1981; Cordova & Lepper, 1996), there have been few studies characterizing players' interactions and experiences in game-playing environments since Mitchell's (1985) study of families who were given game consoles. Mitchell studied how the consoles affected twenty families, finding that playing them was an important part of family play and brought families closer together, much as a traditional board game might. More recently, researchers such as Funk and colleagues (1996) have studied correlations between game players' characteristics and popular genres, but these broad statistical studies fail to open up the complex relationships between game players and their games or acknowledge the social contexts in which game playing is situated. Even a quick glance at fan communities around games such as SimCity, Dance Dance Revolution, Railroad Tycoon, Everquest, or The Sims, each of which has dozens of fan websites where players create and trade game objects, maps, levels, scenarios, and stories, points to rich relationships between fans and these games and complex social structures that mediate the game-playing experience (see Jenkins, 2001; Squire, 2000; Yee, 2000 for descriptions of these communities).
The closest examples of studying gaming communities may be examinations of online communities. In the 1990s, Sherry Turkle and Amy Bruckman studied MOO players, yielding insights into how people negotiate among their many virtual identities (Bruckman, 1993a; 1993b; 1994; Turkle, 1996). These MUD and MOO studies were not specifically of game-playing communities, but they have provided both theoretical models and specific insights about online behavior that have become foundational to the design of online games and learning environments alike. Drawing more explicitly from anthropological, educational, and cultural psychology traditions (e.g. Cole, 1996), future study of gaming communities might focus specifically on the shared practices, language, resources, understandings, and roles that emerge through game play. Among the outcomes of examining gameplay in naturalistic contexts might be guidelines for more usable and playable games, ways of leveraging and promoting social interactions and relationships in gameplay, and insights for creating games that appeal to broader audiences.

Games in Educational Contexts

Most people assume that games like SimCity are used frequently in geography or urban planning classes. Indeed, Maxis has published a set of resources for teachers on its website, touting that, "SimCity 3000(tm) can be used in the classroom to enhance just about any instructional unit. It can stand alone as an enrichment computer activity, or it can be used as a pivotal activity connected to other activities and projects done before, during, or after using the computer program. Use the lessons in this guide to integrate SimCity 3000 into your curriculum, with minimal preparation, or to create custom lessons to suit your needs."
As Doug Church commented at the 2002 Electronic Entertainment Exposition, while most people who have played SimCity recognize that it can be an excellent resource for understanding urban planning, most people would also not want to live in a real city designed by someone who has only played SimCity. As urban planner Kenneth Kolson points out, SimCity potentially teaches the player that mayors are omnipotent and that politics, ethnicity, and race play no role in urban planning (Kolson, 1996). Using SimCity 2000 at Boys and Girls Clubs, Barab and colleagues (in preparation) have found that students definitely learn from exploring relationships between supply and demand, population growth, and taxation, but they might also develop naive concepts of how cities form, grow, and evolve. For example, one six-year-old player noted that people began moving into his city when there was electricity, because people wanted to have lights for seeing in the dark. This example illuminates how the process of interpreting game play, of drawing analogies between symbolic representations in the game and their real-life analogs, is one of active interpretation, and it suggests that students might benefit from systematic explanations or presentations of information. In similar research on anchored instruction and problem-based learning environments, John Bransford and colleagues have found that students perform best when given access to lectures in the context of completing open-ended, complex problem-solving tasks (Schwartz & Bransford, 2001).
The challenges behind using games to support learning are far from new, particularly in social studies education. In 1973, Wentworth and Lewis summarized the findings from nearly fifty research studies on learning through gaming: "In the majority of these studies, students did neither significantly better nor worse than other learning experiences in their impact on student achievement as evidenced by paper and pencil scores." In his 1991 review of the research on games and simulations in social studies, Clegg reached similarly inconclusive findings. Consistent with contemporary instructional design theory (e.g. Heinich, Molenda, Russell, & Smaldino, 1996), Clegg argues that the instructional context that envelops gaming is a more important predictor of learning than the game itself. Specifically, how the game is contextualized, the kinds of cooperative and collaborative learning activities embedded in gameplay, and the quality and nature of debriefing are all critically important elements of the gaming experience. This tradition of games and simulations in instructional technology, chiefly promulgated through the Society for the Advancement of Games and Simulations in Education and Training and the Sage journal Simulation & Gaming, has resulted in a rich body of practical knowledge about designing effective games to support learning; however, there is actually very little agreement among educational technologists as to the theoretical underpinnings of why we should use games, how games should be designed to support learning, or in what instructional situations games make the most sense (Gredler, 1996).
The research on games and simulations in education cautions against overexuberance about the potential of digital games to transform education. In using a game such as SimCity, there minimally needs to be a close match among desired learning outcomes, available computer and supporting human resources, learner characteristics (such as familiarity with game conventions), "educational" game play, and potential supplementary learning experiences. Fortunately, one can imagine creating instructional resources around a game like SimCity or Civilization that push students to think about their game-playing more deeply. For example, Civilization players might create maps of their worlds and compare them to global maps from the same time period. Why are they the same? Why are they different? Students might be required to critique the game and explicitly address built-in simulation biases. Finally, students might draw timelines, write histories, or create media based on the history of their civilization. The possibilities for using a game like Civilization as a springboard into studying history are endless, but so far fewer than three magazine or journal articles have been published on the topic, and no one has done empirically grounded research on the successes and challenges of using such a game to support learning (see Berson, 1996; Hope, 1996; Lee, 1994; Prensky, 2001; Teague & Teague, 1995).

Creating Next-Generation Educational Media

Despite these cautions about the potential of games to support learning, games may be the most fully realized educational technology produced to date. Tom Malone (1981) showed how games use challenge, fantasy, player control, and curiosity-invoking designs to create intrinsically motivating environments. More recently, Lloyd Rieber (1996) has argued that digital games engage players in productive play - learning that occurs through building microworlds, manipulating simulations, and playing games. Rieber gives reason for renewed optimism about using games to support learning by leveraging the increasing power of the computer to immerse the player in interactive simulated worlds. Whereas educational games have historically relied heavily on exogenous game formulas - games where content is inserted into a generic gaming template, like hangman - a game like SimCity might be thought of as an endogenous game design, where the academic content is seamlessly integrated with the gaming mechanics. In an endogenous game, players learn the properties of a virtual world by interacting with its symbology, learning to detect relationships among these symbols, and inferring the game rules that govern the system.
While edutainment games such as SimCity and Civilization are intriguing educational materials, the most promising developments in educational gaming may come through games that are explicitly designed to support learning. One example of such an effort is the Games-to-Teach Project, led by Randy Hinrichs at Microsoft Research and Henry Jenkins of MIT's Comparative Media Studies program. In 2001-2002, the Games-to-Teach Project (http://cms.mit.edu/games/education/) presented 10 conceptual prototypes of next-generation educational games to support learning in math, science, and engineering at the advanced high school and introductory undergraduate levels. Among these prototypes are: The Jungle of the Optics, a game where players use a set of lenses, telescopes, cameras, optical tools, and optics concepts to solve optics problems within a role-playing environment; Hephaestus, a massively multiplayer resource-management game where players learn physics and engineering by designing robots to colonize a planet; Replicate!, an action game where players learn virology and immunology by playing a virus attempting to infect a human body and replicate so that it may spread through a population; and Supercharged!, a flying/racing game where players learn electromagnetism by piloting a vessel that has adopted the properties of a charged particle through electric and magnetic fields. The Games-to-Teach team will be developing and testing two of these games in 2002-2003.
Such games will demand a broad, industry-wide investment if they are to succeed. In the long term, this kind of project requires creative game designers who understand the tools and capabilities of the medium, educators who can help ensure an effective product, and visionary thinkers who can design a suite of games that will appeal to a broad market. A primary goal of the Games-to-Teach Project has been to create games that will engage a broad audience of players by creating rich characters, nuanced gameplay, complex social networks, and interactive stories that tap into a broad range of emotions and player experiences. Hopefully, other projects trying different approaches will emerge in the next few years, as there have been signs that the industry and the medium are ready for such a challenge.
Understanding and unpacking how learning occurs through game play, examining how gameplay can be used to support learning in formal learning environments, and designing games explicitly to support learning are three areas where educational research can contribute to game studies. In the next section, I argue that socio-cultural learning theory, activity theory, and educational research on transfer are three theoretical traditions that might also be of use to game studies. Although I present each of them from an educational technology perspective, each is interdisciplinary in origin, sitting at the nexus of anthropology, sociology, cultural psychology, cognitive psychology, and educational studies; for simplicity, they will be referred to collectively as the Learning Sciences.

Unpacking Gameplay Through The Learning Sciences

A fundamental tension facing game studies is this: if games do not promote or "teach" violence, how can researchers claim that they might have a lasting impact on students' cognitive development? Far from trivial, this concern touches on many core social science research issues. What is the role of the viewer/participant in consuming media? What are the cultural and social contexts of media consumption? How does - or doesn't - knowledge transfer from one context to the next? Educational discussions of transfer, practice, and social activity offer three promising ways for game studies to think about gameplay as cultural practice.
Transfer. Much of the hype and hyperbole surrounding games and their potential impact on human behavior (whether it be fear about games' impact on human behavior or hope that games are teaching students to think more sharply or more quickly) rests on assumptions about activities developed in game-playing contexts transferring to new contexts. In educational research, this phenomenon is commonly called the "transfer problem" (see Detterman & Sternberg, 1993). In the early 1900s, E.L. Thorndike and colleagues (e.g. Thorndike & Woodworth, 1901) conducted a pioneering set of studies challenging popular notions that the mind functions as a "mental muscle" and that excellence in general subjects such as Latin or calculus could result in increased mental functioning. Thorndike and Woodworth (1901, cited in Schwartz & Bransford, 2001) write: "The mind is ...a machine for making particular reactions to particular situations. It works in great detail, adapting itself to the special data of which it has had experience.... Improvements in any single mental function rarely brings about equal improvement in any other function, no matter how similar, for the working of every mental function group is conditioned by the nature of the data of each particular case" (pp. 249-250).
One classic example of the challenges of transferring thinking across contexts is mathematics. Across industrialized nations, most citizens learn the basic skills needed to solve everyday mathematical problems using fractions or algebra, but most people rarely use more than the simplest computational math in their everyday lives. Psychologists working in constructivist and situated learning traditions argue that human behavior is circumscribed by context (e.g. Barab & Cherkes-Julikowski, 1999; Brown, Collins, & Duguid, 1989; Solomon, 1993). The purpose of human activity, our goals and intentions, constrains the kinds of information we collect from the environment and how this information is used (Barab, et al., 1999; Lave, 1988). For example, studies have shown that students who learn algebra through problem-solving are more likely to use algebra in solving problems than students who learn algebra through traditional means (e.g. Cognition and Technology Group at Vanderbilt, 1992). Situational constraints also shape and constrain activity. Studies of navigators sailing ships, office workers using computers, and students in classrooms all show how the tools and resources available in our environment both guide thinking and constrain action (Solomon, 1993). For example, people working with fractions while cooking frequently simplify the problem to make the mathematics easier, or manually divide ingredients using kitchen tools rather than using algebra. As a result, people who have learned algebra become very good at using algebra to solve textbook-like problems within school situations, but develop very different strategies for solving real-world problems (Bransford, et al., 1977; Lave & Wenger, 1991; Pea, 1993).
Unfortunately for educators looking to use games to support learning, this skepticism about transfer limits what we can hope players might learn from gaming. While pundits and theorists suggest that game-playing might be increasing kids' critical thinking or problem-solving skills (see Katz, 2000; Prensky, 2000), research on transfer gives very little reason to believe that players are developing skills that are useful in anything but very similar contexts. A skilled Half-Life player might develop skills that are useful in playing Unreal Tournament (a very similar game), but this does not mean that players necessarily develop generalizable "strategic thinking" or "planning" skills. Just because a player can plan an attack or develop lightning-quick reactions in Half-Life does not mean that she can plan her life effectively, or think quickly in other contexts, such as in a debate or in a courtroom - one of the main reasons being that these are entirely different contexts which demand very different social practices.
The particularities of game-playing as social practice and the contrived, computer-mediated nature of digital game play raise serious questions for educators hoping to use gaming to support learning that will transfer across different contexts. What are the goals and intentions of players in gaming environments? Do these overlap with the situational constraints of other social or classroom practices? Do game players have opportunities to think with authentic tools and resources in gaming environments? Examining gameplay as social practice provides one model for approaching these questions.

Game-Playing as Social Practice

Anthropologists Jean Lave and Etienne Wenger (1991) use the term "practice" to discuss how actions are situated in their socio-cultural contexts. Essentially, a practice is an activity that involves skills, resources, and tools, and is mediated by personal and cultural purposes. One way to produce more meaningful educational games would be to design games in which players are engaged in richer, more meaningful practices. A game like Civilization III, which involves analyzing geography in order to determine the best geographic location for a city, negotiating trade deals with other civilizations, and making taxation and social spending decisions, comes closer to the kind of meaningful practices educators would like to produce than, say, Half-Life.
Note that despite the wonderful educational opportunities in playing Civilization III, playing the game is still simulated activity, as opposed to participation in historical or social practice. Sasha Barab and Tom Duffy (2000) distinguish between practice fields and legitimate participation in social practice. Playing Civilization III is exploring a simulation or model, whereby learning occurs through interacting with and observing the outcomes of the model; this is clearly not the same as actually participating in social practices valued outside of school, such as writing history, participating in political, governmental, or commercial institutions that extend beyond the school context, or creating a model for research purposes. In short, playing Civilization might be a tool that can assist students in understanding social studies, but playing the game is not necessarily participating in historical, political, or geographical analysis. Therefore, building on the earlier discussion of transfer, there is very good reason to believe that students may not use understandings developed in the game - such as the political importance of a natural resource like oil - as tools for understanding phenomena outside the game, such as the economics behind the Persian Gulf War or contemporary foreign policy, even in a game as rich as Civilization III.
Understanding learning as participation in social practice, however, also suggests ways for educators to transform game playing into participation in social practice. For example, Civilization could be presented as a tool for answering historical questions, such as why Europeans colonized North America rather than the reverse, or what the comparative advantages and disadvantages of political isolationism are. In a hypothetical Civilization III unit, students might spend 25 percent of their time playing the game and the remainder creating maps and historical timelines, researching game concepts, drawing parallels to historical or current events, or interacting with other media, such as books or videos. In this way, the educational value of the game-playing experience comes not just from the game itself, but from the creative coupling of educational media with effective pedagogy to engage students in meaningful practices. Indeed, research on teachers' adoption and adaptation of materials suggests that teachers will adapt the learning materials we create to maximize their potential to support learning, regardless of designers' intentions (Squire, Barnett, MaKinster et al., in press). As such, the pedagogical value of a medium like gaming cannot be realized without understanding how it is enacted through classroom use.

Activity Theory

Conceptualizing practice broadly enough to capture the individual's goals and intentions, the tools and resources employed in practice, and the social organizations and institutions that mediate practice, all within empirically grounded cases, is challenging. Restated, how can one theoretical framework account for the moment-to-moment interactions that constitute gameplay (including the player's goals and intentions) while also accounting for the broader socio-cultural contexts that situate the activity?
Over the past decade, socio-cultural psychologists have struggled with this issue and have proposed Activity Theory as one theoretical framework for understanding how human activity is mediated by both tools and cultural context (Engeström, 1987; 1993). For an Activity theorist, the minimal meaningful context is the dialectical relation between human agents (subjects) and that which they act upon (objects), as mediated by tools, language, and socio-cultural contexts (Engeström 1987; 1993). A generic activity system is portrayed in Figure 1. Subjects are the actors who are selected as the point of view of the analysis. Objects are that "at which the activity is directed and which is molded or transformed into outcomes with the help of physical and symbolic, external and internal tools" (Engeström, 1993, p. 67, italics in the original). As such, objects can be physical objects, abstract concepts, or even theoretical propositions. Tools are the concepts, physical tools, artifacts, or resources that mediate a subject's interactions with an object. The community of a system refers to those with whom the subject shares the transformation of the object - the cultural-historical communities in which a subject's activity is situated. Communities mediate activity through division of labor and shared norms and expectations.
 
Figure 1: Visual Depiction of an Activity System
Understanding the basic components of an activity system can be a useful way of mapping and categorizing key components of experience. For Activity Theorists, however, it is not the presence of these components in isolation that makes for meaningful analysis, but rather the interactions among these components. Engeström (1993) refers to such relations as primary and secondary contradictions. Primary contradictions are those that occur within a component of a system (e.g. tools), while secondary contradictions are those that occur between components of a system (e.g. subjects and tools). In a situation where Civilization III is used in formal learning environments, one might imagine tensions between winning Civilization III and learning social studies as the object of the activity system, depending on whether the student or the teacher is taken as the subject. Predicated on Hegelian/Marxist philosophy, Activity Theory suggests that the synthesis and resolution of such contradictions brings change and evolution to the system, and Activity Theorists argue that characterizing the tensions of an activity system can help participants understand and react to changes in the system.
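For readers who find a concrete representation helpful, the components just described can be sketched as a simple data structure. The following Python sketch is purely illustrative and is not part of Engeström's formulation: the names ActivitySystem and divergences, and the example field values, are invented for this essay, and the comparison function is only a rough stand-in for where tensions may surface when the student's and teacher's views of the same classroom activity diverge.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ActivitySystem:
    """Toy representation of the activity-system components described above.
    Illustrative only; the field names paraphrase the components and are
    not Engestrom's own notation."""
    subject: str                     # point of view of the analysis
    object: str                      # what the activity is directed toward
    tools: List[str] = field(default_factory=list)      # mediating artifacts
    community: List[str] = field(default_factory=list)  # who shares the object
    division_of_labor: Dict[str, str] = field(default_factory=dict)


def divergences(a: ActivitySystem, b: ActivitySystem) -> List[str]:
    """Report components on which two views of the same activity differ,
    a rough indication of where tensions may arise."""
    diffs = []
    if a.object != b.object:
        diffs.append(f"object: {a.object!r} vs. {b.object!r}")
    if set(a.tools) != set(b.tools):
        diffs.append("tools differ")
    if set(a.community) != set(b.community):
        diffs.append("community differs")
    return diffs


# The tension described in the text: winning the game vs. learning social
# studies, depending on whether the student or the teacher is the subject.
student_view = ActivitySystem(
    subject="student",
    object="winning Civilization III",
    tools=["Civilization III"],
    community=["classmates"],
)
teacher_view = ActivitySystem(
    subject="teacher",
    object="learning social studies",
    tools=["Civilization III", "curriculum unit"],
    community=["classmates", "teacher"],
)

for tension in divergences(student_view, teacher_view):
    print(tension)

In practice, of course, Activity Theorists work with rich qualitative cases rather than code; the sketch simply makes the component structure and the student/teacher tension explicit.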
Activity Theory offers a theoretical framework with strong intuitive appeal for researchers examining educational games. Growing out of Vygotsky's discussion of the mediating role of artifacts in cognition (1978), Activity Theory provides a theoretical language for looking at how an educational game or resource mediates players' understandings of other phenomena while acknowledging the social and cultural contexts in which game play is situated. Learning is conceptualized not as a function of the game itself - or even a simple coupling of the player and game; rather, learning is seen as the transformations that occur through the dynamic relations between subjects, artifacts, and mediating social structures.
As game studies matures as a field, it will no doubt draw theoretical concepts from a range of disciplines and research traditions. Thus far, most social science research around gaming has come from the media effects tradition, leaving a range of other research traditions unrepresented. The impact of digital games on learning and behavior, as conceptualized by researchers in the learning sciences community, is an important but frequently overlooked area of game studies. My hope is that in the coming months, discussions around gaming and cognition will draw upon research in the learning sciences. While I have argued for the value of theoretical positions developing out of cultural psychology, cognitive science, and educational psychology, there is certainly room at the game studies table for other researchers in these fields to contribute their theoretical models, as well as for researchers from the Humanities, History of Science, Media Studies, and other disciplines.


The author would like to thank Henry Jenkins, Principal Investigator of the Games-to-Teach Project, for sharing his vision of using educational games to expand the cultural sphere of gaming and for his contributions to this paper. The author would also like to thank Alex Chisholm, Co-Producer of our first Games-to-Teach Project prototype on optics, for comments on earlier drafts of the paper.