
Information Systems Success: The Quest for the Dependent Variable

William H. DeLone, Department of Management, The American University, Washington, D.C. 20016

Ephraim R. McLean, Computer Information Systems, Georgia State University, Atlanta, Georgia 30302-4015

A large number of studies have been conducted during the last decade and a half attempting to identify those factors that contribute to information systems success. However, the dependent variable in these studies, I/S success, has been an elusive one to define. Different researchers have addressed different aspects of success, making comparisons difficult and the prospect of building a cumulative tradition for I/S research similarly elusive. To organize this diverse research, as well as to present a more integrated view of the concept of I/S success, a comprehensive taxonomy is introduced. This taxonomy posits six major dimensions or categories of I/S success: SYSTEM QUALITY, INFORMATION QUALITY, USE, USER SATISFACTION, INDIVIDUAL IMPACT, and ORGANIZATIONAL IMPACT. Using these dimensions, both conceptual and empirical studies are then reviewed (a total of 180 articles are cited) and organized according to the dimensions of the taxonomy. Finally, the many aspects of I/S success are drawn together into a descriptive model and its implications for future I/S research are discussed.

(INFORMATION SYSTEMS SUCCESS; INFORMATION SYSTEMS ASSESSMENT; MEASUREMENT)

Introduction

At the first meeting of the International Conference on Information Systems (ICIS) in 1980, Peter Keen identified five issues which he felt needed to be resolved in order for the field of management information systems to establish itself as a coherent research area. These issues were:

(1) What are the reference disciplines for MIS?
(2) What is the dependent variable?
(3) How does MIS establish a cumulative tradition?
(4) What is the relationship of MIS research to computer technology and to MIS practice?
(5) Where should MIS researchers publish their findings?

1047-7047/92/0301/0060/$01.25

Copyright © 1992, The Institute of Management Sciences


Of the five, the second item, the dependent variable in MIS research, is a particularly important issue. If information systems research is to make a contribution to the world of practice, a well-defined outcome measure (or measures) is essential. It does little good to measure various independent or input variables, such as the extent of user participation or the level of I/S investment, if the dependent or output variable, I/S success or MIS effectiveness, cannot be measured with a similar degree of accuracy.

The importance of defining the I/S dependent variable cannot be overemphasized. The evaluation of I/S practice, policies, and procedures requires an I/S success measure against which various strategies can be tested. Without a well-defined dependent variable, much of I/S research is purely speculative.

In recognition of this importance, this paper explores the research that has been done involving MIS success since Keen first issued his challenge to the field and attempts to synthesize this research into a more coherent body of knowledge. It covers the formative period 1981-87 and reviews all those empirical studies that have attempted to measure some aspects of "MIS success" and which have appeared in one of the seven leading publications in the I/S field. In addition, a number of other articles are included, some dating back to 1949, that make a theoretical or conceptual contribution even though they may not contain any empirical data. Taken together, these 180 references provide a representative review of the work that has been done and provide the basis for formulating a more comprehensive model of I/S success than has been attempted in the past.

A Taxonomy of Information Systems Success

Unfortunately, in searching for an I/S success measure, rather than finding none, there are nearly as many measures as there are studies. The reason for this is understandable when one considers that "information," as the output of an information system or the message in a communication system, can be measured at different levels, including the technical level, the semantic level, and the effectiveness level. In their pioneering work on communications, Shannon and Weaver (1949) defined the technical level as the accuracy and efficiency of the system which produces the information, the semantic level as the success of the information in conveying the intended meaning, and the effectiveness level as the effect of the information on the receiver.

Building on this, Mason (1978) relabeled "effectiveness" as "influence" and defined the influence level of information to be a "hierarchy of events which take place at the receiving end of an information system which may be used to identify the various approaches that might be used to measure output at the influence level" (Mason 1978, p. 227). This series of influence events includes the receipt of the information, an evaluation of the information, and the application of the information, leading to a change in recipient behavior and a change in system performance.

The concept of levels of output from communication theory demonstrates the serial nature of information (i.e., a form of communication). The information system creates information which is communicated to the recipient who is then influenced (or not!) by the information. In this sense, information flows through a series of stages from its production through its use or consumption to its influence on individual and/or organizational performance. Mason's adaptation of communication theory to the measurement of information systems suggests therefore that there may need to be separate success measures for each of the levels of information.

FIGURE 1. Categories of I/S Success.

Shannon and Weaver (1949): Technical Level | Semantic Level | Effectiveness or Influence Level
Mason (1978): Production | Product | Receipt | Influence on Recipient | Influence on System
Categories of I/S Success (DeLone and McLean): System Quality | Information Quality | Use | User Satisfaction, Individual Impact | Organizational Impact

In Figure 1, the three levels of information of Shannon and Weaver are shown, together with Mason's expansion of the effectiveness or influence level, to yield six distinct categories or aspects of information systems. They are SYSTEM QUALITY, INFORMATION QUALITY, USE, USER SATISFACTION, INDIVIDUAL IMPACT, and ORGANIZATIONAL IMPACT.

Looking at the first of these categories, some I/S researchers have chosen to focus on the desired characteristics of the information system itself which produces the information (SYSTEM QUALITY). Others have chosen to study the information product for desired characteristics such as accuracy, meaningfulness, and timeliness (INFORMATION QUALITY). In the influence level, some researchers have analyzed the interaction of the information product with its recipients, the users and/or decision makers, by measuring USE or USER SATISFACTION. Still other researchers have been interested in the influence which the information product has on management decisions (INDIVIDUAL IMPACT). Finally, some I/S researchers, and to a larger extent I/S practitioners, have been concerned with the effect of the information product on organizational performance (ORGANIZATIONAL IMPACT).
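Read this way, the taxonomy is simply an ordered set of measurement categories, each with its own family of measures. The sketch below is an illustrative encoding only; it is not from the original paper, and the example measures are a small sample drawn from the studies reviewed later.

```python
from enum import Enum

class SuccessDimension(Enum):
    """The six I/S success categories, ordered from the technical
    level of Figure 1 through to organizational influence."""
    SYSTEM_QUALITY = 1
    INFORMATION_QUALITY = 2
    USE = 3
    USER_SATISFACTION = 4
    INDIVIDUAL_IMPACT = 5
    ORGANIZATIONAL_IMPACT = 6

# A few example measures per dimension, sampled from the studies
# reviewed in this paper (the lists are far from exhaustive).
EXAMPLE_MEASURES = {
    SuccessDimension.SYSTEM_QUALITY: ["response time", "reliability", "ease of use"],
    SuccessDimension.INFORMATION_QUALITY: ["accuracy", "timeliness", "relevance"],
    SuccessDimension.USE: ["frequency of use", "connect time", "number of queries"],
    SuccessDimension.USER_SATISFACTION: ["overall satisfaction", "multi-item instruments"],
    SuccessDimension.INDIVIDUAL_IMPACT: ["decision time", "decision confidence"],
    SuccessDimension.ORGANIZATIONAL_IMPACT: ["cost reduction", "profitability"],
}
```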

Once this expanded view of I/S success is recognized, it is not surprising to find that there are so many different measures of this success in the literature, depending upon the aspect of I/S on which the researcher has focused his or her attention. Some of these measures have been merely identified, but never used empirically. Others have been used, but have employed different measurement instruments, making comparisons among studies difficult.

Two previous articles have made extensive reviews of the research literature and have reported on the measurement of MIS success that had been used in empirical studies up until that time. In a review of studies of user involvement, Ives and Olson (1984) adopted two classes of MIS outcome variables: system quality and system acceptance. The system acceptance category was defined to include system usage, system impact on user behavior, and information satisfaction. Half a decade earlier, in a review of studies of individual differences, Zmud (1979) considered three categories of MIS success: user performance, MIS usage, and user satisfaction.

Both of these literature reviews made a valuable contribution to an understanding of MIS success, but both were more concerned with investigating independent variables (i.e., user involvement in the case of Ives and Olson and individual differences in the case of Zmud) than with the dependent variable (i.e., MIS success). In contrast, this paper has the measurement of the dependent variable as its primary focus. Also, over five years have passed since the Ives and Olson study was published and over ten years since Zmud's article appeared. Much work has been done since these two studies, justifying an update of their findings.

To review this recent work and to put the earlier research into perspective, the six categories of I/S success identified in Figure 1 (SYSTEM QUALITY, INFORMATION QUALITY, INFORMATION USE, USER SATISFACTION, INDIVIDUAL IMPACT, and ORGANIZATIONAL IMPACT) are used in the balance of this paper to organize the I/S research that has been done on I/S success.

In each of the six sections which follow, both conceptual and empirical studies are cited. While the conceptual citations are intended to be comprehensive, the empirical studies are intended to be representative, not exhaustive. Seven publications, from the period January 1981 to January 1988, were selected as reflecting the mainstream of I/S research during this formative period. Additional studies, from other publications, as well as studies from the last couple of years, could have been included; but after reviewing a number of them, it became apparent that they merely reinforced rather than modified the basic taxonomy of this paper.

In choosing the seven publications to be surveyed, five (Management Science, MIS Quarterly, Communications of the ACM, Decision Sciences, and Information & Management) were drawn from the top six journals cited by Hamilton and Ives (1983) in their study of the journals most respected by MIS researchers. (Their sixth journal, Transactions on Database Systems, was omitted from this study because of its specialized character.) To these five were added the Journal of MIS, a relatively new but important journal, and the ICIS Proceedings, which is not a journal per se but represents the published output of the central academic conference in the I/S field. A total of 100 empirical studies are included from these seven sources.

As with any attempt to organize past research, a certain degree of arbitrariness occurs. Some studies do not fit neatly into any one category and others fit into several. In the former case, every effort was made to make as close a match as possible in order to retain a fairly parsimonious framework. In the latter case, where several measures were used which span more than one category (e.g., measures of information quality and extent of use and user satisfaction), these studies are discussed in each of these categories. One consequence of this multiple listing is that there appear to be more studies involving I/S success than there actually are.

To decide which empirical studies should be included, and which measures fit in which categories, one of the authors of this paper and a doctoral student (at another university) reviewed each of the studies and made their judgments independently. The interrater agreement was over 90%. Conflicts over selection and measure assignment were resolved by the second author.
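For concreteness, a simple percent-agreement statistic of the kind reported here can be computed as in the sketch below; the rater data shown are hypothetical, not the authors' actual assignments.

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of items to which two raters assigned the same category."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must judge the same set of items.")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical category assignments for five studies.
rater_a = ["USE", "USER SATISFACTION", "SYSTEM QUALITY", "USE", "INDIVIDUAL IMPACT"]
rater_b = ["USE", "USER SATISFACTION", "SYSTEM QUALITY", "USE", "ORGANIZATIONAL IMPACT"]

print(f"Interrater agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 80%
```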

In each of the following sections, a table is included which summarizes the empirical studies which address the particular success variable in question. In reporting the success measures, the specific description or label for each dependent variable, as used by the author(s) of the study, is reported. In some cases the wording of these labels may make it appear that the study would be more appropriately listed in another table. However, as was pointed out earlier, all of these classification decisions are somewhat arbitrary, as is true of almost all attempts to organize an extensive body of research on a retrospective basis.

System Quality: Measures of the Information Processing System Itself

In evaluating the contribution of information systems to the organization, some I/S researchers have studied the processing system itself. Kriebel and Raviv (1980, 1982) created and tested a productivity model for computer systems, including such performance measures as resource utilization and investment utilization. Alloway (1980) developed 26 criteria for measuring the success of a data processing operation. The efficiency of hardware utilization was among Alloway's system success criteria.

Other authors have developed multiple measures of system quality. Swanson (1974) used several system quality items to measure MIS appreciation among user managers. His items included the reliability of the computer system, on-line response time, the ease of terminal use, and so forth. Emery (1971) also suggested measuring system characteristics, such as the content of the data base, aggregation of details, human factors, response time, and system accuracy. Hamilton and Chervany (1981) proposed data currency, response time, turnaround time, data accuracy, reliability, completeness, system flexibility, and ease of use, among others, as part of a "formative evaluation" scheme to measure system quality.

In Table 1 are shown the empirical studies which had explicit measures of system quality. Twelve studies were found within the referenced journals, with a number of distinct measures identified. Not surprisingly, most of these measures are fairly straightforward, reflecting the more engineering-oriented performance characteristics of the systems in question.

Information Quality: Measures of Information System Output

Rather than measure the quality of the system performance, other I/S researchers have preferred to focus on the quality of the information system output, namely, the quality of the information that the system produces, primarily in the form of reports. Larcker and Lessig (1980) developed six questionnaire items to measure the perceived importance and usableness of information presented in reports. Bailey and Pearson (1983) proposed 39 system-related items for measuring user satisfaction. Among their ten most important items, in descending order of importance, were information accuracy, output timeliness, reliability, completeness, relevance, precision, and currency.

In an early study, Ahituv (1980) incorporated five information characteristics into a multi-attribute utility measure of information value: accuracy, timeliness, relevance, aggregation, and formatting. Gallagher (1974) developed a semantic differential instrument to measure the value of a group of I/S reports. That instrument included measures of relevance, informativeness, usefulness, and importance. Munro and Davis (1977) used Gallagher's instrument to measure a decision maker's perceived value of information received from information systems which were created using different methods for determining information requirements. Additional information characteristics developed by Swanson (1974) to measure MIS appreciation among user managers included uniqueness, conciseness, clarity, and readability measures. Zmud (1978) included report format as an information quality measure in his empirical work. Olson and Lucas (1982) proposed report appearance and accuracy as measures of information quality in office automation information systems. Lastly, King and Epstein (1983) proposed multiple information attributes to yield a composite measure of information value. The proposed information attributes included sufficiency, understandability, freedom from bias, reliability, decision relevance, comparability, and quantitativeness.
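Multi-attribute measures such as Ahituv's utility function and King and Epstein's composite share a common shape: several attribute ratings are combined into a single value score. The sketch below illustrates only that general weighted-sum form; the attribute weights and ratings are hypothetical and are not taken from either instrument.

```python
# Hypothetical attribute ratings (1-7 scale) for one report.
ratings = {"accuracy": 6, "timeliness": 4, "relevance": 7,
           "aggregation": 5, "formatting": 5}

# Hypothetical importance weights; a real instrument would elicit these
# from decision makers and normalize them to sum to 1.
weights = {"accuracy": 0.30, "timeliness": 0.20, "relevance": 0.30,
           "aggregation": 0.10, "formatting": 0.10}

# Composite information value as a weighted sum of attribute ratings.
value = sum(weights[attr] * ratings[attr] for attr in ratings)
print(f"Composite information value: {value:.2f}")  # 5.70 on the 1-7 scale
```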

TABLE 1. Empirical Measures of System Quality

Authors | Description of Study | Type | Description of Measure(s)
Bailey and Pearson (1983) | Overall I/S; 8 organizations, 32 managers | Field | (1) Convenience of access (2) Flexibility of system (3) Integration of systems (4) Response time
Barti and Huff (1985) | DSS; 9 organizations, 42 decision makers | Field | Realization of user expectations
Belardo, Karwan, and Wallace (1982) | Emergency management DSS; 10 emergency dispatchers | Lab | (1) Reliability (2) Response time (3) Ease of use (4) Ease of learning
Conklin, Gotterer, and Rickman (1982) | Transaction processing; one organization | Lab | Response time
Franz and Robey (1986) | Specific I/S; 34 organizations, 118 user managers | Field | Perceived usefulness of I/S (12 items)
Goslar (1986) | Marketing DSS; 43 marketers | Lab | Usefulness of DSS features
Hiltz and Turoff (1981) | Electronic information exchange system; 102 users | Field | Usefulness of specific functions
Kriebel and Raviv (1982) | Academic information system; one university | Case | (1) Resource utilization (2) Investment utilization
Lehman (1986) | Overall I/S; 200 I/S directors | Field | I/S sophistication (use of new technology)
Mahmood (1987) | Specific I/S; 61 I/S managers | Field | Flexibility of system
Morey (1982) | Manpower management system; one branch of the military | Case | Stored record error rate
Srinivasan (1985) | Computer-based modeling systems; 29 firms | Field | (1) Response time (2) System reliability (3) System accessibility

More recently, numerous information quality criteria have been included within the broad area of "User Information Satisfaction" (Iivari 1987; Iivari and Koskela 1987). The Iivari-Koskela satisfaction measure included three information quality constructs: "informativeness," which consists of relevance, comprehensiveness, recentness, accuracy, and credibility; "accessibility," which consists of convenience, timeliness, and interpretability; and "adaptability."

In Table 2, nine studies which included information quality measures are shown. Understandably, most measures of information quality are from the perspective of the user of this information and are thus fairly subjective in character. Also, these measures, while shown here as separate entities, are often included as part of the measures of user satisfaction. The Bailey and Pearson (1983) study is a good example of this cross linkage.

Information Use: Recipient Consumption of the Output of an Information System

The use of information system reports, or of management science/operations research models, is one of the most frequently reported measures of the success of an information system or an MS/OR model. Several researchers (Lucas 1973; Schultz and Slevin 1975; Ein-Dor and Segev 1978; Ives, Hamilton, and Davis 1980; Hamilton and Chervany 1981) have proposed I/S use as an MIS success measure in conceptual MIS articles. Ein-Dor and Segev claimed that different measures of computer success are mutually interdependent and so they chose system use as the primary criterion variable for their I/S research framework. "Use of system" was also an integral part of Lucas's descriptive model of information systems in the context of organizations. Schultz and Slevin incorporated an item on the probability of MS/OR model use into their five-item instrument for measuring model success.

In addition to these conceptual studies, the use of an information system has often been the MIS success measure of choice in MIS empirical research (Zmud 1979). The broad concept of use can be considered or measured from several perspectives. It is clear that actual use, as a measure of I/S success, only makes sense for voluntary or discretionary users as opposed to captive users (Lucas 1978; Welke and Konsynski 1980). Recognizing this, Maish (1979) chose voluntary use of computer terminals and voluntary requests for additional reports as his measures of I/S success. Similarly, Kim and Lee (1986) measured voluntariness of use as part of their measure of success.

Some studies have computed actual use (as opposed to reported use) by managers through hardware monitors which have recorded the number of computer inquiries (Swanson 1974; Lucas 1973, 1978; King and Rodriguez 1978, 1981), or recorded the amount of user connect time (Lucas 1978; Ginzberg 1981a). Other objective measures of use were the number of computer functions utilized (Ginzberg 1981a), the number of client records processed (Robey 1979), or the actual charges for computer use (Gremillion 1984). Still other studies adopted a subjective or perceived measure of use by questioning managers about their use of an information system (Lucas 1973, 1975, 1978; Maish 1979; Fuerst and Cheney 1982; Raymond 1985; DeLone 1988).
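All of the objective measures just cited (inquiries, connect time, functions used, charges) reduce to simple aggregations over a usage log. The sketch below shows such a tally under the assumption of a hypothetical log of (user, session minutes, function) records; none of the field names come from the studies themselves.

```python
from collections import defaultdict

# Hypothetical usage log: (user, session minutes, function invoked).
log = [
    ("mgr_01", 12, "inquiry"), ("mgr_01", 30, "report"),
    ("mgr_02", 5, "inquiry"), ("mgr_01", 8, "inquiry"),
]

connect_time = defaultdict(int)   # total connect minutes per user
inquiries = defaultdict(int)      # number of computer inquiries per user
functions = defaultdict(set)      # distinct functions each user invoked

for user, minutes, function in log:
    connect_time[user] += minutes
    if function == "inquiry":
        inquiries[user] += 1
    functions[user].add(function)

for user in connect_time:
    print(user, connect_time[user], inquiries[user], len(functions[user]))
```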

Another issue concerning use of an information system is "Use by whom?" (Huysmans 1970). In surveys of MIS success in small manufacturing firms, DeLone (1988) considered chief executive use of information systems while Raymond (1985) considered use by company controllers. In an earlier study, Culnan (1983a) considered both direct use and chauffeured use (i.e., use through others).

There are also different levels of use or adoption. Ginzberg (1978) discussed the following levels of use, based on the earlier work by Huysmans: (1) use that results in management action, (2) use that creates change, and (3) recurring use of the system. Earlier, Vanlommel and DeBrabander (1975) proposed four levels of use: use for getting instructions, use for recording data, use for control, and use for planning. Schewe (1976) introduced two forms of use: general use of "routinely generated computer reports" and specific use of "personally initiated requests for additional information not ordinarily provided in routine reports." By this definition, specific use reflects a higher level of system utilization. Fuerst and Cheney (1982) adopted Schewe's classification of general use and specific use in their study of decision support in the oil industry.

TABLE 2. Empirical Measures of Information Quality

Authors | Description of Study | Type | Description of Measure(s)
Bailey and Pearson (1983) | Overall I/S; 8 organizations, 32 managers | Field | Output: (1) Accuracy (2) Precision (3) Currency (4) Timeliness (5) Reliability (6) Completeness (7) Conciseness (8) Format (9) Relevance
Blaylock and Rees (1984) | Financial; one university, 16 MBA students | Lab | Perceived usefulness of specific report items
Jones and McLeod (1986) | Several information sources; 5 senior executives | Field | Perceived importance of each information item
King and Epstein (1983) | Overall I/S; 2 firms, 76 managers | Field | Information: (1) Currency (2) Sufficiency (3) Understandability (4) Freedom from bias (5) Timeliness (6) Reliability (7) Relevance to decisions (8) Comparability (9) Quantitativeness
Mahmood (1987) | Specific I/S; 61 I/S managers | Field | (1) Report accuracy (2) Report timeliness
Mahmood and Medewitz (1985) | DSS; 48 graduate students | Lab | Report usefulness
Miller and Doyle (1987) | Overall I/S; 21 financial firms, 276 user managers | Field | (1) Completeness of information (2) Accuracy of information (3) Relevance of reports (4) Timeliness of reports
Rivard and Huff (1985) | User-developed I/S; 10 firms, 272 users | Field | Usefulness of information
Srinivasan (1985) | Computer-based modeling systems; 29 firms | Field | (1) Report accuracy (2) Report relevance (3) Understandability (4) Report timeliness

Bean et al. (1975), King and Rodriguez (1978, 1981), and DeBrabander and Thiers (1984) attempted to measure the nature of system use by comparing this use to the decision-making purpose for which the system was designed. Similarly, Iivari (1985) suggested appropriate use or acceptable use as a measure of MIS success. In a study by Robey and Zeller (1978), I/S success was equated to the adoption and extensive use of an information system.

After reviewing a number of empirical studies involving use, Trice and Treacy (1986) recommend three classes of utilization measures based on theories from reference disciplines: degree of MIS institutionalization, a binary measure of use vs. nonuse, and unobtrusive utilization measures such as connect time and frequency of computer access. The degree of institutionalization is to be determined by user dependence on the MIS, user feelings of system ownership, and the degree to which MIS is routinized into standard operating procedures.

Table 3 shows the 27 empirical studies which were found to employ system use as at least one of their measures of success. Of all the measures identified, the system use variable is probably the most objective and the easiest to quantify, at least conceptually. Assuming that the organization being studied is (1) regularly monitoring such usage patterns, and (2) willing to share these data with researchers, then usage is a fairly accessible measure of I/S success. However, as pointed out earlier, usage, either actual or perceived, is only pertinent when such use is voluntary.

User Satisfaction: Recipient Response to the Use of the Output of an Information System

When the use of an information system is required, the preceding measures become less useful; and successful interaction by management with the information system can be measured in terms of user satisfaction. Several I/S researchers have suggested user satisfaction as a success measure for their empirical I/S research (Ein-Dor and Segev 1978; Hamilton and Chervany 1981). These researchers have found user satisfaction especially appropriate when a specific information system was involved. Once again a key issue is whose satisfaction should be measured. In attempting to determine the success of the overall MIS effort, McKinsey & Company (1968) measured chief executives' satisfaction.

In two empirical studies on implementation success, Ginzberg (1981a, b) chose user satisfaction as his dependent variable. In one of those studies (1981a), he adopted both use and user satisfaction measures. In a study by Lucas (1978), sales representatives rated their satisfaction with a new computer system. Later, in a different study, executives were asked in a laboratory setting to rate their enjoyment and satisfaction with an information system which aided decisions relating to an inventory ordering problem (Lucas 1981).

In the Powers and Dickson study on MIS project success (1973), managers were asked how well their information needs were being satisfied. Then, in a study by King and Epstein (1983), I/S value was imputed based on managers' satisfaction ratings. User satisfaction is also recommended as an appropriate success measure in experimental I/S research (Jarvenpaa, Dickson, and DeSanctis 1985) and for researching the effectiveness of group decision support systems (DeSanctis and Gallupe 1987).

Other researchers have developed multi-attribute satisfaction measures rather than relying on a single overall satisfaction rating. Swanson (1974) used 16 items to measure I/S appreciation, items which related to the characteristics of reports and of the underlying information system itself. Pearson developed a 39-item instrument for measuring user satisfaction. The full instrument is presented in Bailey and Pearson (1983), with an earlier version reviewed and evaluated by Kriebel (1979) and by Ives, Olson, and Baroudi (1983). Raymond (1985) used a subset of 13 items from Pearson's questionnaire to measure manager satisfaction with MIS in small manufacturing firms. More recently, Sanders (1984) developed a questionnaire and used it (Sanders and Courtney 1985) to measure decision support systems (DSS) success. Sanders' overall success measure involves a number of measures of user and decision-making satisfaction.
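Instruments like these aggregate many scaled items into one score. The sketch below shows the basic idea only, with hypothetical items and a plain unweighted mean; it does not reproduce the actual Bailey and Pearson wording, scales, or importance weighting.

```python
# Hypothetical 7-point responses for a handful of items; the real
# Bailey and Pearson instrument has 39 items, each rated on several
# semantic differential scales and weighted by importance.
responses = {
    "accuracy of output": 6,
    "timeliness of output": 5,
    "relationship with EDP staff": 4,
    "response time": 3,
}

# Unweighted mean as an overall satisfaction score (scale midpoint = 4).
overall = sum(responses.values()) / len(responses)
print(f"Overall satisfaction: {overall:.2f} / 7")  # 4.50 / 7
```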

Finally, studies have found that user satisfaction is associated with user attitudes toward computer systems (Igersheim 1976; Lucas 1978), so that user-satisfaction measures may be biased by user computer attitudes. Therefore, studies which include user satisfaction as a success measure should ideally also include measures of user attitudes so that the potentially biasing effects of those attitudes can be controlled for in the analysis. Goodhue (1986) further suggests "information satisfactoriness" as an antecedent to and surrogate for user satisfaction. Information satisfactoriness is defined as the degree of match between task characteristics and I/S functionality.

As the numerous entries in Table 4 make clear, user satisfaction or user information satisfaction is probably the most widely used single measure of I/S success. The reasons for this are at least threefold. First, "satisfaction" has a high degree of face validity. It is hard to deny the success of a system which its users say that they like. Second, the development of the Bailey and Pearson instrument and its derivatives has provided a reliable tool for measuring satisfaction and for making comparisons among studies. The third reason for the appeal of satisfaction as a success measure is that most of the other measures are so poor; they are either conceptually weak or empirically difficult to obtain.

Individual Impact: The Effect of Information on the Behavior of the Recipient

Of all the measures of I/S success, "impact" is probably the most difficult to define in a nonambiguous fashion. It is closely related to performance, and so "improving my (or my department's) performance" is certainly evidence that the information system has had a positive impact. However, "impact" could also be an indication that an information system has given the user a better understanding of the decision context, has improved his or her decision-making productivity, has produced a change in user activity, or has changed the decision maker's perception of the importance or usefulness of the information system. As discussed earlier, Mason (1978) proposed a hierarchy of impact (influence) levels from the receipt of the information, through the understanding of the information, the application of the information to a specific problem, and the change in decision behavior, to a resultant change in organizational performance. As Emery (1971, p. 1) states: "Information has no intrinsic value; any value comes only through the influence it may have on physical events. Such influence is typically exerted through human decision makers."

In an extension of the traditional statistical theory of information value, Mock (1971) argued for the importance of the "learning value of information." In a laboratory study of the impact of the mode of information presentation, Lucas and Nielsen (1980) used learning, or rate of performance improvement, as a dependent variable. In another laboratory setting, Lucas (1981) tested participant understanding of the inventory problem and used the test scores as a measure of I/S success. Watson and Driver (1983) studied the impact of graphical presentation on information recall. Meador, Guyote, and Keen (1984) measured the impact of a DSS design methodology using questionnaire items relating to resulting decision effectiveness. For example, one questionnaire item referred specifically to the subject's perception of the improvement in his or her decisions.

TABLE 3. Empirical Measures of Information System Use

Authors | Description of Study | Type | Description of Measure(s)
Alavi and Henderson (1981) | Work force and production scheduling DSS; one university, 45 graduates | Lab | Use or nonuse of computer-based decision aids
Baroudi, Olson, and Ives (1986) | Overall I/S; 200 firms, 200 production managers | Field | Use of I/S to support production
Barti and Huff (1985) | DSS; 9 organizations, 42 decision makers | Field | Percentage of time DSS is used in decision making situations
Bell (1984) | Financial; 30 financial analysts | Lab | Use of numerical vs. nonnumerical information
Benbasat, Dexter, and Masulis (1981) | Pricing; one university, 50 students and faculty | Lab | Frequency of requests for specific reports
Bergeron (1986b) | Overall I/S; 54 organizations, 471 user managers | Field | Use of chargeback information
Chandrasekaran and Kirs (1986) | Reporting systems; MBA students | Field | Acceptance of report
Culnan (1983a) | Overall I/S; one organization, 184 professionals | Field | (1) Direct use of I/S vs. chauffeured use (2) Number of requests for information
Culnan (1983b) | Overall I/S; 2 organizations, 362 professionals | Field | Frequency of use
DeBrabander and Thiers (1984) | Specialized DSS; one university, 91 two-person teams | Lab | Use vs. nonuse of data sets
DeSanctis (1982) | DSS; 88 senior level students | Lab | Motivation to use
Ein-Dor, Segev, and Steinfeld (1981) | PERT; one R&D organization, 24 managers | Field | (1) Frequency of past use (2) Frequency of intended use
Green and Hughes (1986) | DSS; 63 city managers | Lab | Number of DSS features used
Fuerst and Cheney (1982) | DSS; 8 oil companies, 64 users | Field | (1) Frequency of general use (2) Frequency of specific use
Ginzberg (1981a) | On-line portfolio management system; U.S. bank, 29 portfolio managers | Field | (1) Number of minutes (2) Number of sessions (3) Number of functions used
Hogue (1987) | DSS; 18 organizations | Field | Frequency of voluntary use
Gremillion (1984) | Overall I/S; 66 units of the National Forest system | Field | Expenditures/charges for computing use
Kim and Lee (1986) | Overall I/S; 32 organizations, 132 users | Field | (1) Frequency of use (2) Voluntariness of use
King and Rodriguez (1981) | Strategic system; one university, 45 managers | Lab | (1) Number of queries (2) Nature of queries
Mahmood and Medewitz (1985) | DSS; 48 graduate students | Lab | Extent of use
Nelson and Cheney (1987) | Overall I/S; 100 top/middle managers | Field | Extent of use
Perry (1983) | Office I/S; 53 firms | Field | Use at anticipated level
Raymond (1985) | Overall I/S; 464 small manufacturing firms | Field | (1) Frequency of use (2) Regularity of use
Snitkin and King (1986) | Personal DSS; 31 users | Field | Hours per week
Srinivasan (1985) | Computer-based modeling systems; 29 firms | Field | (1) Frequency of use (2) Time per computer session (3) Number of reports generated
Swanson (1987) | Overall I/S; 4 organizations, 182 users | Field | Average frequency with which user discussed report information
Zmud, Boynton, and Jacobs (1987) | Overall I/S; Sample A: 132 firms; Sample B: one firm | Field | Use in support of (a) cost reduction (b) management (c) strategy planning (d) competitive thrust

In the information system framework proposed by Chervany, Dickson, and Kozar (1972), which served as the model for the Minnesota Experiments (Dickson, Chervany, and Senn 1977), the dependent success variable was generally defined to be decision effectiveness. Within the context of laboratory experiments, decision effectiveness can take on numerous dimensions. Some of these dimensions which have been reported in laboratory studies include the average time to make a decision (Benbasat and Dexter 1979, 1985; Benbasat and Schroeder 1977; Chervany and Dickson 1974; Taylor 1975), the confidence in the decision made (Chervany and Dickson 1974; Taylor 1975), and the number of reports requested (Benbasat and Dexter 1979; Benbasat and Schroeder 1977). DeSanctis and Gallupe (1987) suggested member participation in decision making as a measure of decision effectiveness in group decision making.

In a study which sought to measure the success of user-developed applications, Rivard and Huff (1984) included increased user productivity in their measure of success. DeBrabander and Thiers (1984) used efficiency of task accomplishment (time required to find a correct answer) as the dependent variable in their laboratory experiment. Finally, Sanders and Courtney (1985) adopted the speed of decision analysis resulting from DSS as one item in their DSS success measurement instrument.

TABLE 4. Empirical Measures of User Satisfaction

Author(s) | Description of Study | Type | Description of Measure(s)
Alavi and Henderson (1981) | Work force and production scheduling DSS; one university, 45 graduate students | Lab | Overall satisfaction with DSS
Bailey and Pearson (1983) | Overall I/S; 8 organizations, 32 managers | Field | User satisfaction (39-item instrument)
Baroudi, Olson, and Ives (1986) | Overall I/S; 200 firms, 200 production managers | Field | User information satisfaction
Barti and Huff (1985) | DSS; 9 organizations, 42 decision makers | Field | User information satisfaction (modified Bailey & Pearson instrument)
Bruwer (1984) | Overall I/S; one organization, 114 managers | Field | User satisfaction
Cats-Baril and Huber (1987) | DSS; one university, 101 students | Lab | Satisfaction with a DSS (multi-item scale)
DeSanctis (1986) | Human resources I/S; 171 human resource system professionals | Field | (1) Top management satisfaction (2) Personal management satisfaction
Doll and Ahmed (1985) | Specific I/S; 55 firms, 154 managers | Field | User satisfaction (11-item scale)
Edmundson and Jeffery (1984) | Accounting software package; 12 organizations | Field | User satisfaction (1 question)
Ginzberg (1981a) | On-line portfolio management system; U.S. bank, 29 portfolio managers | Field | Overall satisfaction
Ginzberg (1981b) | Overall I/S; 35 I/S users | Field | Overall satisfaction
Hogue (1987) | DSS; 18 organizations | Field | User satisfaction (1 question)
Ives, Olson, and Baroudi (1983) | Overall I/S; 200 firms, 200 production managers | Field | User satisfaction (Bailey & Pearson instrument)
Jenkins, Naumann, and Wetherbe (1984) | A specific I/S; 23 corporations, 72 systems development managers | Field | User satisfaction (25-item instrument)
King and Epstein (1983) | Overall I/S; 2 firms, 76 managers | Field | User satisfaction (1 item: scale 0 to 100)
Langle, Leitheiser, and Naumann (1984) | Overall I/S; 78 organizations, I/S development managers | Field | User satisfaction (1 question)
Lehman, Van Wetering, and Vogel (1986) | Business graphics; 200 organizations, DP managers | Field | (1) Software satisfaction (2) Hardware satisfaction
Lucas (1981) | Inventory ordering system; one university, 100 executives | Lab | (1) Enjoyment (2) Satisfaction
Mahmood (1987) | Specific I/S; 61 I/S managers | Field | Overall satisfaction
Mahmood and Becker (1985-1986) | Overall I/S; 59 firms, 118 managers | Field | User satisfaction
Mahmood and Medewitz (1985) | DSS; 48 graduate students | Lab | User satisfaction (multi-item scale)
McKeen (1983) | Application systems; 5 organizations | Field | Satisfaction with the development project (Powers and Dickson instrument)
Nelson and Cheney (1987) | Overall I/S; 100 top/middle managers | Field | User satisfaction (Bailey & Pearson instrument)
Olson and Ives (1981) | Overall I/S; 23 manufacturing firms, 83 users | Field | Information dissatisfaction: difference between information needed and amount of information received
Olson and Ives (1982) | Overall I/S; 23 manufacturing firms, 83 users | Field | Information satisfaction: difference between information needed and information received
Raymond (1985) | Overall I/S; 464 small manufacturing firms | Field | Controller satisfaction (modified Bailey & Pearson instrument)
Raymond (1987) | Overall I/S; 464 small-firm finance managers | Field | User satisfaction (modified Bailey & Pearson instrument)
Rivard and Huff (1984) | User-developed applications; 10 large companies | Field | User complaints regarding Information Center services
Rushinek and Rushinek (1985) | Accounting and billing system; 4448 users | Field | Overall user satisfaction
Rushinek and Rushinek (1986) | Overall I/S; 4448 users | Field | Overall user satisfaction
Sanders and Courtney (1985) | Financial DSS; 124 organizations | Field | (1) Overall satisfaction (2) Decision-making satisfaction
Sanders, Courtney, and Uy (1984) | Interactive Financial Planning System (IFPS); 124 organizations, 373 users | Field | (1) Decision-making satisfaction (2) Overall satisfaction
Taylor and Wang (1987) | DBMS with multiple dialogue modes; one university, 93 students | Lab | User satisfaction with interface

Mason (1978) has suggested that one method of measuring I/S impact is to determine whether the output of the system causes the receiver (i.e., the decision maker) to change his or her behavior. Ein-Dor, Segev, and Steinfeld (1981) asked decision makers: "Did use of PERT [a specific information system] ever lead to a change in a decision or to a new decision?" Judd, Paddock, and Wetherbe (1981) measured whether a budget exception reporting system resulted in managers' taking investigative action.

Another approach to the measurement of the impact of an information system is to ask user managers to estimate the value of the information system. Cerullo (1980) asked managers to rank the value of their computer-based MIS on a scale of one to ten. Ronen and Falk (1973) asked participants to rank the value of information received in an experimental decision context. Using success items developed by Schultz and Slevin (1975), King and Rodriguez (1978, 1981) asked users of their "Strategic Issue Competitive Information System" to rate the worth of that I/S.

Other researchers have gone a step further by asking respondents to place a dollar value on the information received. Gallagher (1974) asked managers about the maximum amount they would be willing to pay for a particular report. Lucas (1978) reported using willingness to pay for an information system as one of his success measures. Keen (1981) incorporated willingness to pay development costs for improved DSS capability in his proposed "Value Analysis" for justification of a DSS. In an experiment involving MBA students, Hilton and Swieringa (1982) measured what participants were willing to pay for specific information which they felt would lead to higher decision payoffs. Earlier, Garrity (1963) used MIS expenditures as a percentage of annual capital expenditures to estimate the value of the MIS effort.

Table 5, with 39 entries, contains the largest number of empirical studies. This in itself is a healthy sign, for it represents an attempt to move beyond the earlier inward-looking measures to those which offer the potential to gauge the contribution of information systems to the success of the enterprise. Also worth noting is the predominance of laboratory studies. Whereas most of the entries in the preceding tables have been field experiments, 24 of the 39 studies reported here have used controlled laboratory experiments as a setting for measuring the impact of information on individuals. The increased experimental rigor which laboratory studies offer, and the extent to which they have been utilized at least in this success category, is an encouraging sign for the maturing of the field.

Organizational Impact: The Effect of Information on Organizational Performance

In a survey by Dickson, Leitheiser, Wetherbe, and Nechis (1984), 54 information systems professionals ranked the measurement of information system effectiveness as the fifth most important I/S issue for the 1980s. In a recent update of that study by Brancheau and Wetherbe (1987), I/S professionals ranked measurement of information system effectiveness as the ninth most important I/S issue. Measures of individual performance and, to a greater extent, organization performance are of considerable importance to I/S practitioners. On the other hand, MIS academic researchers have tended to avoid performance measures (except in laboratory studies) because of the difficulty of isolating the effect of the I/S effort from other effects which influence organizational performance.
