
I need a 650-word critical review of an article

26/08/2020 Client: tiger Deadline: 24 Hours

Critical Review of an article needed (Document is attached)

It stops at page 32 of the document. The rest of the document is references and appendices.


Attachment 1:

Successful program evaluation may require identifying various studies that employ both quantitative and qualitative methodologies.

Using Qualitative and Quantitative Methods for Complementary Purposes:

A Case Study

Josetta S. McLaughlin, Gerald W. McLaughlin, John A. Muffo

Consider the following scenario:

Provost Gram has just left you an e-mail message. Several groups and individuals are raising issues with her about what is being done in the institution to improve the ability of students to be successful. One of the most impressive and expensive initiatives by the institution over the last several years has been a program to enhance the effectiveness of mathematics instruction. Because it relies heavily on computers and various forms of technology, the program has been a focal concern for those who would like to see the funds spent on other priorities.

Provost Gram sets up a meeting with you and the chair of the mathematics department during which Provost Gram describes some of the pressures on the institution. The issues are framed through a series of generic questions: Is the program doing a good job? Is the program worth the money being spent on it? Should the program be continued? What should be changed to make the program more effective?

This scenario is not uncommon. Many issues debated by administrators in higher education are multifaceted and involve discussions about resources that must be allocated across many different and competing programs. The questions that need to be addressed are not always clearly delineated, and the complexity of issues often requires more than a simple study. Institutional researchers are thus challenged to refine and restate the issues so that useful information can be developed for decision makers.

The concerns that were expressed by Provost Gram can be addressed through the design of an effective program evaluation. Institutional researchers are faced with the realization that such evaluations are complicated by a number of factors. First, the evaluation often occurs as an afterthought. The institutional researcher must evaluate an ongoing program for which no carefully planned evaluation strategy was implemented during its development. Second, educators and researchers having an interest in the evaluation may not agree about the appropriateness of specific research designs for evaluating the program. Although many writers in the field of evaluation insist on the use of comparison groups as a control for individuals participating in the program, that is, the treatment group, others argue that the practical constraints in using an experimental research design necessitate the use of alternative, nonexperimental approaches to evaluation (Fitz-Gibbon and Morris, 1987). Third, political concerns dictate that the institutional researcher produce credible information geared for a diverse audience rather than just the individual who initially requested the evaluation. For example, in the scenario at the beginning of this chapter, Provost Gram asked for the evaluation; other groups interested in the outcomes of the evaluation would be members of the faculty, particularly the academic department directly involved in developing the program; external groups such as state and accrediting agencies; and students affected by the program’s outcomes.1

In complex situations such as that just described, the institutional researcher’s effectiveness will depend on his or her ability to define and delimit the concerns of the different audiences (Terenzini, 1993). To do so requires that the researcher be familiar with a broad array of methods that can use available data and create new data. In this chapter, we identify and discuss an evaluation strategy that meets this criterion. It uses multiple methods and embeds studies within a case study or program evaluation. The studies use qualitative and quantitative data to explore different but complementary questions important to the evaluation. Our rationale for implementing a strategy using multiple methods is that different groups need information generated through the use of different methods. Although one group may need information that is best developed utilizing quantitative methods, for example, predicted grades, a second group may need information that is best developed utilizing qualitative methods such as studies of student perceptions.

Building a Rationale for the Use of Multiple Studies

In the single-study evaluation, successful institutional research requires identification of the best study given the nature of the program evaluation, the rules of evidence held by the audience, and the questions being asked.

Single studies are appropriate in instances where just one focused question must be answered. However, when decisions about complex issues are being made, the single study usually does not adequately address the array of questions being asked by a diverse audience.

In cases where multiple questions are being asked, the appropriateness of using multiple studies and multiple research methods must be examined. The adequacy of a given method is a function of the researcher’s ability to use that method to answer the questions being asked and to be able to persuade the end user that the method is credible. The choice of methods is complicated by differences in the characteristics of different audiences. Some prefer methods that produce numbers and graphs to describe outcomes. Others prefer methods that produce comments and opinions that lead to the development of concepts and ideas.2

Differences in preferred methods are generally believed to exist between disciplines. For example, accountants are frequently assumed to prefer the use of quantitative methods. As we associate accountants with cost-benefit studies, we might also associate philosophers with conceptual analyses, psychologists with statistical analyses, engineers with graphs, sociologists with case studies, and executives with executive summaries. Understanding the differences between disciplines is important because the audiences for program evaluation outcomes in higher education are likely to come from a number of different disciplines. Designing a program evaluation that incorporates a diverse array of techniques having appeal to the different groups may thus be the only practical means by which all pertinent information needs can be met.

Choosing a Research Strategy

Research designs incorporating more than one method are referred to as multiple methods. The concept itself is a common one in program evaluation (Mark and Shotland, 1987). It has various meanings, including but not limited to the use of multiple methodologies (Cook and Reichardt, 1979; Kidder and Fine, 1987) and use of multiple measures of a construct (Campbell and Fiske, 1959). The general idea underpinning multiple methods is that although “no single method is perfect,” if different methods lead to the same answer, then “greater confidence can be placed in the validity of one’s conclusions” (Shotland and Mark, 1987, p. 77). Over time, this idea has led to widespread advocacy of triangulation in the study of a given construct.

Triangulation is one of several models currently used in program evaluation, three of which are described by Mark and Shotland (1987). The models of multiple methods are the triangulation model, the bracketing model, and the complementary purposes model. Mark and Shotland (1987) point out that the idea underpinning the triangulation model is the one most frequently offered as a rationale for the use of multiple methods.


The assumption is that if the answers produced by different methods converge, the result will be a single estimate that is more accurate than any one imperfect method would have produced alone. For example, the overall difficulty of materials in a course could be evaluated by conducting three studies using three types of data: student opinions, student grades, and faculty evaluations. If the results of all three studies point to the same conclusion, then greater confidence can be placed in beliefs concerning the level of difficulty.

The bracketing model differs in that it focuses on a range of estimates. The idea underpinning this model is that the results of different methods can be treated as alternative estimates of the correct answer. For example, a study of the performance in a college math course of those with high grades in high school math and those with low grades in high school math could be used to develop one estimate of the difficulty of the course materials. Similarly, a study of the performance in the same college math course of those with high Scholastic Aptitude Test (SAT) math scores and those with low SAT math scores could be used to develop a second estimate. A third study using an expert panel to assess the difficulty of the course materials could provide yet another estimate, representing an alternative assessment of course difficulty that utilizes a different and widely accepted qualitative method. Ultimately, the researcher's goal is to develop a range of estimates within which the true score or estimate for level of difficulty should fall.
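To make the distinction concrete, the following sketch (Python, with entirely hypothetical difficulty ratings rather than data from any study described here) shows how the two models treat three independent estimates of course difficulty: triangulation asks whether the estimates converge on a single answer, while bracketing treats them as bounds on a range.

```python
# Illustrative sketch only: the three estimates below are invented.
# Difficulty of the course materials rated on a common 1 (easy) to 5 (hard)
# scale, each estimate produced by a different method.
estimates = {
    "student_opinions": 3.8,
    "grade_analysis": 3.5,
    "expert_panel": 4.1,
}

values = list(estimates.values())

# Triangulation model: if the estimates converge (fall within a small
# tolerance of one another), report a single pooled estimate with greater
# confidence in its validity.
tolerance = 0.75
converges = max(values) - min(values) <= tolerance
pooled = sum(values) / len(values)

# Bracketing model: report the range within which the "true" difficulty
# is assumed to fall, rather than a single number.
bracket = (min(values), max(values))

print(f"Triangulation: converges={converges}, pooled estimate={pooled:.2f}")
print(f"Bracketing:    difficulty lies between {bracket[0]} and {bracket[1]}")
```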

The third model, the complementary purposes model, tends to have more utility for institutional researchers. The idea underpinning this model is that one can use multiple methods, with each method performing a different but complementary function. The researcher may thus focus on different methods for alternative tasks to enhance the interpretability of results. A qualitative study may make the statistical results of a quantitative study more understandable and thus enhance the ability to communicate the results of the overall study. Similarly, a qualitative study serving as the primary vehicle may be supported with results from quantitative studies that clarify the narrative. In the math course discussed above, for example, the students' high school grades and SAT scores could be used to assess the difficulty of the course content. Additional information could be sought through collection of qualitative data from students on their perceptions of the difficulty of the course and the degree to which they felt actively involved in the learning process. Data could also be collected on faculty perceptions of the motivation of the students, based on the amount of homework completed, and of the coverage and difficulty of the class, based on student performance on examinations. Qualitative data would in this case be used to address questions not easily answered using quantitative data such as high school and college grades and SAT scores.

This last model differs from triangulation and bracketing in that the methods do not need to be independent. Mark and Shotland (1987) point out that a researcher trying to enhance interpretability will likely find independence to be dysfunctional. This point is well taken for researchers who have as their goal building a foundation for improving the program being evaluated and who are less focused on statistical aggregation of the data and generalization of the findings to settings outside the boundaries of the institution.

The complementary purposes model lends itself to adaptation under the embedded case methodology proposed by Yin (1989), a single-case design in which subunits (in our case, individual studies) are embedded within the larger case. When the study design includes an embedded unit of analysis with numerous data points, Yin suggests that the appropriate analysis should first be conducted within the subunit and then augmented by some analytic technique that moves to the level of the case itself. For example, in evaluating the math course, the researcher can identify multiple studies that address the questions concerning student performance and course difficulty. In one study, data can be collected from focus groups at various points in time, such as before the course starts, during the first week of class, at midterm, at the end of the course, and two months after the completion of the course. Meta-analysis might then be used to look for common threads of thought connecting these discussions. By doing this type of analysis, the researcher is able to identify primary subgroups and explore their perceptions and performances.
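As a rough illustration of searching for common threads across focus-group sessions held at different time points, the sketch below (Python, with hypothetical theme labels and time points, not data from the study) flags the themes raised at every session and counts how widely each theme recurs.

```python
# Hypothetical coded focus-group themes collected at each time point;
# the theme labels and time points are illustrative only.
from collections import Counter

themes_by_timepoint = {
    "before_course":    {"anxiety about computers", "scheduling"},
    "first_week":       {"scheduling", "tutor availability", "hardware problems"},
    "midterm":          {"tutor availability", "scheduling", "group work"},
    "end_of_course":    {"tutor availability", "scheduling"},
    "two_months_after": {"scheduling", "retention of material"},
}

# A "common thread" here is a theme raised at every time point.
common_threads = set.intersection(*themes_by_timepoint.values())
print("Themes raised at every time point:", common_threads)

# Themes raised at most (but not all) time points may still merit attention.
counts = Counter(theme for themes in themes_by_timepoint.values() for theme in themes)
for theme, n in counts.most_common():
    print(f"{theme}: mentioned at {n} of {len(themes_by_timepoint)} time points")
```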

A second study can focus on time management by students and can utilize quantitative data such as time spent on homework, time spent in class, and time spent in tutoring sessions. A third study can focus on difficulty of the course based on some established criteria. The results of these and other studies incorporated into the case study can be integrated and used to assess student performance outcomes and perceptions relative to the level of difficulty of course materials and the amount of time devoted to course work. This research strategy requires that the nature of the case be defined and that a primary research question be used as an umbrella under which all studies become relevant. For example, the primary research question might be, Given the level of difficulty of the course, what factors impact student outcomes?

The complementary purposes model and the embedded case design represent strategies that can enable institutional researchers to effectively design research that meets the information needs of a diverse audience. In addition, these strategies permit the institutional researcher to deal with nonlinear research activities. When designing research based on traditional, classical techniques, a specific sequence of events starts with the definition of the problem and the body of knowledge from prior research. However, the institutional researcher frequently begins with a heuristic iteration of a problem definition followed by identification of questions and issues and the collection of data to support efforts at decision making by administrators. After some initial interpretation, the researcher frequently must redefine the problem and adjust the scope of the study. The redefinition of issues in turn influences decisions concerning the types of data needed to provide for complex decision making. This time-based, nonlinear aspect of institutional research poses many of the same dilemmas faced by sociologists using grounded theory methodologies. Learning occurs on the part of the researcher during the course of conducting the study, and this learning must then be codified into understandings and beliefs that evolve as part of the research process.3

In summary, answering questions raised in a complex environment may require the development of a strategy that uses multiple methods. Analyzing the audience and the questions produces multiple foci. Addressing these foci requires use of a research strategy that utilizes complex models such as the complementary purposes and embedded case study models. Ultimately, the specific research methods needed to support decision making in complex environments may, out of necessity, encompass techniques utilizing both quantitative and qualitative data and techniques that are recognized as credible by a broad range of academic disciplines.

Example: The Virginia Tech Math Emporium

Use of multiple methods for program evaluation will be described using the embedded case study as a vehicle. The case example is the Math Emporium at Virginia Polytechnic Institute and State University (Virginia Tech), an innovative pedagogical and technology-driven approach to teaching freshman mathematics. The evaluation was requested by the administration to determine whether the high cost of developing and operating the emporium was justified by the amount of learning that was occurring. Other concerns were whether the benefits of the program could be demonstrated to outside skeptics and whether the emporium should be shut down and the resources reallocated. For researchers, these concerns could initially be restated in terms that are simple to state but complex to answer: Is the Math Emporium both effective and efficient with respect to student learning and resource allocation?

The next steps in developing a case study of the Math Emporium involved identifying the primary and secondary research questions. (See Figure 2.1.) Once that difficult task was accomplished, researchers could choose the appropriate qualitative or quantitative methods for answering each question. The characteristics and concerns of the audience needing the information also had to be factored into the choice of methods.

The primary audience for the evaluation of the Math Emporium consisted of the provost, other senior administrators, faculty members in the math department, and students taking math courses. The audience also included faculty outside the math department who were concerned about the math skills of their students; individuals interested in the balanced allocation of resources; individuals interested in integrating technology into learning; and, last but not least, faculty and staff who were proponents of actively engaging students in learning. Clearly, no one analysis, study, or methodology would be seen as persuasive by all of these groups.

Figure 2.1. The Case Model


Case focus: Is the high cost of the program justified by the amount of learning that is occurring?

Primary research question: Is the Math Emporium efficient and effective with respect to student learning and resource allocation?

Secondary research questions:

1. Does instruction in the Math Emporium lead to improved student performance?

2. Does instruction in the Math Emporium result in high levels of student satisfaction?

3. Does instruction in the Math Emporium result in efficient use of resources?

Challenges: Getting Started

The challenge faced by researchers asked to evaluate the Virginia Tech Math Emporium was similar to that faced by many institutional researchers; they are too often called upon to evaluate a program ex post facto. Because recommended controls cannot be incorporated into the research design, the use of an experimental design as proposed by classical theoreticians is not an option. The institutional researcher must address issues by identifying a nonexperimental design that leads to credible research, the results of which are both confirmatory and diverse. If successful, the results of the study will provide a foundation for future evaluation efforts.

In our example, an embedded case study methodology that embeds both qualitative and quantitative methods developed ex post facto could be used to address questions about the effectiveness and efficiency of the Math Emporium. For example, both quantitative and qualitative studies of student learning and satisfaction could be conducted to produce estimates of effectiveness. Quantitative studies such as cost-benefit analysis and qualitative studies using expert panels could be employed to assess efficiency and allocation of resources. Each study ultimately chosen for inclusion should provide different but complementary information for decision makers.

The following sections provide examples that describe the use of quantitative and qualitative methods to examine learning and pedagogical effectiveness and student satisfaction with the Math Emporium. All studies were exploratory and were used as a starting point for development of future assessment instruments. As a caveat, no attempt is made to directly answer the question concerning the values associated with cost relative to student gains. Whether the gains in student learning justify the costs of developing the Math Emporium is a management judgment. Our job is to determine what the gains and costs were.

Choosing the Methodology

It has long been recognized that quantitative and qualitative methods produce different types of information.4 The use of quantitative methods permits statistical analysis using standardized measures to measure and compare the reactions of a large number of people on a limited set of questions (Patton, 1997). By contrast, qualitative methods facilitate use of data that are perceived as rich, holistic, and real and for which face validity seems unimpeachable (Miles, 1983). These characteristics and the lack of standardization of much qualitative data make them difficult to analyze and require that the researcher devote much time and effort to managing the data.

In our example, the questions being asked required the use of both methods. Fortunately, a number of quantitative indices are generally available for use by institutional researchers, for example, grades in the Math Emporium, high school grades, SAT scores, and attendance records. A number of additional indicators can be developed by various means, including satisfaction scales and involvement indicators.

By contrast, qualitative indicators are less readily available. Developing a research design that incorporates collection of these indicators is important for a number of reasons. First, use of qualitative methods can lead to production of serendipitous findings. Second, use of qualitative methods can support decision makers who need to understand what people think, why people think what they think, and the values and motivations behind thought and behavior (Van Maanen, 1983; Patton, 1987). Third, when mixed, information developed using qualitative methods can be played against information from quantitative methods evaluating the same setting. The combination produces analyses that are more powerful than either method can produce alone (Miles, 1983).

In summary, studies embedded in the case study are chosen to provide different but complementary information for decision makers. The nature of the questions addressed determines the appropriateness of types of data collected. Quantitative methods utilize performance data to examine learning and pedagogical effectiveness while qualitative methods are used to collect data on student perceptions. All studies are exploratory and used as a starting point for development of future assessment instruments. A summary of studies is shown in Table 2.1.

Conducting the Research

Step 1: Describing the Program. One of the more important issues to remember in conducting research is that many audiences or end users will not have an adequate understanding of the program being evaluated.

Table 2.1. A Summary of Studies Embedded Within a Case Study of the Virginia Tech Math Emporium (ME)

Study | Contribution | Primary Tool | Method | Level of Analysis

Prediction of performance | Use prior performance to estimate effect of ME by class. | Complex statistical | Quantitative | Course/class

Comparison of grades | Compare grades in ME and prior to ME by class. | Simple statistical | Quantitative | Course/class

Student focus groups | Ask freshmen what they liked and how it might be improved. | Opportunity focus groups | Qualitative | Individual

Engineering student survey | Ask engineering students what they liked and how it might be improved. | Open-ended survey | Qualitative | Individual

Class notes | Describe classes, how they were taught, and some of the results of activities. | Content analysis | Qualitative | Class

Expert visit | Look at ME and evaluate its worth and what might improve it. | Expert judgment | Qualitative | Organizational

Cost study | Identify major costs and attempt to compare to cost without ME. | Analytical accounting | Quantitative | Organizational

Literature review | Gather external evidence that increased involvement of engineering students in learning was better. | Literature review and secondary analysis | Qualitative | Theoretical

It was thus imperative that the evaluation report include a description of the Math Emporium in terms of facilities, the faculty resources involved, the equipment, the processes by which the students accessed the emporium, and so on. In the absence of a written description, information can be gathered through interviews with department heads, individuals in university facilities, and the faculty. A description of the Virginia Tech Math Emporium is presented in Appendix 2.1.

Step 2: Conducting the Study. Researchers must next identify those questions that can help direct the evaluation effort. The case of the Virginia Tech Math Emporium focused on whether the high cost of the program was justified by the amount of learning that was occurring. The primary research question was restated as follows: Is the Math Emporium efficient and effective with respect to student learning and resource allocation? Three secondary research questions were then formulated to provide direction for design of the subunits or studies:

1. Does instruction in the Math Emporium lead to improved student performance?

2. Does instruction in the Math Emporium result in high levels of student satisfaction?

3. Does instruction in the Math Emporium result in efficient use of resources?

STUDIES USING QUANTITATIVE METHODS. Two studies were designed using quantitative methods to address questions concerning the effectiveness of the Math Emporium with respect to student performance. The first study was a statistical analysis using regression to estimate the effect of taking mathematics in the Math Emporium. The second study was a direct comparison of student performance by course for the students who took mathematics at the Math Emporium versus those who had taken the same course during the preceding year in a more traditional setting.

Quantitative data used in this study are measures of academic performance readily available for use by institutional researchers. Regression analysis was used to model the academic performance of students in various mathematics classes. Actual grades from 17,069 first-time freshmen in seven courses were entered into a database along with measures of prior academic performance. The latter was measured using the student's math and verbal SAT scores; the student's class rank; final high school grade point average (GPA); overall average GPA among the student's high school English, math, science, humanities and social studies, and language courses; and the total numbers of these types of courses in which a student was enrolled from ninth through eleventh grade. This type of analysis of course grades is useful because students who are better prepared in mathematics can be expected to perform better regardless of the program; attributing gains to the Math Emporium therefore requires controlling for the entering capability of the students, particularly where there is no random assignment. There was some evidence that entering students differed with respect to ability levels for the different years when measured by average high school grades and performance on the standardized mathematics achievement tests.

Effectiveness of the Math Emporium experience was examined relative to two measures—percentage of students successfully completing the course and average final grades received relative to pre-Math Emporium predecessors. Success was defined as making a grade of C or better. The analysis controlled for academic performance prior to enrollment in the math course taught at the Emporium, that is, the student’s academic preparation and ability.

Effectiveness was first examined by developing logistic equations to model the probability that a student would succeed as a function of whether the course was taught in the Math Emporium (Kleinbaum, 1994). As noted previously, measures of the student’s academic preparation and ability were incorporated into the model. The criterion of success used as the dependent variable for this model was whether the student received a grade of C or better for the course. Results are reported in Appendix 2.2.

Prior academic performance of students as measured by SAT scores and high school GPA was then used to best explain the actual grade that the student achieved in the math course. Least-squares regression was used to predict the grade. Grades were scored from 0 to 4, with F = 0, D = 1, C = 2, B = 3, and A = 4. The results are summarized in Appendix 2.3.
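A minimal sketch of the two model-based analyses described above, assuming a student-level table with hypothetical column names (letter_grade, emporium as a 0/1 indicator, sat_math, sat_verbal, hs_gpa, hs_rank) and a hypothetical file name, using the statsmodels formula interface. This illustrates the general technique, not the study's actual specification or variable set.

```python
# Sketch of the logistic and least-squares analyses; column names and the
# input file are assumptions, and the real study used a richer set of
# prior-performance measures and fit models by course.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("freshman_math_records.csv")  # hypothetical file

# Recode the course grade: success = C or better; grade points F=0 ... A=4.
grade_points = {"F": 0, "D": 1, "C": 2, "B": 3, "A": 4}
df["grade_points"] = df["letter_grade"].map(grade_points)
df["success"] = (df["grade_points"] >= 2).astype(int)

# Logistic regression: probability of earning a C or better as a function of
# the Math Emporium indicator, controlling for prior performance.
logit_model = smf.logit(
    "success ~ emporium + sat_math + sat_verbal + hs_gpa + hs_rank",
    data=df,
).fit()
print(logit_model.summary())

# Least-squares regression: predicted grade points (0-4) with the same controls.
ols_model = smf.ols(
    "grade_points ~ emporium + sat_math + sat_verbal + hs_gpa + hs_rank",
    data=df,
).fit()
print(ols_model.summary())

# In both models the coefficient on `emporium` is the adjusted estimate of
# the Math Emporium effect.
```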

A second study using quantitative data was then designed to address some of the issues left unanswered by the first study. As noted previously, the results from the logistic regression analysis and the least-squares regression (Appendixes 2.2 and 2.3) were for a single semester. Although the statistical analysis allowed the researcher to adjust for differing levels of student ability in the evaluation of the Math Emporium, it suffered from one major problem—it was difficult to explain and to interpret. This was particularly true for the logistic regression, where the impact of the emporium depended on the ability of the student. To overcome this limitation, a study was designed to make a direct comparison of grades that was more easily interpreted. The study compared students taking courses in the Math Emporium with students in more traditional classes the preceding year, the assumption being that the two groups were similar in ability. Comparisons were made by course and by semester between grades for academic year 1996–97 and grades for academic year 1997–98. The results are shown in Appendix 2.4.
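The simpler comparison amounts to tabulating pass rates and mean grades by course for the two academic years. A sketch under the same assumptions as above (hypothetical file and column names; the Welch t-test shown here is an illustrative significance check, not the study's reported procedure):

```python
# Sketch of the direct year-over-year comparison, assuming the same
# hypothetical student-level table; 1996-97 is the pre-Emporium baseline.
import pandas as pd
from scipy import stats

df = pd.read_csv("freshman_math_records.csv")  # hypothetical file
grade_points = {"F": 0, "D": 1, "C": 2, "B": 3, "A": 4}
df["grade_points"] = df["letter_grade"].map(grade_points)
df["success"] = (df["grade_points"] >= 2).astype(int)

# Pass rate and mean grade by course and academic year.
summary = (
    df.groupby(["course", "academic_year"])
      .agg(n=("success", "size"),
           pass_rate=("success", "mean"),
           mean_grade=("grade_points", "mean"))
      .round(3)
)
print(summary)

# Simple per-course check: compare grade distributions across the two years.
for course, grp in df.groupby("course"):
    before = grp.loc[grp["academic_year"] == "1996-97", "grade_points"]
    after = grp.loc[grp["academic_year"] == "1997-98", "grade_points"]
    t, p = stats.ttest_ind(after, before, equal_var=False)
    print(f"{course}: t={t:.2f}, p={p:.3f}")
```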

The results for studies 1 and 2 generally support the belief that those students who received their math instruction using the Math Emporium showed improved performance relative to what would have otherwise been expected in the absence of instruction in this setting. In Study 1, improvement was significant for four courses—Math 1015, Math 1205, Math 1206, and Math 1224—when performance was defined by the likelihood of getting a grade of C or better in the course. This improvement was significant also for Math 1525 when one considers performance to be measured by the grade received in the course. In both cases, the Math Emporium seemed to have the least effect on student performance among the students of Math 1114, a course designed specifically for engineering majors. In Study 2, comparison of grades for two years tends to generally support the trend of improved performance by students receiving instruction through the Math Emporium.

STUDIES USING QUALITATIVE METHODS. The evidence of improved student learning found using quantitative methods did not provide information on student satisfaction with or perceptions of the Math Emporium. Put another way, statistical analysis of grade and past performance data did not completely meet the information needs of outcomes assessment. This is an important observation because pressures from audiences both internal to and external to institutions of higher education are increasingly defining student outcomes as more than the grade received. There is an interest in what the student learned and how that learning was applied. Furthermore, if administrators seek information that is rich and holistic and that informs them about what students think, it is imperative that these concerns be incorporated into the evaluative study. The institutional researcher will need to identify useful qualitative methods to meet these needs, especially in the short term, when reliable assessment instruments to measure satisfaction have not yet been developed.

A second concern was that although longitudinal studies would have to be done to determine the long-term impacts of the Math Emporium on student learning, qualitative data were needed in the short term to lay the foundation for developing instruments for longitudinal research. A better understanding of student reaction to innovative pedagogy was also necessary for improvement of instruction within the lab setting. A final limitation of the statistical and numerical analyses was that they provided little information on what might have been done to improve the effectiveness of the Math Emporium.

A number of indices are available for use in program evaluation, including data from focus group interviews, faculty assessments, individual interviews, expert panels, and open-ended surveys. For our example, qualitative methods utilizing focus groups, open-ended surveys, and expert panels were used to assess student satisfaction with the Math Emporium.

Focus groups, or in-depth interviews, are among the most widely used qualitative research tools in the social sciences (Stewart and Shamdasani, 1990). They are almost always conducted with the collection of qualitative data as their primary purpose. Participants can qualify their responses or identify contingencies, thus producing a rich body of data. Furthermore, focus groups are more flexible, less costly, and quicker to implement than many other data collection strategies. The downside to using data collected from focus groups is that summarizing the data is often difficult and time consuming.

Despite these drawbacks, institutional researchers find that focus groups are particularly useful for program evaluations. Focus group techniques can be used to obtain general background information, to diagnose the potential for problems with new programs, and to generate impressions of programs. They are also useful for interpreting and adding depth to previously obtained quantitative results.

Participants in the focus groups organized to evaluate the Math Emporium were convenience samples, that is, students who were available in two freshmen residence halls and in classes for freshman student athletes. Two trained interviewers were present for each focus group. Three questions were asked of the focus groups: (1) What do you like about the Math Emporium? (2) What bothers you about the Math Emporium? (3) What do you suggest should be done differently? In addition to gathering data on student perceptions, evaluators wanted to collect opinions on how the Math Emporium might be improved.

Comments were transcribed and summarized. Partial results are shown in Appendix 2.5, with the top five comments summarized in order of importance by category of response. Some items have been combined for parsimony. Restructuring of comments in this way enables the researcher to identify areas or issues on which decision makers should focus their concerns.
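A sketch of the kind of tallying behind such a summary, assuming the transcribed comments have already been coded into the three question categories. The comments and counts below are hypothetical, not those reported in Appendix 2.5.

```python
# Sketch of summarizing coded focus-group comments; categories follow the
# three questions asked (like / bother / suggest), and the comments are invented.
from collections import Counter

coded_comments = [
    ("like", "open 24 hours"),
    ("bother", "too far from dorms"),
    ("bother", "not enough tutors"),
    ("suggest", "more tutors in the evening"),
    ("bother", "too far from dorms"),
    ("like", "can work at own pace"),
    # ... remaining coded comments would be appended here ...
]

# Count comments within each category and report the most frequent items,
# which is essentially how a "top five by category" summary is constructed.
by_category = {}
for category, comment in coded_comments:
    by_category.setdefault(category, Counter())[comment] += 1

for category, counter in by_category.items():
    print(f"\n{category.upper()}")
    for comment, n in counter.most_common(5):
        print(f"  {n:>3}  {comment}")
```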

Analysis of the qualitative data did in fact reveal areas of concern that are not obvious from the quantitative data. For example, students voiced high levels of satisfaction with the physical surroundings but low levels of satisfaction with the equipment and site location of the Math Emporium. Red flags were raised concerning relational factors when comments reflected student dissatisfaction with respect to the amount of teacher contact and types of student interactions occurring in the lab setting. The integrity of some students was questioned. The results also suggested that the students are concerned about efficient use of their time.

In general, the results from the third study contributed in two ways to the success of the program evaluation. First, the results added information that could be used by decision makers. Second, the results provided those insights needed to design an ongoing assessment of student satisfaction with the Math Emporium.

A second method for collecting the opinions and perceptions of individuals is the open-ended survey. This survey traditionally contains four or five general questions related to the topic of interest. Such surveys are very easy to construct but, like focus group interviews, are quite difficult to analyze and interpret.

The open-ended survey was administered to classes in the Department of Engineering Fundamentals. Given that most freshman engineering students take two mathematics classes per semester, distributing the surveys through those classes guaranteed that most of the students surveyed had experience with the Math Emporium, though normally in a small range of classes, that is, those required by the engineering curriculum. Because the results of the quantitative studies suggested that engineering students benefited less than other students from instruction in the Math Emporium, it was important to understand whether engineering students differed from the other student population in their perceptions of the Math Emporium. Students were asked the following questions:

1. What do you like about the Math Emporium and your experiences there?

2. What do you not like about the Math Emporium and your experiences there?

3. What should be done differently at the Math Emporium?

4. Do you think that you learned more as a result of your experiences at the Math Emporium? (If yes, why? If no, why not?)

A random sample of 220 responses from students (179 males and 41 females) enrolled in four different math courses was analyzed. The top five responses to each question are shown in Appendix 2.6. As with focus groups, comments are summarized. However, the researcher has the added advantage when using a written survey of being able to more accurately record the frequency with which specific concerns are identified.

Results from questions 1, 2, and 3 are similar to the results from focus groups. There is no reason to believe, based on analysis of qualitative data, that engineering students were different in fundamental ways as may have been implied by the analysis of quantitative data. Their concerns were basically the same, for example, time management, convenience, usefulness of hardware and software, competence, and availability of tutors. Differences between engineering and other students were suggested by the results from question 4. Engineering students did not perceive that they learned more as a result of experiences at the Math Emporium, a perception that generally supported the statistical analyses in studies 1 and 2.

Data collected using focus groups and surveys should ultimately help the institution design interventions that will address concerns expressed by participants. The institutional researcher can assist by collecting additional data that support the need for interventions. A pedagogy that is consistent with the professional standards of the academic disciplines must be identified. An important source for ensuring that professional standards are met is the use of expert judgment.

Although faculty and administrators inside the institution where the program has been implemented represent one source of expert judgment, they may not be seen as credible by all audiences owing to their involvement in development of the program. A second source of expertise comes from faculty, administrators, and consultants from outside the institution. Their opinions represent usable data that is perceived as credible by many audiences. Experts are particularly useful for describing, analyzing, and interpreting the learning culture of a program such as that developing in Virginia Tech’s Math Emporium. (Creswell, 1998, p. 67, discusses these and other types of qualitative studies.)

Experts invited to Virginia Tech to take part in the program evaluation engaged in the following activities: (1) discussions with students and faculty about what the students and faculty liked and did not like; (2) walks around the Math Emporium; and (3) observation of faculty, staff, and student activities. Their areas of expertise included mathematics and the processes of active learning and student engagement. The exit report provided opportunity for experts to engage key individuals from the university in a group discussion about the Math Emporium.

Results from discussions with students complemented and supported the results from the other qualitative studies of student perceptions. Discussions with the faculty concerning opinions about the learning process reinforced what the consultants observed and what the researcher identified. Results ultimately led to suggestions for interventions to improve the program.

Interviews conducted by experts provided the opportunity for faculty to discuss the implications of the Math Emporium with respect to expenditures and changes in department priorities. Experts also broadened the discussion to include issues of faculty support and priority setting for future use of the Math Emporium.

Use of experts resulted in a series of comments and observations about the active nature of student learning. Their findings reinforced what had been inferred from student comments and from faculty class notes and assessments. Expert opinion thus attached credibility to the idea that the student should be involved in determining aspects of the learning process. The learning activities incorporated into the Math Emporium were identified as consistent with preferred pedagogical processes and as resulting in preferred learning from the viewpoint of the discipline. Expert opinion reinforced findings from the research literature on the benefits of experiential learning.

A number of studies should be conducted when evaluating a program that is competing for resources, including a cost study, faculty course assessments, and literature reviews. These were incorporated into the case study of the Virginia Tech Math Emporium but are not described here owing to space limitations.

Techniques for conducting the studies mentioned in the previous paragraph are well known. For the cost study, the major costs were identified in terms of equipment, personnel, and facilities, including lease payments or rents in addition to purchase costs. Where possible, costs attributed to the program were separated from costs associated with other activities. A number of questions concerning alternative costs were studied. For example, the following types of questions were asked: If the Math Emporium were not being used, how much would it cost to provide instruction as was previously done? If the computers had not been placed in the emporium, would they have been purchased for other uses? If an administrative unit such as Information Systems paid for the Math Emporium computers as part of its goal to meet objectives for classroom technology, is this a cost attributable to the Math Emporium? Can estimates of improvement in the number of students passing the course as a result of instruction in the Math Emporium be translated into the number of class enrollments that are saved by the new method, which can in turn be translated into classes, staff costs, and facility needs?
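The last of these questions is essentially arithmetic. A back-of-the-envelope sketch with invented figures (none of these numbers come from the study) shows how an improvement in pass rates might be translated into repeat enrollments, sections, and instructional costs avoided:

```python
# Back-of-the-envelope sketch of one cost-study question: translating an
# improvement in pass rates into repeat enrollments and sections avoided.
# Every number below is invented for illustration.
enrollment = 2000            # students taking the course per year
pass_rate_before = 0.72      # share earning C or better pre-Emporium
pass_rate_after = 0.78       # share earning C or better with the Emporium
section_size = 40            # students per traditional section
cost_per_section = 9000.0    # instructor plus facility cost per section, dollars

# Students who fail must repeat the course, generating extra enrollments.
repeats_before = enrollment * (1 - pass_rate_before)
repeats_after = enrollment * (1 - pass_rate_after)
enrollments_saved = repeats_before - repeats_after

sections_saved = enrollments_saved / section_size
dollars_saved = sections_saved * cost_per_section

print(f"Repeat enrollments avoided per year: {enrollments_saved:.0f}")
print(f"Sections avoided: {sections_saved:.1f}")
print(f"Estimated instructional cost avoided: ${dollars_saved:,.0f}")
```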

Similarly, the use of course notes from faculty who teach the courses for assessment is standard operating procedure in many institutions. The institutional researcher can ask that the notes be more detailed for purposes of the evaluation to include the specific procedure for using the emporium and additional insights from the faculty. In the example of the Math Emporium, faculty noted that “each student is expected to spend one hour in a classroom setting attending a focus group and three hours studying in the Math Emporium with the help of MATH 1015 staff. Staff are available 9:00 A.M. to midnight on Monday through Thursday and 9:00 A.M. to 4:00 P.M. on Friday as well as 4:00 P.M. to 11:00 P.M. on Sunday. Students are given credit toward the final grade for conscientiously doing these two activities. Although this may have led to better grades as a result, they also lost grade points for not showing up and signing in for these time periods.” Examples of evaluative comments by the faculty about the effectiveness of the courses in supporting learning and in causing the active engagement of the students included the following: “A promising service to be provided by MATH 1015 and other courses in the future is down-the-line support for other courses, including those outside mathematics. If a student who has passed the course claims not to know something that is part of the curriculum, that student can be sent back to the Math Emporium. . . . This puts the responsibility to review such concepts on the students, with the Mathematics Emporium providing the services to make this possible.”

Institutional researchers can also benefit from the use of results in previously published research on the issues being studied. Literature reviews can provide insights into questions being asked about the benefits of any program, especially one focusing on student learning and pedagogy. In our example of the Math Emporium, a previously funded study was identified that looked at the impact of increased engagement on the ability and motivation of students, particularly engineering majors. The study, conducted with funding from the National Science Foundation, suggested that the long-term results of engagement for engineering students would likely be positive.5 This finding, coupled with evidence from the research literature that active and collaborative learning activities enhance student learning, implies that the long-term learning of students using the Math Emporium should be better than it would be under the more traditional lecture method, to the extent that the courses using the facility encourage student interaction with faculty and peers, hands-on activities with clear instructions, and appropriate structure.

Step 3: Integration and Interpretation of Results. The units of analysis for studies embedded within the case study vary as a function of the questions being investigated. Some studies focused at the individual level, whereas others focused at the class level. This provides a broad array of complementary information. The challenge for the institutional researcher is to identify a means for communicating the implications of the various studies for the program itself. The quality of institutional research rests on the degree to which it can meet this challenge and provide information that is relevant, sufficient, timely, and reliable. To accomplish this goal, data must be analyzed and the results then translated into useful information at the time that decisions are being made. In much the same way that theoreticians use mathematical equations and graphics to communicate their theory, institutional researchers can use matrices and graphics to communicate information drawn from multiple studies. Results and implications in table or matrix form can provide a visual map for the end user.

We will use the first four studies to demonstrate one possible approach. Table 2.2 is designed with the row headers indicating the study and the column headers indicating findings, consistencies/inconsistencies, and implications for the program.

Table 2.2. Integration and Interpretation of Results

Study 1 (prediction of performance)
Findings: Except for math for engineering students, instruction in the Math Emporium significantly increases the likelihood that students will succeed given past performance, and instruction in the Math Emporium is associated with significantly higher grades.
Consistencies/inconsistencies: Findings generally supported by Study 2 on overall performance and by studies 2 and 3 on outcomes for engineering students.

Study 2 (comparison of grades)
Findings: When comparing two semesters (one before the Math Emporium), there is a pattern of improvement, significant for some courses. The pattern of improvement is not evident for engineering students.
Consistencies/inconsistencies: Consistent with findings for studies 1 and 4.

Study 3 (student focus groups)
Findings: Freshman participants make no reference to improved performance; show resistance to new teaching approaches; give conflicting opinions about tutors; express concern about time and lab location; seem to dislike group work; and are sensitive to hardware problems.
Consistencies/inconsistencies: Inconsistencies exist if one assumes that better performance translates to higher levels of satisfaction. Inconsistency with engineering students in Study 4 on the usefulness of group work.

Study 4 (engineering student survey)
Findings: Engineering students do not believe they have learned more through use of the Math Emporium; express satisfaction with the physical learning environment; are positive about group work; express concern about the skill level of tutors; express concern about time and lab location; and dislike choices of hardware.
Consistencies/inconsistencies: Consistent with studies 1 and 2 on the usefulness of the Emporium for engineering students.
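A sketch of how a matrix like Table 2.2 might be assembled programmatically, with cell text abbreviated from the table and an empty "Implications" column left to be filled in during interpretation. This is a pandas illustration, not part of the original study.

```python
# Sketch of an integration matrix in the spirit of Table 2.2; cell text is
# abbreviated, and the implications column is a placeholder for interpretation.
import pandas as pd

matrix = pd.DataFrame(
    {
        "Findings": [
            "Emporium raises odds of C or better (except engineering math)",
            "Year-over-year grade improvement for most courses",
            "Freshmen: no perceived gain; concerns about time, location, tutors",
            "Engineering students: no perceived gain; like facility, group work",
        ],
        "Consistencies/inconsistencies": [
            "Supported by Study 2; studies 2-3 agree on engineering students",
            "Consistent with studies 1 and 4",
            "Inconsistent if performance implied satisfaction; group-work split with Study 4",
            "Consistent with studies 1 and 2 on engineering students",
        ],
        "Implications for the program": ["", "", "", ""],
    },
    index=["Study 1", "Study 2", "Study 3", "Study 4"],
)

print(matrix.to_string())
```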

Summary

The studies chosen for the embedded case study demonstrate how to meet the needs identified for the program evaluation. First, quantitative methods can be used to estimate and predict performance and to describe costs. Second, qualitative methods can be used to describe the structure of and satisfaction with the program. Third, studies using qualitative and quantitative methods can be compared to look for confirmation and contradictions in findings. The embedded case study thus allows the researcher to move from identification of questions to usable information for decision making at the program level.

Students, faculty, and administrators were the primary audiences for the results of the studies. At the same time, these groups acted as primary data sources for the program evaluation. Though studies embedded in the case had limitations in terms of reliability and validity, a comprehensive assessment of the Math Emporium did emerge. This was made possible by comparing and contrasting results of different studies using different methods. By bringing together various methodologies and by building on what others have learned, information emerged that effectively assisted and supported decision processes in the institution.

In summary, successful institutional research requires identifying the various studies that are appropriate given the nature of the program evaluation, the rules of evidence held by the audience that is listening, and the questions being asked. The strengths of various methodologies should thus be examined during the design phase of the research effort. The answer to one question may require use of qualitative methodologies, for example, student perspectives on effectiveness of a computer lab setting, whereas quantitative methodologies may be more appropriate in other instances, such as cost effectiveness of the program. In addition to the design phase of the project, the appropriateness of various studies and methodologies will need to be continuously considered and reconsidered as the project continues. This continuous assessment is particularly important when key audiences change and when surprising facts are discovered.

Notes

1. A related issue is whether the institutional researcher has a clear understanding of the role he or she should play: that of a summative evaluator or a formative evaluator. Individuals in top administrative positions may want to know whether the program attained its overall goals, and thus the role of the institutional researcher would be that of a summative evaluator. By contrast, faculty may see the evaluation as a progress check being conducted during the course of the program, and thus the role would be that of a formative evaluator. (See Morris, Fitz-Gibbon, and Lindheim, 1987, for a description of summative and formative evaluations.) The summative evaluation must be designed with close attention to stated performance objectives. The formative evaluation requires less rigid data collection and is generally more flexible with respect to requirements of the research.

2. Individuals who have these predispositions tend to be found in specific disciplines and in certain roles. At a popular level, this differentiation has been described in various books, such as those about Myers-Briggs types and the differences in the behaviors, values, and beliefs of these different types (Myers and Myers, 1990). At a more scholarly level, the audience's ability and willingness to be persuaded has been related to "attention factors, message quality, a person's involvement in the issue, and a person's ability to process persuasive argument" (Jowett and O'Donnell, 1992, p. 137). In other words, each person will have a different ability to learn from the various alternative methodologies, and the results of the various studies will be given different credibility by different individuals. In addition, each person will have a different motivation to learn from the various methodologies. Intuitively, using the preferred methodology of the audience will increase the likelihood that the members of that audience will accept the methodology as persuasive and accept the results as legitimate.

3. Selection of methodologies in the tradition of grounded theory (see Glaser and Strauss, 1967; Strauss and Corbin, 1994) is done with the understanding that the set of beliefs, or the theory, that initially drives the study will be changing as the research is being conducted. The researcher is expected to revisit the theory, modify it, and then modify the research to address the newly identified issues. The refinement of the beliefs and the methodology often comes from interacting with those who are important to the use of the results. It can also come from the discovery of related research that one finds as one moves through the major issues.

4. Patton (1987, p. 64) notes that the “ideal-typical qualitative methods strategy consists of three parts: (1) qualitative data, (2) naturalistic inquiry, and (3) inductive content or case analysis,” whereas “the classic hypothetico-deductive approach would ideally include (1) quantitative data, (2) experimental (or quasi-experimental) research designs and (3) statistical analysis based on deductively derived hypotheses.”

5. In a survey of 480 undergraduate engineering students at six other universities on knowledge and skills required by the Accreditation Board for Engineering and Technology, the authors concluded that opportunities to interact with faculty and to work collaboratively with peers in a classroom setting should lead to gains in professional competencies (Cabrera, Colbeck, and Terenzini, 1998).

References

Cabrera, A. F., Colbeck, C. L., and Terenzini, P. T. "Teaching for Professional Competence: Instructional Practices that Promote Development of Group, Problem-Solving, and Design Skills." Paper presented at the meeting of the Association for the Study of Higher Education, Miami, Fla., November 1998.

Campbell, D. T., and Fiske, D. W. “Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix.” Psychological Bulletin, 1959, 56, 81–105.

Cook, T. D., and Reichardt, C. S. (eds.). Qualitative and Quantitative Methods in Evaluation Research. Thousand Oaks, Calif.: Sage, 1979.

Creswell, J. W. Qualitative Inquiry and Research Design: Choosing Among Five Traditions. Thousand Oaks, Calif.: Sage, 1998.

Fitz-Gibbon, C. T., and Morris, L. L. How to Design a Program Evaluation. Thousand Oaks, Calif.: Sage, 1987.

Glaser, B., and Strauss, A. The Discovery of Grounded Theory. Chicago: Aldine, 1967.

Jowett, G. S., and O’Donnell, V. Propaganda and Persuasion. (2nd ed.) Thousand Oaks, Calif.: Sage, 1992.

Kidder, L. H., and Fine, M. "Qualitative and Quantitative Methods: When Stories Converge." In M. M. Mark and R. L. Shotland (eds.), Multiple Methods in Program Evaluation. New Directions for Program Evaluation, no. 35. San Francisco: Jossey-Bass, 1987.

Kleinbaum, D. G. Logistic Regression. New York: Springer, 1994.

Mark, M. M., and Shotland, R. L. “Alternative Models for the Use of Multiple Methods.” In M. M. Mark and R. L. Shotland (eds.), Multiple Methods in Program Evaluation. New Directions for Program Evaluation, no. 35. San Francisco: Jossey-Bass, 1987.

Miles, M. B. “Qualitative Data as an Attractive Nuisance: The Problem of Analysis.” In J. Van Maanen (ed.), Qualitative Methodology. Thousand Oaks, Calif.: Sage, 1983.

Morris, L. L., Fitz-Gibbon, C. T., and Lindheim, E. How to Measure Performance and Use Tests. Thousand Oaks, Calif.: Sage, 1987.

Myers, I. B., with Myers, P. B. Myers-Briggs Type Indicator: Gifts Differing. Palo Alto, Calif.: Consulting Psychologists Press, 1990.

Patton, M. Q. How to Use Qualitative Methods in Evaluation. Thousand Oaks, Calif.: Sage, 1987.

Patton, M. Q. Utilization-Focused Evaluation: The New Century Text. Thousand Oaks, Calif.: Sage, 1997.

Shotland, R. L., and Mark, M. M. "Improving Inferences from Multiple Methods." In M. M. Mark and R. L. Shotland (eds.), Multiple Methods in Program Evaluation. New Directions for Program Evaluation, no. 35. San Francisco: Jossey-Bass, 1987.

Stewart, D. W., and Shamdasani, P. N. Focus Groups: Theory and Practice. Applied Social Research Methods Series, no. 20. Thousand Oaks, Calif.: Sage, 1990.

Strauss, A., and Corbin, J. “Grounded Theory Methodology: An Overview.” In N. Denzin and Y. Lincoln (eds.), Handbook of Qualitative Research. Thousand Oaks, Calif.: Sage, 1994.

Terenzini, P. T. “On the Nature of Institutional Research and the Knowledge and Skills It Requires.” Research in Higher Education, 1993, 34(1), 1–10.

Van Maanen, J. “Reclaiming Qualitative Methods for Organizational Research: A Preface.” In J. Van Maanen (ed.), Qualitative Methodology. Thousand Oaks, Calif.: Sage, 1983.

Yin, R. K. Case Study Research: Design and Methods. Applied Social Research Methods Series, no. 5. Thousand Oaks, Calif.: Sage, 1989.

JOSETTA S. MCLAUGHLIN is director, School of Management and Marketing, Roosevelt University.

GERALD W. MCLAUGHLIN is director of institutional planning and research at DePaul University.

JOHN A. MUFFO is director of undergraduate assessment at Virginia Polytechnic Institute and State University.


APPENDIX 2.1. DESCRIPTION OF THE MATH EMPORIUM

The Math Emporium was first opened for classes on August 25, 1997. The purpose of the emporium was to provide a computer-based learning environment for instruction in selected freshman- and sophomore-level mathematics courses. The goals developed for the emporium were to improve student performance and to improve retention in math and math-related majors such as engineering. Courses covered by the Math Emporium included calculus, linear algebra, vector geometry, geometry, and computing for teachers. The math department cooperated with other departments in the development of courses. For example, Math 1114 was developed in cooperation with the College of Engineering, whereas Math 1525 was designed to meet the needs, including software suitability, of students from several departments.

The Math Emporium was open twenty-four hours a day, seven days a week. The facility provided 76,000 square feet of space, 17,000 of which were set aside for Information Services, an administrative unit. The furnishings included 500 computer stations arranged in pods of six with minimal use of partitions. The space also included one large lecture area, two enclosed classroom computer labs, two lounge areas, and partitioned spaces for a tutoring lab, staff offices, and small group sessions.

A math support staff made up of faculty members, graduate teaching assistants, and undergraduate assistants was available from 9 A.M. to midnight Monday through Thursday, from 9 A.M. until 4 P.M. on Friday, and from 4 P.M. until 11 P.M. on Sundays. Tutorial help was available from 6 P.M. to 9 P.M. Sunday through Thursday.

APPENDIX 2.2. RESULTS OF A STUDY OF STUDENT PERFORMANCE IN THE VIRGINIA TECH MATH EMPORIUM

Results indicate that for Math 1015, Math 1205, Math 1206, and Math 1224, instruction at the Math Emporium significantly increased the likelihood that a student would succeed in the course after controlling for the effects of the student’s prior academic performance. Instruction at the Math Emporium also appeared to increase the likelihood of student success in the remaining three courses, but the increase was not large enough to be statistically significant. Notably, the course for which the Math Emporium mode of instruction seemed to make the least difference in student performance was Math 1114, a course specifically designed for engineering majors; the typical characteristics and general math ability of the students in this course likely differed from those of the students in the other courses. In addition, students in Math 1114 spent the least amount of time in the Math Emporium, and the smaller number of assignments completed there may have contributed to the weaker effect.
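The appendix does not spell out the exact model specification. A minimal sketch of this kind of analysis, assuming a binary success indicator (a grade of C or better), an Emporium dummy, and prior-performance controls such as SAT mathematics score and high school GPA (the predictors mentioned in the note to Table A.2), might look as follows; the variable names and data layout are illustrative only and are not the authors’ actual code.

```python
# Minimal sketch of a success model of the kind described above.
# Column names (success, emporium, sat_math, hs_gpa) are assumptions
# made for illustration.
import pandas as pd
import statsmodels.formula.api as smf

def fit_success_model(df: pd.DataFrame):
    """Logistic regression of course success (1 = grade of C or better)
    on an Emporium indicator plus prior-performance controls."""
    model = smf.logit("success ~ emporium + sat_math + hs_gpa", data=df)
    return model.fit(disp=False)

# Example usage:
# df = pd.read_csv("math_1205_students.csv")
# print(fit_success_model(df).summary())
```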

Some key results from this analysis are presented in Table A.1.

The column “Change without Emporium” shows by what percentage a representative student’s likelihood of succeeding would have been reduced if the student had not been instructed at the Math Emporium. For the purposes of this comparison, a representative student was defined as a student in a Math Emporium course whose likelihood of succeeding equaled the proportion of Math Emporium students in the sample who received a satisfactory grade in that course. The column “Percentage of Success without Emporium” shows what the probability of success in the math course would have been for a representative student if instruction had followed more traditional methods rather than taking place at the Math Emporium.

The final column, “Unsuccessful Representative Students,” uses the previous information to estimate how many additional representative students, out of the total number of students given in the second column, most likely would not have received a satisfactory final course grade if the course had not been taught in the Math Emporium. For Math 1205, approximately 139 additional students would not have been successful (that is, they would have received a grade below C). Among all of the classes evaluated, 45 additional representative students on average would have performed unsuccessfully in each course if it had not been taught using the Math Emporium.


Table A.1. Effect of Math Emporium on Percentage of Students Receiving a C or Better

Course Number | N (a) | Coefficient (b) | p-value | Percentage of Success with Emporium (c) | Change without Emporium (d) | Percentage of Success without Emporium | Unsuccessful Representative Students (e)

1015 | 949 | 0.5489 | 0.0001 | 81.85 | 8.15 | 73.70 | 77
1016 | 317 | 0.3211 | 0.2148 | 85.49 | 3.98 | 81.51 | 13
1114 | 1365 | 0.0010 | 0.9915 | 70.72 | 0.02 | 70.70 | 1
1205 | 1260 | 0.6908 | 0.0001 | 80.13 | 11.00 | 69.13 | 139
1206 | 389 | 0.4446 | 0.0409 | 82.82 | 6.33 | 76.49 | 25
1224 | 421 | 0.6531 | 0.0024 | 86.46 | 7.65 | 78.81 | 32
1525 | 784 | 0.1972 | 0.1666 | 80.15 | 3.14 | 77.01 | 25
Average | 783.57 | 0.4081 | 0.2023 | 81.09 | 5.75 | 75.34 | 45

(a) The number of first-time freshmen in the sample who took the course at the Math Emporium.

(b) The coefficient for the Math Emporium effect from the logistic regression equation.

(c) The percentage of students in the sample who received a grade of C or better.

(d) The estimated reduction in the likelihood of receiving a satisfactory grade in the course, calculated as change(%) = β × prob × (1 – prob) × 100%, where change(%) is the percentage change in the likelihood of a satisfactory grade, β is the coefficient for the Math Emporium effect, and prob is the prior probability, or likelihood, that the student would be successful in the math course.

(e) How many additional representative students, out of the total number of students given in the second column, most likely would not have received a satisfactory final course grade if the course had not been taught in the Math Emporium.

Among courses for which the Math Emporium was found to have a significant impact, this average number of representative students affected increases to 68. For all seven courses combined, the total number of representative students who were estimated not to have received a satisfactory math grade during the semester if instruction had utilized more traditional methods was 310. Note that these results were for a single semester.
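As a quick check, the formula in note (d) can be applied directly to the published values; the short sketch below simply redoes that arithmetic for two of the courses and reproduces the “Change without Emporium” and “Unsuccessful Representative Students” columns (small differences reflect rounding).

```python
# Recompute two rows of Table A.1 from the published values, using the
# formula in note (d): change(%) = beta * prob * (1 - prob) * 100.
rows = {
    # course: (N, beta, probability of success with the Emporium)
    "1015": (949, 0.5489, 0.8185),
    "1205": (1260, 0.6908, 0.8013),
}

for course, (n, beta, prob) in rows.items():
    change = beta * prob * (1 - prob) * 100       # percentage-point reduction
    extra_unsuccessful = round(change / 100 * n)  # additional representative students
    print(course, round(change, 2), extra_unsuccessful)

# Prints 1015 8.15 77 and 1205 11.0 139, matching the table.
```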

APPENDIX 2.3. PERFORMANCE IN MATH EMPORIUM CONTROLLED FOR PAST PERFORMANCE

The results shown in Table A.2 indicate that instruction in the Math Emporium was associated with a significantly higher grade in five courses (Math 1015, Math 1205, Math 1206, Math 1224, and Math 1525). Significant improvements ranged from about .15 in Math 1015 to more than one-third of a letter grade in both Math 1206 and Math 1224. The average improvement for all courses was about .19 of a letter grade, and for the five courses where the improvement was significant, the average anticipated grade was .32 higher, representing an improvement of about 10 percent in the average grades. Once again, the Math Emporium showed the least effect among students of Math 1114, the course designed for engineering majors.

Table A.2. Effect of Math Emporium on Grades in Different Mathematics Courses

Course | N | Coefficient (a) | p-value | Average Grade | Average Grade without Emporium

1015 | 949 | 0.1491 | 0.0027 | 2.56 | 2.41
1016 | 317 | 0.1165 | 0.1483 | 3.10 | 2.98
1114 | 1365 | –0.0619 | 0.1557 | 2.33 | 2.39
1205 | 1260 | 0.2080 | 0.0001 | 2.53 | 2.32
1206 | 389 | 0.3904 | 0.0001 | 2.90 | 2.51
1224 | 423 | 0.3411 | 0.0001 | 3.00 | 2.66
1525 | 784 | 0.1817 | 0.0014 | 2.74 | 2.56
Average | 783.57 | 0.1893 | 0.0441 | 2.74 | 2.55

(a) Note that all of the coefficients in this table describe the effects of the Math Emporium mode of instruction on students’ success in these math courses given the effects of the numerous other predictors of student performance, including SAT scores and high school GPA. Thus, multicollinearity is a potential problem that would complicate the interpretation of the coefficients of individual regressors. To control for this possibility, a battery of multicollinearity diagnostic tests was performed on the data. No evidence of any problems with multicollinearity was detected.
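The note does not say which diagnostics made up that battery; computing variance inflation factors for the predictors is one common choice. The sketch below illustrates that kind of check under assumed column names rather than reproducing the authors’ actual procedure.

```python
# Illustrative multicollinearity check (not the study's actual procedure).
# Column names (emporium, sat_math, hs_gpa) are assumptions for illustration.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df: pd.DataFrame, predictors: list) -> pd.Series:
    """Variance inflation factor for each predictor; values well above
    roughly 10 are commonly read as a sign of problematic collinearity."""
    X = sm.add_constant(df[predictors])
    return pd.Series({col: variance_inflation_factor(X.values, i)
                      for i, col in enumerate(X.columns) if col != "const"})

# Example usage:
# print(vif_table(df, ["emporium", "sat_math", "hs_gpa"]))
```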

APPENDIX 2.4. DIFFERENCES IN MATHEMATICS GRADES AFTER INSTITUTING THE MATHEMATICS EMPORIUM

The results shown in Table A.3 suggest a pattern of improvement, some of it statistically significant, when comparing across the two fall semesters. Change was not always in the desired direction for the two spring semesters. The results shown in Table A.4 reveal that for fall 1997, over three hundred fewer students received grades of F in basic mathematics courses than in fall 1996.

Table A.3. Summary of Mathematics Grades, 1996–97 Versus 1997–98

Class | Semester | Number | Mean | 95% Range | Semester | Number | Mean | 95% Range

1015 | Fall 1996 | 1384 | 2.00 | 0.07 | Spring 1997 | 235 | 1.52 | 0.15
1015 | Fall 1997 | 1137 | 2.44*** | 0.07 | Spring 1998 | 177 | 2.14*** | 0.20
1016 | Fall 1996 | 593 | 2.44 | 0.11 | Spring 1997 | 1298 | 2.55 | 0.07
1016 | Fall 1997 | 620 | 2.58 | 0.10 | Spring 1998 | 1094 | 2.46 | 0.07
1114 | Fall 1996 | 1595 | 2.36 | 0.07 | Spring 1997 | 637 | 2.61 | 0.10
1114 | Fall 1997 | 1580 | 2.30 | 0.06 | Spring 1998 | 557 | 2.31*** | 0.10
1205 | Fall 1996 | 1344 | 2.28 | 0.07 | Spring 1997 | 277 | 2.07 | 0.15
1205 | Fall 1997 | 1347 | 2.46* | 0.06 | Spring 1998 | 194 | 2.02 | 0.18
1224 | Fall 1996 | 711 | 2.13 | 0.10 | Spring 1997 | 1043 | 2.13 | 0.07
1224 | Fall 1997 | 726 | 2.44*** | 0.10 | Spring 1998 | 1138 | 2.12 | 0.07
1525 | Fall 1996 | 972 | 2.38 | 0.08 | Spring 1997 | 243 | 1.97 | 0.19
1525 | Fall 1997 | 886 | 2.63*** | 0.08 | Spring 1998 | 152 | 2.18 | 0.21
1526 | Fall 1996 | 194 | 2.42 | 0.18 | Spring 1997 | 733 | 2.44 | 0.10
1526 | Fall 1997 | 200 | 2.10 | 0.18 | Spring 1998 | 756 | 2.59 | 0.08
1614 | Fall 1996 | 47 | 3.43 | 0.18 | – | – | – | –
1614 | Fall 1997 | 44 | 3.78* | 0.14 | – | – | – | –
1624 | – | – | – | – | Spring 1997 | 44 | 3.46 | 0.15
1624 | – | – | – | – | Spring 1998 | 47 | 3.63 | 0.17

*Significant at the .05 level.

***Significant at the .001 level.



Approximately five hundred fewer students, equivalent to 10 percent of the freshman class, earned grades of D or F. This comparison thus supports the findings of improved performance suggested by the two previous analyses. Though this study does not adjust for previous ability, the trade-off is that it is much easier to explain than the earlier regression studies and is therefore more persuasive to the less statistically inclined end user.
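Appendix 2.4 reports only means, sample sizes, and 95 percent ranges with significance flags, and it does not name the test used. If the “95% Range” column is read as the half-width of a 95 percent confidence interval for the mean, an approximate two-sample comparison can be reconstructed from the summary statistics alone, as in the sketch below; this is an illustration under that assumption, not the authors’ analysis.

```python
# Approximate re-analysis of Table A.3 from summary statistics only.
# Assumption: "95% Range" is the half-width of a 95% confidence interval
# for the mean, so sd ~= half_width * sqrt(n) / 1.96.
from math import sqrt
from scipy.stats import ttest_ind_from_stats

def compare_semesters(n1, mean1, half1, n2, mean2, half2):
    """Welch t-test reconstructed from (n, mean, CI half-width) summaries."""
    sd1 = half1 * sqrt(n1) / 1.96
    sd2 = half2 * sqrt(n2) / 1.96
    return ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2, equal_var=False)

# Math 1015, fall 1996 versus fall 1997 (values from Table A.3):
print(compare_semesters(1384, 2.00, 0.07, 1137, 2.44, 0.07))
# The p-value is far below .001, consistent with the *** flag in the table.
```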

APPENDIX 2.5. SUMMARY OF RESULTS FROM FOCUS GROUP INTERVIEWS

What do you like about the Math Emporium?


 Access/Convenience • Open twenty-four hours per day, seven days a week.

• Tutors are helpful, available much of the time, and free.

• Easy to schedule lectures.

• Can go through quizzes before taking a test.

• Flexible; self-pacing.

 Aesthetics • Comfortable chairs.

• Air-conditioned.

• Quiet; a good place to study any subject.

What bothers you about the Math Emporium?


Location • Inconvenient; getting there is a big effort that wastes a lot of time.

• Too far to go to accomplish what can be done in one’s room.

• Bus system is not timely; they are infrequent later at night; sometimes one has to walk home in the dark late at night.

• Only reason to go there is to take tests on computers.

Hardware/Software • System crashes frequently.

• Server freezes while taking a test with incorrect point adjustment afterwards; better system needs to be developed in the interest of fairness.

• Inefficient system; only Macs are used there; we are required to purchase PCs.

• The Macs are slow or not preferred, and some don’t have PC crossover option.

• Cannot save without a disk; trouble with disk compatibility.

Employees • Tutors are not helpful (twice as many responses as “are helpful” above).

• Cups used to signal for help result in an inefficient system.

• Coaches Corner not helpful.

• Independent learners can do well there, but they are in the minority.

Dehumanization • Removes teacher-student relationship.

• Does not allow for human interaction.

• Preference for teacher; paying money for academic credits without a real teacher.

• Don’t like doing math on a computer.

Group work/labs • Partners trade off, rotating through assignment.

• Pick up labs for a friend and complete them in the residence hall room.

• Not effective—less retention.

• Difficult to get several computers or chairs together for a work group.

• Assigned to a group but would rather pick them oneself.

 SUMMARY OF RESULTS FROM FOCUS GROUP INTERVIEWS 43

Cheating/security • Weak guidelines; it’s easy to cheat.

• Cheating goes on; people work together and/or use calculators.

• Sneak out of the building while supposed to be putting in required hours.

• Nobody knows if you are at your computer or not.

• Someone else can take your test.

What do you suggest should be done differently? How might it be improved?


General • Make its use voluntary.

• Correct problems of computers locking up.

• The time required should be appropriate to the assignment.

• Schedule classes at the same times so that help can be solicited from classmates.

Employees • Hire better or more helpful tutors.

• Hire more tutors.

• Develop a better way to signal for assistance.

• Tutors need to know more about math as well as computers.

Courses • Allow students to do the work in their rooms.

• Offer some courses at the emporium and in traditional classrooms; provide a choice.

• Make MATH 1525 optional.

• Optional hours; no requirements for number of hours at the emporium.

• More advanced courses (such as calculus) should not be at the emporium.

Convenience/location • Move the emporium to campus; move it to a more convenient location.

• More convenient bus schedule.

• Adjustable chairs.

• Have coin-operated copiers available.

• Have small sections of the emporium set up for quizzes.

Tests • Allow calculators.

• Change the way each question is timed equally.

• More time allowed per question.

• Create a way to give partial credit.

• Ability to check each test when completed before submitting it.

APPENDIX 2.6. OPEN-ENDED RESPONSES FROM ENGINEERING STUDENTS

Questions and Responses (the number of students giving each response appears in parentheses)

What do you like about the Math Emporium and your experiences there?

• It is a learning environment, a good place to study and do group work. (65)
• The staff are friendly and helpful. (46)
• Lectures and tutoring are available there. (43)
• The chairs and computer arrangements are comfortable. (35)
• It is open twenty-four hours a day, seven days a week. (26)

What do you not like about the Math Emporium and your experiences there?

• It is distant from campus, inconvenient to reach, and requires substantial travel time. (93)
• There are hardware and software problems. (82)
• The staff are not knowledgeable and are sometimes slow when there is a problem. (63)
• Macs are used, but I own a PC or PCs elsewhere. (35)
• It’s a requirement to go there, so many hours are required, and checking in and out can take a lot of time. (35)

What should be done differently at the Math Emporium?

• Get rid of the Macs and replace them with PCs; get better computers. (40)
• The staff could be more helpful, especially in relation to certain courses and software being used. (39)
• There should be better and more reliable hardware (especially servers), software, and printers. (33)
• Make use of the emporium optional. Assignments can be completed elsewhere, often in one’s room using the computer required by the university. (29)
• There should be more tutors and other staff available over more hours. (24)

Do you think that you learned more as a result of your experiences at the Math Emporium? If yes, why? If no, why not?

• No. (116)
• Yes. (81)
• I can learn the material just as well at home. (53)
• I was forced to learn the material using the quick tests and interviews. (26)
• It is a good place to get help and to meet with a group; the staff are helpful. (23)

 
