A major stumbling block to students' success in intermediate microeconomic theory is an inadequate background in basic quantitative skills and economic concepts. To address this concern, we develop a Pretest Program to help improve student performance. Results from an experimental design show that students who participate in the Pretest Program receive higher grades in intermediate microeconomic theory than students who are in the control group. We conclude that students who participate in the Pretest Program raise their skill levels in mathematics and economics and are better able to identify both their weaknesses and the instructor's expectations early in the semester.
keywords: student learning, pretest program, basic skills, education research
I. Introduction

One of the major stumbling blocks to students' success in intermediate microeconomic theory (a junior-level course) is an inadequate or out-of-date background in basic quantitative skills and economic concepts. Our experience at California State University, Fullerton (CSUF) is that many students take their requisite mathematics and economic principles courses more than a year before their junior-level coursework and forget much of what they learned because of lack of use. Consequently, they start intermediate theory with a handicap and must continuously struggle to keep up with the course material. This deficiency also hinders new learning, since a substantial amount of class time is spent reviewing introductory-level material. Educational psychologists have clearly demonstrated the importance of students' prior knowledge in understanding course material for a wide variety of academic disciplines.
To address this problem, we developed a Pretest Program, an integrated plan to help students improve their performance in intermediate microeconomic theory. At the core of the program is a two-part pretest of students' knowledge of basic quantitative and economic skills. As we discuss below, students were given a wide variety of options from which to upgrade their basic skills outside of the classroom and without the direct involvement of the instructor.
Our objective in this paper is to determine if our Pretest Program improves students' performance in intermediate microeconomic theory. We address this question by analyzing data drawn from an experimental design. The experiment involves five instructors who offered multiple sections of intermediate microeconomic theory. The instructors taught one section with the Pretest Program and the other without. Using logistic regression methods, we assess the importance of the Pretest Program by analyzing and comparing grade distributions for the pretest and no-pretest groups. In the following sections, we first position our Pretest Program within the context of an extensive education literature on prior knowledge, pretesting, and learning. Next, we provide more detailed descriptions of the Pretest Program and data collected, and then we explain our methodology and results.
II. The Literature on Prior Knowledge and Pretesting
Over the last 20 years, researchers in the field of educational psychology have carefully examined the factors that affect student performance. From this work, it is apparent that a key factor contributing to the learning process is students' prior knowledge. Research in the field of cognitive psychology demonstrates that students possessing more extensive prior knowledge recall and understand more about a subject than those individuals with less preexisting knowledge (Chi & Ceci, as cited by Thompson & Zamboanga, 2003; Glaser, 1984; Schneider & Pressley, 1987). Accompanying this notion has been a growing acceptance of a concept known as constructivism, which refers to the idea that students construct knowledge for themselves--"each learner . . . constructs meaning--as he or she learns" (Hein, 1991). It follows that prior knowledge plays an important role in fostering student learning, with new learning building on prior understanding.
Dochy, Segers, and Buehl (1999) provide a detailed overview of the research on prior knowledge and its role in promoting student learning. They examine nearly 200 articles, books, and research reports and find that prior knowledge generally has positive effects on student performance. Indeed, numerous studies reach an even stronger conclusion: "it is difficult to overestimate the contribution of individuals' prior knowledge" and "prior knowledge is an essential variable in learning" (see, for example, Alexander, Kulikowich, & Jetton, 1994; Alexander, Kulikowich, & Schulze, 1992; Alexander, Pate, Kulikowich, Farrell, & Wright, 1989; Bjorklund, 1985; Chi & Ceci, 1987; Chi, Glaser, & Farr, 1988; Dochy, 1992, 1994; Glaser, 1984; Glaser, Lesgold, & Lajoie, 1987; Pressley & McCormick, 1995; Schneider & Pressley, 1989). Moreover, Dochy, Segers, and Buehl note that the learning process is most strongly affected by domain-specific prior knowledge. This point is reinforced by numerous studies in physics, mathematics, writing, computer programming, and economics (e.g., Alexander & Judy, 1988; Hudson & Rottmann, 1981; McCutcheon, 1986; Dochy, 1992; Klahr & Carver, 1988).
Further, a review of the techniques used to assess prior knowledge suggests that one of the most effective is the objective pretest. Thompson and Zamboanga (2003), in particular, make a strong case for the usefulness of pretests to both instructors and students. For instructors, the pretest allows for an evaluation of the depth of the class's preexisting knowledge. With this information, an instructor can adjust his or her course presentation to communicate the subject matter more effectively. For students, a pretest carries obvious benefits. A good pretest can familiarize them with the instructor's testing style and, more importantly, reintroduce them to some of the basic course material, in effect mobilizing their relevant prior knowledge. We take this argument one step further and contend that the simple act of taking (and, of course, studying for) a pretest positively influences students' prior knowledge. Further, taking the pretest shifts the responsibility for learning these skills to the students, thereby putting them in a more active "learning mindset" right from the start of the semester.
Compared to the extensive literature on the role of prior knowledge in education, the number of studies explicitly considering the learning process in economics courses is much more limited and presents somewhat mixed results. Dochy (1992) offers a comprehensive discussion of eight such studies. Three of these (Clayton, 1964; Saunders, 1980; McKenzie & Staaf, 1974) found positive associations between college-level economics performance and whether students had taken high school economics. Further, two of the three studies determined that students armed with prior knowledge achieved the same test scores as students lacking that knowledge while exerting less effort. In contrast, three economics education studies (Harbury & Szreter, 1970; Siegfried, 1980; Voss, Blais, Means, Greene, & Ahwesh, 1986) found no association between prior knowledge (measured by the taking of a formal economics course or simply by the act of studying for an advanced placement test) and performance in college economics courses. Perhaps the most intriguing result, however, comes from two studies (Moyer & Paden, 1968; Palmer, Carliner, & Romer, 1979) that found that students who had taken high school economics scored significantly higher at the beginning of a college economics principles class. The performance of these same students, however, was no better by the end of the course than that of students who had not taken economics in high school.
Of course, prior knowledge in the form of quantitative skills and a solid foundation in economic principles are only two of the many factors leading to success in an intermediate-level economics course. Teachers are keenly aware that actively engaging students in the learning process also helps them to succeed. Becker (1997) reports on the apparent consensus among teaching specialists regarding the need for active student involvement in the learning process. In two studies evaluating factors that influence student performance, Devadoss and Foltz (1996) emphasize the importance of motivation in determining better grades, while Becker and Watts (1995) encourage instructors to use a wider variety of teaching methods to actively engage students. We feel that the Pretest Program addresses all of these points.
III. The Pretest Program
We administered a common pretest covering the basic concepts of principles of microeconomics and introductory algebra/calculus to incoming Intermediate Business Economics (Economics 315) students. The economics section of the pretest consisted of 10 questions that emphasized graphical and algebraic treatments of supply and demand; shifts in and movements along supply and demand curves; demand elasticity; opportunity cost; and production and costs, including profit maximization under perfect competition and monopoly. The mathematics portion consisted of 5 questions that stressed the basics of two-dimensional graphs, calculation of slopes and percentage changes, simple differentiation, and the technique of finding the maximum of a function. In this initial in-class pretest, two versions (A and B) were given to each class, which greatly reduced cheating--or even the temptation to cheat. Each version contained the same number and types of questions, with only the numerical parameters differing. Thus, in Version A, a demand curve might be presented as Qd = 50 - 5P, while in Version B, demand became Qd = 60 - 4P. A passing score of 60% on the test earned each student "full credit" for 5% of his or her overall Economics 315 grade. Assigning credit for passing the pretest is an important component of the program, given evidence that students typically do not take tests seriously unless they count as part of the course grade (Becker, 1997).
Our hypothesis is that the Pretest Program is an investment in human capital that can improve students' grades in intermediate theory. The review of economics principles and introductory algebra/calculus upgrades students' skills, reinforces their prior knowledge, and helps them to become active participants in the course early in the semester. To ensure that students had time to review the necessary material, we administered the pretest during the second half of the first week of class. Thus, students had anywhere from 2 to 4 days to study for the test, depending on their schedules. Night students had a full week to prepare, since evening classes met once a week. Moreover, most of the instructors spent a small amount of class time briefly reviewing the basic ideas that would be covered on the pretest.
Another key feature of the program was the array of support services offered by the Economics Department. Students who did not pass the pretest on their first attempt were encouraged to take corrective measures to strengthen their skills. They could receive tutoring help from the Economics Department Help Center or the Mathematics Lab. If students elected to study on their own, they could do so by using textbooks or software provided by the Economics Department. Students had to pass the pretest in no more than two additional attempts within the first three weeks of the semester to receive the 5% credit. It was emphasized to the students that they would not be dropped from the course if they did not pass the pretest. They were reminded that the purpose of the test was to encourage them to refresh their knowledge of economics and mathematics.
For these additional attempts, which were administered at specified times and locations, two new versions (C and D) were used (Version C for the first re-take, and Version D for the second). This, of course, prevented students from completing the pretest "between" testing opportunities. Students could not simply memorize an answer--they had to know how to solve a supply/demand model and how to calculate a percentage change in order to correctly answer the re-take questions. The students thus knew exactly the skills that were being tested, which allowed them to focus their study efforts very precisely.
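The two skills just described--solving a supply/demand model and computing a percentage change--can be illustrated with a short sketch. The demand curve below is the paper's Version A example; the supply curve and the percentage-change figures are hypothetical, added purely for illustration:

```python
# Worked example of the two pretest skills: solving a linear
# supply/demand model for equilibrium, and computing a percentage
# change. Demand Qd = 50 - 5P is the paper's Version A example;
# supply Qs = 10 + 3P is a hypothetical curve for illustration.

def equilibrium(a, b, c, d):
    """Solve demand Qd = a - b*P against supply Qs = c + d*P."""
    p_star = (a - c) / (b + d)   # set a - b*P = c + d*P and solve for P
    q_star = a - b * p_star      # substitute back into demand
    return p_star, q_star

def pct_change(old, new):
    """Percentage change from old to new."""
    return 100.0 * (new - old) / old

p, q = equilibrium(50, 5, 10, 3)   # P* = 5, Q* = 25
change = pct_change(25, 30)        # a 20% increase
```

Because the re-take versions changed only the numerical parameters, a student who could carry out these two calculations could answer any version of the questions.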
We implemented an experimental design during the spring 1999 semester to test the overall effectiveness of the Pretest Program. Five instructors teaching multiple sections of Economics 315 taught one section of the course using the pretest (the treatment group) and another section without using the pretest (the control group). We recorded the final grades given by each instructor for each section, class size, and whether the class was a day or evening class. To avoid bias in the experimental data, we verified that instructors employed the same teaching methods in each of their individual sections and administered the same quizzes and examinations to both sections. We also checked the student withdrawal rates for the pretest and no-pretest sections. There was no difference between the two, suggesting that poorer students were not dropping the course, or changing sections, to avoid the pretest. Finally, since the students did not know beforehand whether they would have to take the pretest, there was little chance for any self-selection based on the pretest to emerge. That is, it is likely that the assignment of students to the control and treatment groups was independent of the pretest instrument.
Table 1 reports the grade distributions for the 412 students who participated in the experiment during the spring 1999 semester. This table gives a breakdown of the grade distributions for the pretest and control groups. As shown, there are salient differences in the grade distributions between the two groups. In particular, pretest students received a much smaller percentage of "D" and "F" grades. A Chi-squared test indicates that we reject the null hypothesis of independence between the groups (p = .009), leading us to conclude that the data favor the hypothesis that the Pretest Program improves students' performance.
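The test of independence used here is a standard Pearson chi-squared test on the two-way table of groups by letter grades. A self-contained sketch follows; the counts below are illustrative stand-ins, not the actual Table 1 data:

```python
# Pearson chi-squared test of independence between group (pretest /
# control) and letter grade. The counts are illustrative only; the
# actual grade distributions appear in the paper's Table 1.
observed = {
    "pretest": {"A": 20, "B": 60, "C": 80, "D": 20, "F": 20},
    "control": {"A": 15, "B": 50, "C": 77, "D": 35, "F": 35},
}

def chi_square(table):
    """Return the Pearson chi-squared statistic and degrees of freedom."""
    rows = list(table)
    cols = list(table[rows[0]])
    row_tot = {r: sum(table[r].values()) for r in rows}
    col_tot = {c: sum(table[r][c] for r in rows) for c in cols}
    n = sum(row_tot.values())
    stat = 0.0
    for r in rows:
        for c in cols:
            expected = row_tot[r] * col_tot[c] / n  # under independence
            stat += (table[r][c] - expected) ** 2 / expected
    df = (len(rows) - 1) * (len(cols) - 1)
    return stat, df

stat, df = chi_square(observed)
# Compare stat against the chi-squared critical value with df = 4
# to decide whether to reject independence.
```

With two groups and five grade categories, the test has (2 - 1)(5 - 1) = 4 degrees of freedom.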
Three factors lead us to be cautious about this conclusion, however. First, the data in Table 1 have been aggregated across all instructors. Differences among instructors in teaching effectiveness, coupled with differences in the number of students in the pretest and control groups across instructors, may mask or accentuate differences in grades between the pretest and control groups. Second, some of the classes in our experimental design were offered in the evening. Evening students at CSUF tend to be older and possess more work experience. To the extent that these factors may lead to higher grades in intermediate microeconomics, failure to control or account for evening classes may bias the results. Third, for four of the five instructors who participated in the experiment, class size was larger for the control group. This could lead us to inadvertently attribute success to the Pretest Program if larger classes were associated with lower grades. Given these concerns, we turn next to a regression framework to isolate the impact of the Pretest Program on student performance.
Students receive particular grades in a class when their performance on examinations and other assessment tools meets or exceeds the standard for those grades set by the instructor. As is often the case, we do not observe student performance directly, nor do we observe the standard set by the instructor. Instead, we know only the outcome of the process--final grades. This suggests that a useful way to proceed toward an empirical specification for testing the success of the Pretest Program is to assume that there is an underlying latent-variable model that satisfies the classical linear-model assumptions (Wooldridge, 2000). Translated to the present application, let the variable y* be a latent variable representing the difference between a student's exam scores and other measures of performance and the standard set by the instructor for receiving a grade of at least "C." We are particularly interested in the "C" grade because business students must meet this standard to satisfy their core requirement in economics. Next, let y* be related to a vector of characteristics determining grades (X), unknown parameters β, and a random error term e as

y* = βX + e    (1)
We then define the indicator variable:
Y = 1 if y* ≥ 0
Y = 0 if y* < 0    (2)
Thus, the dependent variable for our regression is the binary indicator taking the value of one (Y = 1) if the student receives a grade of "C" or better for the course. That is, Y takes on the value of one if the student meets or exceeds the standard set by the instructor.
We assume that the probability (Y = 1) can be represented by a logit model. That is,
Prob(Y = 1) = exp(βX)/[1 + exp(βX)]    (3)
The probabilities for the logit model are based on the logistic distribution, which, like the normal, is symmetric but has heavier tails (Greene, 2003).
It is apparent from (3) that Prob(Y = 1) is nonlinear in both X and β. However, the logarithmic transformation of the odds ratio,
ln[Prob(Y = 1)/(1 - Prob(Y = 1))] = βX    (4)
is linear in both, leading to a standard interpretation of the estimated coefficients from the logit model. In the present context, these coefficients give the change in the log odds of receiving at least a "C" in the course relative to receiving a "D" or "F," for a given change in the explanatory variables (X).
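The logit link and its log-odds linearization can be checked numerically. A minimal sketch, where the value of the linear index is hypothetical and used purely for illustration:

```python
import math

def logit_prob(xb):
    """Prob(Y = 1) = exp(xb) / (1 + exp(xb)), where xb is the linear index."""
    return math.exp(xb) / (1.0 + math.exp(xb))

def log_odds(p):
    """ln[p / (1 - p)], the log of the odds ratio."""
    return math.log(p / (1.0 - p))

# The log-odds transformation recovers the linear index exactly,
# which is why the logit coefficients have a linear interpretation:
# a unit change in a regressor shifts the log odds by its coefficient.
xb = 0.75                      # hypothetical value of the index
p = logit_prob(xb)             # about 0.679
recovered = log_odds(p)        # equals xb up to rounding error
```

A pretest coefficient of, say, 0.9 would therefore raise the log odds of receiving at least a "C" by 0.9; the corresponding change in probability depends on where the index starts, which motivates the marginal-effect calculations below.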
Changes in the log odds are not particularly useful from an educational policy perspective, however. Instead, it is more informative to have an estimate of the "marginal effects" for key explanatory variables. In the context of the present study, a marginal effect will allow us to estimate how changes in an explanatory variable--participation in the Pretest Program--will change the probability of obtaining a grade of at least "C." In the analysis that follows, we report the marginal effect for the Pretest-Program variable because it allows us to better gauge the quantitative impact and practical importance of the Pretest Program on student learning.
Marginal effects are typically evaluated at the sample means of the data since they depend on the values of X (see Greene, 2003, p. 668). However, we cannot use this formulation in the present study. As we discuss below, all of the explanatory variables in our logit model are dummy variables. Calculating marginal effects at the sample means of the explanatory variables is neither appropriate nor informative when this is the case. As an alternative approach, we use the logit coefficients obtained from estimating Equation (3) to predict the probabilities for each instructor of receiving a grade of at least "C." Marginal effects for each instructor are then calculated as:
Marginal Effect = Prob(Y = 1|Pretest = 1) - Prob(Y = 1|Pretest = 0)    (5)
The first term on the right-hand side of Equation 5 is the predicted probability of receiving a grade of at least "C" given that the Pretest variable is set equal to one when predicting the probability from the logit estimates. The second term on the right-hand side of Equation 5 predicts the same probability, but with the Pretest variable set equal to zero. The difference between the first and second terms is thus the marginal effect of the Pretest Program on the probability of receiving a grade of at least "C."
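A sketch of this calculation follows, using hypothetical coefficient values; the paper's actual estimates appear in Table 2 and are not reproduced here:

```python
import math

def prob(xb):
    """Logit probability for linear index xb."""
    return math.exp(xb) / (1.0 + math.exp(xb))

# Hypothetical logit coefficients, for illustration only.
b_instructor = 0.40   # illustrative intercept for one instructor
b_pretest = 0.90      # illustrative coefficient on the pretest dummy

# Marginal effect of the Pretest Program: the difference in predicted
# probabilities with the pretest dummy switched on and off, holding
# the evening and class-size dummies at zero (a medium-sized day class).
marginal_effect = prob(b_instructor + b_pretest) - prob(b_instructor)
```

Because the logit function is nonlinear, the same pretest coefficient implies a different marginal effect for each instructor's intercept, which is why the paper reports the effect instructor by instructor.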
We next turn to a discussion of the key variables included in the vector X. This set consists of: (a) a dummy variable set equal to one if the student was in the pretest group; (b) a dummy variable set equal to one if the course was taken in the evening; (c) a series of dummy variables for the individual instructors, with Instructor 5 serving as the base category; and (d) a series of dummy variables controlling for class size. With respect to class size, the categories are small (20 or fewer students), medium (21 to 59 students), and large (60 or more students). Medium-sized classes are the base category for the logit regressions. As mentioned, the controls for instructors, evening classes, and class size are used in an effort to isolate the impact of the Pretest Program on student performance. Failure to account for these factors would give misleading results for the pretest variable if they were correlated with both grades and the pretest variable itself.
Table 2 summarizes our findings from estimating the parameters of the logit model. As mentioned, the coefficient estimates reported in the table give changes in the log odds of receiving a grade of at least "C" for a change in an explanatory variable. The results reported in the table give strong support for the hypothesis that the Pretest Program improves student performance. As shown, the coefficient for the pretest variable is positive and statistically significant. Of the remaining explanatory variables, only the coefficient for Instructor 1 is statistically significant.
Our estimated marginal effects for the Pretest Program are reported in Table 3. The table is organized as follows. The first column gives, for each instructor, the predicted probabilities assuming that students do not participate in the Pretest Program (the logit coefficient for the pretest variable is set equal to zero), while the second column gives the probabilities assuming participation in the program (pretest variable set equal to one). We also set the coefficients for the evening-class variable and the small and large class-size variables equal to zero, so that the predicted probabilities in each cell assume a medium-sized class taught during the day. As shown, the reported probabilities imply large positive effects of the Pretest Program on grades. The implied increase in the probability of receiving a grade of at least "C" ranges from 0.148 to 0.230 (that is, from 14.8 to 23.0 percentage points).
What do these marginal effects have to say about the number of students who directly benefited from the program? For each instructor, we calculated the increase in the number of students in the control group who would have been pushed up to a grade of "C" had they participated in the program. Applying the implied marginal effects to the control-group data, we find that 40 additional students from the control group would have received a grade of at least "C" had they had the benefit of the Pretest Program. Extrapolated to the usual 15 sections of Intermediate Microeconomic Theory taught at CSUF per semester, this yields a total of 120 students who could directly benefit from the Pretest Program.
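The arithmetic behind this extrapolation can be checked directly, taking one control section per instructor as implied by the experimental design:

```python
# Extrapolating the experiment's gains to a full semester's offerings.
additional_students = 40   # control-group students lifted to "C" or better
control_sections = 5       # one control section per instructor in the design
semester_sections = 15     # usual Economics 315 sections per semester

per_section = additional_students / control_sections   # 8 students per section
total = per_section * semester_sections                # 120 students per semester
```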
Results from our experimental design indicate that the pretest of basic economics and mathematics skills is successful in raising students' scores in intermediate microeconomics. We speculate that the pretest works on two fronts. Students prepare for the pretest, thus raising their basic skills. Moreover, students get early feedback on their weaknesses and on instructors' expectations. This gets them involved in the learning process right from the start of the semester.
Given that CSUF business students must pass Economics 315 with a grade of at least "C," the Pretest Program represents a potentially invaluable tool for facilitating effective learning and advancing students' progress through the undergraduate business program. This benefit is particularly noteworthy given that many universities, particularly those in the California State University System, are now in the midst of serious budget reductions. It is also worth noting that the Pretest Program itself is relatively easy to administer. One graduate student working approximately 5 hours per week for two weeks at the beginning of the semester is all that is needed to grade and summarize the pretest results. The tutoring help from the Economics Department Help Center is already in place, and given that the tutors have very light workloads during the first few weeks of the semester, the additional costs imposed by the program are minimal. In all, given the encouraging results from this study, along with the relative ease of administering the pretest, other college academic departments might find some sort of pretest program to be a cost-effective way of advancing student learning.
References

Alexander, P. A., and Judy, J. E. (1988). The interaction of domain-specific and strategic knowledge in academic performance. Review of Educational Research, 58, 375-404.
Alexander, P. A., Pate, E. P., Kulikowich, J. M., Farrell, D. M., and Wright, N. L. (1989). Domain-specific and strategic knowledge: Effects of training on students of differing ages or competence levels. Learning and Individual Differences, 1(3), 283-325.
Alexander, P. A., Kulikowich, J. M., and Jetton, T. L. (1994). The role of subject-matter knowledge and interest in the processing of linear and nonlinear texts. Review of Educational Research, 64(2), 201-252.
Alexander, P. A., Kulikowich, J. M., and Schulze, S. K. (1992). How subject-matter knowledge affects recall and interest. Paper presented at the XXV International Congress of Psychology, Brussels.
Becker, W. E. (1997). Teaching economics to undergraduates. Journal of Economic Literature, 35(3), 1347-1373.
Becker, W. E., and Watts, M. (1995). Teaching tools: teaching methods in undergraduate economics. Economic Inquiry, 33, 692-700.
Bjorklund, D. F. (1985). The role of conceptual knowledge in the development of organization in children's memory. In C. J. Brainerd and M. Pressley (Eds.), Basic processes in memory development (pp. 103-142). New York: Springer-Verlag.
Chi, M. T. H., and Ceci, S. J. (1987). Content knowledge: Its role, representation, and restructuring in memory development. In H. W. Reese (Ed.), Advances in child development and behavior (Vol. 20, pp. 91-142). Orlando, FL: Academic Press.
Chi, M. T. H., Glaser, R., and Farr, M. (1988). The nature of expertise. Hillsdale, NJ: Erlbaum.
Clayton, R. E. (1964). Performance in economics at school and university. Vestes, 7, 120-127.
Devadoss, S., and Foltz, J. (1996). Evaluation of factors influencing student class attendance and performance. American Journal of Agricultural Economics, 78, 499-507.
Dochy, F. J. R. C. (1992). Assessment of prior knowledge as a determinant for future learning. Utrecht/London: Lemma BV/Jessica Kingsley Publishers.
Dochy, F. J. R. C. (1994). Prior knowledge and learning. In T. Husen and T. N. Postlethwaite (Eds.), International Encyclopedia of Education (2nd ed., pp. 4698-4702). Oxford/New York: Pergamon Press.
Dochy, F. J. R. C., Segers, M., and Buehl, M. M. (1999). The relation between assessment practices and outcomes of studies: The case of research on prior knowledge. Review of Educational Research, 69(2), 145-186.
Glaser, R. (1984). Education and thinking. The role of knowledge. American Psychologist, 39, 93-104.
Glaser, R., Lesgold, A., and Lajoie, S. (1987). Toward a cognitive theory for the measurement of achievement. In R. R. Ronning, J. Glover, J. C. Conoley, and J. C. Witt (Eds.), The influence of cognitive psychology on testing and measurement (pp. 41-85). Hillsdale, NJ: Erlbaum.
Greene, W. H. (2003). Econometric analysis. Upper Saddle River, NJ: Prentice Hall.
Harbury, C. D., and Szreter, R. (1970). The value of prior experience of economics for university students. The Journal of Economic Education, 2, 56-61.
Hein, G. E. (1991). Constructivist learning theory. Retrieved from
McKenzie, R. B., and Staaf, R. J. (1974). An economic theory of learning. Blacksburg, VA: University Publications.
Moyer, M. E., and Paden, D. W. (1968). On the efficiency of high school economics courses. American Economic Review, 58, 870-877.
Palmer, J., Carliner, G., and Romer, T. (1979). Does high school economics help? The Journal of Economic Education, 15, 58-61.
Pressley, M., and McCormick, C. B. (1995). Advanced educational psychology for educators, researchers and policymakers. New York: Harper Collins College Publishers.
Saunders, P. (1980). The lasting effects of introductory economics courses. The Journal of Economic Education, 12, 234-248.
Schneider, W., and Pressley, M. (1989). Memory development between 2 and 20. New York: Springer-Verlag.
Siegfried, J. J. (1980). Factors affecting student performance in law school economics courses. The Journal of Economic Education, 12, 54-60.
Thompson, R. A., and Zamboanga, B. L. (2003). Prior knowledge and its relevance to student achievement in introduction to psychology. Teaching of Psychology, 30(2), 96-101.
Voss, J. F., Blais, J., Means, M. L., Greene, T. R., and Ahwesh, E. (1986). Informal reasoning and subject matter knowledge in the solving of economics problems of naive and novice individuals. Cognition and Instruction, 3, 269-302.
Wooldridge, J. M. (2000). Introductory econometrics: A modern approach. Mason, OH: South-Western College Publishing.
Worthington, A., Hansen, J., Nightingale, J., and Vine, K. (1998). Supplemental instruction in introductory economics: an evaluation of the University of New England's peer assisted study scheme. Australian Economic Papers, Special Issue, 69-80.
Posted March 11, 2004
Modified March 12, 2004
All material appearing in this journal is subject to applicable copyright laws.
Publication in this journal in no way indicates the endorsement of the content by the California State University, the Institute for Teaching and Learning, or the Exchanges Editorial Board.
©2004 by Victor Brajer and Andrew Gill