Excerpts from
Return to Academic Standards
Charles Emery, Ph.D., Tracy Kramer, Ph.D., Robert Tian, Ph.D.




Today, many colleges and universities in the United States require their faculty members to treat students as customers in their teaching practice.

As a result, demands to increase student enrollment, pressure to satisfy students’ desires for higher grades, and the use of student evaluations of faculty performance (SEFP) or student evaluations of teaching effectiveness (SETE) have become increasingly common on college campuses across the nation.


In a study that tracked the use of student evaluations of faculty in 600 colleges between 1973 and 1993, Seldin (1993) found that the use of SEFP/SETE increased from 29% to 86% during that period.


As business school professors, we take the position that students are our products rather than our customers (Emery, Kramer, & Tian, 2001). We view students’ schoolwork as their product (Tian, 2001) and believe that we, as professors, are their customers (Emery, 2001). We therefore view SEFP/SETE as a counterproductive approach to education quality control. ….
…. In the majority of business schools across the country, SEFP has become the most frequently used, and in most cases the only, method of evaluating teaching effectiveness. Does this practice encourage business school faculty to teach their students with future employers in mind, or does it encourage faculty to teach with their own student evaluations in mind?



Challenging the Validity of SEFP/SETE

A Literature Review


There have been decades of extensive research on SEFP/SETE. That research can answer many persistent questions and dispel some widely held misconceptions. For instance, it is widely believed that SEFP/SETE are only popularity contests that have little to do with learning.



…. Some critics argue that student ratings are unduly affected “by the personal style of the instructor rather than [the] instructor’s ability to convey instructional material” (Abrami, Leventhal & Perry, 1982). Feldman (1986) found ... the overall relationship of instructor personality to student ratings is substantial, with positive correlations ranging from moderate to high. A meta-analysis of a dozen of these studies revealed that “instructor expressiveness had a substantial impact on student ratings but a small impact on student achievement” ……

Cohen (1983) found that student achievement accounted for 14.4% of overall instructor rating variance. Other analyses have turned up somewhat lower estimates of student rating validity. In a meta-analysis of 14 multi-section validity studies, McCallum (1984) found that student achievement explained 10.1% and 6.4% of overall instructor and course rating variance, respectively. And, in a quantitative analysis of six validity studies chosen for their exceptional control of student presage variables, Dowell & Neal (1982) found that student achievement accounted for only 3.9% of between-teacher student rating variance. Damron (1996) concluded that, when this is considered together with validity research yielding only marginal and unstable relationships between student ratings and instructional outcomes, it is likely that most of the factors contributing to student instructional ratings are unrelated to instructors’ ability to promote student learning.


Dowell & Neal (1983), providing further evidence in support of their earlier conclusion, argued that student ratings are inaccurate indicators of student learning; at best, they can be regarded as indices of “consumer satisfaction” rather than of teaching effectiveness. ….



Critics have also noted that summary and global ratings, which are frequently used to make tenure and promotion decisions, are particularly elevated by instructor expressiveness. ...


Lecture content had a sizable influence on student achievement but only a negligible impact on student ratings. .... Student instructional ratings should not be used in decision making about faculty promotion and tenure, because charismatic and enthusiastic faculty can receive favorable student ratings regardless of how well they know their subject matter or how much their students learn (Abrami, Leventhal & Perry, 1982; Damron, 1996).

SEFP/SETE has also been described as a serious, unrecognized infringement on academic freedom (Haskell, 1997). Some scholars frankly indicate that students are not qualified to evaluate their professors.



Abrami, P.C., d’Apollonia, S. & Cohen, P.A. (1990) “Validity of student ratings of instruction: What we know and what we do not”, Journal of Educational Psychology, 82(2), pp. 219-231.

Abrami, P.C., Leventhal, L., & Perry, R.P. (1982) “Educational seduction”, Review of Educational Research, 52, pp. 446-464.

Adams, J.V. (1997) “Student evaluations: The ratings game”, Inquiry 1 (2), pp.10-16.

Aleamoni, L. (1989) “Typical faculty concerns about evaluation of teaching”, in L.M. Aleamoni (Ed.) Techniques for Evaluating and Improving Instruction. San Francisco: Jossey-Bass, Inc.

Aleamoni, L. (1987) “Student rating: Myths versus research facts", Journal of Personnel Evaluation in Education, (1)  pp.111-119.

Arreola, Raoul A. (1995) Developing a comprehensive faculty evaluation system. Bolton, MA: Anker.

Cashin, W. (1990). “Students do rate different academic fields differently.” In Theall, M. & Franklin J. (Eds), Student Ratings Of Instruction: Issues For Improving Practice. Jossey-Bass, Inc. San Francisco, 1990.

Cohen, P.A. (1983) “Comment on a selective review of the validity of student ratings of teaching", Journal of Higher Education, 54,  pp.448-458.

Comm, C.L. and Mathaisel, D. (1998) “Evaluating teaching effectiveness in America's business schools: implications for service marketers”, Journal of Professional Services Marketing, Vol.16 No.2, pp. 163-170.

Crumbley, D. L. (1995) “The dysfunctional atmosphere of higher education: Games professors play”, Accounting Perspectives, Spring, Vol.1, No. 1.   

Damron, J.C. (1996) “Instructor personality and the politics of the classroom”, < http://www.mankato.msus.edu/dept/psych/Damron_politics.html> accessed in May 2001.

DeBerg, C.L. and Wilson, J.R. (1990) “An empirical investigation of the potential confounding variables in student evaluation of teaching”, Journal of Accounting Education, pp. 37-62.

Dooris, M. J. (1997) “An analysis of the Penn State student rating of teaching effectiveness: A report presented to the University Faculty Senate of the Pennsylvania State University”, < http://www.psu.edu/president/cqi/cqi/srte/analysis.html> accessed in May 2001.

Dowell, D.A., & Neal, J.A., (1983). “The validity and accuracy of student ratings of instruction: A reply to Peter A. Cohen”. Journal of Higher Education, 54, pp.459-463.

Dowell, D.A. & Neal, J.A. (1982) “A selective review of the validity of student ratings of teaching”. Journal of Higher Education, 53, pp.51-62.

Emery, C., Kramer, T., and Tian, R. (2001) “Customers vs. products: adopting an effective approach to business students”, Quality Assurance in Education, Vol. 9 No. 2, pp. 110-115.

Emery, C. (2001) “Professors as customers”, in Lamb, Hair & McDaniel (Eds.) Great Ideas for Teaching Marketing 6th Edition, New York: South-Western College Publishing, pp. 136-139.

Erikson, S.C. (1983). “Private measures of good teaching.” Teaching of Psychology, 10, pp.133- 136.

Feldman, K.A., (1986) “The perceived instructional effectiveness of college teachers as related to their personality and attitudinal characteristics: A review and synthesis”, Research in Higher Education, 24, pp.139-213.

Feldman, Kenneth A. (1978). "Course characteristics and college students' ratings of their teachers: What we know and what we don't", Research in Higher Education, (9) pp.199-242.

Franklin, J., & Theall, M. (1990). “Communicating student ratings to decision makers: Design for good practice.” In Theall, M. & Franklin J. (Eds), Student Ratings of Instruction: Issues For Improving Practice, Jossey-Bass, Inc. San Francisco.

Haskell, R.E. (1997) “Academic freedom, tenure, and student evaluation of faculty: Galloping polls in the 21st century”, Education Policy Analysis Archives, Vol. 5, No. 6.

McCallum, L.W. (1984) “A meta-analysis of course evaluation data and its use in the tenure decision”, Research in Higher Education, 21, pp. 150-158.

McKeachie, W. (1987). “Can evaluating instruction improve teaching?” In L.M. Aleamoni (Ed.), Techniques for evaluating and Improving Instruction. San Francisco: Jossey-Bass, Inc.

Rosenfeld, P. (1987). Instructor's Manual to Accompany Scarr and Vander Zadens' Understanding Psychology (5th ed.), New York: Random House.

Seldin, P. (1993, July 21). “The use and abuse of student ratings of instruction”, The Chronicle of Higher Education, A-40.

Sproule, R. (2000) “Student evaluations of teaching: Methodological critique of conventional practices”, Education Policy Analysis Archives, Vol. 8, No. 50.

Theall, M. and Franklin, J. (Eds.) (1990) Student Ratings of Instruction: Issues for Improving Practice. New Directions for Teaching and Learning, No. 43. San Francisco: Jossey-Bass.

Tian, R.G. (2001) “Applying 4 Ps to Students’ Schoolwork”, in Lamb, Hair & McDaniel (Eds.) Great Ideas for Teaching Marketing 6th Edition, New York: South-Western College Publishing, pp. 125-127.

Tian, R. G. (2000) “Understanding consumer behavior: Psycho-anthropological approach", in North American Journal of Psychology, Vol. 2, No. 2, pp. 273-279.



