
National Postgraduate Entrance Examinations In China English Language Essay


The English Test for Non-English Major Students of the National Postgraduate Entrance Examinations (ET-NPEE) in China is an important gate-keeper on the educational ladder between undergraduate and postgraduate study. By analyzing the validity and backwash of the reading comprehension section of this test, the present paper offers suggestions on how to improve the testing of reading, so as to make the test more effective and efficient in selecting candidates with good English proficiency for postgraduate study.

The test

ET-NPEE is a norm-referenced English proficiency test developed by the National Education Examinations Authority on the basis of the new version of the Syllabus for National Postgraduate Entrance Examinations established by the China National Ministry of Education in 2005 (Zhang, 2010, p. 3). The purpose of this annual test is to select non-English major students with adequate English language knowledge and language abilities for graduate schools and to facilitate college English teaching. Because only those whose scores rank high among candidates nationwide have a chance of being considered by their preferred graduate schools, the test is extremely competitive.

With listening and speaking measured separately in a later interview for students who score high enough on this pencil-and-paper test, the test targets the other two language constructs, reading and writing, and is composed of three sections: use of English, reading comprehension, and composition (Anonymous, 2008, p. 3).

The reading comprehension section consists of three sub-sections, each testing different reading skills:

1) Five multiple-choice questions per long text (20 questions in total, 2 points each), sampling students' ability to understand the main idea of a text, identify detailed information, draw logical inferences, and decode the meaning of specific words from context;

2) A multiple passage matching, multiple heading matching, or ordering task (5 blanks, 2 points each), testing the ability to detect text organization and cohesion;

3) English-Chinese translation (5 sentences, 2 points each), testing the understanding of specific concepts or complex structures in a text.

Given that non-English major students have to wait until the first semester of their graduating year to take this exam, and that there are no English courses in the junior and senior years, the test does not seem to have as direct an impact on the teaching process in colleges as the Syllabus specifies. Therefore, in the following analysis of the reading comprehension section, the present essay focuses on the backwash of the test on learning activities and sets aside its effect on school teaching and curriculum design.

Reading Comprehension

It should be mentioned first that the Syllabus, which serves as the test specifications of ET-NPEE, specifies the vocabulary that candidates should master in order to take the test and appends a glossary of approximately 5,500 words. By stating explicitly what vocabulary level the test requires, it makes the test more transparent to candidates and gives them more confidence. Besides, by expanding candidates' vocabulary size, it has some beneficial influence on their performance in reading comprehension, since breadth and depth of vocabulary knowledge are positive predictors of the performance of university-level ESL speakers on reading assessment (Qian, 2002). These merits together contribute to the validity of the reading comprehension section.

Alderson (2000, pp. 206, 270) argues that good reading tests should always employ multiple techniques, since different methods can measure different aspects of the reading process. The three methods used in the sub-sections to measure reading directly, namely multiple choice, multiple matching, and translation, show that the test has made a considerable effort to enhance its validity and reliability.

It should also be highlighted that the multiple-choice and multiple-matching items, which add up to 50 points, can be scored objectively by computer, which improves scorer reliability.

However, we should be wary of the possibility that the test may be increasing its reliability at the cost of validity, given that there is always some tension between reliability and validity (Hughes, 2003, p. 51), especially when objective questions carry a weight of 50% of the total mark.

One major difficulty with multiple-choice questions, repeatedly stressed by Hughes (ibid., p. 77), is that good ones are very hard to write. Among the characteristics of unsuccessful multiple-choice questions, ineffective distractors allow guessing, a notorious fault of the multiple-choice technique, to play a considerable role in getting the right answer. The following multiple-choice question, one of the five measuring candidates' comprehension of a specific text from the 2008 test paper (see appendix), is a good illustration.

On which of the following statements would the author most probably agree?

[A] Non-Americans add to the average height of the nation.

[B] Human height is conditioned by the upright posture.

[C] Americans are the tallest on average in the world.

[D] Larger babies tend to become taller in adulthood.

In this example, candidates with some common sense can immediately recognize that C and D should be ruled out. This exclusion gives them a much better chance of getting the right answer even by guessing, since they only need to choose between A and B. Such ineffective items are not isolated cases in the 2008 test paper. With professional guidance on test-taking strategies and frequent practice, test takers can become very good at eliminating improbable or irrelevant answers and then choosing the right one. This again shows that the skills required to pass a language test are not necessarily the skills required in the use domain of that language (Bachman & Palmer, 1996), which is particularly the case for tests that rely mainly on multiple-choice items. Harmful backwash on learning may result, since this kind of practice on multiple-choice questions tends to coach candidates in test-wiseness rather than provide them with effective means of improving their language ability (Hughes, 2003, p. 78).
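To make the role of guessing concrete, the expected score from blind guessing on the 20 multiple-choice items (2 points each) can be sketched as follows. This is a minimal illustration under the assumption that a candidate guesses at random among the options she cannot rule out; the function below is an assumption for illustration, not part of the official specifications.

    # Expected score from random guessing on the multiple-choice sub-section,
    # assuming each item has the same number of options left after elimination.
    def expected_guessing_score(n_items=20, points_per_item=2, options_left=4):
        return n_items * points_per_item / options_left

    print(expected_guessing_score(options_left=4))  # 10.0 points when all four options are plausible
    print(expected_guessing_score(options_left=2))  # 20.0 points when two distractors can be ruled out

In other words, items like the one above effectively double the score a pure guesser can expect from this sub-section.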

Even if we set the guessing factor aside, the multiple-choice technique is still problematic for testing one's reading ability. Candidates have to spend time on distractors which they would not otherwise have considered, and this can result in an inaccurate measure of their understanding (Alderson, 2000, p. 219). Moreover, testers cannot tell whether candidates are responding in the way anticipated. Test takers can get a question right without demonstrating the ability being tested, as illustrated above, or answer an item wrong despite having that ability (ibid., p. 212). For example, in the text from the 2008 test paper, which reports that women suffer from stress of a more chronic and frequent nature than men do, a paragraph quotes Adeline Alvarez, a single mother, as saying, "It's the hardest thing to take care of a teenager, have a job, pay the rent, pay the car payment, and pay the debt. I lived from paycheck to paycheck". A student may well understand that Alvarez is struggling with the wear and tear brought by various debts and is worn down by such long-lasting and repeated strains, but still choose the wrong answer C instead of the right answer B in the item below, because C looks literally closer to Alvarez's situation than B does. In this case, the student does interpret the text as the testers expect, but she answers the item wrong, partly because of the ambiguity of the item and partly because of the limitations of this test type in measuring an individual's silent reading process.

The sentence "I lived from paycheck to paycheck" shows that

[A] Alvarez cared about nothing but making money.

[B] Alvarez's salary barely covered her household expenses.

[C] Alvarez got paychecks from different jobs.

[D] Alvarez paid practically everything by check.

As for multiple passage matching and the ordering task, the other objective techniques used by the test constructors to assess reading comprehension, the official marking key makes no allowance for partially correct answers, which is likely to reduce their validity in testing candidates' ability to detect text organization and cohesion. For example, if the right order is "ABCDE" and a student answers "AGBCD", she earns only 2 points, for the correct answer "A". In fact, she has a fairly good global understanding of the text, since she has produced the accurate sequence "ABCD"; but inserting the distractor G into her answer costs her another 6 points that she arguably deserves. This sort of marking criterion can also be very discouraging, as it tells test takers that a single wrong choice can disrupt the whole order and leave them with nothing on this task. As a result, they may spend less time and effort practicing such matching and exercising the corresponding abilities.
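For concreteness, the effect of this exact-position marking on the hypothetical answer above can be sketched as follows; the function is an illustrative reconstruction of the marking rule, not the official marking program.

    # Illustrative reconstruction of exact-position marking for the ordering task
    # (not the official marking program); each of the 5 blanks is worth 2 points.
    def exact_position_score(answer, key, points_per_blank=2):
        return sum(points_per_blank for a, k in zip(answer, key) if a == k)

    print(exact_position_score("AGBCD", "ABCDE"))  # 2: only the first blank matches the key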

Probably aware of the defects of using too many objective items, the test writers add a subjectively scored technique, English-Chinese translation, to supplement the objective items and achieve a better balance in the testing of reading. According to the Syllabus (Anonymous, 2008, p. 6), the purpose of including English-Chinese translation in the reading comprehension section is to test whether students can understand specific concepts or complex structures by inferring from within the text and can express this understanding correctly in Chinese. Although this kind of task not only makes candidates exercise reading skills but also asks them to demonstrate that they are using these skills successfully, structuring and organizing Chinese is a language facility outside the scope of English reading and writing. A test taker's failure to express her accurate understanding of an English sentence satisfactorily in Chinese can lead to a loss of points and undermine the validity of such a task in testing reading comprehension.

Suggestions

Generally speaking, in spite of some drawbacks, ET-NPEE employs relatively efficient means of directly testing candidates' reading ability. The following suggestions are made to overcome these drawbacks and help improve the test in its future administrations.

1. Conduct sufficient pre-testing to make sure that multiple-choice questions are unambiguously written and that no second acceptable answer is listed among the choices. Each distractor needs to be useful and to represent a plausible partial or holistic misinterpretation of the text, so as to provide a more valid measure of the reading process (Munby, 1968).

2. If possible, award partial credit for partially correct answers (3 or 4 passages in the correct sequence) in the multiple passage matching or ordering task, so as to encourage students' engagement with such tasks and enhance their validity (see the sketch after this list).

3. If possible, reduce the weighting of English-Chinese translation, which may not be a valid method of evaluating candidates' reading ability.

4. If possible, incorporate short-answer questions requiring responses of one or two words, possibly integrated with a gap-filling technique. This "semi-objective alternative to multiple-choice" elicits some performance from candidates to demonstrate that they have really understood the text (Alderson, 2000, p. 227).
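As referenced in Suggestion 2, one way such partial credit could be operationalized is sketched below; the longest-common-subsequence (LCS) scheme is purely a hypothetical illustration, not an official proposal or the scheme used by the test.

    # A hypothetical partial-credit scheme for the ordering task: award points in
    # proportion to the longest common subsequence (LCS) shared by the candidate's
    # order and the key, so a correct relative sequence still earns credit.
    def lcs_length(a, b):
        # standard dynamic-programming computation of the LCS length
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
        return dp[len(a)][len(b)]

    def partial_credit_score(answer, key, points_per_item=2):
        return lcs_length(answer, key) * points_per_item

    print(partial_credit_score("AGBCD", "ABCDE"))  # 8: the relative sequence A-B-C-D is preserved

Under such a scheme, the earlier answer "AGBCD" would receive 8 of 10 points rather than 2, rewarding the global understanding the candidate has in fact demonstrated.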

Number of words: 1803

References

Alderson, J. C. (2000). Assessing reading. Cambridge: Cambridge University Press.

Anonymous (2007). Syllabus of English test for non-English major students of the 2008 National Postgraduate Entrance Examinations. Beijing: Higher Education Press.

Bachman, L. F. & Palmer, A. S. (1996). Language testing in practice. Oxford: Oxford University Press.

Hughes, A. (2003). Testing for language teachers (2nd Ed.). Cambridge: Cambridge University Press.

Munby, J. (1968). Read and think. Harlow: Longman.

Qian, D. D. (2002). Investigating the relationship between vocabulary knowledge and academic reading performance: An assessment perspective. Language Learning, 52(3), 513-536.

Zhang, L. (2010). Hong Bao Shu: English test papers in the past 10 years of the National Postgraduate Entrance Examinations. Xi'an: Northwest University Press.
