Book Introduction


外教社博学文库 大规模英语考试作文评分信度与网上阅卷实证研究 (an empirical study of rating reliability and online marking of compositions in large-scale English tests)
  • Author: 王跃武 (Wang Yuewu)
  • Publisher: 上海外语教育出版社 (Shanghai Foreign Language Education Press), Shanghai
  • ISBN: 9787544639842
  • Publication year: 2015
  • Listed page count: 359 pages
  • File size: 52 MB
  • File page count: 385 pages
  • Subject terms: computer systems - scoring - applications - English - writing - examinations - research

PDF Download


Online PDF download of this book [recommended: cloud decompression, quick and convenient]. Downloads the book directly in PDF format; works on both mobile and desktop.
Torrent download [BT download is fast]. Tip: please use the BT client FDM (Free Download Manager); see the client download page. Direct-link download [convenient but slow].  [Read this book online]   [Get the unzip code online]

Download Notes

外教社博学文库 大规模英语考试作文评分信度与网上阅卷实证研究 is available for download as a PDF e-book.

The downloaded file is a RAR archive; extract it with decompression software to obtain the book in PDF format.

We recommend downloading with the BT tool Free Download Manager (FDM for short), which is free, ad-free, and available on multiple platforms. All resources on this site are packaged as BT torrents, so a dedicated BT client such as BitComet, qBittorrent, or uTorrent is required. Xunlei (Thunder) is not recommended at present, because this site's files are not popular resources; once a resource becomes popular, Xunlei can also be used.

(The file page count should be greater than the listed page count, except for multi-volume e-books.)

Note: all archives on this site require an unzip code. Click here to download an archive extraction tool.
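
For readers who prefer to script the extraction step, here is a minimal sketch, assuming the third-party Python package rarfile (with a system unrar backend) is installed; the archive name and unzip code below are placeholders, not values provided by this site.

    # Minimal sketch: extract the downloaded password-protected RAR archive
    # to obtain the PDF. Assumes the third-party "rarfile" package and an
    # unrar backend are installed; file name and password are placeholders.
    import rarfile

    ARCHIVE = "book.rar"           # placeholder: the downloaded RAR package
    UNZIP_CODE = "your-code-here"  # placeholder: the unzip code from the site

    with rarfile.RarFile(ARCHIVE) as rf:
        # Writes the extracted PDF file(s) into the ./extracted directory
        rf.extractall(path="extracted", pwd=UNZIP_CODE)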

Table of Contents

Chapter 1 Introduction
1.1 Rationale for the study
1.2 Objectives of the study
1.3 Organization of the thesis
1.4 Definition of terms
1.4.1 Online
1.4.2 Marking
1.4.3 Online marking
1.4.4 Online Marking System (OMS)
1.4.5 Local Area Network (LAN)

Chapter 2 Research Questions and Methodology of the Study

Chapter 3 Issues in the Direct Testing of EFL/ESL Writing Ability
3.1 Introduction
3.2 What is a direct writing test?
3.3 EFL/ESL writing ability: What shall we test?
3.4 Issues in validity
3.4.1 What is validity?
3.4.2 Types of validity
3.5 Issues in reliability
3.5.1 What is reliability?
3.5.2 Methods of judging reliability of writing assessments
3.6 The relationship between validity and reliability
3.7 Four components of a direct writing test
3.7.1 The task
3.7.2 The writer
3.7.3 The scoring procedure
3.7.4 The rater
3.8 Washback
3.8.1 Washback in general
3.8.2 Washback of direct tests of writing
3.9 Practicality
3.10 Summary

Chapter 4 The CET Writing Test
4.1 Introduction
4.2 The writing test required by the CET
4.2.1 A direct test
4.2.2 Positive washback
4.3 The scoring of CET compositions
4.3.1 The scoring approach currently adopted
4.3.2 Procedures involved in scoring CET essays
4.3.2.1 Scoring Principles and Marking Scheme
4.3.2.2 Range-finders and sample essays
4.3.2.3 Rater training
4.3.2.4 Rating process
4.3.2.5 Monitoring raters' scoring during the scoring sessions
4.3.2.6 Recording essay scores
4.4 Computer-aided adjustment of writing scores
4.5 Discussion

Chapter 5 The First Experimental Study
5.1 Introduction
5.2 Compositions
5.3 Participants
5.4 Data collection procedure
5.5 The introspection and retrospection studies
5.5.1 Introduction
5.5.2 Data elicitation
5.5.3 Tape transcription
5.5.4 Data analysis
5.6 The questionnaire studies
5.6.1 Design of the questionnaires
5.6.2 Analysis of questionnaire responses
5.7 Findings from the introspection, retrospection and questionnaire studies
5.7.1 Issues and problems in rating CET essays online
5.7.2 Decision-making behaviors while rating CET-4 essays
5.7.3 Summary of comments made by the raters on essays
5.7.3.1 Overall summary
5.7.3.2 Variations in raters' comments
5.7.4 Essay elements' influences on raters' decision-making
5.7.5 Elements of good CET essays in the raters' eyes
5.8 Analysis of writing scores
5.9 Summary and discussion
5.9.1 About the issues and problems involved
5.9.2 About the raters' scoring decisions
5.9.3 About the writing scores

Chapter 6 The Second Experimental Study
6.1 Introduction
6.2 Compositions
6.3 Participants
6.4 Data collection procedure
6.5 Problems encountered
6.6 Data analysis
6.7 Results
6.8 Summary

Chapter 7 Design of the OMS
7.1 Introduction
7.2 Literature review on online marking of compositions
7.2.1 Automated scoring of essays
7.2.1.1 Overview of four major automated scoring methods
7.2.1.2 Analysis of the four major automated scoring methods
7.2.1.3 Summary
7.2.2 Online scoring of essays by human raters
7.2.2.1 Overview of online scoring of essays by human raters
7.2.2.2 Empirical research on online scoring of essays by human raters
7.2.2.3 Summary
7.3 A preliminary model of marking essays online
7.4 Overview of the CET Online Marking System (OMS)
7.4.1 The data management module
7.4.1.1 Basic information management
7.4.1.2 Essay management
7.4.1.3 Search and report
7.4.2 The training module
7.4.3 The rating module
7.4.4 The monitoring module
7.5 Operation of the OMS and the rater interface
7.5.1 Overview of the operation of the OMS
7.5.2 The OMS rater interface
7.6 Main features of the CET OMS
7.6.1 Random distribution of scripts
7.6.2 Efficient score recording
7.6.3 Online real-time monitoring of scoring
7.6.4 Quality control of raters
7.6.4.1 Adherence to the CET Scoring Principles and Marking Scheme
7.6.4.2 Rater training
7.6.4.2.1 Compulsory training
7.6.4.2.2 Individual rater's self training
7.6.4.2.3 Forced training
7.6.4.3 Online discussion
7.6.4.4 Back-reading and score revising
7.6.4.5 Time control
7.7 Advantages of the CET OMS
7.7.1 Real and efficient random distribution of scripts at the national level
7.7.2 Real-time online monitoring of raters
7.7.3 Assured quality control of scoring
7.7.4 Overall efficiency
7.7.5 Efficient and economical storage of scripts
7.7.6 Express retrieval of scripts and scores
7.7.7 Efficient management and potential utilization of test data for research
7.8 Limitations of online scoring and solutions
7.9 Summary

Chapter 8 The Third Experimental Study
8.1 Context of the experiment
8.2 Participants
8.3 Compositions
8.4 Data collection
8.4.1 Step 1: Online marking
8.4.1.1 The first round online marking
8.4.1.2 The second round online marking
8.4.2 Step 2: Conference marking
8.5 Data analysis
8.6 Results
8.7 Summary and discussion

Chapter 9 Data Analysis Using FACETS
9.1 FACETS and method
9.2 The first approach: comparison of rater severity and consistency from the online setting and the conference setting
9.2.1 Rater severity and consistency: the online setting
9.2.1.1 Rater severity: the online setting
9.2.1.2 Rater consistency: the online setting
9.2.2 Rater severity and consistency: the conference setting
9.2.2.1 Rater severity: the conference setting
9.2.2.2 Rater consistency: the conference setting
9.2.3 Comparison of rater severity and consistency in two settings
9.2.4 Comparison of rater severity change between two settings
9.3 The second approach: bias analysis
9.3.1 Bias analysis: rater by essay interactions
9.3.2 Bias analysis: rater by setting interactions
9.4 Conclusion
9.5 Discussion

Chapter 10 Summaries, Discussions, Implications and Recommendations
10.1 A refined model of online scoring of CET essays and its main features
10.2 Benefits proceeding from online scoring
10.3 Practicality
10.4 Scoring quality
10.5 Raters' comments
10.6 Suggestions for the improvement of the Online Marking System
10.7 Implications for other writing tests
10.8 Suggestions and recommendations for future research
10.8.1 Suggestions for future research in online marking of compositions
10.8.2 Recommendations for future research in EFL writing assessment
10.9 Theoretical and practical significance of the study

References
Appendices
Afterword (后记)
