Item Creation Process

1. Determining Size and Scope of the Test

When states or districts engage Measured Progress to produce an assessment, they specify the number of items for each grade level, content area, and content strand, as well as other requirements.
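
These requirements often take the form of a test blueprint. As a purely hypothetical sketch (the field names and counts below are illustrative assumptions, not Measured Progress's actual format), a blueprint for one grade and content area might be represented like this:

```python
# Hypothetical test blueprint for one grade and content area.
# All field names and counts are illustrative assumptions.
blueprint = {
    "grade": 4,
    "content_area": "Mathematics",
    "strands": {                      # items required per content strand
        "Number and Operations": 18,
        "Geometry": 10,
        "Measurement": 8,
        "Data Analysis": 6,
    },
}

total_items = sum(blueprint["strands"].values())
print(f"Grade {blueprint['grade']} {blueprint['content_area']}: {total_items} items")
```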

2. Item Writing

Measured Progress item writers factor detailed specifications, such as content-specific guidance, style requirements, wording preferences, and level of difficulty, into the item creation process. 

3. Item Review

After the items are drafted, curriculum and assessment specialists review each item for content-standard alignment, grade-level appropriateness, wording, and correct answer choice. They then review whole sets of items, guided by the following questions.

  • Does the set of items measure the content domain well?
  • Is there a full range of difficulty within the item set?
  • Does the item set represent a diversity of approaches to testing the standards?
  • Is there an appropriate balance between items that concentrate on primary ideas and those that touch on secondary ideas?

If the answer to any of these questions is "no," the test developers may need to write additional items to fill the gaps.
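
The first two questions lend themselves to a simple automated check. Here is a minimal sketch (the item fields and the difficulty-spread threshold are assumptions made for illustration):

```python
# Hypothetical item metadata: each item is tagged with its content strand
# and an estimated difficulty (expected proportion of correct responses).
items = [
    {"id": "M-001", "strand": "Geometry", "difficulty": 0.85},
    {"id": "M-002", "strand": "Geometry", "difficulty": 0.42},
    {"id": "M-003", "strand": "Measurement", "difficulty": 0.61},
]

def review_item_set(items, required_strands, min_spread=0.4):
    """Flag coverage gaps and an overly narrow difficulty range."""
    flags = []
    covered = {item["strand"] for item in items}
    for strand in required_strands:
        if strand not in covered:
            flags.append(f"No items measure strand: {strand}")
    difficulties = [item["difficulty"] for item in items]
    if max(difficulties) - min(difficulties) < min_spread:
        flags.append("Difficulty range is too narrow")
    return flags

print(review_item_set(items, ["Geometry", "Measurement", "Data Analysis"]))
# -> ['No items measure strand: Data Analysis']
```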

4. Internal Peer Review

A collegial, consensus-building process carried out by a team of content development specialists ensures that item content is accurate, wording is grade-level appropriate, and the items clearly measure the content standards.  

5. Editorial Review

The items are then passed on to our Publishing department to be reviewed for grammar, spelling, punctuation, word usage, and readability.  

6. Client Review

Developers meet with the client and a team of educators selected by the client. After hearing the client's comments, suggestions, and concerns, the developers offer alternatives. Comments typically address the following questions.

  • Do these items appropriately measure the content standards?
  • Are the content and context of the items grade-level appropriate and accessible? (For example, questions about stock options may not be accessible to fourth-grade students.)
  • Is there a variety of contexts? (For example, the questions should not all involve sports.)
  • Do the items represent how the client wants the content tested and taught? 

7. Bias Review

Test developers meet with clients and their designees to review items for issues of bias and fairness, that is, anything that could unfairly inhibit the ability of a student, or group of students, to answer a question.

8. Face-to-Face Review

This is the final review. Together, the client and Measured Progress finalize the number of items to be field-tested and select the items for the field test.  

9. Field-Test Items

New items are embedded into a regularly scheduled test. These items do not count toward a student's score but are used to gather data from student responses.  
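
The separation between scored and field-test items can be sketched as follows (the flags and scoring rule here are illustrative assumptions):

```python
# Hypothetical test form: operational items count toward the reported
# score; embedded field-test items are collected for analysis only.
form = [
    {"id": "OP-01", "field_test": False},
    {"id": "OP-02", "field_test": False},
    {"id": "FT-01", "field_test": True},
]
responses = {"OP-01": 1, "OP-02": 0, "FT-01": 1}  # 1 = correct, 0 = incorrect

score = sum(responses[i["id"]] for i in form if not i["field_test"])
field_test_data = {i["id"]: responses[i["id"]] for i in form if i["field_test"]}

print(f"Reported score: {score}")                  # field-test item excluded
print(f"Field-test responses: {field_test_data}")  # kept for later analysis
```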

10. Statistical Analysis

Once the field tests are scored, student responses to the field-test items undergo statistical analysis to determine whether each item is eligible for use on an upcoming test. Items that fall short are either revised and field-tested again or rejected.
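
In classical test theory, two common statistics for this kind of screening are an item's difficulty (the proportion of students who answer correctly, often called the p-value) and its discrimination (for example, the point-biserial correlation between the item score and the total test score). Below is a minimal sketch assuming dichotomously scored items; the eligibility thresholds are illustrative assumptions, not Measured Progress's actual criteria:

```python
import statistics

def item_statistics(item_scores, total_scores):
    """Classical difficulty (p-value) and point-biserial discrimination.

    item_scores:  0/1 score on the field-test item for each student
    total_scores: each student's total score on the operational items
    """
    p = sum(item_scores) / len(item_scores)   # difficulty: proportion correct
    mean_total = statistics.mean(total_scores)
    sd_total = statistics.pstdev(total_scores)
    mean_correct = statistics.mean(
        t for t, s in zip(total_scores, item_scores) if s == 1
    )
    # Point-biserial correlation between item score and total score.
    r_pb = (mean_correct - mean_total) / sd_total * (p / (1 - p)) ** 0.5
    return p, r_pb

item_scores = [1, 1, 0, 1, 0, 0, 1, 1]
total_scores = [38, 41, 22, 35, 18, 25, 40, 33]
p, r_pb = item_statistics(item_scores, total_scores)
eligible = 0.25 <= p <= 0.90 and r_pb >= 0.20   # assumed thresholds
print(f"difficulty={p:.2f}, point-biserial={r_pb:.2f}, eligible={eligible}")
```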

11. Item Selection

An eligible item and its related data are stored until selected to appear in an assessment.  
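
Conceptually, this store is an item bank. As a rough illustration (the structure, fields, and values below are assumed), a form builder might filter the bank for one blueprint slot like this:

```python
# Hypothetical item bank: each eligible item is stored with its
# field-test statistics. IDs and values are illustrative assumptions.
item_bank = {
    "M-101": {"strand": "Geometry", "difficulty": 0.62, "point_biserial": 0.31},
    "M-102": {"strand": "Measurement", "difficulty": 0.48, "point_biserial": 0.27},
    "M-103": {"strand": "Geometry", "difficulty": 0.81, "point_biserial": 0.22},
}

def select_items(bank, strand, count):
    """Pick the most discriminating banked items for one blueprint slot."""
    candidates = [(iid, d) for iid, d in bank.items() if d["strand"] == strand]
    candidates.sort(key=lambda pair: pair[1]["point_biserial"], reverse=True)
    return [iid for iid, _ in candidates[:count]]

print(select_items(item_bank, "Geometry", 1))  # -> ['M-101']
```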

12. Item Release

After an item is used, it is often made available to the general public through the client's website. 
