The Use Cases and User Requirements

1 Types of staff user

2 Extension of the definition of a test item

3 Definition of “a test”

4 Types of usage of tests

5 Test formats

6 Feedback and recording requirements in relation to types of usage

7 Item management

8 Usage patterns of individual users


1         Types of staff user

We identify the following types of staff user:

  1. Fully QTI- and XML-literate learning technologists able to author test items, who are
    1. literate in mathematical support pedagogy
    2. partially or not at all literate in mathematical support pedagogy
  2. Academic staff fully literate in mathematical support pedagogy, able to relate learning needs or outcomes to test items, who are
    1. QTI- and XML-literate
    2. partially or not at all QTI- and XML-literate
  3. Others, including academic and related staff who supervise tests, particularly high-stakes or summative assessments

Figure 1: MathAssess Use Cases



2         Extension of the definition of a test item

A test item can include one or more parameters whose values are randomly determined the first time the item is rendered in an assessment. The randomisation shall be uniformly distributed over a specified sample space; the specification of this space may include:

  1. type constraints (“This is an integer”)
  2. interval constraints (“This lies between -9 and +9”)
  3. inequality constraints (“This shall not be zero”)
  4. algebraic constraints (“These integers shall be coprime”)
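The four kinds of constraint above can be combined by rejection sampling, which keeps the distribution uniform over the admissible sample space. The following is a minimal sketch; the function name, range and coprimality example are illustrative, not part of the MathAssess specification.

```python
import random
from math import gcd

def sample_coefficients(lo=-9, hi=9):
    """Sample two coprime, nonzero integers uniformly from [lo, hi].

    Each draw applies the type and interval constraints directly
    (random integer in a closed interval); the inequality and
    algebraic constraints are applied by rejecting and redrawing,
    which preserves uniformity over the admissible values.
    """
    while True:
        a = random.randint(lo, hi)   # type + interval constraints
        b = random.randint(lo, hi)
        if a == 0 or b == 0:         # inequality constraint: nonzero
            continue
        if gcd(a, b) != 1:           # algebraic constraint: coprime
            continue
        return a, b

a, b = sample_coefficients()
```

Rejection sampling is inefficient when the admissible set is a small fraction of the enclosing box, but for the small integer ranges typical of test-item parameters it is simple and exact.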



3         Definition of “a test”

A test in MathAssess will usually consist of one (or occasionally more than one) test part, containing one or more sections, each in turn containing one or more items. At least one item from each section will be presented for student response when the test is taken. The sections in an individual test need not all contain the same number of items; there is no natural general upper bound on the number of items in a section. In some specific cases each section shall contain a specified number of items, for example one item per section for a diagnostic test.

Figure 2: Test Construction
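The test/part/section/item hierarchy described above can be sketched as a set of data types. This is a minimal illustration only; the class and field names are assumptions, not MathAssess terminology.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    identifier: str

@dataclass
class Section:
    items: List[Item]        # one or more items per section
    select: int = 1          # how many items to present (at least one)

@dataclass
class TestPart:
    sections: List[Section]  # one or more sections per part

@dataclass
class Test:
    parts: List[TestPart]    # usually exactly one part

# Example: a diagnostic test, one item per section
diagnostic = Test(parts=[TestPart(sections=[
    Section(items=[Item("diag-q1")]),
    Section(items=[Item("diag-q2")]),
])])
```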



4         Types of usage of tests

4.1       Integral to programme delivery

In this scenario:

  1. Formative assessment:
    1. A structured sequence of formative assessments supports learning and measures and/or records progress toward specified learning outcomes. 
    2. Each is typically taken many times by each student.
  2. Summative assessment:
    1. One or more summative assessments measure achievement of specified learning outcomes. 
    2. Each assessment is taken once by each student.

Figure 3: Assessment Management

4.2       Supporting programme delivery

In this situation:

  1. A set of formative assessments supports learning toward specified learning outcomes. 
  2. Each assessment is typically taken several times by an individual student.

4.3       Supporting student numeracy or mathematical capability

In this scenario:

  1. A set of formative assessments supports learning toward student-identified learning needs. 
  2. Each assessment is typically taken many times by an individual student.

4.4      Diagnosis of learning needs

In this scenario:

  1. A spectrum of assessments identifies individual student learning needs against specified learning requirements.
  2. Typically each student takes one selected assessment once.



5         Test formats

5.1       Standard test

Each candidate is presented with a pre-determined number of items, one chosen at random from each section in the test. In addition, some parameters in each question are chosen at run time, so they differ each time the question is attempted.
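The selection rule above (exactly one item per section, chosen at random) can be sketched in a few lines. The function name and the example identifiers are illustrative assumptions.

```python
import random

def assemble_standard_test(sections):
    """Pick one item identifier at random from each section.

    `sections` is a list of lists of item identifiers; the result
    contains exactly one entry per section, so the number of items
    presented is pre-determined by the test structure even though
    the particular items vary between candidates.
    """
    return [random.choice(section) for section in sections]

sections = [["s1-q1", "s1-q2"], ["s2-q1"], ["s3-q1", "s3-q2", "s3-q3"]]
paper = assemble_standard_test(sections)
```

Parameter randomisation (section 2) then happens per item at render time, so two candidates drawing the same item still see different numbers.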



6         Feedback and recording requirements in relation to types of usage

6.1       Tests integral to programme delivery

When tests are integral to programme delivery:

  1. In a formative test:
    1. One or more hints are available to be offered while each item is being attempted,
    2. A full solution is available either after each item or at the end of the test,
    3. The performance score is displayed but not recorded,
    4. No record of the test instance(s) or item responses is kept.
  2. In a summative test measuring achievement of specified learning outcomes:
    1. A solution to each item is available once the test is completed,
    2. The performance score is displayed and recorded,
    3. A record of the test instance and item responses is kept.

Figure 4: Formative Assessment - Random Item Selection

Figure 5: Formative Assessment - All Items Accessible

Figure 6: Summative Assessment

6.2       Tests supporting programme delivery

When tests support programme delivery:

  1. In a formative test:
    1. One or more hints are available to be offered while the item is being attempted,
    2. A full solution is available after each item,
    3. The performance score is displayed but not recorded.
  2. No record of the test instance(s) or item responses is kept.

6.3       Tests supporting student numeracy

When tests support student numeracy or mathematical capability:

  1. For each item in a formative test:
    1. One or more hints are available to be displayed while the item is being attempted,
    2. A full solution is available after each item,
    3. The performance score is displayed but not recorded.
  2. No record of the test instance(s) or item responses is kept.

6.4       Diagnostic tests

In tests to diagnose learning needs:

  1. The feedback for each item contains links to support material,
  2. The links provided are appropriate to the learning state indicated by the student’s response,
  3. The feedback for all items in the assessment is presented as an individualised webpage and stored,
  4. A record of the test instance and item responses is kept.

Figure 7: Diagnostic Assessment
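The feedback and recording rules in sections 6.1 to 6.4 can be summarised as a lookup table. The key and field names are illustrative; the summative `hints` value and the diagnostic `solution` value are assumptions, since the text does not specify them.

```python
# Feedback/recording policy per usage type (sections 6.1-6.4).
# Field names are illustrative, not MathAssess terminology.
POLICIES = {
    "integral-formative":   dict(hints=True,  solution=True,
                                 score_recorded=False, responses_recorded=False),
    "integral-summative":   dict(hints=False,  # assumption: not stated in 6.1
                                 solution=True,
                                 score_recorded=True,  responses_recorded=True),
    "supporting-formative": dict(hints=True,  solution=True,
                                 score_recorded=False, responses_recorded=False),
    "numeracy-formative":   dict(hints=True,  solution=True,
                                 score_recorded=False, responses_recorded=False),
    "diagnostic":           dict(hints=False, solution=False,  # assumption: 6.4 gives
                                 # links to support material instead of a solution
                                 score_recorded=False, responses_recorded=True),
}
```

A delivery engine could consult such a table at test-configuration time rather than hard-coding behaviour per test.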



7         Item management

7.1       Existing items

  1. Staff users are able to
    1. Open and preview an existing item 
  2. Subject to technical capability, staff users are able to
    1. Edit existing items
    2. Author new items

Figure 8: Item Authoring and Editing

7.2       Item Authoring and Editing

Staff with appropriate technical capability and subject knowledge can

  1. Open an existing item for editing (including initiating the authoring of a new item)
  2. Edit or create
    1. The pedagogic purpose of the question (for example, a new question about differentiating a cubic might be created by starting from an existing question about differentiating a quadratic)
    2. The item metadata
    3. The source text (e.g. LaTeX) of the mathematical elements of the rendered item
    4. The processed text (e.g. MathML) of the mathematical elements of the rendered item
    5. The parameters to be randomised
    6. The parameter randomisation
    7. The hints available, including their number
    8. Response processing, including the criteria for full or partial credit
    9. Feedback available
  3. Preview an item, including a partially edited item
  4. Iterate the editing and preview steps (2 and 3), including “Undo”
  5. Store a partially edited item locally
  6. Accept or discard changes to an edited item
  7. Store an item locally
  8. Submit an item for inclusion in the item bank



8         Usage patterns of individual users

8.1       Student Users

Students will be able to

  1. Select and attempt a question from the item bank,
  2. Select and attempt a test from a list of available tests.

8.2       Tutors

Tutors, subject to technical capability, will additionally be able to

  1. Contribute tests to the test bank,
  2. Create, modify and delete their own tests,
  3. Modify and delete copies of existing tests,
  4. Contribute items to the item bank,
  5. Author items and store them in the item bank.

8.3     Lead tutors

Lead tutors will also have rights to

  1. Remove unwanted tests,
  2. Remove unwanted items.
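The cumulative rights of the three roles in section 8 can be modelled as nested permission sets. The action names are illustrative assumptions, not MathAssess identifiers.

```python
# Cumulative rights per role (section 8); each role inherits the
# rights of the role below it. Action names are illustrative.
STUDENT = {"attempt_item", "attempt_test"}
TUTOR = STUDENT | {
    "contribute_test", "manage_own_tests", "manage_test_copies",
    "contribute_item", "author_item",
}
LEAD_TUTOR = TUTOR | {"remove_test", "remove_item"}

def can(role, action):
    """Return True if the given role set includes the action."""
    return action in role
```

Modelling roles as supersets makes the "additionally be able to" wording of sections 8.2 and 8.3 explicit: any check that passes for a student also passes for a tutor or lead tutor.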


Start date: 01/10/2008

End date: 31/03/2009

Funded by: JISC