IMS Accessible Portable Item Protocol (APIP): Best Practice and Implementation Guide
Version 1.0 Candidate Final
Date Issued: 26 March 2012
Latest version: http://www.imsglobal.org/apip/

IPR and Distribution Notices

Recipients of this document are requested to submit, with their comments, notification of any relevant patent claims or other intellectual property rights of which they may be aware that might be infringed by any implementation of the specification set forth in this document, and to provide supporting documentation.

IMS takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on IMS’s procedures with respect to rights in IMS specifications can be found at the IMS Intellectual Property Rights web page: http://www.imsglobal.org/ipr/imsipr_policyFinal.pdf.

Copyright © 2012 IMS Global Learning Consortium. All Rights Reserved.

Use of this specification to develop products or services is governed by the license with IMS found on the IMS website: http://www.imsglobal.org/license.html.

Permission is granted to all parties to use excerpts from this document as needed in producing requests for proposals. The limited permissions granted above are perpetual and will not be revoked by IMS or its successors or assigns.

THIS SPECIFICATION IS BEING OFFERED WITHOUT ANY WARRANTY WHATSOEVER, AND IN PARTICULAR, ANY WARRANTY OF NONINFRINGEMENT IS EXPRESSLY DISCLAIMED. ANY USE OF THIS SPECIFICATION SHALL BE MADE ENTIRELY AT THE IMPLEMENTER'S OWN RISK, AND NEITHER THE CONSORTIUM, NOR ANY OF ITS MEMBERS OR SUBMITTERS, SHALL HAVE ANY LIABILITY WHATSOEVER TO ANY IMPLEMENTER OR THIRD PARTY FOR ANY DAMAGES OF ANY NATURE WHATSOEVER, DIRECTLY OR INDIRECTLY, ARISING FROM THE USE OF THIS SPECIFICATION.

Public contributions, comments and questions can be posted here:
http://www.imsglobal.org/community/forum/categories.cfm?catid=110

Table of Contents

1 Introduction
1.1 Scope and Context
1.1.1 Relationship to IMS Global Standards
1.1.2 Other Related Standards
1.1.3 Interoperability in APIP
1.1.4 APIP Access Features
1.1.5 Kinds of Packages
1.1.6 Future Scope
1.2 Structure of this Document
2 Constructing an APIP Solution
2.2.1 APIP Item Content Package Information
2.2.4 Parts of an APIP Item Content XML File
2.2.5 Connecting Default Content to Access Elements
2.2.6 Access Element Access Feature Information
2.2.8 Accessibility for Access Features using the apipLinkIdentifierRef Attribute
2.2.10 Using Multiple Access Elements that Refer to the Same Default Content
2.2.11 Using CSS to Position Labels for Graphics
2.2.12 Referring to a Portion of an Image
2.2.14 Use of CSS in a Style Sheet
3 Annotated Standard QTI Examples
3.1 Standard True/False AssessmentItem
3.2 Standard Multiple Choice AssessmentItem
4 Annotated APIP Examples
4.1 Spoken Multiple Choice Alternative Rendering
4.2 Spoken, Braille, ASL Supported Multiple Choice Rendering
4.2.1 Accessibility Information within an Access Element
4.2.3 A User Who Might Need Supplemental Information
5 Annotated APIP PNP Examples
6 Supporting the Use-cases
6.1 Importing/Exporting APIP Items
6.2 Item Package with a Single APIP Item
6.3 APIP Package Additional Variants
6.4 APIP Package with Sections
Appendix A – Content & User Profile Tagging Map

List of Figures

Figure 2.1 Key QTI Structures and APIP Extensions
Figure 2.2 Linking Access Elements to Default Content
Figure 2.3 Accessibility for Access Features
Figure 2.4 Graphic with labels that require accessible information
Figure 3.1 Visualization of the T/F assessmentItem example
Figure 3.2 Visualization of ‘imsmanifest.xml’ for the packaging of the T/F assessmentItem
Figure 3.3 Visualization of the MC assessmentItem example
Figure 3.4 Visualization of ‘imsmanifest.xml’ for the packaging of the MC assessmentItem
Figure 4.1 Visual (Default) representation of an APIP MC item
Figure 4.2 Visualization of ‘imsmanifest.xml’ for the packaging of the assessmentItem
Figure 4.3 Visualization of the APIP question apip_example_4d2
Figure 4.4 Accessibility elements referencing the content used by ‘Spoken’ and ‘Braille’ users

List of Code Examples

Code 5.1 Example of an APIP AfA PNP instance
Code 5.2 Example of an APIP AfA PNP instance
Code 6.1 Example of an APIP Content Package manifest
Code 6.2 Example of a manifest demonstrating dependencies
Code 6.3 Example of the manifest with a single APIP assessmentItem
Code 6.4 Example of a manifest fragment demonstrating variants
Code 6.5 Example of a manifest with an assessmentSection and assessmentItems
Code 6.6 Example of an assessmentSection referencing assessmentItems
Code 6.7 Manifest of an example test package with two sections containing 5 items

1 Introduction

The Accessible Portable Item Protocol (APIP) is
an interoperability standard that enables the exchange of
assessment content and an examinee’s accessibility needs by
defining standard XML-based exchange formats. APIP also provides
expectations of a computer-based assessment delivery system for
the delivery of an assessment to an examinee. The assessment
content, with associated accessibility information, can be
efficiently exchanged between assessment applications and service
providers without the loss of information or the need to
“re-code” the content. APIP is intended to enable
assessment file exchange that serves ALL students, not only
students with accessibility needs. APIP focuses on the needs of
students, rather than assuming a particular physical or cognitive
issue prescribes the solution. It enables educators to make
decisions that support the specific needs of individual
students.

In order to achieve this goal, APIP identifies three major components that must work together in harmony. These are illustrated in Figure 1.1 and discussed below.

Figure 1.1 The APIP Major Components Set

Accessible Assessment Content
By leveraging the existing IMS QTI v2.1 assessment interoperability specification, APIP, in simplest terms, extends the default QTI item and assessment specifications to include alternate (or supplemental) content representations, content presentation sequences (known as inclusion orders), and information about companion materials or tools.

Test Taker Personal Needs and Preferences Profile
By collaborating with the IMS Access for All (AfA) initiative, APIP has identified the test taker’s need profile, or Personal Needs and Preferences (PNP): the information needed to enable the various accessibility elements in the assessment content. By mapping the PNP to the access elements, all test takers can now access the right content, in the right format, and at the right time.

Assessment Delivery System
By combining the accessible assessment content with the test taker’s Personal Needs and Preferences profile, the assessment system can deliver the appropriate testing experience for all test takers. While APIP is not a delivery system specification, it does identify necessary features and functions that must be provided in order to support user profile access needs while providing alternate content sequences and representations. APIP is primarily focused on online test delivery platforms.

To achieve accessible content
interoperability, APIP host item banking systems will be able to
import and export APIP formatted XML. This should, as much as
possible, be automated so that manual interventions
(proctor/assistant help) are not required. In addition, each online delivery platform
may provide for additional features and functions to be
configured by the test content, definition, or other assessment
metadata. This may include such items as navigation controls,
tool access (ex: toolbars), error messages, logos, etc. that are
not provided by the APIP (or QTI) content specification. While the APIP file format is the expected
exchange format, it is not a requirement of implementing delivery
systems to use the native APIP file formats during actual
operational delivery. Implementing delivery systems may transform
the provided content to proprietary formats that may aid in the
efficient delivery of content or user accessibility need
information to examinees, provided the accessible content
information supplied within the APIP access element data is
available to examinees. This document contains the recommended best
practices for the use of the APIP standard. The APIP content
standard provides assessment programs and question item
developers a data model for standardizing the file format of
digital test items. When applied properly, APIP accomplishes two
important goals. First, the APIP content standard allows digital
test items to be ported across APIP compliant test item banks.
Second, APIP provides a test delivery interface with all the
information and resources required to make a test item accessible
for students with a variety of disabilities and special
needs. APIP builds on the IMS Global Question and
Test Interoperability (QTI) v2.1 [QTI, 2006a] and the IMS Global
Access For All Personal Needs & Preferences (AfA PNP) v2.0
[AfAPNP, 10] specifications. A number of extensions have been
added to both of these specifications [APIP, 12c], [APIP,
12d]. APIP is expected to complement and not
supersede or replace any other US-based or international
accessibility standards. APIP is an assessment content
interchange standard that provides for robust extensions to
assessment content in support of accessibility options within the
assessment delivery system. In contrast, APIP is not a
specification for assessment delivery system accessibility
functionality. Suppliers of assessment delivery systems will
articulate how their platform meets the needs of their users as
well as how the platform incorporates APIP content and functions
and, in turn, how it satisfies the Section 508 Amendment to the
Rehabilitation Act of 1973 or other internationally accepted
standards such as the Web Content Accessibility Guidelines (WCAG)
2.0 ( http://www.w3.org/TR/WCAG ) that are applicable for a specific
program. Assessments, and more specifically
high-stakes assessments, present unique challenges for
accessibility that are unlike most traditional web applications.
For example, ensuring that certain system behaviors are
consistent across platforms regardless of the software that each
system may be running is necessary in online assessments.
Comparability concerns, even for accessibility options, should be
considered when designing the assessment interfaces. Often there is more control over the user interface than is typical in a traditional browser interface: features can be turned on or off based on user profiles or the type of assessment being administered. In addition, accessibility features may need to
undergo research or usability studies to determine their
effectiveness and how they may or may not impact
comparability. An assessment delivery system that
implements all of the APIP standard’s access features could
meet, and in some cases exceed, other industry accessibility
standards. However, IMS Global and the APIP compliance and
certification process will allow for each supplier to specify
which accessibility features have been implemented in their
delivery platforms. While the specific content provided for each
accessibility feature will be consistent, there will be some
variation in the specific accessibility implementations, and
therefore the user experiences, across different APIP delivery
systems. As the APIP compliance and certification process
describes, documentation of specific access feature support can
be provided to IMS Global and their members for review. The APIP standard is intended to foster
interoperability of the content packages and user accessibility
needs (PNP) files. APIP delivery systems are expected to support
the content and user profile information, but are not expected to
directly communicate with other APIP delivery systems. That is,
the file exchange is the primary method of interoperability. APIP
delivery systems will be expected to support the information
supplied in the APIP exchange files, though they are not required
to use those exchange files at the moment of delivery. Delivery
systems are expected to use the information supplied in those
exchange formats, but can elect to use proprietary or other
delivery-focused formats during content rendering. Within the content package exchange, there
may be interoperability issues surrounding specific file formats
(e.g., mime types) for some supporting media files or custom
interaction code. The APIP standard allows for the use of any
media type at this time, so interoperability between vendors may
require agreement on the provision of specific file formats.
Likely areas where that would occur within the content are
pre-recorded files for use within the spoken access feature,
pre-recorded video of sign language, or provision of a media file
(audio or video) as part of an item stimulus or prompt.
Additionally, APIP allows for the use of a custom interaction (a
specific QTI interaction) within an item, which may use specific
kinds of code to govern the user’s response interaction.
That governing code may not be interoperable between an APIP
authoring system and a receiving APIP delivery system. The first and current version of APIP
describes support for the following access features: To provide support for the access features
listed above, APIP combines XML structures available in the user profile and item content. The APIP Tagging Map in Appendix A details how each access feature is addressed in the user profile
and/or content structures. Also see the APIP Terms and
Definitions document for definitions of each of the above access
features. For some access features, the delivery
system will first need to know that the examinee requires the
access feature, then will use the access information supplied in
the APIP content. For example, a user profile may indicate that the user needs spoken text and graphics support, meaning that all text-based content is read aloud and all the graphics are described. The APIP content provides the information to be supplied to that user.
The APIP delivery system then provides the necessary sound files
(or text-to-speech capability) for the examinee, in the order
specified in the content. In other cases, the delivery system provides
the needed access feature without specific information provided
by the content. A good example is a student whose reading
comprehension is dramatically better when the text and background
colors of the test content are changed. The delivery system gives
the student the ability to alter the presentation of the testing
environment, without the need for any special coding in the
content itself. The APIP standard specifies critical
features both for assessment content and for personal needs
(e.g., PNP) but intentionally offers limited guidance relative to
assessment delivery systems. Accessible online delivery is still
in its infancy, and developers are innovating new approaches that
may be beneficial to users with specific access needs across
different computer devices. It is important that developers of
assessment delivery systems as well as organizations that are
seeking suppliers of assessment delivery systems consider issues
that extend beyond those that are addressed by APIP v1. Among the
many possible issues when delivering accessible online
assessments are the following: APIP (as part of QTI) provides methods to
bundle content and additional assessment structures using content
packages and manifests. Depending on the requirements for each
specific partner exchange, content may be packaged in different
ways. For example, if the partner exchange is only the item
content itself and does not include any test form or section
specific information, the individual APIP item files with
associated content files (e.g., audio/video files) would be
sufficient. Alternatively, if the partner exchange is for both
the items as well as the associated structures of the assessments
(e.g., test forms), then additional section and content manifest
information will be required. APIP content packages include a package
manifest, which details dependent resources for the test or item,
the resources referenced, as well as other assessment metadata.
Section 6 describes how to package APIP content for transfer. Examinee PNP files for APIP v1 do not impose
a packaging structure for multiple user profiles. It is likely
these user profiles will be exchanged and stored within larger
student rostering systems. APIP v1 purposefully limits the scope of its
specification to the online administration of accessible tests.
It includes many, but not all, of the commonly administered
access features (or test accommodations) during testing. It does
not consider other test administration factors that may be stored
in other student roster systems, such as specific, physical
environment settings. For example, it has no way of indicating a
student should take the test in room by themselves, or that the
user needs to be given a computer screen of 51 cm (20 inches) or
higher. APIP v1 is primarily concerned with the presentation of
accessible information. Future versions may include guidance on
alternate response capabilities (i.e., speech-to-text, Braille
input, etc.). Additionally, APIP is expected to continue as a
standard, and incorporate support for additional accessibility
needs as they are more fully researched and understood. Work on
APIP conformance will be an ongoing effort directed by the APIP
Accredited Profile Management Group (APMG), incorporating
feedback from the APIP Working Group and APIP End-User Group. The structure of the rest of this document
is:
2. Constructing an APIP Solution: An overview of how the set of examples was created, and recommendations for how these can be used as best practice references.
3. Annotated Standard QTI Examples: Presentation and description of several annotated examples of the use of standard QTI v2.1 for assessment items.
4. Annotated APIP Examples: Presentation and description of several annotated examples of APIP items.
5. Annotated APIP PNP Examples: Presentation and description of several annotated examples of Personal Needs and Preferences (PNP) profiles.
6. Supporting the Use-cases: Presentation and description of several annotated examples showing how APIP access features support the set of use-cases.
[AfAPNP, 10] IMS Global Access For All Personal Needs & Preferences Information Model v2.0, R.Schwerdtfeger, M.Rothberg and C.Smythe, Final Release, IMS Global Inc., April 2010.
[APIP, 12a] Accessible Portable Item Protocol (APIP) Overview v1.0, Candidate Final Release, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, IMS Global Inc., March 2012.
[APIP, 12b] Accessible Portable Item Protocol (APIP) Technical Specification v1.0, Candidate Final Release, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, IMS Global Inc., March 2012.
[APIP, 12c] Accessible Portable Item Protocol (APIP) Technical Specification of New Features to QTIv2.1 v1.0, Candidate Final Release, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, IMS Global Inc., March 2012.
[APIP, 12d] Accessible Portable Item Protocol (APIP) Technical Specification of New Features to AfA PNPv2.0 v1.0, Candidate Final Release, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, IMS Global Inc., March 2012.
[APIP, 12f] Accessible Portable Item Protocol (APIP) Use Cases v1.0, Candidate Final Release, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, IMS Global Inc., March 2012.
[APIP, 12g] Accessible Portable Item Protocol (APIP) Conformance and Certification v1.0, Candidate Final Release, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, IMS Global Inc., March 2012.
[APIP, 12h] Accessible Portable Item Protocol (APIP) Terms and Definitions Specification v1.0, Candidate Final Release, C.Smythe, T.Hoffmann and M.Russell, IMS Global Inc., March 2012.
[QTI, 2006a] IMS Global Question & Test Interoperability Assessment Test, Section and Item Information Model v2.1, S.Lay and P.Gorissen, Public Draft Revision 2, IMS Global Inc., June 2006.
[QTI, 2006b] IMS Global Question & Test Interoperability XML Binding v2.1, S.Lay and P.Gorissen, Public Draft Revision 2, IMS Global Inc., June 2006.
[QTI, 2006c] IMS Global Question & Test Interoperability Implementation Guide v2.1, S.Lay and P.Gorissen, Public Draft Revision 2, IMS Global Inc., June 2006.
[QTI, 2006d] IMS Global Question & Test Interoperability Conformance Guide v2.1, S.Lay and P.Gorissen, Public Draft Revision 2, IMS Global Inc., June 2006.
[QTI, 2006e] IMS Global Question & Test Interoperability Metadata and Usage Data v2.1, S.Lay and P.Gorissen, Public Draft Revision 2, IMS Global Inc., June 2006.
[QTI, 2006f] IMS Global Question & Test Interoperability Migration Guide v2.1, S.Lay and P.Gorissen, Public Draft Revision 2, IMS Global Inc., June 2006.
[QTI, 2006g] IMS Global Question & Test Interoperability Overview v2.1, S.Lay and P.Gorissen, Public Draft Revision 2, IMS Global Inc., June 2006.
[QTI, 2006h] IMS Global Question & Test Interoperability Results Reporting v2.1, S.Lay and P.Gorissen, Public Draft Revision 2, IMS Global Inc., June 2006.
AfA PNP Access for All Personal Needs & Preferences
APIP Accessible Portable Item Protocol
CP Content Packaging
IEEE Institute of Electrical and Electronics Engineers
IMS Global IMS Global Learning Consortium
LOM Learning Object Metadata
MC Multiple Choice
MC-MR Multiple Choice Multiple Response
MC-SR Multiple Choice Single Response
PNP Personal Needs & Preferences
PSM Platform Specific Model
QTI Question & Test Interoperability
T/F True/False
XML Extensible Markup Language
XSD XML Schema Definition
Before you can tailor the delivery of accessible content to an examinee (a.k.a. the test taker, or online assessment user), the delivery system will need to be provided with two important pieces of information: what the user needs (their PNP profile) and the test/item content. Supplying only one of those pieces might result in not understanding the needs of the examinee, or not having the correct content to present to the examinee. However, there are cases where the delivery system is expected to provide access to the examinee, without supplemental accessibility information provided within the XML content file. For example, a user profile that indicates the user needs to have the content magnified. The delivery system would provide the tools or means necessary for the examinee to magnify their content. In contrast, for an examinee that needed American Sign Language, both their user profile would indicate the need, and the content would contain supplemental information that provides the examinee with a video that signs the item content.
For an overview of each access feature, and how the PNP tags relate to the content tags, see Appendix A: APIP Tagging Map. Definitions of each access feature are available in the APIP document IMS Accessible Portable Item Protocol (APIP) Terms & Definitions Version 1.0 [APIP, 12h].
If an examinee needs a specific access feature during an assessment, that access feature would be listed in their PNP profile. For the majority of examinees today, no access features would be listed. It is acceptable to have a PNP with no access features specified; this confirms to the delivery system that no additional access features need to be provided during an assessment session.
When an access feature is listed in an examinee’s PNP, the default assumption is that the access feature should be provided. Specifically, the assignedSupport tag is required for each access feature, and its default value is always true. Use of the false value for the assignedSupport tag is allowed, and confirms that the student should NOT be provided the access feature. This allows two approaches for the student rostering system to withhold an access feature from the examinee. First, remove the access feature tag entirely from the profile, or second, change the assignedSupport tag to false.
Most access features also have an activateByDefault tag, which usually defaults to true. activateByDefault indicates that when an assessment session begins for an examinee, the access feature is provided immediately, as opposed to merely being available as an option. For example, an examinee requiring magnification would start their assessment with the content and/or interface magnified. Not all access features lend themselves to activation when the assessment is initiated; masking (access feature C2) is one example. By default, masking is not activated when the assessment session begins, as it may confuse or disorient the examinee and prevent them from knowing how to navigate through the test. The masking option is, however, available for the examinee to activate at a moment of their choosing.
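The assignedSupport and activateByDefault behavior described above can be sketched as a PNP fragment. This is an illustrative sketch only: apart from the assignedSupport and activateByDefault tags discussed in this guide, the element names are placeholders, and the normative structure is given in the AfA PNP binding [APIP, 12d].

```xml
<!-- Illustrative PNP fragment (not normative). Element names other than
     assignedSupport and activateByDefault are placeholders. -->
<accessForAllUser>
  <!-- Masking (access feature C2): assigned to the examinee, but not
       switched on at session start; the examinee may activate it later. -->
  <masking>
    <assignedSupport>true</assignedSupport>
    <activateByDefault>false</activateByDefault>
  </masking>
</accessForAllUser>
```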
Some access features do not have an activateByDefault tag, like breaks (access feature C5). This is because the access feature isn’t something the examinee turns on or off during the assessment. It is a condition of the assessment session generally. The examinee can take breaks during the assessment, or is allowed more time to take the assessment. The access feature would never go away during the session.
The spoken access feature contains an additional tag in the PNP called readAtStartPreference. By default, this is set to true, and means that when the spoken access feature is active in a delivery system, and the examinee first encounters assessment content, the content will be spoken aloud to them, from the beginning to the end of their default inclusion order (see Section 2.2.9). Some users prefer to request the content be spoken to them only at a time of their choosing, which could be stored as <apip:readAtStartPreference>false</apip:readAtStartPreference> in their PNP. The delivery system should then have a method for letting that user begin playback of the content.
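A spoken-support entry combining the tags described above might look like the following sketch. Only readAtStartPreference appears verbatim in the text; the surrounding element names are illustrative and should be checked against the AfA PNP binding [APIP, 12d].

```xml
<!-- Illustrative sketch: spoken support is assigned and active, but content
     is read aloud only when the examinee requests playback. -->
<apip:spoken>
  <apip:assignedSupport>true</apip:assignedSupport>
  <apip:activateByDefault>true</apip:activateByDefault>
  <apip:readAtStartPreference>false</apip:readAtStartPreference>
</apip:spoken>
```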
When a profile indicates screenReader preference settings for an examinee, it should also indicate the related userSpokenPreference attributes, so the delivery system is clear as to which inclusion order to present to the examinee. The Access For All (AfA) screenReader vocabulary includes a usage tag with a vocabulary of required, preferred, optionally use, and prohibited, which are functionally not relevant in the assumed assessment context of APIP. The userSpokenPreference attributes will indicate whether or not the access feature should be assigned to the examinee, and the screenReader preference settings will indicate the specific preferences.
Annotated examples of PNP files can be found in Section 5: Annotated PNP Examples. Additional PNP examples can be found in the APIP examples folder.
The APIP PNP file is designed to transfer examinee accessibility information specific to their assessment context. APIP PNP is not intended to store or transfer any other student information. While the APIP PNP file could be used by an APIP delivery system, it is not the expectation that an APIP delivery system must use an APIP XML PNP file during delivery. Delivery systems may use the examinee access feature needs listed within the PNP file to make the necessary access features available to the examinee using its own proprietary format.
An APIP package is constructed according to the IMS Content Packaging standard, and consists of a zip file containing (a) a manifest, (b) one or more assessmentItem XML files, and (c) item assets (the supplementary image and sound files). If a package is meant to contain multiple items, then it will also contain either assessmentTest and/or assessmentSection XML files. The package may optionally include XSD schema files for use in validating the package contents.
A package contains an item manifest, which holds information of the item’s resources, and describes the relationship between the resources, including any dependencies. The package’s manifest also contains metadata about the content. Within the item package resources, there is at least one item XML file, which contains the default item content, and its supplemental accessibility information. A fuller detailed description of package information is discussed in Section 6.
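The package layout described above can be illustrated with an imsmanifest.xml skeleton following IMS Content Packaging conventions. The identifiers and file names are invented, and the resource type strings are placeholders; Section 6 and the APIP technical specification [APIP, 12b] give the normative values.

```xml
<manifest identifier="apip-package-example"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1">
  <metadata><!-- package-level metadata about the content goes here --></metadata>
  <organizations/>
  <resources>
    <!-- type values below are placeholders, not normative -->
    <resource identifier="item-1" type="imsqti_apipitem_xmlv1p0" href="item1.xml">
      <file href="item1.xml"/>
      <dependency identifierref="item-1-media"/>
    </resource>
    <!-- the item's supplementary assets (image and sound files) -->
    <resource identifier="item-1-media" type="webcontent" href="media/diagram1.png">
      <file href="media/diagram1.png"/>
      <file href="media/spoken1.mp3"/>
    </resource>
  </resources>
</manifest>
```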
APIP assessments are organized in the same manner as QTI 2.1 – by the use of assessmentTest and assessmentSection XML data well described in the standard QTI Information Model. assessmentSection elements are the structure for organizing assessmentItems into groups, as they may contain multiple assessmentItems, other assessmentSections, and section-specific content in the form of rubricBlocks. assessmentTest elements may contain one or more assessmentSections, and provide additional navigation-controlling and test-scoring information. assessmentSections may be included in the package in one of two ways – in their own files or wrapped inside assessmentTest files. Specific construction of test sections is fully described in Section 6.
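The test/section/item organization described above can be sketched in standard QTI 2.1 form; the identifiers, titles, and file names here are invented for illustration.

```xml
<assessmentTest xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="test-1" title="Example Test">
  <testPart identifier="part-1" navigationMode="linear" submissionMode="individual">
    <assessmentSection identifier="section-1" title="Section 1" visible="true">
      <!-- section-specific content can be supplied in a rubricBlock -->
      <rubricBlock view="candidate"><p>Answer every question in this section.</p></rubricBlock>
      <assessmentItemRef identifier="item-1" href="item1.xml"/>
      <assessmentItemRef identifier="item-2" href="item2.xml"/>
    </assessmentSection>
  </testPart>
</assessmentTest>
```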
In an APIP Content package, more than one variant of an item may exist. Some access features might require the delivery system to provide an entirely different representation of the default content in the original item. For example, for an original item presented in English, there is a translated version in Spanish that is available to those students who need the test content delivered in Spanish. Both the English and Spanish versions of the item are known as variants. Each variant of an item has its own accessibility information coded within its item XML file. For example, to support spoken accessibility needs for both the English and Spanish examinees, the original English variant will have spoken access feature information in the apipAccessibilityInfo section of its XML file, and the Spanish variant (in a different XML file) will have its spoken access feature information within the apipAccessibilityInfo section of its XML code. By separating the representations into different variants, there is less confusion about the accessibility information available for the presented representation.
In APIP v1, item packages would contain more than a single variant for an item if one or more Item Translations or Simplified Language renderings were required. If these access features are not offered to examinees, an APIP item package is expected to contain only a single variant. A full description of packaging variants is given in Section 6.
The APIP standard builds on the IMS QTI v2.1 specifications. The ‘Q’ or question portion of the QTI specification is held within an APIP Item XML file. APIP, in simplest terms, extends the QTI item representation to include alternate content representations, content presentation sequences (known as inclusion orders), and information about companion materials. These content extensions will enable a test delivery engine to tailor the presentation of items to meet the specific access needs of each individual examinee, as provided in the examinee’s PNP. Figure 2.1 illustrates the key structures of an assessment item using QTI and the APIP extensions.

Figure 2.1 Key QTI Structures and APIP Extensions
The default QTI representation of the item provides the content to be presented to an examinee with no defined access needs. Traditionally, this default content defines the original form of the item developed for the general population of examinees, accessed by visually reading or viewing content (or in some cases viewing/listening to time-based media). Typically, the default content includes text, graphics, and/or tables that form the item that would be presented to a student who does not have any defined access needs. However, default content might also include other media elements, such as sound files or movies, intended to be presented as part of the item for the general population of examinees.
The default content, content displayed to all examinees regardless of their access needs, is included in the item’s XML file, using XHTML. It also contains code that supplies item identification, response information, and several other item characteristics, available through the QTI 2.1 standard. Section 3 supplies annotated examples of QTI 2.1 items. These are presented without the APIP accessibility extensions.
In APIP, the default QTI content can be replaced by alternate content representations, or supplemented with additional content, for students with specific access needs. In general, when all of the original default content is replaced by content presented instead of it (not simultaneously with it), another full APIP representation of the item, including its own accessibility information, can be provided. The QTI structures support bundling these multiple default item representations, known as variants, in the test section manifests. For example, a full translation of an item that replaces all text, graphics, or other content with another language will likely result in another full APIP representation of the item as alternate content.
In contrast, many access needs do not require content to be replaced, but instead require the presentation of information that supplements the default content, or is presented simultaneously with it. For example, text displayed as part of the default content might also be presented in spoken, Braille, or signed form. In these cases, the default content remains displayed to the examinee while the additional content (spoken, Braille, or sign) is presented as a simultaneous, parallel representation. Similarly, changes intended to help the examinee identify important aspects of the default content present the default content along with supplementary information (e.g., highlighting key words, translations or definitions of key words, guidance that points the student to key information, etc.).
Other QTI parts of an assessment item include: response declaration, outcome declaration, style sheets, and response processing. Brief descriptions are provided below. For complete descriptions of the QTI structures, see the QTI v2.1 documentation.
APIP supports the following QTI interaction types within the APIP QTI profile:
APIP supports the use of Composite Items, that is, items that use more than one interaction within the same item. APIP also supports the use of MathML, Feedback, and Shared Material Objects.
When supplementary information is provided to meet a specific access need, this information must be placed in the APIP accessibility information (<apip:accessibilityInfo>) component for that item. By placing identifiers within the default XHTML itemBody, APIP extensions can refer to portions of the default XHTML item body content. The XHTML ‘id=’ attribute is placed within the default XHTML item body content. Accessibility (or supplementary) information is stored within an APIP accessElement. Using the APIP accessibility content link ‘qtiLinkIdentifierRef=’, the XHTML content and the accessibility information in an accessElement can be linked. The optional XHTML ‘id=’ attribute can be placed on almost all of the XHTML content tags, as well as the XHTML features in QTI v2.1; therefore, any APIP compliant application must support all possible uses of the ‘id’ attribute. Figure 2.2 illustrates this relationship.

Figure 2.2 Linking Access Elements to Default Content
It is also important to note that, depending on the specific accessibility needs of the different types of examinees, the default content may require multiple alternative representations. In this case, multiple access elements may point to the same QTI default content identifier ‘id=’.
Each access element is also uniquely identified using the ‘identifier=’ attribute within the <apip:accessElement> node. That identifier may be referenced in an inclusion order (see Section 2.2.9).
Within an accessElement, the reference to specific default content takes place within a contentLinkInfo node. The access feature information is in the relatedElementInfo node (see Section 2.2.5). You must provide at least one contentLinkInfo node in an accessElement, but you are permitted to provide as many contentLinkInfo nodes as needed. That is, you can refer to a single portion of any part of the default content (provided it is within an element that has an id attribute), or to multiple parts of the default content.
Below is an example of referencing all of the text within a single, uniquely named text element in the default content. For simplicity’s sake, no information is included in the relatedElementInfo tag.
<apip:accessElement identifier="ae001">
<apip:contentLinkInfo qtiLinkIdentifierRef="p1">
<apip:textLink>
<apip:fullString/>
</apip:textLink>
</apip:contentLinkInfo>
<apip:relatedElementInfo></apip:relatedElementInfo>
</apip:accessElement>
To reference more than one portion of the default content, you would use additional contentLinkInfo nodes, as shown below.
<apip:accessElement identifier="ae001">
<apip:contentLinkInfo qtiLinkIdentifierRef="p1">
<apip:textLink>
<apip:fullString/>
</apip:textLink>
</apip:contentLinkInfo>
<apip:contentLinkInfo qtiLinkIdentifierRef="p2">
<apip:textLink>
<apip:fullString/>
</apip:textLink>
</apip:contentLinkInfo>
<apip:relatedElementInfo></apip:relatedElementInfo>
</apip:accessElement>
There are several ways to reference only part of a text string: <apip:characterStringLink>, which supplies a starting and ending character number; <apip:wordLink>, which specifies a word within the text string by word count (a number); and <apip:characterLink>, which selects specific character(s) by index number. For example, assume a paragraph identified as <p id="p1"> contains multiple sentences, each wrapped in its own identified span tag, <span id="a"> and <span id="b">. To provide accessibility for the first sentence of the paragraph, the access element could refer to the entire first sentence within the <span id="a"> tag, or the access element could link to the <p id="p1"> object and then reference the first 39 characters of the paragraph’s text using the <apip:characterStringLink> tag.
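As an illustration of the second approach, the contentLinkInfo below links to the whole paragraph and then restricts the reference to its opening characters. This is a hypothetical sketch: the child element names startCharacter and endCharacter inside characterStringLink are assumptions, not confirmed by this guide.
<apip:contentLinkInfo qtiLinkIdentifierRef="p1">
<apip:textLink>
<apip:characterStringLink>
<apip:startCharacter>1</apip:startCharacter>
<apip:endCharacter>39</apip:endCharacter>
</apip:characterStringLink>
</apip:textLink>
</apip:contentLinkInfo>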
An APIP accessElement is a container holding one or more pieces of accessibility information that can be provided to an examinee. A single access element can contain only one instance of a particular access feature, but it can contain more than one type of access feature. These access features share a single reference, the link, to the default content. The specific types of access features that can be included in an access element are:
In the example code below, access feature information is provided for spoken AND Braille examinees.
<apip:accessElement identifier="ae001">
<apip:contentLinkInfo qtiLinkIdentifierRef="p1">
<apip:textLink>
<apip:fullString/>
</apip:textLink>
</apip:contentLinkInfo>
<apip:relatedElementInfo>
<apip:spoken>
<apip:spokenText contentLinkIdentifier="spokentext001">Sigmund Freud and Carl Jung both belong to the psychoanalytic school of psychology.</apip:spokenText>
<apip:textToSpeechPronunciation contentLinkIdentifier="ttsp001">Sigmund Freud and Carl Young both belong to the psycho-analytic school of psychology.</apip:textToSpeechPronunciation>
</apip:spoken>
<apip:brailleText>
<apip:brailleTextString contentLinkIdentifier="brailleText001">Sigmund Freud and Carl Jung both belong to the psychoanalytic school of psychology.</apip:brailleTextString>
</apip:brailleText>
</apip:relatedElementInfo>
</apip:accessElement>
Since only a single instance of the spoken access feature can be used in an element, if different information is to be presented to different spoken user types (TextOnly, TextGraphics, NonVisual), a separate access element must be created to contain each version. The different access elements would then be listed in the different users’ inclusion orders.
For guidance on the best practices on specific access feature tags and their use within an access element, see the information below.
Within the <apip:spoken> node, which supports examinees who need information presented in spoken form, there are three main access feature tags:
If this element is intended to provide support for any examinee requiring spoken support, APIP requires the inclusion of the spokenText and textToSpeechPronunciation information within the element.
The <apip:spokenText> contains a string of text that clearly states how the text should be presented when spoken (read aloud). This tag has a mandatory inclusion requirement to support spoken accessibility. It is used to remove any ambiguity of the text, meaning it is intended to clarify exactly how the information is intended to be presented when spoken. It would also be used as the script to be used by a human reader when producing a recorded file. For instance, you might want to clarify the reading of certain numbers. This is particularly important for numbers like “1998” where you may want to ensure it is read as a year (nineteen ninety eight) and not a sequence of numbers (one, nine, nine, eight) or a single number (one thousand, nine hundred, and ninety-eight).
The <apip:textToSpeechPronunciation> tag is used to specify a pronunciation for a text-to-speech (TTS) engine. All TTS engines need some help indicating how to pronounce certain words, or need proper names spelled phonetically. For the purposes of pre-generating a synthetic sound file, or generating the speech at the time of testing, delivery systems should use this textToSpeechPronunciation tag. The textToSpeechPronunciation string will very often match the spokenText string, but it is repeated for ease of access by systems processing the information. As an additional benefit, if any provided pre-recorded audio files are missing, corrupt, or in a format unusable by the delivery system, new audio files can be generated prior to delivery, or generated by a TTS engine at the time of delivery. Note also that each TTS engine has its own quirks about pronunciation (particularly for English), and a textToSpeechPronunciation string tuned for one vendor may not work the same for another. Where known differences exist, the receiving vendor may wish to redefine the textToSpeechPronunciation information by applying its own pronunciation rules to the spokenText string as the original source.
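A minimal sketch of the two tags working together for the “1998” example above (the sentence and identifiers are hypothetical):
<apip:spoken>
<apip:spokenText contentLinkIdentifier="st100">The treaty was signed in nineteen ninety-eight.</apip:spokenText>
<apip:textToSpeechPronunciation contentLinkIdentifier="ttsp100">The treaty was signed in nineteen ninety-eight.</apip:textToSpeechPronunciation>
</apip:spoken>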
The <apip:audioFileInfo> tag contains information about an audio file that can be played for the examinee as a representation of the text or object it refers to. A vendor may supply a single recording, or multiple recordings for use by examinees with different spoken preferences. For each recorded file, the mimeType attribute is required on the referenced audio file (<apip:audioFile contentLinkIdentifier="af001" mimeType="audio/mpeg">). Within the audioFile node, use <apip:fileHref> to provide the location of the file (within the item package). The same recording may be included in multiple file types by changing the mimeType and referencing a different sound file; each audio file should be referenced in its own audioFile node. There is no limit to the number of audio files listed within the <apip:spoken> node, but note that only a single sound file is intended to be delivered to an examinee for a single access element. The differences between the audio files within an element are meant to address specific examinee requirements and/or preferences, such as speed or voice type, and/or differences in delivery system requirements (file mime types).
For each recorded file, you may optionally provide:
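A sketch of an audioFileInfo node offering the same recording in two formats (the file paths and identifiers are hypothetical, and the exact nesting of audioFileInfo within the spoken node is an assumption):
<apip:spoken>
<apip:audioFileInfo>
<apip:audioFile contentLinkIdentifier="af001" mimeType="audio/mpeg">
<apip:fileHref>audio/item15_spoken.mp3</apip:fileHref>
</apip:audioFile>
<apip:audioFile contentLinkIdentifier="af002" mimeType="audio/x-wav">
<apip:fileHref>audio/item15_spoken.wav</apip:fileHref>
</apip:audioFile>
</apip:audioFileInfo>
</apip:spoken>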
To make the spoken access feature available to the examinee, the accessElement MUST be listed in the specific user type’s inclusion order in either their …DefaultOrder, or their …OnDemandOrder.
The <apip:brailleText> node is used to specify the text that will be displayed on a refreshable Braille display. This tag contains a single node, <apip:brailleTextString>, in which the text intended for use with a refreshable Braille display should be provided. Braille text must be provided for any element that will be presented to a Braille user, though you might create specific elements that contain only Braille text. For example, a reading passage with multiple paragraphs might be referenced in a single accessElement, with the Braille string for the entire passage provided in that element. This might differ from the way accessElements are created to provide access for your spoken users. In order for this information to be available to the Braille user, the element MUST be listed in the inclusion order within the brailleDefaultOrder list.
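A minimal sketch of an access element supplying Braille text for an entire passage, assuming the passage is the default content element with id "passage1" (the identifiers and text are hypothetical):
<apip:accessElement identifier="ae010">
<apip:contentLinkInfo qtiLinkIdentifierRef="passage1">
<apip:textLink>
<apip:fullString/>
</apip:textLink>
</apip:contentLinkInfo>
<apip:relatedElementInfo>
<apip:brailleText>
<apip:brailleTextString contentLinkIdentifier="bt010">The full text of the passage, as it should be sent to the refreshable Braille display.</apip:brailleTextString>
</apip:brailleText>
</apip:relatedElementInfo>
</apip:accessElement>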
The <apip:tactileFile> node is used to identify the tactile supports that will be available to the test taker separate from the online delivery system. The expectation is that the tactile file is an external, physical tactile representation, completely separated from the examinee’s computer experience. The information provided in the access feature is intended to assist the examinee in locating the proper tactile representation to use when responding to the current item. These tags do not contain the actual content that appears on the separate tactile supports but only how the examinee is to locate them during testing. The ability to define the content for the tactile supports may be included in future versions of APIP.
There are three main tactile support tags and they are:
· tactileSpokenFile
· tactileSpokenText
· tactileBrailleText
Use the <apip:tactileSpokenText> tag to specify how you want to describe the location of the tactile representation. Example:
<apip:tactileSpokenText>Use tactile sheet B to answer question fifteen.</apip:tactileSpokenText>.
Use the <apip:tactileAudioFile> to reference the audio file to be played (read aloud). Use the <apip:tactileBrailleText> to write the text string to be sent to a refreshable Braille display.
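Putting the tactile tags together, a hedged sketch of the node (the file path and identifiers are hypothetical, and the nesting of the three tags directly inside tactileFile is an assumption):
<apip:tactileFile>
<apip:tactileSpokenText contentLinkIdentifier="ts001">Use tactile sheet B to answer question fifteen.</apip:tactileSpokenText>
<apip:tactileAudioFile contentLinkIdentifier="ta001" mimeType="audio/mpeg">
<apip:fileHref>audio/tactileSheetB.mp3</apip:fileHref>
</apip:tactileAudioFile>
<apip:tactileBrailleText contentLinkIdentifier="tb001">Use tactile sheet B to answer question fifteen.</apip:tactileBrailleText>
</apip:tactileFile>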
There are two different nodes that specify sign language information within an access element: <apip:signFileASL> and <apip:signFileSignedEnglish>. The signFileASL node supports American Sign Language (ASL) examinees; the signFileSignedEnglish node supports Signed English examinees. Each of these nodes contains the same sub-nodes, which supply information about a pre-recorded video file. In either signing node, specify a single mimeType attribute to indicate which video format is used, for example: <apip:signFileASL mimeType="video/mpeg">. In either signing container node, specify a video file using the videoFileInfo tag, and include a reference to the file location using the <apip:fileHref> tag. To specify a starting time (in milliseconds), use the <apip:startCue> tag; playback of the video will then begin at the indicated startCue time. If a startCue time is not specified, playback begins at the start of the video file (0 milliseconds). To indicate when playback should stop, specify the time in milliseconds (greater than the startCue time) using the <apip:endCue> tag. If an endCue time is not provided, playback continues to the end of the video file.
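A hedged sketch of a signFileASL node (the file path, identifier, and cue times are hypothetical, and the placement of startCue and endCue inside videoFileInfo is an assumption):
<apip:signFileASL mimeType="video/mpeg">
<apip:videoFileInfo contentLinkIdentifier="asl001">
<apip:fileHref>video/item15_asl.mpg</apip:fileHref>
<apip:startCue>0</apip:startCue>
<apip:endCue>14500</apip:endCue>
</apip:videoFileInfo>
</apip:signFileASL>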
To make the sign language access feature available to the examinee, the accessElement MUST be listed in the specific user type’s inclusion order: either the aslDefaultOrder/signedEnglishDefaultOrder, or the aslOnDemandOrder/signedEnglishOnDemandOrder.
Use the <apip:keyWordTranslation> node for examinees who may benefit from having certain words translated into their native, or otherwise more familiar, language. For those users, you can indicate that certain text (a word or words) has a translation in a specific language. The <apip:keyWordTranslation> node holds the translated text and the language of the translation. A definitionID attribute identifies the translation (in case translations are reused across content). Use the textString to supply the translation, and the language tag to indicate the language, using the ISO 639-2 standard. The translation should be provided at the request of the examinee. To provide a spoken audio file for this access feature, use a different accessElement to provide the spoken access feature information (sometimes called access to an access feature; see Section 2.2.8).
It is the expectation that the delivery system will provide indicators for words that have translations when an examinee’s PNP specifies they should receive the keyword translation access feature, in the language named in the examinee’s PNP.
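A minimal sketch of the node, following the element and attribute names described above (the identifier and example word are hypothetical; "spa" is the ISO 639-2 code for Spanish):
<apip:keyWordTranslation definitionID="kwt001">
<apip:textString>perímetro</apip:textString>
<apip:language>spa</apip:language>
</apip:keyWordTranslation>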
Use the <apip:revealAlternativeRepresentation> node for examinees who have difficulty decoding certain modes of test content (pie charts, bar charts, diagrams, etc.). To display an alternative representation of the original mode, use the <apip:textString> tag and provide a text string describing the information presented in the graphic or diagram. These text access features should be provided at the request of the examinee, if the examinee’s PNP specifies they should receive the access feature.
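A hedged sketch of the node for a pie chart (the descriptive text and identifier are hypothetical):
<apip:revealAlternativeRepresentation>
<apip:textString contentLinkIdentifier="rar001">A pie chart showing favorite fruits: apples 40 percent, bananas 35 percent, grapes 25 percent.</apip:textString>
</apip:revealAlternativeRepresentation>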
Use the <apip:keyWordEmphasis> node for examinees that benefit from having certain words emphasized (bold, italic, color background, or other highlight style) above and beyond the emphasis of the default content. For those specific users, you can indicate that certain text could have emphasis. Use the <apip:keyWordEmphasis/> tag. This is a closed tag. No further information need be provided. It is the expectation that the delivery system will identify which words need visual emphasis when an examinee’s PNP specifies they should receive the keyword emphasis access feature.
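A minimal sketch of an access element flagging a key word for emphasis, assuming the key word in the default content is wrapped in a span such as <span id="kw1"> (the identifiers are hypothetical):
<apip:accessElement identifier="ae020">
<apip:contentLinkInfo qtiLinkIdentifierRef="kw1">
<apip:textLink>
<apip:fullString/>
</apip:textLink>
</apip:contentLinkInfo>
<apip:relatedElementInfo>
<apip:keyWordEmphasis/>
</apip:relatedElementInfo>
</apip:accessElement>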
Use the <apip:guidance> node for the Language Learner Guidance and Cognitive Guidance Access Features.
The <apip:languageLearnerSupport> node assists examinees who may need additional support with the test content because they are learning the language of the test. The support text is expected to be delivered in the same language as the default content: if the question is in English, the support text should also be in English. The text is specified in the <apip:textString> node. A test item may have one or more Language Learner Supports. To specify the order in which the supports should be revealed to the examinee (at the examinee’s request), use the <apip:supportOrder> tag and supply an integer. This access feature is intended only for examinees whose PNP specifies they should receive it.
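A minimal sketch of a Language Learner Support inside the guidance node (the support text and identifier are hypothetical):
<apip:guidance>
<apip:languageLearnerSupport>
<apip:supportOrder>1</apip:supportOrder>
<apip:textString contentLinkIdentifier="lls001">In this question, "estimate" means to make a careful guess.</apip:textString>
</apip:languageLearnerSupport>
</apip:guidance>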
The <apip:cognitiveGuidance> node assists examinees who may need additional support interpreting or understanding the test content, or who have any other cognitive support need. The guidance text is specified in the <apip:textString> node. A test item may have one or more Cognitive Guidance Supports. To specify the order in which the supports should be revealed to the examinee (at the examinee’s request), use the <apip:supportOrder> tag and supply an integer. This access feature is intended only for examinees whose PNP specifies they should receive it.
Many of the access features available in APIP do not require any modification to the default content. An APIP delivery system is expected to provide an access feature if the examinee’s PNP requires it and the delivery system has been requested to provide the access need (see the Compliance and Certification document for required conformance features). Those access features include:
The APIP accessibility information also supports access elements that provide accessibility for content supplied within one of the access feature nodes (also known as access for access features). Because each access element can have multiple access features, each feature must also be uniquely identified with a ‘contentLinkIdentifier=’ within the access element. An access element that provides accessibility information for another access feature identifies the specific feature within the other access element by referencing the contentLinkIdentifier of that access feature. In such an access element, the ‘qtiLinkIdentifierRef=’ attribute within the contentLinkInfo tag is replaced with an ‘apipLinkIdentifierRef=’ attribute, which points to the specific ‘contentLinkIdentifier=’ of the access feature in the other access element. Figure 2.3 illustrates this relationship.

Figure 2.3 Accessibility for Access Features
In the referencing accessElement, use the unique identifier attribute value of the specific element within the other accessElement to identify exactly which part of that element needs an additional support. For example, a text string supplied for Cognitive Guidance might itself have a spoken access feature (an audio file that reads the string of text aloud). The original access element containing the Cognitive Guidance would look like:
<apip:accessElement identifier="ae001">
<apip:contentLinkInfo qtiLinkIdentifierRef="p1">
<apip:textLink>
<apip:fullString/>
</apip:textLink>
</apip:contentLinkInfo>
<apip:relatedElementInfo>
<apip:guidance>
<apip:cognitiveGuidanceSupport>
<apip:supportOrder>1</apip:supportOrder>
<apip:textString contentLinkIdentifier="cgts001">Use the information in the table to help answer this question.</apip:textString>
</apip:cognitiveGuidanceSupport>
</apip:guidance>
</apip:relatedElementInfo>
</apip:accessElement>
The access element that provides accessibility for the text string for a cognitive guidance support within the above access element would look like:
<apip:accessElement identifier="ae002">
<apip:contentLinkInfo apipLinkIdentifierRef="cgts001">
<apip:textLink>
<apip:fullString/>
</apip:textLink>
</apip:contentLinkInfo>
<apip:relatedElementInfo>
<apip:spoken>
<apip:spokenText contentLinkIdentifier="spokentext002">Use the information in the table to help answer this question.</apip:spokenText>
<apip:textToSpeechPronunciation contentLinkIdentifier="ttsp001">Use the information in the tay-bul to help answer this question</apip:textToSpeechPronunciation>
</apip:spoken>
</apip:relatedElementInfo>
</apip:accessElement>
Note that the apipLinkIdentifierRef refers to the textString identifier (cgts001), NOT the accessElement identifier (ae001). Accessibility for access features should only go one level down from an access element that supports the default content. Do not create an access element that supports an access element that in turn supports another access element.
By placing the spoken accessibility for the cognitive support text string in a different access element, we clarify that the spoken information relates to the text string inside the access element, and not the default content, which may be text that has its own spoken features.
An inclusion order specifies the order in which examinees with certain access feature needs are presented with the item content. Order differences may occur for any number of reasons: experts may agree that users with specific needs perform better with a particular order; the authoring organization may prefer that some content be presented to certain examinee types in a particular way; or the content may be reorganized when presented in its accessible form (for example, an ASL rendering of a paragraph may change the order to follow a time sequence). No inclusion order is specified for the default content examinee, as such examinees are expected to choose how to decipher/decode the displayed information using visual cues only. Table 2.1 below lists the examinees for whom APIP provides an explicit inclusion order.
Table 2.1: Examinees with Explicit Inclusion Orders
| Examinee Access Feature Need | PNP ‘user’ tag | Corresponding Inclusion Order Tags |
| Spoken, Text Only | userSpokenPreference: textOnly | textOnlyDefaultOrder, textOnlyOnDemandOrder |
| Spoken, Text & Graphics | userSpokenPreference: textGraphics | textGraphicsDefaultOrder, textGraphicsOnDemandOrder |
| Spoken, Non-Visual | userSpokenPreference: nonVisual | nonVisualDefaultOrder |
| Spoken, Graphics Only | userSpokenPreference: graphicsOnly | graphicsOnlyOnDemandOrder |
| Braille user | braille | brailleDefaultOrder |
| American Sign Language | signingType: asl | aslDefaultOrder, aslOnDemandOrder |
| Signed English | signingType: signedEnglish | signedEnglishDefaultOrder, signedEnglishOnDemandOrder |
Spoken user types (userSpokenPreference) are exclusive to each other: an examinee can be only one spoken user type at a time during an assessment session.
An examinee could be assigned a spoken access feature that is restricted to the directions of an assessment, rather than the questions (or reading passages) within the assessment. Such examinees should have a PNP with a <apip:directionsOnly>true</apip:directionsOnly> entry. Those examinees should also have a spokenUserPreference supplied; if none is supplied, the default preference is TextOnly.
Inclusion orders are available only for the access features listed in Table 2.1, and only elements specifically intended for an examinee type (meaning the access feature is provided within that access element) should be listed in that type’s inclusion order. Elements tagged or created for a different need (for example, keyword translation or cognitive guidance) would NOT be listed in an inclusion order. For example, if you created a spoken access feature for a cognitive guidance text string (accessibility for access features), the element supplying that spoken access feature would not be listed in any spoken inclusion order, because not all spoken users are meant to receive the guidance support; the vendor supplying the delivery system will decide how, and in what order, that supplementary access feature is accessed by examinees requiring spoken access features. Nor would you include the spoken access feature for the cognitive guidance text in the same access element as the guidance itself, because the information provided in an access element is intended to support the content referenced in its contentLinkInfo container, not other content within the same access element.
Within an inclusion order, you refer to each access element the user should encounter using the elementOrder tag’s identifierRef attribute. Contained within the elementOrder is the explicit order tag, as shown below.
<apip:inclusionOrder>
<apip:textOnlyDefaultOrder>
<apip:elementOrder identifierRef="ae001">
<apip:order>1</apip:order>
</apip:elementOrder>
<apip:elementOrder identifierRef="ae002">
<apip:order>2</apip:order>
</apip:elementOrder>
<apip:elementOrder identifierRef="ae003">
<apip:order>3</apip:order>
</apip:elementOrder>
</apip:textOnlyDefaultOrder>
</apip:inclusionOrder>
The identifierRef attribute should contain the name (the unique identifier within an accessElement’s identifier attribute) of an access element within the same XML file. It should never refer to access elements outside of the same document.
Content should be presented following these order numbers. As a best practice, the order numbers should match the order in which the elements appear in the code, and there should be no gaps in the numbering. If the elements appear out of order in the code, use the explicit order numbers to determine the presentation order. If gaps exist in the numbering, proceed to the next number higher than the preceding order number.
In many, if not most, cases inclusion orders will be identical across the various user types. For example, in a multiple choice item with text as the prompt and text in the responses, most user types will be presented with the content in exactly the same order, possibly using the same access elements.
Most of the examinee types that use inclusion orders have a ‘default’ inclusion order. Only the spoken, graphics only user type does not have a default order. A default order is the order of content intended to be delivered when the item content is presented in total from beginning to end, without requiring a user to reinitiate the presentation (unless the user has requested that the presentation be stopped or paused). It should include all the information necessary to understand the context of the question, what response is expected from the examinee, and any answer options/choices.
Most examinee types also have ‘on demand’ inclusion orders; user types intended to support blind or very low vision examinees do not. On demand content is content that authors consider important to responding to the question, but that is best presented at the request of the user. Examples include identifying labels in a scientific diagram, or specific data within a table of information. Such information may be better understood when the user takes the time to explore its context, rather than when absorbing the main points of the item. The reason blind examinees do NOT have on demand orders is that on demand requests are normally made in the context of the content’s location, which is not available to blind users; information about a diagram’s or table’s content is instead supplied in the description of that content for these examinees.
Regardless of which inclusion order contains content for a given examinee type, access elements listed in both the default and on demand orders are expected to be available (individually) to users at their request, as many times as requested.
The inclusion order serves an additional purpose, namely providing a tabbing order for some examinees who may be navigating through test content using an input device (large buttons, sip & puff, keyboard) that sends tab and enter commands to the computer. Use the inclusion orders to set the tabbing order for these users. The default orders should precede any on-demand orders, but both orders (if applicable) should be part of the tabbing order. As a best practice, delivery systems should also ensure users have the ability to respond to items using tab and enter navigation.
Because spoken and Braille access features are Core access features for APIP, if an item could be used by any of those users, it should include inclusion orders for those user types. If an item is not considered appropriate for a user type, exclude that user type’s inclusion orders from the item. For example, if an item requires the user to recognize something visually (such as a photograph of a person), you would completely remove the brailleDefaultOrder and nonVisualDefaultOrder tags. Do not include inclusion orders for specific user types if you wish the item to be considered inaccessible to the following user types:
The exception to this rule is the Spoken, Graphics Only user. Many items do not contain graphics and therefore do not need descriptions; items without a graphicsOnlyOnDemandOrder should still be provided to these examinees.
Inclusion order lists that contain no references to access elements (an ‘empty’ inclusion order) are not allowed. For example, a simple multiple choice question of only text would likely not have any on-demand elements listed, so the textOnlyOnDemandOrder or textGraphicsOnDemandOrder would be omitted from the code.
See the annotated examples in Section 4 for specific examples of best practice coding for inclusion orders.
In some rare cases, you might need to break an access feature into several pieces that together support a single piece of default content. An example might be a lengthy pre-recorded audio file that describes a complex diagram. In that example, assuming the spoken access feature was broken into two audio files referenced in two different access elements that refer to the same default content (the complex diagram), you supply the order of the audio files by listing both access elements, via their identifiers, in the inclusion order list for the intended examinee type. The access elements might look like:
<apip:accessElement identifier="ae001">
<apip:contentLinkInfo qtiLinkIdentifierRef="complexDiagram1">
<apip:objectLink/>
</apip:contentLinkInfo>
<apip:relatedElementInfo>
<apip:spoken>
<apip:spokenText contentLinkIdentifier="spokenText001">First part of the complex diagram description here.</apip:spokenText>
<apip:textToSpeechPronunciation contentLinkIdentifier="ttsp001">First part of the complex diagram description here.</apip:textToSpeechPronunciation>
</apip:spoken>
<apip:brailleText>
<apip:brailleTextString contentLinkIdentifier="brailleText001">First part of the complex diagram description here.</apip:brailleTextString>
</apip:brailleText>
</apip:relatedElementInfo>
</apip:accessElement>
<apip:accessElement identifier="ae002">
<apip:contentLinkInfo qtiLinkIdentifierRef="complexDiagram1">
<apip:objectLink/>
</apip:contentLinkInfo>
<apip:relatedElementInfo>
<apip:spoken>
<apip:spokenText contentLinkIdentifier="spokenText002">The second part of the complex diagram description goes after.</apip:spokenText>
<apip:textToSpeechPronunciation contentLinkIdentifier="ttsp002">The second part of the complex diagram description goes after.</apip:textToSpeechPronunciation>
</apip:spoken>
<apip:brailleText>
<apip:brailleTextString contentLinkIdentifier="brailleText002">The second part of the complex diagram description goes after.</apip:brailleTextString>
</apip:brailleText>
</apip:relatedElementInfo>
</apip:accessElement>
Note that the qtiLinkIdentifierRef in both access elements refers to the same default content. The inclusion order then simply refers to those access elements in sequence:
<apip:inclusionOrder>
<apip:nonVisualDefaultOrder>
<apip:elementOrder identifierRef="ae001">
<apip:order>1</apip:order>
</apip:elementOrder>
<apip:elementOrder identifierRef="ae002">
<apip:order>2</apip:order>
</apip:elementOrder>
</apip:nonVisualDefaultOrder>
</apip:inclusionOrder>
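For systems consuming this markup, the ordered references can be read back out with any XML parser. The sketch below is illustrative only; the namespace URI is an assumption and should be taken from the actual instance document:

```python
import xml.etree.ElementTree as ET

# Illustrative namespace URI (an assumption; use the one declared in the
# actual APIP instance document).
APIP = "http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_qtiv1p0"

xml = f"""
<apip:inclusionOrder xmlns:apip="{APIP}">
  <apip:nonVisualDefaultOrder>
    <apip:elementOrder identifierRef="ae001"><apip:order>1</apip:order></apip:elementOrder>
    <apip:elementOrder identifierRef="ae002"><apip:order>2</apip:order></apip:elementOrder>
  </apip:nonVisualDefaultOrder>
</apip:inclusionOrder>
"""

root = ET.fromstring(xml)
ns = {"apip": APIP}
# Pair each accessElement reference with its declared order, then sort.
pairs = [(int(e.find("apip:order", ns).text), e.get("identifierRef"))
         for e in root.iterfind(".//apip:elementOrder", ns)]
refs = [ref for _, ref in sorted(pairs)]
print(refs)  # ['ae001', 'ae002']
```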
Many graphics have labels or text areas that overlap the graphics, or are displayed alongside them. To avoid making the text part of the graphic file (pixel-based text), use CSS styles to position text elements over or near the graphic. Text elements can then contain id attributes that can be individually supported by access element information. If the text is embedded within the image file, use the method described in Section 2.2.12 below. The advantage of keeping text separate from the graphic file is that it can be modified without changing the graphic file, or modified through stylesheet changes. Additionally, the text can then be modified by the test delivery interface. A user may want to alter the text and background colors, and by making the labels text elements, the text can be changed without altering how the graphic is rendered. This is not always possible, though, and the placement of text in some graphics requires a high level of positioning and styling: for example, labeling a river in a map. Below is the graphic used in the example for this section.

Figure 2.4 Graphic with labels that require accessible information
For this example, assume the graphic is only the grid with shading, and the graphic does NOT include the A-B-C-D text as part of the graphic file. The text for the labels will be written in their own tags, and those tags will be positioned using CSS. Below is the default content sample markup within an assessmentItem itemBody node:
<p id="p1">Below is a graphic with the
labels placed by CSS.</p>
<div id="div1">
<img id="graphic1" src="mm1003justGrid.png" width="209"
height="160" />
<span id="labelA">A</span>
<span id="labelB">B</span>
<span id="labelC">C</span>
<span id="labelD">D</span>
</div>
An access element can now refer to the ids of the entire div (div1), the graphic (graphic1), and/or any one of the labels (labelA – labelD).
CSS code that places the labels and graphics, as written in a separate stylesheet file:
#div1 { position:relative; height:170px; }
#labelA {
font-size: 16pt;
font-family:Verdana, Geneva, sans-serif;
font-style:italic;
position:absolute;
left:20px;
top:140px;
z-index:2;
}
#labelB {
font-size: 16pt;
font-family:Verdana, Geneva, sans-serif;
font-style:italic;
position:absolute;
left:20px;
z-index:3;
}
#labelC {
font-size: 16pt;
font-family:Verdana, Geneva, sans-serif;
font-style:italic;
position:absolute;
left:255px;
z-index:4;
}
#labelD {
font-size: 16pt;
font-family:Verdana, Geneva, sans-serif;
font-style:italic;
position:absolute;
left:255px;
top:140px;
z-index:5;
}
#graphic1 { position:absolute; left:40px; z-index:1; }
Below is the current recommended method for referring to a portion of an image for the purpose of supplying accessibility information for that portion. The basic idea is that within the XHTML default content, a containing <div> element is created and the graphic is referenced in an image tag. Boxes (rectangles), listed as <div>s, lie over text within the graphic. The CSS stylesheet used for the assessmentItem then describes the size and location of the boxes. The example below also uses the graphic shown in Figure 2.4, except that here the text labels are included as pixel-based information in the graphic file.
Code in the assessmentItem itemBody node:
<p>Below is a graphic that has the text embedded in the graphic file:</p>
<div id="div2">
<img id="graphic2" src="mm1003withLetters.png"
width="263" height="169" />
<div id="areaA"></div>
<div id="areaB"></div>
<div id="areaC"></div>
<div id="areaD"></div>
</div>
CSS code that places the areas around the text within the graphic, as written in a separate stylesheet file:
#div2 { position:relative; height:170px; }
#areaA {
position:absolute;
height:24px;
width:20px;
left:25px;
top:140px;
z-index:2;
}
#areaB {
position:absolute;
height:24px;
width:20px;
left:25px;
z-index:2;
}
#areaC {
position:absolute;
height:24px;
width:20px;
left:260px;
z-index:2;
}
#areaD {
position:absolute;
height:24px;
width:20px;
left:260px;
top:140px;
z-index:2;
}
#graphic2 { position:absolute; left:20px; z-index:1; }
Since the label ‘areas’ are now identifiable elements in the XHTML default content, you can now refer to those elements within an access element contentLinkInfo node.
Companion materials are described within the <apip:apipAccessibility> container, and include assessment materials that are required to be available to examinees while answering a specific item. Materials may include interactive tools, like calculators or rulers, or content that is also used in responding to the item, like a table of information or a map. The specific materials tags and their best practice usage are described below.
All calculators should provide the number digits, a decimal key, an equals key, and a clear key. Read-aloud capability should be configurable, so that it can be allowed or disallowed during testing. Additionally, some institutions allow reading the numbers or functions as they are used, but do not allow reading the number as a whole; this is usually for math-related content. The four possible calculators that can be specified are Basic, Standard, Scientific, and Graphing. Descriptions of each are included below.
Basic Calculator: In the <apip:calculatorType> tag, use the Basic vocabulary. Assumed functions: Add, Subtract, Multiply, Divide.
Example usage:
<apip:companionMaterialsInfo>
<apip:calculator>
<apip:calculatorType>Basic</apip:calculatorType>
<apip:description>4 function calculator</apip:description>
</apip:calculator>
</apip:companionMaterialsInfo>
Standard Calculator: In the <apip:calculatorType> tag, use the Standard vocabulary. Assumed functions: all basic calculator functions, square root, percentage (%), sign change (+/-), and memory functions.
Scientific Calculator: In the <apip:calculatorType> tag, use the Scientific vocabulary. Assumed functions may include, but are not limited to: all standard calculator functions; a π key; sign change (+/-); square (x²), cube (x³), and x to the y (xʸ); square root (√), cube root, and xth root; logarithm keys (log, ln, base 10, base e); trigonometry function keys (sin, cos, tan) and hyperbolic keys (hsin, hcos, htan), with an INVERSE key for the inverse functions; DEG/RAD/GRAD conversion, with the capacity to work in both degree and radian mode; a reciprocal key (1/x) to calculate the inverse of the displayed value; permutation and combination keys (nPr, nCr); a factorial key (x!); parentheses keys; and metric conversion.
Graphing Calculator: In the <apip:calculatorType> tag, use the Graphing vocabulary. A Graphing calculator includes many of the same functions of a scientific calculator, plus the ability to display equations graphically.
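Since the calculatorType vocabulary is closed (Basic, Standard, Scientific, Graphing), a delivery system might reject any other value before rendering a companion calculator. A minimal, hypothetical check (helper name is illustrative, not from the specification):

```python
# The four calculatorType vocabulary values defined above.
CALCULATOR_TYPES = {"Basic", "Standard", "Scientific", "Graphing"}

def is_valid_calculator_type(value):
    """Return True only for values in the closed calculatorType vocabulary."""
    return value in CALCULATOR_TYPES

print(is_valid_calculator_type("Basic"))     # True
print(is_valid_calculator_type("Fraction"))  # False
```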
Allows for the presentation of a measuring device for use on the computer with the supplied content. Use the description tag for a human-readable description of the functionality/capability of the rule. Provide the system of measurement using the ruleSystem tags, which allow choosing between the metric (SI) and US measurement systems; then set the minimum length of the rule, the minor increment, and the major increment using the unit type (related to the rule's measurement system). An example of specifying a rule is shown below.
<apip:companionMaterialsInfo>
<apip:rule>
<apip:description>A metric ruler with increments on one side of the rule.</apip:description>
<apip:ruleSystemSI>
<apip:minimumLength>10</apip:minimumLength>
<apip:minorIncrement unit="meter">0.5</apip:minorIncrement>
<apip:majorIncrement unit="meter">1.0</apip:majorIncrement>
</apip:ruleSystemSI>
</apip:rule>
</apip:companionMaterialsInfo>
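A delivery system rendering this rule might sanity-check the increment values before drawing tick marks. The sketch below is a hypothetical helper, not part of the specification; the names are illustrative:

```python
# Sketch: a hypothetical sanity check on a companion-material rule definition.
def check_rule(minimum_length, minor_increment, major_increment):
    """Reject non-positive lengths, and require the major increment to be a
    whole multiple of the minor increment so tick marks line up
    (e.g. 1.0 / 0.5 == 2 in the example rule above)."""
    if minimum_length <= 0 or minor_increment <= 0:
        return False
    ratio = major_increment / minor_increment
    return abs(ratio - round(ratio)) < 1e-9

print(check_rule(10, 0.5, 1.0))  # the example rule above: True
print(check_rule(10, 0.3, 1.0))  # ticks would not align: False
```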
The examinee will be supplied with an on-screen protractor while responding to the item. A human-readable description can be included in the description tag. Provide the measurement system using the increment tag, which lets you provide values for either the metric (radians) or US (degrees) system of angular measurement. An example is shown below.
<apip:companionMaterialsInfo>
<apip:protractor>
<apip:description>A floating, transparent protractor that can be moved over the angles in the item.</apip:description>
<apip:incrementUS>
<apip:minorIncrement>5.0</apip:minorIncrement>
<apip:majorIncrement>30.0</apip:majorIncrement>
</apip:incrementUS>
</apip:protractor>
</apip:companionMaterialsInfo>
This allows a reference to a reading passage that needs to be provided to the examinee while responding to this item. Use the <apip:readingPassage> tag and provide a link to the material using the fileHref tag.
<apip:companionMaterialsInfo>
<apip:readingPassage>
<apip:fileHref>someAPIPcontent.zip</apip:fileHref>
</apip:readingPassage>
</apip:companionMaterialsInfo>
Note that a conflict may exist between the item's <apip:readingPassage> reference and the test package section manifest. Systems should seek to resolve these conflicts, and/or allow the section manifest to override the item reference.
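The override behavior described above can be sketched as a small resolution function (hypothetical names; the actual resolution mechanism is left to the implementing system):

```python
# Sketch: resolving a reading-passage conflict, following the best practice
# that the section manifest overrides the item-level reference.
def resolve_passage(item_ref, section_ref):
    """Prefer the section manifest's passage reference when both exist."""
    return section_ref if section_ref is not None else item_ref

print(resolve_passage("someAPIPcontent.zip", None))
print(resolve_passage("someAPIPcontent.zip", "sectionPassage.zip"))
```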
These are content or reference materials that relate to the item content. Examples could be a map, a table of information, a sheet of math formulas, an interactive periodic table of elements, or even graphic creation tools. Use the <apip:digitalMaterial> tag and provide a link to the material by use of the fileHref tag.
<apip:companionMaterialsInfo>
<apip:digitalMaterial>
<apip:fileHref>directory001/someDigitalFile.exe
</apip:fileHref>
</apip:digitalMaterial>
</apip:companionMaterialsInfo>
These are physical materials the examinee needs in order to work with, or respond to, the item. Use the <apip:physicalMaterial> tag, then describe the materials using text. Example:
<apip:companionMaterialsInfo>
<apip:physicalMaterial>Supply scissors and 2 sheets of 8.5 x 11 inch white paper.</apip:physicalMaterial>
</apip:companionMaterialsInfo>
QTI 2.1 allows for the inclusion of a CSS file reference within the item. APIP will enforce the specific use of CSS as the layout specification, and will seek to limit the use of CSS 2.1 properties to the list below. The full CSS 2.1 specification can be found at http://www.w3.org/TR/CSS2/
Colors should always be represented as hexadecimal values.
The suggested unit type for fonts is points. Only letter spacing uses ems.
The suggested unit type for all other length measurements is pixels.
[DIR] represents directions around the element (top, right, bottom, left).
Below are the set of examples used to demonstrate the presentation of standard QTI v2.1 instances of content. These examples do NOT use the APIP accessibility extensions of QTI v2.1. The initial set of standard example instances are:
Example code may be found in the APIP examples folder, and is not presented within this section.
The standard QTI XML representation for a true/false assessment item is shown in Example 3.1 (file qti_example_3d1) with the corresponding visualization shown in Figure 3.1. The logical structure of the content package and the corresponding “imsmanifest.xml” for the assessmentItem are shown in Figure 3.2 and Example 3.1, respectively. The files for this example are available in the directory: APIP_Examples/Standard_QTI_Examples/qti_example_3d1 (see the documentation set distribution files).

Figure 3.1 Visualization of the T/F assessmentItem example.
This is a simple True/False question in which the user selects one of the two available options.
The key features in the example shown in Example 3.1 are:
1. Lines (0010–0012) define the identifier for the correct answer choice;
2. Lines (0004–0027) define the set of variables that are used to support the response processing and the scores assigned for this question;
3. Lines (0028 and 0041) define the question presented to the candidate;
4. Lines (0032–0040) define the presentation of the full true/false question, for which only a single attempt is permitted (maxChoices="1" in line 0031);
5. Lines (0042–0053) denote the response processing that is used to assign the correct response score (‘1’) and to trigger the feedback for the correct answer. If the answer is incorrect, the default value of ‘0’ is assigned automatically.
The code for the corresponding ‘imsmanifest.xml’ when the assessmentItem is packaged is listed in file qti_example_3d1.
The key features in the example shown in Example 3.1 are:
6. Lines (0008–0012) contain the manifest metadata (at present there is only the schema identification information – this shows that this content package is used to contain an APIP Package);
7. Lines (0015–0016) show that the only resource in the content package is an APIP Item (this conforms to a QTI v2.1 assessmentItem);
8. Lines (0017–0040) contain the resource metadata (defined in terms of LOM) that is used to identify the associated APIP assessmentItem, i.e., its GUID, title, and human-readable description;
9. Lines (0031–0038) contain the QTI metadata for the APIP ‘assessmentItem’;
10. Lines (0041–0042) contain the reference to the XML instance file that contains the actual QTI XML (as listed in Example 3.1) and is found in the content package itself.
A schematic representation of the content package for the manifest listed in Example 3.1 is shown in Figure 3.2. This shows that a single resource is defined with the associated XML instance file contained in the package as a whole.

Figure 3.2 Visualization ‘imsmanifest.xml’ for the packaging of the T/F assessmentItem.
The standard QTI XML representation for multiple choice AssessmentItems is shown in Example 3.2 (file qti_example_3d2) with the corresponding visualization shown in Figure 3.3. The files for this example are available in the directory: APIP_Examples/Standard_QTI_Examples/qti_example_3d2 (see the documentation set distribution files).

Figure 3.3 Visualization of the MC assessmentItem example.
This is a simple Multiple Choice question in which the user selects one of the four available options.
The key features in the example shown in Example 3.2 are:
1. Lines (0010–0012) define the identifier for the correct answer choice;
2. Lines (0014–0025) define the set of variables that are used to support the response processing and the scores assigned for this question;
3. Lines (0027 and 0045) define the question presented to the candidate;
4. Lines (0046–0063) define the presentation of the full multiple choice question, for which only a single attempt is permitted (maxChoices="1" in line 0051);
5. Lines (0067–0075) denote the response processing that is used to assign the correct response score (‘100’); an incorrect answer receives the default value of ‘0’.
The code for the corresponding ‘imsmanifest.xml’ when the assessmentItem is packaged is found in the file apip_example3d2.
The key features in the example shown in Example 3.2 are:
11. Lines (0007–0011) contain the manifest metadata (at present there is only the schema identification information – this shows that this content package is used to contain APIP data);
12. Line (0014) shows that the only resource in the content package is an APIP assessmentItem;
13. Lines (0016–0039) contain the resource metadata (defined in terms of LOM) that is used to identify the associated QTI assessmentItem, i.e., its GUID, title, and human-readable description;
14. Lines (0030–0037) contain the QTI metadata for the ‘assessmentItem’;
15. Lines (0040–0041) contain the reference to the XML instance file that contains the actual QTI XML (as listed in Example 3.2) and is found in the content package itself.
A schematic representation of the content package for the manifest listed in Example 3.2 is shown in Figure 3.4. This shows that a single resource is defined with the associated XML instance file contained in the package as a whole.

Figure 3.4 Visualization ‘imsmanifest.xml’ for the packaging of the MC assessmentItem.
In the annotated examples within this section, only the specific lines of the example code are presented. To see these annotated examples as complete files, refer to the files starting with apip_examples4d#.
Section 4.1 annotates an example item with spoken, Braille, keyword emphasis, keyword translation, and cognitive guidance access features.
Section 4.2 annotates an example item with more complex access features for spoken, Braille, and ASL language learner guidance (with spoken and Braille access features for the guidance text).
Section 4.3 annotates an APIP Section, where a single reading passage is related to several items.
An example of audio-based alternative rendering of a multiple choice-single response assessmentItem is shown in apip_example_4d1 with the corresponding visualization shown in Figure 4.1 (this is based on the original example shown in Sub-section 3.2). A visualization of the content package for this item is shown in Figure 4.2. More detailed information on content packaging is found in Chapter 6. The files for this example are available in the APIP examples folder.

Figure 4.1 Visual (Default) representation of an APIP MC item
This is a simple Multiple Choice question in which the user selects one of the four available options. Five access elements have been added to the basic item structure and are coordinated in various inclusion orders. The key features of the XML code are:
1. The XHTML code is within the <itemBody> tags. The expectation is that the majority of users will encounter the item using this visual presentation only, with no supplemental accessibility information required;
2. While the visual content is presented in a top-to-bottom, left-to-right presentation, that may not necessarily be the order in which we want all users to interact with the content. APIP provides a method of supplying additional information about the basic content, and a method for other types of users to encounter the content;
3. The APIP related code for the example item is included in lines 60–303. The accessibility information is listed within the <apip:accessibilityInfo> tags in lines 138–302. The inclusion orders for the different types of users are found within the <apip:inclusionOrder> tags in lines 67–137. Accessibility information and inclusion order are discussed in further detail below;
4. This item includes a reference to keyword emphasis on line 263 inside accessElement “ae015”. The intent is to mark certain words that could be emphasized for someone reading the content, independent of the base XHTML code that describes the content. You could then highlight/emphasize some words for some users, and not for others. How that word or those words are emphasized is entirely up to the delivery vendor or their clients; likely implementations include bolding and/or italicizing the text, or color highlighting the text;
5. Keyword Translation is used in this item, with accessElement “ae016” (lines 267–280) referring to the word “expression” in the content. Line 275 provides the essential language tag of “es” (Spanish), indicating the text string should be provided to users whose profile indicates they should receive keywords translated into Spanish. Element “ae017” provides additional support for users who may also need that translation read aloud, or who need it in Braille. Line 283 uses the apipLinkIdentifier to refer to the text string on line 276;
6. This item shows a formula for which the implementing project has decided to provide tactile sheets for blind users. Element “ae002” (lines 155–169) provides tactile information in lines 164–170. Line 168 indicates the text that should be spoken (if the spoken access feature is provided). Lines 165–167 indicate the sound file that might be played to read that text. And finally, line 169 provides the text string that might be provided to Braille users.
Figure 4.2 shows a visualization of how the XML code is packaged.

Figure 4.2 Visualization ‘imsmanifest.xml’ for the packaging of the assessmentItem.
Another example of alternative rendering of a multiple choice-single response assessmentItem is shown in apip_example_4d2 with the corresponding visualization shown in Figure 4.3. The files for this example are available in the directory: APIP_Examples/APIP_QTI_Examples/apip_example_4d2 (see the documentation set distribution files).

Figure 4.3 Visualization of the APIP question apip_example_4d2.
The example demonstrates how different users have been assigned different orders to the item content. Those users include:
It should be noted that the user types listed above are not mutually exclusive. While an examinee would be only one of the three listed spoken users, they might be a spoken, text-only user who also needs the supplemental cognitive guidance information (just one of the many possible combinations).
An APIPv1.0 core instance is required to include an inclusion order for Spoken and Braille users, provided that the content is appropriate for those users. The example item presented adds other optional accessibility information, but does not include ALL possible optional accessibility information. It demonstrates that it is the content creator’s prerogative to include or exclude supplemental accessibility information for their content.
The layout of the XHTML code (given in the apip_example_4d2.xml file) is found within the <itemBody> tags. The expectation is that the majority of users will encounter the item using this visual presentation only, with no supplemental accessibility information required. The default content code is shown below.
<itemBody id="theWholeItem">
<p id="p1"><span id="a">Ms. Smith's class
contains 24 students. </span><span id="b">Each
student voted for his or her favorite color.
</span><span id="c">The result of the class vote is
shown </span><span id="z">in the table
below.</span></p>
<table id="table001">
<caption id="d">Results of the Class
Vote</caption>
<tbody>
<tr id="columnheadings">
<th id="th001"><span
id="e">Color</span></th>
<th id="th002"><span
id="f">Number of Students</span></th>
</tr>
<tr id="u">
<td id="td001"><span
id="g">Red</span></td>
<td id="td002"><span
id="h">12</span></td>
</tr>
<tr id="v">
<td id="td003"><span
id="i">Blue</span></td>
<td id="td004"><span
id="j">6</span></td>
</tr>
<tr id="w">
<td id="td005"><span
id="k">Green</span></td>
<td id="td006"><span
id="l">4</span></td>
</tr>
<tr id="x">
<td id="td007"><span
id="m">Yellow</span></td>
<td id="td008"><span
id="n">2</span></td>
</tr>
</tbody>
</table>
<choiceInteraction responseIdentifier="RESPONSE"
shuffle="false" maxChoices="5">
<prompt id="o">Indicate which of the following
statements are accurate.</prompt>
<simpleChoice identifier="choice1"
fixed="true">
<p id="p">The majority of students voted
for Red.</p>
</simpleChoice>
<simpleChoice identifier="choice2"
fixed="true">
<p id="q">Twice as many students voted for
Red as voted for Blue.</p>
</simpleChoice>
<simpleChoice identifier="choice3"
fixed="true">
<p id="r">Two percent of students voted for
Yellow.</p>
</simpleChoice>
<simpleChoice identifier="choice4"
fixed="true">
<p id="s">Red received more votes than any
other color.</p>
</simpleChoice>
<simpleChoice identifier="choice5"
fixed="true">
<p id="t">Twenty-five percent of students
voted for Green.</p>
</simpleChoice>
</choiceInteraction>
</itemBody>
While the visual content is presented in a top-to-bottom, left-to-right presentation, that may not necessarily be the order in which all users interact with the content. APIP provides a method of supplying additional information about the basic content, and a method for other types of users to be presented the content. The APIP related code for the example item is included within the <apip:apipAccessibility> tags. The inclusion orders for the different types of users are found within the <apip:inclusionOrder> tags. The accessibility information is listed within the <apip:accessibilityInfo> tags. Accessibility information and inclusion order are discussed in further detail below.
As a reminder, any XHTML tag within the default content can have a unique identifying name using the id attribute. You add accessibility information to content using the accessibility element, or <apip:accessElement> tags, which reference the content id names.
Within the accessElement tag, you need to state which piece or pieces of content the accessibility information is associated with. On line 415, the contentLinkInfo states that qtiLinkIdentifierRef="a", which means the accessibility information will be related to the QTI XHTML element whose id is "a". Lines 416 to 418 state that the information should be associated with the entire length of the text string.
APIP allows you to assign one or more named elements to an accessibility element. Lines 1078–1083 demonstrate that the accessibility element is related to parts “a” and “b” (all of the 1st sentence in the first paragraph). You can also refer to different subparts in the same object, like referring to 2 non-consecutive words in the same sentence.
Figure 4.4 shows the accessibility elements and the parts of the content with which they are associated. The names shown in the figure refer to the unique identifier of the accessElement, NOT the id of the XHTML element. The accessElement names are overlapped with the content to represent the concept of linking content to accessElement information. Note that some content can have more than one accessibility element referring to it, like the table title, which has two accessElements (ae005 and ae022). Different accessibility information is associated with that same part of the content for different types of users. The accessElements that refer to the rows of the table are ae025–ae028, while the individual table cells are ae008–ae015.

Figure 4.4 Accessibility elements referencing the content used by ‘Spoken’ and ‘Braille’ users.
The accessibility information supplied for the accessibility elements shown in Figure 4.4 are:
1. Lines 414–436 describe the accessibility information for accessibility element “ae001”, which is linked to the content identified as “a”, and refers to the entire string of the text within that content piece;
2. Line 421 begins the related accessibility content for access element ae001;
3. Line 422 states that the accessibility information listed is related to the spoken access feature for access element ae001;
4. Line 423 states that there is an audio file associated with the spoken access feature. That audio file is also given a unique (within the document) identifying name, and the mime type is provided;
5. Line 424 supplies the address of the above audio file, including the file name;
6. Line 425 states that the above audio file is a human recording. If no voiceType is specified, it is assumed that the audio file is generated by a computer. The voiceType tag is useful for a number of reasons. If a testing program requested that both human and computer recordings be provided, the user could indicate in their APIP user profile which recording they prefer, and the voiceType tag would allow the testing application to provide either voice type to the user.
7. Line 427 describes the text as it should be read out loud for access element ae001 (the <apip:spokenText> information).
8. Line 429 has a tag that is used to specify a pronunciation for a text-to-speech engine. In this case, the text-to-speech engine should use the text supplied in line 417, because it needed specific spellings of the word “Ms.” to get the pronunciation correct, and needed a hyphen between the numbers to get the words read smoothly (this is just a hypothetical case – actual exceptions vary between text-to-speech engines).
9. With the audio file and spoken information, a test application could now supply the spoken text to the Text Only user as either the recorded human voice or via a text-to-speech application.
10. Line 432 begins information related to Braille users for access element ae001, and is not necessarily related to the Spoken user experience, though a Braille user may also use the spoken nonVisual access feature. Those two accessibility features are not expected to work in tandem with each other, as the rate of reading the Braille is unlikely to match that of the spoken representation.
Many testing programs allow very specific access to the content of a test question. They will make special business rules about which kinds of text can be read, and how specific types of information should be described. In the case of the example item, the testing program has made decisions about how they want the table of information spoken to Text Only users. They indicated that tables should have their titles read by default, during the default reading of the item, but the table content should only be accessed by the user on demand.
1. Lines 81–116 describe those elements that should be read by default, and are also available on demand.
2. Lines 117–148 describe those elements that should only be available on demand.
Taken together, they represent all the spoken elements available to the Spoken Text Only user. They also represent the combined tabbing order (for users navigating by tabbing), with the default order going before the on-demand order.
When preparing information for Text & Graphics users (userSpokenPreference:textGraphics), content writers need to consider that the user may have difficulty seeing the materials, or may have difficulty making sense of the graphic representation. Sometimes orienting information can also be useful for these users. For example, you might describe a table as having 3 columns and 5 rows. That information could help the Text & Graphic user have a sense of the whole table. This description has been done in the example item, so while the Text Only user uses the accessibility information provided in accessibility element “ae005” (lines 491–506), the Text & Graphics user uses accessibility element “ae022” (lines 931–948). This element has a fuller explanation of the whole table. Both accessibility elements reference the same piece of text content, namely id=“d” (line 28).
The assessment program also decided that for Text & Graphics users, the table information would be read by rows, restating the column information as each row is read. Access element “ae025” describes the entire row in line 987.
For this example, assume the assessment program made the choice to read the table as part of the default content. The default order for Text & Graphics users is listed in lines 150–206. Note that the main difference between this user and the Text Only user is the inclusion of the table in the middle of the default reading order, and that it refers to different accessibility information, namely a fuller description of the table and reading the data by row. The assessment program also wanted to make the individual text available for users on demand, so those are listed in lines 207–238 within the textGraphicOnDemand order. Together, the default order and on-demand order represent the full spoken content available to Text & Graphics users. Both lists also represent the tabbing order, with the default order preceding the on-demand order.
For Spoken Text & Graphics users, you can make new, supplemental spoken information available as well as giving them access to information already specified for Text Only users. That supplemental information can better orient or guide the examinee.
It is assumed that the Non-Visual user may not be able to see any of the content of the item, and so cannot experience the content in the same way that the Visual Only user does. Careful consideration must be given when providing content for these users, and any reading of text without context should be avoided. For example, if a graphic had two labels pointing to parts of the graphic, and you included spoken information in which only the labels are spoken, that information could easily be useless, or at least confusing. For that reason, there is no on-demand order for Non-Visual users. It is expected that all the information needed to understand the content, or respond to a question, will be supplied to the user by default.
In the case of this example item, no new information (additional access elements with access feature information) was created for Non-Visual users. Instead, it reused the accessibility information supplied for the Text Only and Text & Graphics users. The inclusion order for Non-Visual users for this example is found on lines 239–294. The assessment program has made the decision that for Non-Visual users, tabular data should be presented AFTER the response options are read, so that data is presented after they know the context for which the data is supplied. There is no on-demand order for Non-Visual users, so the tabbing order is taken only from the default inclusion order (nonVisualDefaultOrder).
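The shape of such an order declaration can be sketched as follows. This is a hedged fragment: the element names follow the order-tag pattern used in this guide (e.g. <brailleDefaultOrder>, nonVisualDefaultOrder), and the access element identifiers shown are illustrative rather than copied from the example item.

```xml
<apip:inclusionOrder>
  <apip:nonVisualDefaultOrder>
    <!-- each entry points at an access element and gives its position
         in the default reading (and tabbing) order -->
    <apip:elementOrder identifierRef="ae001">
      <apip:order>1</apip:order>
    </apip:elementOrder>
    <apip:elementOrder identifierRef="ae022">
      <apip:order>2</apip:order>
    </apip:elementOrder>
  </apip:nonVisualDefaultOrder>
</apip:inclusionOrder>
```

Because there is no nonVisualOnDemandOrder, this single list determines everything the Non-Visual user can reach.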
Braille users will have content provided in the order specified in the <brailleDefaultOrder> tags. For the elements listed in the brailleDefaultOrder order, you MUST provide the brailleText to be supplied to a refreshable Braille display device. You do not need to specify the actual Braille characters; rather, you write the letter characters you want the Braille display to convert for the Braille user. For this reason, all content that is considered accessible to Braille users MUST include at least the brailleDefaultOrder.
Accessibility element “ae001” is included in the brailleDefaultOrder, so it includes the text string for the Braille device, shown on lines 420–422. The brailleTextString here closely matches the content string, which is often the case. Note, however, that the Braille string uses the numeral “24” instead of the “twenty four” used in the spokenText string on line 415. The numeral “24” takes up far fewer characters on the refreshable display, and the context in which the number is used is not so vague as to be confusing.
Accessibility elements “ae005” and “ae022” demonstrate the difference between an element that is NOT included in the brailleDefaultOrder (ae005) and one that is included (ae022). No Braille text is supplied for ae005, but it is supplied for ae022.
It should be noted that in the majority of cases, the text string supplied will likely match the content string. It is declared within the accessibility element to ensure the information is available to the refreshable Braille display, and to allow for modifications by people knowledgeable of Braille use in an assessment context.
While you can often use the same accessibility elements for ASL users as Text Only users, that isn’t always the case. When English is translated to ASL, some concepts or major elements are moved to different parts of a sentence, or even different parts of the paragraph. Therefore, the chunks of content you are referring to with ASL are often larger than for Text Only users. This example has new elements that were created specifically for the ASL user. The other elements already existed for other kinds of users:
1. For accessibility element “ae030” in lines 1077–1095, the only accessibility information is for the ASL user. Note that lines 1078–1083 specify that it links to two named parts of the content for this accessibility information;
2. Line 1086 states that the signing information provided is <apip:signFileASL>, intended for examinees with a PNP assignment of signType: ASL. This is done to ensure that the user is given the sign language they understand. If this item also had video information for Signed English sign language (or some future APIP version’s other kind of sign language), you’d want to be sure the ASL user got only the sign file that is ASL, because ASL has different signs and grammar than Signed English;
3. Line 1087 identifies the video element, and assigns the mime type of the file;
4. Line 1088 specifies the location of the video file;
5. Line 1089 specifies the location within the video file you should begin playing for this piece of information;
6. Line 1090 specifies the ending location within the video file. In other words, the start and end cues tell you where within the video file the first two sentences of the question are located. If there were no starting or ending cue information, you would play the entire video file. If you only specified a start cue, you would start playing at that point and play to the end of the file. With these options, you could have one or more video files associated with the content, and you can pick and choose which parts of the video(s) you want to use;
7. Lines 352–377 specify the default order for ASL users, and lines 378–409 specify the on demand order. For this example, the assessment program has decided that the table content will not be automatically presented to ASL users, rather, they can specifically request to have the table information signed as needed.
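Pulling points 2–6 above together, the signing portion of an access element has roughly the following shape. This is a sketch: the file name and cue values are illustrative, and the wrapper element names other than apip:signFileASL are assumptions for illustration.

```xml
<apip:relatedElementInfo>
  <apip:signing>
    <apip:signFileASL>
      <!-- the video element names the file and assigns its MIME type -->
      <apip:videoFileInfo mimeType="video/mp4">
        <apip:fileHref>media/item_signing.mp4</apip:fileHref>
        <!-- play only the segment between the start and end cues;
             omit both cues to play the whole file -->
        <apip:startCue>0</apip:startCue>
        <apip:endCue>12000</apip:endCue>
      </apip:videoFileInfo>
    </apip:signFileASL>
  </apip:signing>
</apip:relatedElementInfo>
```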
If a user profile indicated that the student benefited from or required additional information about the content, any part of the content could have supplemental information provided to the user. This example item has an accessibility element related to the word “accurate” (line 53) in the content.
Accessibility element “ae029” (lines 1060–1095) has two pieces of related element information: keyWordEmphasis (line 1067) and a guidance element (lines 1068–1073). The keyWordEmphasis tag means that if a user would benefit from certain text having emphasis (bold, italic, underline, etc.), this word should have that emphasis. It is up to the assessment program (or the application developers) to determine what kind of emphasis should be placed on the text. The language learner tag indicates there is additional information about that word that may assist language learners, provided on line 1071. It is up to the application developer (or the assessment program) to determine the best way to present this information to the appropriate users.
But what if the user needed Spoken or Braille support for the text provided for the language learner (line 1071)? APIP has a method that allows you to link accessibility information to first-level accessibility information. Accessibility element “ae032” (lines 1116–1133) provides spoken information for the text provided in element “ae029”. The important difference for this accessibility element is on line 1117, where the attribute in the contentLinkInfo tag is now “apipLinkIdentifier” instead of “qtiLinkIdentifier”. This means the element links to an APIP accessibility element tag, not to the default content within the XHTML.
In an assessment, there are situations where associated content applies to multiple items. Examples include directions associated with a section of items, or a reading passage associated with a cluster of items. In an assessmentSection, one or more rubricBlock elements may be used to provide such content. A rubric block’s purpose can be defined using the use attribute, with the vocabulary ScoringGuidance, Instructions, or SharedStimulus.
ScoringGuidance describes information used in the evaluation of the responses.
Instructions describes content to be viewed by examinees related to a test or section of a test.
SharedStimulus describes content meant to be delivered simultaneously with test questions. A reading passage could be provided alongside several questions related to the passage. A diagram or graphic could also be presented with one or more questions associated with the diagram.
As in QTI, the target viewing audience for a rubricBlock is specified with the view attribute. The value candidate should be used in the rubricBlock’s view attribute for content relevant to student examinees. This is the most common view value when the purpose of the rubricBlock, as indicated by the use attribute, is either Instructions or SharedStimulus.
Other view attribute value options include author, proctor, scorer, testConstructor, and tutor. These view options are more likely to be selected when the rubricBlock is being used in the traditional academic sense, to provide ScoringGuidance.
A rubric block can also contain APIP extensions contained within the <apip:apipAccessibility> node. Only a single instance of the apipAccessibility node should be provided for each rubric block. An example of a rubricBlock containing instructions relevant to a set of questions contained within an assessmentSection is shown below. Note the presence of an <apip:apipAccessibility> element following the main content of the rubric block.
<assessmentSection identifier="AssessmentSection1" visible="true" title="AssessmentSection1 Title"
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p1"
    xmlns:apip="http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_qtiv1p0">
  <rubricBlock view="candidate" use="Instructions">
    <p>Pay <span id="content1">extra</span> attention to portions of text that have been <em>visually emphasized</em>. These are key phrases relevant to the questions.</p>
    <apip:apipAccessibility>
      <apip:accessibilityInfo>
        <apip:accessElement identifier="ae1">
          <apip:contentLinkInfo qtiLinkIdentifierRef="content1">
            <apip:textLink>
              <apip:fullString/>
            </apip:textLink>
          </apip:contentLinkInfo>
          <apip:relatedElementInfo>
            <apip:keyWordEmphasis/>
          </apip:relatedElementInfo>
        </apip:accessElement>
      </apip:accessibilityInfo>
    </apip:apipAccessibility>
  </rubricBlock>
  <assessmentItemRef identifier="AssessmentItem1" href="AssessmentItem1.xml"/>
  <assessmentItemRef identifier="AssessmentItem2" href="AssessmentItem2.xml"/>
  <assessmentItemRef identifier="AssessmentItem3" href="AssessmentItem3.xml"/>
</assessmentSection>
Note that an assessment section’s use of the apipAccessibility node allows for the full use of all APIP accessibility tags, including inclusion orders.
An example of an APIP AfA PNP instance is shown in Code 5.1 (the original file containing this code is available in the directory APIP_Examples/APIP_PNP_Examples/LoadPNP_01).
Code 5.1 Example of an APIP AfA PNP instance.
[Code 5.1 listing not recovered; surviving fragments: <?xml version="1.0" encoding="UTF-8"?> ... <apip:timeMultiplier>Unlimited</apip:timeMultiplier>]
Lines 0002–0006 supply the necessary namespace declarations.
Line 0007 supplies the APIP Content extension.
The spoken access feature is declared in lines 0009–0015, where the user is assigned the spoken support and will have the spoken feature available to them when they begin their assessments (line 0011). This profile includes optional preference information about the spoken source, indicating they prefer a human voice (line 0012). Line 0013 states the user likes to have the item read to them (the default inclusion order’s list of access elements) when they first encounter the item. Line 0014 provides the type of spoken user, which in this example is Text Only.
Lines 0016–0019 also indicate that this examinee should be provided with language learner guidance supports, when available.
Finally, lines 0024–0027 state the user should be given additional time for their assessment sessions. No specific amount is indicated.
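Based on the fragment of Code 5.1 that survives, the additional-time portion of this profile presumably looks something like the following sketch. The wrapper element name is an assumption for illustration; only the apip:timeMultiplier line is taken from the surviving text.

```xml
<apip:additionalTestingTime>
  <!-- no specific amount is indicated: the multiplier is Unlimited -->
  <apip:timeMultiplier>Unlimited</apip:timeMultiplier>
</apip:additionalTestingTime>
```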
Code 5.2 Example of an APIP AfA PNP instance.
[Code 5.2 listing not recovered; surviving fragments: <?xml version="1.0" encoding="UTF-8"?> ... </apip:apipControl> ... </apip:apipScreenEnhancement>]
Lines 0009–0012 indicate the examinee requires additional testing time. Line 0011 states the specific amount of time. If an assessment is regularly scheduled for an hour, this examinee should be permitted one and a half hours of time.
Line 0020 describes the amount of magnification the examinee prefers when they start using their magnification tool within the testing interface.
Lines 0025–0035 indicate the examinee should have the ability to change the text (foreground) and background colors for the content (directions, passages, items). For this support, BOTH the foreground and background nodes need to be included. In this example, lines 0028 and 0033 describe the specific colors the examinee prefers; this is an optional parameter for the text and background color support. Colors are indicated with hexadecimal notation. This profile states that the examinee wishes to have the colors changed when the test session initiates (line x). That is, they want to begin their tests with their preferred text and background choices already showing, without having to explicitly activate the support.
Additional PNP examples can be found in the PNP examples folder of the APIP example files.
Importing and Exporting APIP Items is achieved through the use of a content package. An APIP package is constructed according to the IMS Content Packaging standard, consisting of a zip file containing a manifest, assessmentItem XML files, and the supplementary image and sound files necessary for those assessmentItems. If a package is meant to contain multiple items, then it will also contain assessmentTest or assessmentSection XML files. The package may optionally include XSD schema files for use in validating the package contents.
The manifest is an XML file named "imsmanifest.xml" that serves to identify and catalogue all of the other files in the package. Each file contained within the package must be listed in the manifest, along with information about its purpose. The manifest's "resources" element should contain a "resource" element for every file. The manifest also includes metadata about the package, contained within the aptly-named "metadata" element.
Below is a minimal manifest describing an item package containing a single assessment item resource file.
Code 6.1 Example of an APIP Content Package manifest.
<manifest identifier="apipManifestExample"
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1 http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_imscpv1p2_v1p0.xsd"
    xmlns:lomm="http://ltsc.ieee.org/xsd/apipv1p0/LOM/manifest">
  <metadata>
    <schema>APIP Package</schema>
    <schemaversion>1.0.0</schemaversion>
    <lomm:lom>
      <lomm:educational>
        <lomm:learningResourceType>
          <lomm:source>APIPv1.0</lomm:source>
          <lomm:value>APIP Package</lomm:value>
        </lomm:learningResourceType>
      </lomm:educational>
      <lomm:general>
        <lomm:identifier/>
        <lomm:title/>
      </lomm:general>
      <lomm:lifeCycle>
        <lomm:contribute/>
        <lomm:version/>
      </lomm:lifeCycle>
      <lomm:rights>
        <lomm:copyrightAndOtherRestrictions/>
        <lomm:description/>
      </lomm:rights>
    </lomm:lom>
  </metadata>
  <organizations/>
  <resources>
    <resource identifier="Item_01" type="imsqti_apipitemroot_xmlv2p1">
      <file href="items/item01.xml"/>
    </resource>
  </resources>
</manifest>
The "file" element within the resource node must include an "href" attribute containing the URI path to the location of the relevant file in the package.
Note that the purpose of a given resource is identified by the value of the resource element's "type" attribute.
Use the imsqti_apipitemroot_xmlv2p1 resource type attribute value for APIP assessmentItem XML files. XML assessmentItem files contain the core content and accessibility metadata used to display and score assessment items.
If the package contains a whole test, the resource element of the XML file with the assessmentTest data will have a type attribute value of imsqti_apiptestroot_xmlv2p1. The assessmentTest provides organization to the included assessmentItems by splitting up the assessmentItems into various assessmentSections and dictating the order of user navigation between the items.
imsqti_apipsectionroot_xmlv2p1 is the type attribute value used to describe an XML file containing an assessmentSection.
The controlfile/apip_xmlv1p0 resource type value is for schema XSD documents used for validation testing.
The associatedcontent/apip_xmlv1p0/learning-application-resource resource type is for all other supplemental resource files such as style sheets, audio and video content, and images.
In addition to listing the resource files, the manifest describes the relationships between the resources. For example, if an assessmentItem resource requires an image file for its contents, then this dependency must be incorporated in the manifest by adding a "dependency" element child to the resource element of the assessmentItem. The dependency element must have an identifierref attribute with a value equal to the identifier of the resource element that defines the given image file.
Code 6.2 Example of a manifest demonstrating dependencies.
<manifest identifier="apipManifestExample"
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1 http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_imscpv1p2_v1p0.xsd"
    xmlns:lomm="http://ltsc.ieee.org/xsd/apipv1p0/LOM/manifest">
  <metadata>
    <schema>APIP Package</schema>
    <schemaversion>1.0.0</schemaversion>
    <lomm:lom>
      <lomm:educational>
        <lomm:learningResourceType>
          <lomm:source>APIPv1.0</lomm:source>
          <lomm:value>APIP Package</lomm:value>
        </lomm:learningResourceType>
      </lomm:educational>
      <lomm:general>
        <lomm:identifier/>
        <lomm:title/>
      </lomm:general>
      <lomm:lifeCycle>
        <lomm:contribute/>
        <lomm:version/>
      </lomm:lifeCycle>
      <lomm:rights>
        <lomm:copyrightAndOtherRestrictions/>
        <lomm:description/>
      </lomm:rights>
    </lomm:lom>
  </metadata>
  <organizations/>
  <resources>
    <resource identifier="Item_01" type="imsqti_apipitemroot_xmlv2p1">
      <file href="items/item01.xml"/>
      <dependency identifierref="Picture_01"/>
    </resource>
    <resource identifier="Picture_01" type="associatedcontent/apip_xmlv1p0/learning-application-resource">
      <file href="resources/picture01.png"/>
    </resource>
  </resources>
</manifest>
Similarly, if the content of an assessmentSection file references an assessmentItem, the resource node must list the appropriate assessmentItem as a dependency. By that same token, if the content of an assessmentTest file references an assessmentSection file, the resource node of the assessmentTest should list the assessmentSection as a dependency.
In order to prevent ambiguity in the relationships between resources, all identifier attribute values must be unique within the manifest file.
For metadata, note the presence of the <lomm:lom></lomm:lom> element within the previous manifest examples. This element is the container for IEEE 1484.12.1 Learning Object Metadata data structures, where an author may optionally provide detailed information about the title, description, life cycle, usage rights, and so forth for the package. While the presence of the lom element and the demonstrated minimal substructure is mandated, populating it with data is optional. Similar metadata elements may also be optionally inserted within individual resource elements in order to more thoroughly characterize assessment items and test structures.
More detailed explanation of the purpose and content of the Learning Object Metadata structures may be found in the IEEE standard itself, as well as in the inline documentation for the LOM XML binding schema files included within the APIP schema set.
The corresponding partial content package manifest for the APIP Item discussed in Sub-section 4.1 is shown in Code 6.3.
Code 6.3 Example of the manifest with a single APIP assessmentItem.
<?xml version="1.0" encoding="UTF-8"?>
<manifest ...
    xmlns:lomm="http://ltsc.ieee.org/xsd/apipv1p0/LOM/manifest"
    xmlns:lomr="http://ltsc.ieee.org/xsd/apipv1p0/LOM/resource"
    xmlns:qtim="http://www.imsglobal.org/xsd/apip/apipv1p0/qtimetadata/imsqti_v2p1">
  <metadata>
    ...
    <lomm:lom>
      <lomm:educational>
        <lomm:learningResourceType>
          <lomm:source>APIPv1.0</lomm:source>
          <lomm:value>APIP Package</lomm:value>
        </lomm:learningResourceType>
      </lomm:educational>
      <lomm:general>
        <lomm:identifier/>
        <lomm:title/>
      </lomm:general>
      <lomm:lifeCycle>
        <lomm:contribute/>
        <lomm:version/>
      </lomm:lifeCycle>
      <lomm:rights>
        <lomm:copyrightAndOtherRestrictions/>
        <lomm:description/>
      </lomm:rights>
    </lomm:lom>
  </metadata>
  <resources>
    <resource identifier="..." type="imsqti_apipitemroot_xmlv2p1">
      <metadata>
        <lomr:educational/>
        <lomr:general>
          <lomr:identifier/>
        </lomr:general>
        <lomr:lifeCycle>
          <lomr:version/>
        </lomr:lifeCycle>
        <qtim:qtiMetadata>
          <qtim:interactionType>choiceInteraction</qtim:interactionType>
        </qtim:qtiMetadata>
      </metadata>
      <file href="..."/>
      <dependency identifierref="I_00001_R"/>
      ...
    </resource>
    <resource identifier="..." type="associatedcontent/apip_xmlv1p0/...">
      ...
    </resource>
    <resource identifier="..." type="associatedcontent/apip_xmlv1p0/...">
      ...
    </resource>
    <resource identifier="..." type="controlfile/apip_xmlv1p0">
      <file href="controlxsds/apipv1p0_imscpv1p2_v1p0.xsd"/>
    </resource>
    <resource identifier="I_00002_CF" type="controlfile/apip_xmlv1p0">
      <file href="controlxsds/apipv1p0_cpextv1p2_v1p0.xsd"/>
    </resource>
    <resource identifier="I_00003_CF" type="controlfile/apip_xmlv1p0">
      <file href="controlxsds/apipv1p0_lommanifestv1p0_v1p0.xsd"/>
    </resource>
    <resource identifier="I_00004_CF" type="controlfile/apip_xmlv1p0">
      <file href="controlxsds/apipv1p0_lomresourcev1p0_v1p0.xsd"/>
    </resource>
    <resource identifier="I_00005_CF" type="controlfile/apip_xmlv1p0">
      <file href="controlxsds/apipv1p0_qtiextv2p1_prfv1p0.xsd"/>
    </resource>
    <resource identifier="I_00006_CF" type="controlfile/apip_xmlv1p0">
      <file href="controlxsds/apipv1p0_qtiitemv2p1_v1p0.xsd"/>
    </resource>
    <resource identifier="I_00007_CF" type="controlfile/apip_xmlv1p0">
      <file href="controlxsds/apipv1p0_qtimetadatav2p1_v1p0.xsd"/>
    </resource>
    <resource identifier="I_00008_CF" type="controlfile/apip_xmlv1p0">
      <file href="controlxsds/xml.xsd"/>
    </resource>
  </resources>
</manifest>
The key features in the example shown in Code 6.3 are:
1. Lines (0014-0033) – the LOM for the manifest;
2. Lines (0049-0056) – the LOM and QTI (0048-0055) metadata for the resource assigned to the APIP Item QTI;
3. Lines (0037-0038) – identifies the resource assigned to the APIP Assessment Item QTI;
4. Lines (0058-0060) – identifies the file that identifies the APIP Item QTI XML instance and the two dependencies on the asset files (these have their own resource descriptions);
5. Lines (0062-0069) – the resource descriptions for the two asset files;
6. Lines (0070-0101) – the resource descriptions assigned to the XSDs also contained in the package. It is not a requirement to provide these XSDs in the package, but if they are supplied then resource descriptions must be supplied in the manifest.
In the APIP package, specify the variants by declaring the relationships in the manifest XML file. In the 'resource' element for a given item, add a 'variant' element with an 'identifierref' attribute value set to the identifier of a variant item resource. The variant element may contain a metadata node describing in more detail the nature and purpose of the variant relationship, typically by use of the AfA DRDv2.0 specification. The inclusion of said metadata is recommended but not mandated by APIP version 1.0. For example, if an item resource with an identifier of 'B' is the Spanish-language variant of item resource A, the manifest resources XML should contain the following:
Code 6.4 Example of manifest fragment demonstrating variants.
<resource identifier="A" type="imsqti_apipitemroot_xmlv2p1">
  <file href="itemA.xml"/>
  <variant identifierref="B" identifier="variantRelationshipAB">
    <metadata>
      <accessForAllResource xmlns="http://www.imsglobal.org/xsd/accessibility/accdrdv2p0/imsaccdrd_v2p0">
        <adaptationStatement>
          <originalAccessMode>textual</originalAccessMode>
          <language>es</language>
        </adaptationStatement>
      </accessForAllResource>
    </metadata>
  </variant>
</resource>
<resource identifier="B" type="imsqti_apipitemroot_xmlv2p1">
  <file href="itemB.xml"/>
</resource>
APIP assessments are organized in the same manner as QTI 2.1 – by the use of assessmentTest and assessmentSection XML data described in the standard QTI Information Model. assessmentSection elements are the atomic structure for organizing assessmentItems into groups, as they may contain multiple assessmentItems, references to other assessmentSections, and section-specific content in the form of rubricBlocks. assessmentTest elements may contain references to one or more assessmentSections, and provide additional navigation-controlling and test-scoring information. Each assessmentSection element is contained within its own XML file. assessmentTests and assessmentSections may contain other assessmentSections by use of the assessmentSectionRef element, which uses identifier and href attributes to point to the appropriate resource.
Here is an example of a package manifest that incorporates an assessmentSection comprising two assessmentItems.
Code 6.5 Example of manifest with an assessmentSection and assessmentItems.
<manifest identifier="apipManifestExample"
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1 http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_imscpv1p2_v1p0.xsd"
    xmlns:lomm="http://ltsc.ieee.org/xsd/apipv1p0/LOM/manifest">
  <metadata>
    <schema>APIP Package</schema>
    <schemaversion>1.0.0</schemaversion>
    <lomm:lom>
      <lomm:educational>
        <lomm:learningResourceType>
          <lomm:source>APIPv1.0</lomm:source>
          <lomm:value>APIP Package</lomm:value>
        </lomm:learningResourceType>
      </lomm:educational>
      <lomm:general>
        <lomm:identifier/>
        <lomm:title/>
      </lomm:general>
      <lomm:lifeCycle>
        <lomm:contribute/>
        <lomm:version/>
      </lomm:lifeCycle>
      <lomm:rights>
        <lomm:copyrightAndOtherRestrictions/>
        <lomm:description/>
      </lomm:rights>
    </lomm:lom>
  </metadata>
  <organizations/>
  <resources>
    <resource identifier="Section_01" type="imsqti_apipsectionroot_xmlv2p1">
      <file href="sections/section01.xml"/>
      <dependency identifierref="Item_01"/>
      <dependency identifierref="Item_02"/>
    </resource>
    <resource identifier="Item_01" type="imsqti_apipitemroot_xmlv2p1">
      <file href="items/item01.xml"/>
    </resource>
    <resource identifier="Item_02" type="imsqti_apipitemroot_xmlv2p1">
      <file href="items/item02.xml"/>
    </resource>
  </resources>
</manifest>
Furthermore, here is an example of what the contents of that
section's file (found in the "section01.xml" file in the
"sections" directory) would look like.
Code 6.6 Example of assessmentSection Referencing assessmentItems.
<assessmentSection xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p1"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p1 ../../controlxsds/apipv1p0_qtisectionv2p1_v1p0.xsd"
    xmlns:apip="http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_qtiv1p0"
    identifier="Section_01" title="Example Section" visible="true">
  <rubricBlock view="candidate" use="Instructions">
    <p>Please answer the questions using these directions...</p>
    <apip:apipAccessibility>
    </apip:apipAccessibility>
  </rubricBlock>
  <assessmentItemRef identifier="Item_01" href="../items/item_01.xml"/>
  <assessmentItemRef identifier="Item_02" href="../items/item_02.xml"/>
</assessmentSection>
Note the use of "assessmentItemRef" elements to identify which assessmentItems are to be included in a given section. The identifier attribute values for these elements should match the identifier values of the resource nodes in the manifest file associated with these items.
Also introduced in this sample assessmentSection is the "rubricBlock" element. "rubricBlock" nodes are used to convey content that is associated with all of the items contained within an assessmentSection. In this case, the contents provide cursory instructions to the student ("candidate") taking the assessment. The rubricBlock element may also be used to present reading passage content, or whatever other material is appropriate to associate with an entire section rather than a given individual item. The "view" attribute of the rubricBlock is set to "candidate," indicating that the test-taker should be shown this content. Other possible values correspond to other possible target audiences, such as "author", "proctor", "scorer", "testConstructor", and "tutor".
An "apipAccessibility" element may be found following the content of the rubricBlock. The "apipAccessibility" element provides accessibility metadata for the rubricBlock's content in the same manner and with the same structure as the comparable "apipAccessibility" element found within assessmentItems.
The "assessmentSection" and "assessmentTest" structures may contain additional optional data specifying complex ordering, scoring, and presentation rules, though these are beyond the scope of the current discussion. A detailed explanation of these options may be found in the documentation for QTI version 2.1.
The assessmentTest XML structure is used to specify an entire testing experience. It comprises references to one or more assessmentSection XML files, which in turn reference the assessmentItem XML files that typically represent individual test questions. Presented below is the code of a manifest file for an APIP test package in which the assessmentTest references two assessmentSections; the contents of those files are also shown.
Code 6.7 Manifest of example test package with two sections containing 5 items.
<manifest identifier="PackageManifest" xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1">
  <metadata>
    <schema>APIP Package</schema>
    <schemaversion>1.0.0</schemaversion>
    <lom xmlns="http://ltsc.ieee.org/xsd/apipv1p0/LOM/manifest">
      <educational>
        <learningResourceType>
          <source>APIPv1.0</source>
          <value>APIP Package</value>
        </learningResourceType>
      </educational>
      <general>
        <identifier/>
        <title/>
      </general>
      <lifeCycle>
        <contribute/>
        <version/>
      </lifeCycle>
      <rights>
        <copyrightAndOtherRestrictions/>
        <description/>
      </rights>
    </lom>
  </metadata>
  <organizations/>
  <resources>
    <resource type="imsqti_apiptestroot_xmlv2p1" identifier="AssessmentTest1" href="AssessmentTest1.xml">
      <file href="AssessmentTest1.xml"/>
      <dependency identifierref="AssessmentSection1"/>
      <dependency identifierref="AssessmentSection2"/>
    </resource>
    <resource type="imsqti_apipsectionroot_xmlv2p1" identifier="AssessmentSection1" href="AssessmentSection1.xml">
      <file href="AssessmentSection1.xml"/>
      <dependency identifierref="AssessmentItem1"/>
      <dependency identifierref="AssessmentItem2"/>
    </resource>
    <resource type="imsqti_apipsectionroot_xmlv2p1" identifier="AssessmentSection2" href="AssessmentSection2.xml">
      <file href="AssessmentSection2.xml"/>
      <dependency identifierref="AssessmentItem3"/>
      <dependency identifierref="AssessmentItem4"/>
      <dependency identifierref="AssessmentItem5"/>
    </resource>
    <resource type="imsqti_apipitemroot_xmlv2p1" identifier="AssessmentItem1" href="AssessmentItem1.xml">
      <file href="AssessmentItem1.xml"/>
    </resource>
    <resource type="imsqti_apipitemroot_xmlv2p1" identifier="AssessmentItem2" href="AssessmentItem2.xml">
      <file href="AssessmentItem2.xml"/>
    </resource>
    <resource type="imsqti_apipitemroot_xmlv2p1" identifier="AssessmentItem3" href="AssessmentItem3.xml">
      <file href="AssessmentItem3.xml"/>
    </resource>
    <resource type="imsqti_apipitemroot_xmlv2p1" identifier="AssessmentItem4" href="AssessmentItem4.xml">
      <file href="AssessmentItem4.xml"/>
    </resource>
    <resource type="imsqti_apipitemroot_xmlv2p1" identifier="AssessmentItem5" href="AssessmentItem5.xml">
      <file href="AssessmentItem5.xml"/>
    </resource>
  </resources>
</manifest>
Note how each
resource node includes only the direct dependencies. So, the
assessmentTest AssessmentTest1
is dependent on the assessmentSections identified as
AssessmentSection1 and AssessmentSection2. AssessmentSection1 is
in turn dependent on AssessmentItem1 and AssessmentItem2, while
AssessmentSection2 is dependent on AssessmentItem3,
AssessmentItem4, and AssessmentItem5. It is not
necessary to include indirect dependencies. So, for example, it
is not necessary for AssessmentTest1 to list all of the
assessmentItem resources as
dependencies.
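An importing system can recover the full, indirect dependency set by walking the direct dependencies recorded in the manifest. The sketch below (the helper names are hypothetical, not part of the specification) reads the resource and dependency elements and computes the transitive closure for a given resource identifier.

```python
import xml.etree.ElementTree as ET

# Namespace of the APIP package manifest, taken from the sample above.
CP_NS = "{http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1}"

def direct_dependencies(manifest_xml: str) -> dict:
    """Map each resource identifier to the identifiers it directly depends on."""
    root = ET.fromstring(manifest_xml)
    deps = {}
    for resource in root.iter(CP_NS + "resource"):
        deps[resource.get("identifier")] = [
            d.get("identifierref") for d in resource.findall(CP_NS + "dependency")]
    return deps

def transitive_dependencies(deps: dict, identifier: str) -> set:
    """Walk direct dependencies to recover the full (indirect) closure."""
    seen, stack = set(), [identifier]
    while stack:
        for child in deps.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen
```

Applied to the manifest above, the closure of AssessmentTest1 would contain both sections and all five items, even though the test resource itself lists only the two sections.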
<assessmentTest identifier="AssessmentTest1" xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/qtitest/imsqti_v2p1">
  <testPart navigationMode="nonlinear" submissionMode="individual" identifier="TestPart1">
    <assessmentSectionRef href="AssessmentSection1.xml" identifier="AssessmentSection1"/>
    <assessmentSectionRef href="AssessmentSection2.xml" identifier="AssessmentSection2"/>
  </testPart>
</assessmentTest>
The above assessmentTest XML shows us that
the assessmentTest contains a single
testPart node, which wraps
references to the two assessmentSections using
assessmentSectionRef elements.
The identifier attribute of an
assessmentSectionRef element
should match one of the resource identifiers in the manifest
pointing to an assessmentSection. The href attribute
should point to the file location of the relevant assessmentSection, relative to
the file position of the current assessmentTest file.
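This identifier-matching rule lends itself to a simple import-time check. The sketch below (the helper name unresolved_section_refs is hypothetical) compares the assessmentSectionRef identifiers in a test file against the resource identifiers declared in the manifest and reports any that do not resolve.

```python
import xml.etree.ElementTree as ET

# Namespaces used by the manifest and assessmentTest samples in this guide.
CP_NS = "{http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1}"
TEST_NS = "{http://www.imsglobal.org/xsd/apip/apipv1p0/qtitest/imsqti_v2p1}"

def unresolved_section_refs(test_xml: str, manifest_xml: str) -> list:
    """List assessmentSectionRef identifiers with no matching manifest resource."""
    resource_ids = {res.get("identifier")
                    for res in ET.fromstring(manifest_xml).iter(CP_NS + "resource")}
    return [ref.get("identifier")
            for ref in ET.fromstring(test_xml).iter(TEST_NS + "assessmentSectionRef")
            if ref.get("identifier") not in resource_ids]
```

An empty result means every section reference in the test resolves to a declared resource; anything returned points at a packaging error.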
<assessmentSection visible="true" identifier="AssessmentSection1" title="AssessmentSection1 Title" xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p1">
  <rubricBlock view="candidate" use="SharedStimulus">
    <p>Consider the following poem by Alan Lightman:</p>
    <p>In the magnets of computers will be stored
       Blend of sunset over wheat fields.
       Low thunder of gazelle.
       Light, sweet wind on high ground.
       Vacuum stillness spreading from
       A thick snowfall.
       Men will sit in rooms
       Upon the smooth, scrubbed earth
       Or stand in tunnels on the moon
       And instruct themselves in how it was.
       Nothing will be lost.
       Nothing will be lost.</p>
  </rubricBlock>
  <assessmentItemRef href="AssessmentItem1.xml" identifier="AssessmentItem1"/>
  <assessmentItemRef href="AssessmentItem2.xml" identifier="AssessmentItem2"/>
</assessmentSection>
Note that the XML namespace URI of the assessmentSection above, found in the xmlns attribute of the assessmentSection element, is distinct from the namespace URI used for the assessmentTest XML. AssessmentSection1 contains a reading passage within the rubricBlock element, which indicates its purpose with the use attribute value of "SharedStimulus". AssessmentSection1 also includes two assessmentItemRef elements, which point to assessmentItem XML resources, as previously described. It is assumed that an assessment delivery system will present the contents of a "SharedStimulus" rubricBlock with all of the assessmentItems within that same assessmentSection; the manner of content presentation is left as a design decision for delivery system implementers.
The contents of a
rubricBlock are only relevant for
assessmentItems within the same
assessmentSection. Thus, a
reading passage should not be presented for items in the
following section, AssessmentSection2, which lacks a rubricBlock.
<assessmentSection visible="true" identifier="AssessmentSection2" title="AssessmentSection2 Title" xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p1">
  <assessmentItemRef href="AssessmentItem3.xml" identifier="AssessmentItem3"/>
  <assessmentItemRef href="AssessmentItem4.xml" identifier="AssessmentItem4"/>
  <assessmentItemRef href="AssessmentItem5.xml" identifier="AssessmentItem5"/>
</assessmentSection>
The loading of an APIP Item uses the same approach as the importing of an APIP package.
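In both cases, an importer opens the package archive, reads the manifest at the package root (imsmanifest.xml, the conventional IMS Content Packaging name), and locates the root resources to load. The sketch below (the helper name root_resources and the type-prefix filter are illustrative assumptions) indexes the root resources of a package by their declared type.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# Namespace of the APIP package manifest, taken from the samples in this guide.
CP_NS = "{http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1}"

def root_resources(package_bytes: bytes, root_type_prefix: str = "imsqti_apip") -> dict:
    """Open an APIP zip package, parse imsmanifest.xml, and map root-resource
    identifiers to their entry-point hrefs, keyed by resource type."""
    with zipfile.ZipFile(io.BytesIO(package_bytes)) as pkg:
        manifest = ET.fromstring(pkg.read("imsmanifest.xml"))
    roots = {}
    for res in manifest.iter(CP_NS + "resource"):
        rtype = res.get("type", "")
        if rtype.startswith(root_type_prefix):
            roots.setdefault(rtype, {})[res.get("identifier")] = res.get("href")
    return roots
```

A caller could then load an item package by reading the href recorded under the imsqti_apipitemroot_xmlv2p1 type, or a test package by starting from the imsqti_apiptestroot_xmlv2p1 entry.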
The reference that follows connects a number of different APIP concepts and specifications. It relates each accessibility need to the corresponding PNP markup tags and content tags (if applicable). Each need is briefly described, and the APIP compliance categories are noted below the description.
A. Accessibility through Alternate Representations

Spoken (Read Aloud)
Description: Text presented to the user is spoken aloud. Graphics (tables, diagrams, pictures) would have alternate text that could be spoken aloud. The directionsOnly designation is a boolean value (there is no inclusion order for this designation) and is combined with the other userSpokenPreference values to determine the inclusion order for that user; a user could therefore be both a directionsOnly user and a textGraphics user.
Compliance Categories: A1, A2, A3, A4, A5, A6, A7
Notes: Affects the item-writing process. Unless it would violate the construct being measured, it is expected that all content would include read-aloud information. If no other inclusion order is included, the default reading order should be taken from the nonVisual user. Optionally, audioFileInfo may be provided, which refers to a pre-recorded audio file.
Content Tags [apip:accessibility]: spoken; audioFileInfo (optional) [attribute: mimeType] with fileHref, startTime (opt), duration (opt), voiceType (opt: synthetic, human), voiceSpeed (opt: standard, fast, slow); spokenText (string); textToSpeechPronunciation (string). Inclusion order: textOnlyDefaultOrder and textOnlyOnDemandOrder (for the textOnly user); textGraphicsDefaultOrder and textGraphicsOnDemandOrder (for the textGraphics user); nonVisualDefaultOrder (for the nonVisual user); graphicsOnlyOnDemandOrder (for the graphicsOnly user).
AfA PNP Mapping [accessForAllUser]: content → apipContent; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true; spokenSourcePreference (Human, Synthetic); readAtStartPreference (true/false) def=true; userSpokenPreference (TextOnly, TextGraphics); directionsOnly (opt: true/false) def=false; display usage (required, preferred, optionally use,

Braille Text
Description: Content would have specific text strings to be used in a refreshable Braille display device.
Compliance Categories: A8, A9
Notes: Affects the item-writing process. The item information would need to state whether or not the item is accessible to nonVisual (blind) users. If it is intended to be used by Braille users, the item MUST include a brailleVisualDefaultOrder.
Content Tags [apip:accessibility]: brailleText with brailleTextString. Inclusion order: brailleDefaultOrder (for the Braille user).
AfA PNP Mapping [accessForAllUser]: display → braille; brailleGrade (opt); numberOfBrailleDots (opt); numberOfBrailleCells (opt); brailleDotPressure (opt); brailleStatusCell (opt); assignedSupport (true/false) def=true; activateByDefault (true/false) def=true.

Tactile
Description: A tactile representation of the graphic information is made available outside of the computer testing system. The tags should include descriptions of how to locate the specific tactile sheet needed to answer the question.
Compliance Category: A10
Notes: Affects the item-writing process.
Content Tags [apip:accessibility]: tactileFile; tactileAudioFile (audioFileInfo); tactileSpokenText; tactileBrailleText.
AfA PNP Mapping [accessForAllUser]: display → tactile; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true.

Sign Language
Description: Animated or live-action movie recordings are provided to the user that give either an American Sign Language (ASL) translation or the Signed English version of the item.
Compliance Categories: A11, A12
Notes: Affects the item-writing process. The item information specifies whether the content is accessible to either user group (ASL or Signed English) by using signFileASL or signFileSignedEnglish.
Content Tags [apip:accessibility]: signFileASL [attribute: mimeType] with videoFileInfo, fileHref, startCue, endCue; or signFileSignedEnglish [attribute: mimeType] with videoFileInfo, fileHref, startCue, endCue. Inclusion order: aslDefaultOrder and aslOnDemandOrder (for the ASL user); signedEnglishDefaultOrder and signedEnglishOnDemandOrder (for the Signed English user).
AfA PNP Mapping [accessForAllUser]: content → apipContent signing; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true; signingType (ASL, SignedEnglish).

Item Translation
Description: A variant of the item is made, and the user is exposed to the alternate-language version. The item information would state which specific language it provides.
Compliance Category: A13
Notes: Affects the item-writing process. Establishes a different variant within the item package.
AfA PNP Mapping [accessForAllUser]: content → apipContent itemTranslationDisplay [language]; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true.

Keyword Translation
Description: Certain specific words would have translations available to users who need some assistance with difficult or important words in the content. The user profile would specify the language requested, and the content would supply the translation for the program-required languages.
Compliance Category: A14
Notes: Affects the item-writing process.
Content Tags [apip:accessibility]: keyWordTranslation (definitionID) with textString and language.
AfA PNP Mapping [accessForAllUser]: content → apipContent keyWordTranslations [language]; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true.

Simplified Language
Description: An entirely different variant of the question is given to the user, using simpler language.
Compliance Category: A15
Notes: Affects the item-writing process. Establishes a different variant within the item package.
AfA PNP Mapping [accessForAllUser]: content → apipContent simplifiedLanguageMode; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true.

Alternate Representation
Description: Specifies an alternate way of displaying content to facilitate the user's access to the content. As an example, a text-based description of a figure displaying a life cycle might be provided, or an animation that represents a series of events described in text might be provided.
Compliance Category: A16
Notes: Affects the item-writing process. Only Text is available for v1. Future versions may support Audio, Video, Graphic, and Interactive.
Content Tags [apip:accessibility]: revealAlternativeRepresentation with textString.
AfA PNP Mapping [accessForAllUser]: content → apipContent alternativeRepresentations; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true; alternativeRepresentationType (Text).
B. Accessibility through Adapted Presentations

Magnification
Description: All content is magnified by the amount specified by the user. An optional magnification amount can be sent in the user profile.
Compliance Categories: B1, B2
AfA PNP Mapping [accessForAllUser]: display screenEnhancement magnification (magnification amount); display → apipDisplay apipScreenEnhancement magnification; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true.

Reverse Contrast
Description: All colors are reversed in the user interface.
Compliance Category: B3
AfA PNP Mapping [accessForAllUser]: display → apipDisplay apipScreenEnhancement invertColourChoice; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true.

Alternate Text and Background Colors
Description: The user has the ability to choose a text and background color combination other than the default presentation. Optional color settings could be sent in the user profile.
Compliance Categories: B4, B5
AfA PNP Mapping [accessForAllUser]: display → apipDisplay foregroundColour; assignedSupport (true/false) def=true; activateByDefault (true/false) def=false; colour (hexadecimal). display → apipDisplay backgroundColour; assignedSupport (true/false) def=true; activateByDefault (true/false) def=false; colour (hexadecimal).

Color Overlay
Description: A color tint is laid over the content (directions and questions) to aid in the reading of text. This emulates the classroom use of colored acetate sheets over paper. Optional color settings could be sent in the user profile.
Compliance Categories: B6, B7
AfA PNP Mapping [accessForAllUser]: display → apipDisplay colourOverlay; activateByDefault (true/false) def=false.
C. Accessibility through Adapted Interactions

Masking
Description: For Answer Masking, by default, the answer choices for a multiple-choice item are covered when the item is first presented; the user has the ability to remove the masks at any time. For Custom Masking, the user is able to create their own masks to cover portions of the question until needed.
Compliance Categories: C1, C2
AfA PNP Mapping [accessForAllUser]: display → apipDisplay; activateByDefault (true/false) def=false; maskingType (CustomMask, AnswerMask).

Auditory Calming
Description: Users can listen to music or sounds in the background as they take the test.
Compliance Category: C3
AfA PNP Mapping [accessForAllUser]: display → apipDisplay auditoryBackground; assignedSupport (true/false) def=true; activateByDefault (true/false) def=false.

Additional Testing Time
Description: If a test has a time limit, the user will be allowed additional time to complete the test.
Compliance Category: C4
AfA PNP Mapping [accessForAllUser]: control → apipControl; assignedSupport (true/false) def=true; timeMultiplier (time/unlimited).

Breaks
Description: The user is allowed to take breaks, at their request, during the testing session, and to return to their testing session when ready.
Compliance Category: C5
AfA PNP Mapping [accessForAllUser]: control → apipControl; assignedSupport (true/false) def=true.

Keyword Emphasis
Description: Certain words are designated in the content as keywords for emphasis (beyond the default emphasis that may exist for all users). Programs would designate how they are to be emphasized (bold, italic, color background, etc.).
Compliance Category: C6
Notes: Affects the item-writing process.
Content Tags [apip:accessibility]: keyWordEmphasis.
AfA PNP Mapping [accessForAllUser]: content → apipContent keywordEmphasis; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true.

Line Reader
Description: The user has a tool available that assists them in moving a reading tool (line highlighter or underscore) down the content line by line, to assist in reading.
Compliance Categories: C7, C8
AfA PNP Mapping [accessForAllUser]: control → apipControl lineReader; assignedSupport (true/false) def=true; activateByDefault (true/false) def=false; colour (hexadecimal).

Language Learner Guidance
Description: Additional information is provided in the test language about words or phrases, intended to assist a language learner in processing that information.
Compliance Category: C9
Notes: Affects the item-writing process.
Content Tags [apip:accessibility]: guidance, languageLearnerSupport, supportOrder.
AfA PNP Mapping [accessForAllUser]: content → apipContent languageLearner; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true.

Cognitive Guidance
Description: Additional information is provided to some users to assist in processing or understanding all or parts of the content.
Compliance Category: C10
Notes: Affects the item-writing process.
Content Tags [apip:accessibility]: guidance, cognitiveGuidanceSupport, supportOrder.
AfA PNP Mapping [accessForAllUser]: content → apipContent cognitiveGuidance; assignedSupport (true/false) def=true; activateByDefault (true/false) def=true.
Title: IMS Global Accessible Portable Item Protocol (APIP) Best Practice and Implementation Guide
Editors: Colin Smythe (IMS Global), Mark McKell (IMS Global) and Thomas Hoffmann (Measured Progress)
Co-chairs: Gary Driscoll (ETS), Thomas Hoffmann (Measured Progress) and Wayne Ostler (Pearson)
Version: 1.0
Version Date: 26 March 2012
Release: Final 1.0
Status: Candidate Final
Summary: The aim of the APIP work is to use well established e-learning interoperability standards to enable the exchange of accessible assessment content between computer-based assessment systems, tools and applications. Users of systems, tools and applications that adopt the APIP are able to use their accessible assessment content on a wide range of systems. This document contains the best practice and implementation guidance for using the specification.
Revision Information: First release.
Purpose: This document is made available for adoption by the public community at large.
Document Location: http://www.imsglobal.org/apip/
The following individuals contributed to the development of this document:
Rob Abel, IMS Global (USA)
Gary Driscoll, ETS (USA)
Eric Hansen, ETS (USA)
Casey Hill, ETS (USA)
Regina Hoag, ETS (USA)
Thomas Hoffmann, Measured Progress (USA)
Devin Loftis, McGraw-Hill/CTB (USA)
Mark McKell, IMS Global (USA)
Wayne Ostler, Pearson (USA)
Zack Pierce, Measured Progress (USA)
Mike Russell, Measured Progress (USA)
Colin Smythe, IMS Global (UK)
Version No.: Candidate Final v1.0
Release Date: 26 March 2012
Comments: The first formal release of the Candidate Final Release version of this document.
IMS Global Learning Consortium, Inc. (“IMS Global”) is publishing the information contained in this document (“Specification”) for purposes of scientific, experimental, and scholarly collaboration only.
IMS Global makes no warranty or representation regarding the accuracy or completeness of the Specification.
This material is provided on an “As Is” and “As Available” basis.
The Specification is at all times subject to change and revision without notice.
It is your sole responsibility to evaluate the usefulness, accuracy, and completeness of the Specification as it relates to you.
IMS Global would appreciate receiving your comments and suggestions.
Please contact IMS Global through our website at http://www.imsglobal.org.
Please refer to Document Name: IMS Global APIP Best Practice and Implementation Guide v1.0 Candidate Final Release v1.0
Date: 26 March 2012