1. System Profile
Complete the system information below.
2. System lifecycle stage
Indicate the dates of planned releases for the system.
3. System description
Briefly explain, in plain language, what you’re building. This will give reviewers the necessary context to understand the system and the environment in which it operates.
4. Supplementary Links
If you have links to any supplementary information on the system, such as demonstrations, functional specifications, slide decks, or system architecture diagrams, please include them below.
5. System Purpose
Briefly describe the purpose of the system and its features, focusing on how the system will address the needs of the people who use it. Explain how the AI technology contributes to achieving these objectives.
6. System Features
Focusing on the whole system, briefly describe the system features or high-level feature areas that already exist and those planned for the upcoming release.
7. Relation to Other Systems/Products
Briefly describe how this system relates to other systems or products. For example, describe whether the system includes models from other systems.
8. Geographic areas and languages
Describe the geographic areas where the system will or might be deployed, to identify special considerations for language, laws, and culture.
9. Deployment mode
Document each way that this system might be deployed.
10. Intended Uses
Intended uses are the uses of the system your team is designing and testing for. An intended use is a description of who will use the system, for what task or purpose, and where they are when using the system. Intended uses are not the same as system features: any number of features could be part of a single intended use. Fill in the table with a description of the system’s intended use(s).
11. Stakeholders, potential benefits, and potential harms
Identify the system’s stakeholders for each intended use. Then, for each stakeholder, document the potential benefits and potential harms.
Stakeholders for Goal-driven requirements from the Responsible AI Standard
Certain Goals in the Responsible AI Standard require you to identify specific types of stakeholders. You may have included them in the stakeholder table above. For each of the Goals below that applies to the system, identify the specific stakeholder(s) for this intended use. If a Goal does not apply to the system, enter “N/A” in the table.
12. Human oversight and control
This Goal applies to all AI systems. Complete the table below.
13. System Intelligibility for Decision Making
This Goal applies to AI systems when the intended use of the generated outputs is to inform decision making by or about people. If this Goal applies to the system, complete the table below.
14. Communication to stakeholders
This Goal applies to all AI systems. Complete the table below.
15. Disclosure of AI interaction
Who will use or be exposed to the system?
Fairness considerations: For each Fairness Goal that applies to the system:
1) identify the relevant stakeholder(s) (e.g., system user, person impacted by the system);
2) identify any demographic groups, including marginalized groups, that may require fairness considerations; and
3) prioritize these groups for fairness consideration and explain how the fairness consideration applies.
If the Fairness Goal does not apply to the system, enter “N/A” in the first column.
16. Quality of service
This Goal applies to AI systems when system users or people impacted by the system with different demographic characteristics might experience differences in quality of service that can be remedied by building the system differently. If this Goal applies to the system, complete the table below describing the appropriate stakeholders for this intended use.
17. Allocation of resources and opportunities
This Goal applies to AI systems that generate outputs that directly affect the allocation of resources or opportunities relating to finance, education, employment, healthcare, housing, insurance, or social welfare. If this Goal applies to the system, complete the table below describing the appropriate stakeholders for this intended use.
18. Minimization of stereotyping, demeaning, and erasing outputs
This Goal applies to AI systems when system outputs include descriptions, depictions, or other representations of people, cultures, or society. If this Goal applies to the system, complete the table below describing the appropriate stakeholders for this intended use.
19. Technology readiness assessment
Select the description that best represents the system regarding this intended use.
20. Task complexity
Mark the description that best represents the system regarding this intended use.
21. Role of humans
Mark the description that best represents the system regarding this intended use.
22. Deployment environment complexity
Mark the description that best represents the system regarding this intended use.
23. Restricted Uses
If any uses of the system are subject to a legal or internal policy restriction, list them here, and follow the requirements for those uses.
24. Unsupported uses
List any uses for which the system was not designed or evaluated, or that should be avoided.
25. Known limitations
Describe the known limitations of the system. These could include scenarios where the system will not perform well, environmental factors to consider, or other operating factors to be aware of.
26. Potential impact of failure on stakeholders
Define predictable failures, including false positive and false negative results for the system as a whole, and describe how they would impact stakeholders for each intended use.
27. Potential impact of misuse on stakeholders
Define system misuse, whether intentional or unintentional, and how misuse could negatively impact each stakeholder. Identify and document whether the consequences of misuse differ for marginalized groups. When serious impacts of misuse are identified, note them in the summary of impact as a potential harm.
28. Sensitive Uses
Consider whether the use or misuse of the system could meet any of the Microsoft Sensitive Use triggers below.
29. Data requirements
Define and document data requirements with respect to the system’s intended uses, stakeholders, and the geographic areas where the system will be deployed.
30. Existing data sets
If you plan to use existing data sets to train the system, assess the quantity and suitability of the available data sets in relation to the data requirements defined above. If you do not plan to use predefined data sets, enter “N/A” in the response area.
31. Potential harms and preliminary mitigations
Gather in this table the potential harms you identified earlier in the Impact Assessment (check the stakeholder table, fairness considerations, adverse impact section, and any other place where you may have described potential harms). Use the mitigation prompts in the Impact Assessment Guide to understand whether the Responsible AI Standard can mitigate some of the harms you identified. Discuss the harms that remain unmitigated with your team and potential reviewers.
Goal Applicability
To assess which Goals apply to this system, use the tables below. When a Goal applies only to specific types of AI systems, indicate whether it applies to the system being evaluated in this Impact Assessment by entering “Yes” or “No.” If you indicate that a Goal does not apply to the system, explain why in the response area. If a Goal applies to the system, you must complete the requirements associated with that Goal while developing the system.
32. Accountability Goals
33. Transparency Goals
Write in the blank fields below.
34. If you selected “No” for any of the Transparency Goals, explain why the Goal does not apply to the system.
35. Fairness Goals
Write in the blank fields below.
36. If you selected “No” for any of the Fairness Goals, explain below why the Goal does not apply to the system.
37. Reliability & Safety Goals
Write in the blank fields below.
38. Privacy & Security Goals
39. Inclusiveness Goal
40. Write your question here
Write in the blank fields below.