At a high level, these standards rest on the distinction between intentional and unintentional discrimination, referred to in U.S. law as disparate treatment and disparate impact, respectively. Intentional discrimination carries the highest legal penalties and is obviously something every organization that adopts AI should avoid. One common safeguard is to ensure that the AI is not exposed to data that directly indicates a protected class, such as race or gender. More broadly, people should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
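As a minimal sketch of that safeguard, the snippet below strips fields that directly indicate a protected class from records before they reach a model. The field names and the example record are illustrative assumptions, not a definitive schema; note that removing direct fields does not remove proxies for those fields, which is addressed separately below.

```python
# Illustrative sketch: remove direct protected-class fields from training
# records before model ingestion. Field names here are assumptions.
PROTECTED_FIELDS = {"race", "gender", "age", "religion", "national_origin"}

def scrub_record(record: dict) -> dict:
    """Return a copy of the record with protected-class fields removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

# Hypothetical applicant record; "zip" survives even though it may act
# as a proxy for a protected class, which this step alone cannot fix.
applicant = {"income": 52000, "credit_score": 710, "gender": "F", "zip": "60601"}
print(scrub_record(applicant))
```

A pipeline step like this only blocks direct exposure; proxy variables such as postal code can still encode demographic information and require the disparity testing discussed next.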
Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment of, or impacts disfavoring, people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions; gender identity; intersex status; and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of system design, the use of representative data and protection against proxies for demographic features, accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight.
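Pre-deployment disparity testing can be sketched with the "four-fifths rule" heuristic from EEOC guidance: the selection rate for any group should be at least 80% of the rate for the most-favored group. The group labels and counts below are illustrative assumptions, and this ratio is a screening heuristic, not a legal determination.

```python
# Sketch of a pre-deployment disparity test using the four-fifths rule
# heuristic. Group names and outcome counts are illustrative assumptions.

def disparate_impact_ratios(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total); returns each group's
    selection rate divided by the highest group's selection rate."""
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
ratios = disparate_impact_ratios(outcomes)
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} [{flag}]")
# group_a: ratio=1.00 [ok]
# group_b: ratio=0.60 [REVIEW]
```

In practice a breach of the threshold would trigger the mitigation and documentation steps described above, and the test would be rerun on an ongoing basis after deployment.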
To confirm these protections, independent evaluation and plain-language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible. Bias and discrimination in AI data and algorithms can have serious consequences for individuals and society, such as unfair treatment, exclusion, or violations of human rights. Laws such as the Equal Credit Opportunity Act, the Civil Rights Act, and the Fair Housing Act, along with guidance from the Equal Employment Opportunity Commission, can help mitigate many of the discriminatory challenges posed by AI. The Federal Trade Commission has recently signaled growing attention to fairness in AI, and one of its five commissioners has publicly stated that it should expand its oversight of discriminatory AI.
Unfairness and discrimination are not limited to impacts on legally protected groups; organizations must also consider whether the use of AI can produce unfair outcomes for other groups. To control for discrimination, your organization's policies should establish tolerance margins for the selected key performance metrics, along with procedures for escalating and investigating deviations. You must consider the legal, ethical, and social implications of your AI system, such as regulatory compliance, alignment with values and principles, and the potential to cause harm or discrimination.
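Such policy tolerance margins can be encoded directly so that deviations are flagged for escalation automatically. The metric names and threshold bands below are illustrative assumptions, not regulatory values; each organization would set its own.

```python
# Hypothetical sketch: encode tolerance bands for key fairness metrics
# and flag out-of-band values for escalation. Names and thresholds are
# illustrative assumptions, not regulatory requirements.
TOLERANCES = {
    "disparate_impact_ratio": (0.80, 1.25),  # acceptable (min, max) band
    "false_positive_rate_gap": (0.00, 0.05),
}

def check_metrics(observed: dict) -> list:
    """Return the names of metrics that fall outside their tolerance band."""
    breaches = []
    for name, value in observed.items():
        low, high = TOLERANCES[name]
        if not (low <= value <= high):
            breaches.append(name)
    return breaches

observed = {"disparate_impact_ratio": 0.72, "false_positive_rate_gap": 0.03}
for metric in check_metrics(observed):
    print(f"ESCALATE: {metric} outside tolerance")
# ESCALATE: disparate_impact_ratio outside tolerance
```

Keeping the bands in a reviewable configuration rather than buried in model code makes the escalation procedure auditable, which supports the organizational oversight called for above.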