
New AI Bill of Rights Seeks to Protect Society

The administration of US President Joe Biden has released an Artificial Intelligence (AI) Bill of Rights that aims to guide society in protecting people from potential threats posed by AI and to ensure the technology is used in ways that reinforce positive values.

The White House Office of Science and Technology Policy released the AI Bill of Rights yesterday (October 4), stating that technology, data, and automated systems could be used in ways that threaten the rights of people.

“Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services,” states the preamble to the AI Bill of Rights. “These issues are well documented.

“In the United States and around the world, systems intended to assist in patient care have proven to be unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and replicate existing undesirable inequalities or incorporate new harmful biases and discrimination. The unchecked collection of data on social media has been used to threaten people’s opportunities, invade their privacy, or ubiquitously track their activity – often without their knowledge or consent.”

It goes on to say that these outcomes are not inevitable. “Automated systems have brought extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict the paths of storms, to algorithms that can identify disease in patients. These tools are now guiding important decisions across industries, while data is helping to revolutionize global industries.”

The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.

The five principles of the AI Bill of Rights are:

Safe and effective systems

You should be protected from unsafe or ineffective systems.

Automated systems should be developed in consultation with diverse communities, stakeholders, and subject matter experts to identify potential system concerns, risks, and impacts.

Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrates they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond intended use, and adherence to domain-specific standards.

The results of these safeguards should include the ability not to deploy the system or to take a system out of service. Automated systems must not be designed with the intent or reasonably foreseeable possibility of endangering your safety or that of your community.

They should be designed to proactively protect you from harm resulting from unintended, but foreseeable, use or impact of automated systems. You must be protected against inappropriate or irrelevant use of data in the design, development and deployment of automated systems, and against further harm from its reuse.

Independent assessment and reporting confirming that the system is safe and effective, including reporting on steps taken to mitigate potential harm, should be carried out and the results made public where possible.

Protections against algorithmic discrimination

You should not be discriminated against by algorithms, and systems should be used and designed fairly.

Algorithmic discrimination occurs when automated systems contribute to unwarranted different treatment or adversely impact people because of their race, color, ethnicity, gender (including pregnancy, childbirth and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.

Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems must take proactive and ongoing steps to protect individuals and communities from algorithmic discrimination and to use and design systems fairly.

This protection should include proactive equity assessments as part of system design, the use of representative data, protection against proxies for demographic characteristics, accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight.

Independent assessment and plain language reporting in the form of an Algorithmic Impact Assessment, including disparity test results and mitigation information, should be conducted and made public where possible to confirm these protections.
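The Bill of Rights does not prescribe how disparity testing should be carried out, but as a rough illustration, one very simple pre-deployment check compares selection rates across demographic groups and flags large gaps (the "four-fifths rule" heuristic long used in US employment contexts). The sketch below is not taken from the document itself; the group labels and outcome data are hypothetical.

```python
# Illustrative sketch only: a minimal disparity check comparing selection
# rates across demographic groups. Not part of the AI Bill of Rights.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs, e.g. ("group_a", True)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest-rate group.
    Ratios below roughly 0.8 are commonly flagged for further review."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes from an automated hiring screen.
    outcomes = ([("group_a", True)] * 45 + [("group_a", False)] * 55
                + [("group_b", True)] * 30 + [("group_b", False)] * 70)
    rates = selection_rates(outcomes)
    for group, ratio in disparate_impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f} ({flag})")
```

A real algorithmic impact assessment would go well beyond a single ratio, but a check of this shape gives a sense of what "disparity test results" in such a report might summarize.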

Data Privacy

You should be protected from abusive data practices through built-in safeguards, and you should have agency over how data about you is used.

You should be protected against privacy breaches through design choices that ensure these protections are included by default, including ensuring that data collection is in line with reasonable expectations and that only data strictly necessary for the specific context are collected.

Designers, developers and deployers of automated systems must seek your permission and respect your decisions regarding the collection, use, access, transfer and deletion of your data appropriately and to the maximum extent possible; where this is not possible, alternative privacy-by-design safeguards should be used.

Systems should not use user experience and design choices that obscure user choice or burden users with defaults that are privacy invasive.

Consent should only be used to justify data collection where it can be given appropriately and meaningfully. Any request for consent should be brief, understandable in plain language, and give you control over the collection of data and the specific context of its use; current hard-to-understand notice-and-choice practices for broad uses of data would need to change.

Stronger protections and restrictions for data and inferences related to sensitive areas, including health, work, education, criminal justice, and finance, and for data about young people, should put you first. In sensitive areas, your data and associated inferences should only be used for necessary functions, and you should be protected by ethical review and prohibitions on use.

You and your communities should be free from unchecked surveillance; surveillance technologies should be subject to heightened scrutiny, including at least a pre-deployment assessment of their potential harms and limits on their scope, in order to protect privacy and civil liberties.

Continuous surveillance and monitoring should not be used in education, work, housing, or other settings where the use of such surveillance technologies is likely to limit rights, opportunities, or access.

Where possible, you should have access to reports confirming that your data decisions have been complied with and providing an assessment of the potential impact of surveillance technologies on your rights, opportunities or access.

Notice and explanation

You should know when an automated system is being used and understand how and why it contributes to outcomes that affect you.

Designers, developers, and deployers of automated systems should provide generally accessible plain-language documentation, including clear descriptions of the overall system operation and the role automation plays, notice that such systems are in use, the person or organization responsible for the system, and clear, timely, and accessible explanations of outcomes.

This notice should be kept current and those affected by the system should be notified of material changes to use cases or key features.

You must know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the only input determining the outcome.

Automated systems should provide explanations that are technically valid, meaningful, and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context.

Reports including summary information about these automated systems in plain language, and assessments of the clarity and quality of the notice and explanations, should be made public where possible.

Human alternatives, consideration, and fallback

You should be able to opt out, if necessary, and have access to someone who can quickly investigate and resolve any issues you are having.

You should be able to opt out of automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context, with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts.

In some cases, a human or other alternative may be required by law. You should have access to timely human review and recourse through a fallback and escalation process if an automated system fails, produces an error, or if you wish to appeal or dispute its impacts on you.

Human consideration and fallback must be accessible, fair, effective, maintained, accompanied by appropriate operator training, and must not impose an unreasonable burden on the public.

Automated systems intended for use in sensitive areas, including but not limited to criminal justice, employment, education, and healthcare, must additionally be tailored to their purpose, provide meaningful access for oversight, include training for anyone interacting with the system, and incorporate human consideration for adverse or high-risk decisions.

Reports including a description of these human governance processes and an assessment of their timeliness, accessibility, results and effectiveness should be made public to the extent possible.