The US Government Just Released a Bill of Rights for Artificial Intelligence
The United States government has released an Artificial Intelligence Bill of Rights outlining five common-sense protections for the use of new technologies.
In an attempt to ward off problems arising from unaccountable algorithms and abusive artificial intelligence practices, the White House recently unveiled a new blueprint for an AI Bill of Rights. The new guidelines are not legally binding in any way but are intended to urge tech companies to deploy artificial intelligence in a responsible manner. The bill of rights is the latest addition to ongoing efforts to create a set of rules to ensure the ethical and transparent use of AI technologies.
As stated in the press release from the White House Office of Science and Technology Policy (OSTP), artificial intelligence has driven great innovations, such as early cancer detection and more efficient farming techniques. But as science fiction has prophetically portrayed, there is a darker side to AI, including people being surveilled or ranked without their knowledge or permission.
The AI Bill of Rights includes five “common sense protections” for all Americans. First, people should be protected from unsafe or ineffective AI systems. Second, the bill states that Americans should not face discrimination by algorithms, and systems should be designed and used in an equitable way.
The third protection is that people should have the right to data privacy via built-in safeguards. It also says that individuals should have control over how data about them is used. The fourth provision states that Americans have the right to know when artificial intelligence is being used and how it may impact them.
The final common-sense protection is titled “Human Alternatives, Consideration, and Fallback.” It states that people should be able to opt out of AI systems where appropriate, and that everyone should have access to a real human who can quickly consider and fix any problems they encounter.
The OSTP press release mentions several real-world consequences of artificial intelligence gone wrong. Older Americans were denied health benefits when an algorithm changed, and a college student was wrongly accused of cheating by AI-enabled video surveillance. Black Americans were blocked from kidney transplants when an algorithm assumed they were at lower risk of kidney disease, and men have been arrested after facial recognition technology misidentified them.
Perhaps most troublesome is how algorithms across multiple sectors are plagued with bias and discrimination. While these issues may not be an intentional part of their design, developers often fail to consider the real-world consequences of the products they generate. Additionally, the people who have to live with the results of artificial intelligence and automated systems rarely get to provide input on their design.
The Biden Administration hopes the AI Bill of Rights will help Americans hold big technology companies accountable for their products while protecting everyone’s civil rights. The OSTP spent over a year developing the framework for the artificial intelligence protections. They held discussions about these technologies and collected input from workers, students, software experts, community activists, CEOs, public servants, and members of the international community.
This wide representation of people expressed a shared eagerness for the government to take the lead in providing guidelines that protect the public from the negative side of artificial intelligence. However, civil rights groups are urging the Biden Administration to take the AI Bill of Rights a step further.
Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, a Washington, DC-based nonprofit, said, “Today’s agency actions are valuable, but they would be even more effective if they were built on a foundation set up by a comprehensive federal privacy law.”