The White House has proposed an Artificial Intelligence (AI) Bill of Rights in response to the growing use of automation across industries. The proposal is non-binding and outlines practices developers should follow so that the technology doesn't put humans at a disadvantage.

After a year-long consultation with more than two dozen different departments, the White House has released a blueprint for an AI Bill of Rights. The document is intended as a call to action for the U.S. government to safeguard digital and civil rights in an increasingly AI-fueled world, officials said.

The "Blueprint for an AI Bill of Rights" incorporates feedback from civil society groups, technologists, industry researchers, and tech companies including Palantir and Microsoft.

The bill seeks to protect Americans from unsafe or ineffective systems, discrimination by algorithms, and abusive data collection. It also calls for transparency when it comes to AI-based decision-making, and for Americans to be given the right to know if they're being evaluated by an AI system.

Recent cases in which healthcare algorithms deprioritized the needs of Black patients, and facial recognition systems showed bias against people with darker skin tones, have made it clear that more needs to be done to address these issues.

What exactly is the AI Bill of Rights?

It is worth noting that the White House's "AI Bill of Rights" is not an executive order issued by the president, nor is it law. It is a set of recommendations for lawmakers to use as a guide for any AI-related legislation they may draft.

The suggestions are meant to protect American citizens from some of the negative traits AI models have been known to exhibit.

Protects you from racial and gender discrimination

Over the years, there have been several examples of AI exhibiting racist, sexist, and other discriminatory tendencies. For instance, a study by researchers at Johns Hopkins University and the Georgia Institute of Technology found that AI could be biased against people of color. The system in question was built on a neural network model trained on data scraped from sources such as the internet.

As a result, the models were more likely to assign the word "homemaker" to Black and Latina women, and used the word "criminal" to describe Black men 9% more often than White men.

Researchers say this is a problem because the systems are given no information that would justify such judgments; the associations come from the biased data they were trained on. It is an example of how prejudice and racial bias can end up built into machines, even when no one intends it.
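The kind of association the study describes can be quantified by measuring how close words sit in a model's embedding space. The sketch below is purely illustrative: the vectors are toy placeholders standing in for a real trained model, and the similarity check is a simplified version of the association tests researchers use, not the study's actual methodology.

```python
# Minimal sketch of how biased word associations can be measured in a
# trained embedding model. The vectors below are toy placeholders; in
# practice they would come from a model trained on web-scale text,
# which is where the bias originates.

import numpy as np

# Hypothetical 4-dimensional embeddings standing in for a real model's output.
toy_embeddings = {
    "homemaker": np.array([0.9, 0.1, 0.2, 0.0]),
    "engineer":  np.array([0.1, 0.9, 0.1, 0.1]),
    "she":       np.array([0.8, 0.2, 0.3, 0.1]),
    "he":        np.array([0.2, 0.8, 0.2, 0.1]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; higher means more associated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association_gap(word: str, group_a: str, group_b: str) -> float:
    """How much more strongly `word` associates with group_a than group_b."""
    e = toy_embeddings
    return cosine_similarity(e[word], e[group_a]) - cosine_similarity(e[word], e[group_b])

if __name__ == "__main__":
    for occupation in ("homemaker", "engineer"):
        gap = association_gap(occupation, "she", "he")
        print(f"{occupation!r} leans {'she' if gap > 0 else 'he'} by {abs(gap):.3f}")
```

With a real model loaded in place of the toy vectors, the same measurement surfaces the skewed associations the study reported.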

Protects your data and privacy

The blueprint's data protection section centers on limiting the data companies can collect, suggesting that they gather only the minimum amount needed to perform necessary functions. It also raises concerns about surveillance by home assistant devices.

Under the blueprint's notice-and-explanation section, companies should notify individuals when an AI system is in use and how it may affect them, and should also explain how and why a given decision was made.

In other words, if artificial intelligence is going to make decisions that could impact someone's life, it's only fair that the person understands how those decisions are reached.
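One way to picture what such notice and explanation could look like in practice is a decision function that returns, alongside its verdict, a plain-language summary of the factors that drove it. The example below is a hypothetical sketch; the scoring rule, weights, and feature names are invented for illustration and are not drawn from the blueprint.

```python
# A minimal sketch of the "notice and explanation" idea: an automated
# decision is returned together with a plain-language summary of the
# factors that drove it. The scoring rule and feature names here are
# hypothetical, purely for illustration.

from dataclasses import dataclass

# Hypothetical weights for a simple linear eligibility score.
WEIGHTS = {"income": 0.5, "years_employed": 0.3, "existing_debt": -0.4}
THRESHOLD = 1.0

@dataclass
class Decision:
    approved: bool
    notice: str              # tells the person an automated system was used
    explanation: list[str]   # the contributing factors, in plain language

def decide(applicant: dict[str, float]) -> Decision:
    # Compute each feature's contribution to the overall score.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    # Rank factors by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    explanation = [f"{name} contributed {value:+.2f} to your score"
                   for name, value in ranked]
    notice = "This decision was made by an automated system."
    return Decision(approved=score >= THRESHOLD, notice=notice, explanation=explanation)

if __name__ == "__main__":
    result = decide({"income": 2.0, "years_employed": 1.0, "existing_debt": 0.5})
    print(result.notice)
    print("Approved:", result.approved)
    for line in result.explanation:
        print("-", line)
```

The point of the sketch is simply that the explanation is produced at decision time, from the same inputs the system actually used, rather than reconstructed afterwards.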
