Stop Algorithmic Bias: Turn Two Frameworks into Fair and Just Federal Privacy Law

Nearly a year after the Facebook and Cambridge Analytica scandal, the Government Accountability Office released a report on the lack of a comprehensive federal privacy law governing private companies’ collection, use or sale of user data. Its big conclusion was that...yeah, it’d be a good idea to have a federal law that protects people. Who would’ve thought?

Luckily, a framework already exists as a road map to creating this desperately needed federal law. It’s known as the Fair Information Practices, or FIPs for short. FIPs includes eight thorough recommendations for creating comprehensive privacy protections and digital rights in the United States, including establishing fair and just data practices and accountability to counter bias and discrimination. It also goes after inequality with a provision to prohibit “take it or leave it” terms of service — terms that require individuals with less means to waive their privacy rights in order to get a quality good or service for less or for free. The Electronic Privacy Information Center, known as EPIC, along with 15 other consumer, privacy and social justice organizations, has signed on to the FIPs framework.

And in February, EPIC, along with 40 civil rights, civil liberties and consumer groups, wrote a letter to Congress specifically addressing data-driven discrimination. The letter outlined the critical importance of making any privacy legislation consistent with the “Civil Rights Principles for the Era of Big Data.” These five principles are: ending high-tech profiling, ensuring fairness in automated decisions, preserving constitutional principles, enhancing individual control of personal information, and protecting people from inaccurate data.

You can help demand the end of algorithmic bias and discrimination. Call your members of Congress to demand that FIPs and the five Civil Rights Principles be included in the creation of future U.S. privacy law.

Learn more about this issue by going to EPIC’s website and searching for “Algorithmic Transparency” under their Privacy Campaigns. You can also go to The Leadership Conference on Civil and Human Rights site and visit the “Media & Tech” page under the “Our Work” tab. On social media, check out the hashtag #DataDiscrimination and search for the term or hashtag “algorithmic bias.”

So, if making our technological world safer, more transparent and discrimination-free is important to you, be sure to hit the share buttons to spread the word about Turning Two Frameworks for Fair and Just Data Practices into Federal Privacy Law on social media, so that others in your network can spread the word too.

Tell Congress to Create a Federal Privacy Law using:

Learn more:


AI can be sexist and racist — it’s time to make it fair (Nature)

Discriminating algorithms: 5 times AI showed prejudice (NewScientist)

Yes, artificial intelligence can be racist (Vox)

Even Kids Can Understand That Algorithms Can Be Biased (Scientific American, Roots of Unity Blog)

Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It (Time)

A.I. Could Worsen Health Disparities (NY Times)

What Women Know About the Internet (NY Times)


Posted April 19, 2018; Written by Best of the Left Communications Director, Amanda Hoffman

Hear the segment in the context of Best of the Left Edition #1266: Writing human bias into the code that runs our lives (Algorithms)

Sign up for activism updates