FTC Shares Practice Tips to Prevent Racial Discrimination in the Use of AI
The FTC recently published a blog post on the use of artificial intelligence (AI) and the risk of discrimination against racial or other protected classes when models are trained on data that reflects existing racial bias.
Among other things, the FTC encourages the following:
- When building an AI model, look for ways to improve your data set; avoid data that underrepresents certain populations, which could result in unfair or inequitable treatment of legally protected groups;
- Periodically test your algorithm to make sure that it doesn’t discriminate on the basis of race, gender, or membership in another protected class;
- Embrace transparency and independence, conduct and publish the results of independent audits, and open your data or source code to outside inspection;
- Avoid making false or deceptive statements about the results your algorithm can deliver;
- When explaining how you use your data, make truthful and non-deceptive statements;
- Deploy models that do more good than harm (under the FTC Act, a practice is unfair if it causes more harm than good); and
- Hold yourself accountable for your algorithm’s performance.
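The "periodically test your algorithm" recommendation can be illustrated with a minimal disparate-impact check. The sketch below compares favorable-outcome rates across groups against the common "four-fifths" rule of thumb; the group names, decision data, and 0.8 threshold are illustrative assumptions, not part of the FTC guidance:

```python
# Illustrative periodic bias check for a binary classifier's decisions.
# All names, data, and the four-fifths threshold are hypothetical.

def selection_rates(outcomes):
    """Positive-outcome rate per group.

    outcomes: dict mapping group name -> list of 0/1 model decisions.
    """
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest-rate group's."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def flag_groups(outcomes, threshold=0.8):
    """Return groups whose ratio falls below the threshold.

    threshold=0.8 follows the common "four-fifths" rule of thumb.
    """
    return [g for g, r in disparate_impact_ratios(outcomes).items()
            if r < threshold]

# Hypothetical model decisions (1 = favorable outcome) by group:
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],  # 8/10 selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 3/10 selected
}
print(flag_groups(decisions))  # -> ['group_b']
```

A check like this could run on a schedule against recent production decisions, with flagged groups triggering a deeper review of the model and its training data.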