Proxy Problems—Solving for Discrimination in Algorithms

Brownstein Client Alert, Feb. 2, 2022

With regulators increasingly focused on algorithmic discrimination, human intervention in predictive model programming and artificial intelligence (AI) will be more important than ever. Although the list of positive uses of AI continues to expand, algorithms can also produce unintentionally discriminatory results, known as “disparate impact.” Algorithmic discrimination can occur when a computerized model makes a decision or prediction that has the unintended consequence of denying opportunities or benefits more frequently to members of a protected class than to an unprotected control group. A discriminatory factor can infiltrate an algorithm in a number of ways, but one of the more common is for the algorithm to include a proxy for a protected class characteristic because seemingly unrelated data suggest the proxy is predictive of, or correlated with, a legitimate target outcome.

Before exploring proxy discrimination, it’s important to note that AI and algorithms enhance our daily lives in ways that benefit society. For example, algorithms facilitate expedited credit assessments, allowing consumers to be approved for a loan in a matter of minutes. Cryptographic algorithms improve a consumer’s experience by enabling digital signatures. Indeed, if you’ve ever used GPS while driving, you’ve benefited from an algorithm: routing algorithms determine a user’s location and map out distance and travel time.

Proxy discrimination occurs when a facially neutral trait is used as a stand-in for a prohibited trait. Proxy discrimination has sometimes been used intentionally to evade rules prohibiting discrimination in lending, housing or employment, such as the Fair Housing Act (FHA), the Equal Credit Opportunity Act (ECOA) and the Equal Employment Opportunity Act (EEOA). A widespread example of proxy discrimination is “redlining” in the financial sector. During the mid-1900s, instead of overtly discriminating on the basis of race in their underwriting and pricing decisions, some financial institutions used ZIP codes and neighborhood boundaries in place of race to avoid lending to neighborhoods that were predominantly African American. There, proxies were used in place of the prohibited characteristic to achieve a discriminatory purpose. But proxy discrimination need not be intentional.

When a proxy that correlates with membership in a protected class is predictive of an algorithm’s legitimate goal, using that proxy can appear “rational.” For example, higher SAT scores may correlate with better repayment of student loans because the scores are designed to predict graduation rates, which are highly correlated with loan repayment. But at the same time, there are racial disparities in SAT scores. Thus, underwriting algorithms that rely on a student’s SAT scores to approve or price loans may inadvertently reflect that racial disparity. Although unintentional—and seemingly rational—this disparate outcome could become the basis for an ECOA discrimination claim.

Once a disparate impact exists, the burden shifts to the algorithm user to demonstrate that its practice has a legitimate and nondiscriminatory purpose that is rooted in business necessity. Even if a specific factor in the algorithm meets the legitimate business purpose standard, it may still violate the law if the algorithm could achieve its legitimate aims with a less discriminatory alternative. For example, instead of using SAT scores to evaluate likelihood to repay a student loan, would grade point average (GPA) work just as well with less discriminatory effect? The humans who oversee AI should be asking these questions and working with statisticians and economists when necessary.
 

Steps to Evaluate Your AI’s Algorithm

So what is a human to do? First, an organization’s compliance staff should know what factors its artificial intelligence or algorithms are using in decisioning models. Ask for a list of factors and the decisions that will be made based on each of them. Second, compliance staff must know which factors are prohibited in making certain decisions by checking the relevant statutes and regulations, e.g., the ECOA, the EEOA and the Genetic Information Nondiscrimination Act (GINA). Third, examine whether any factors in the algorithm are directly prohibited by applicable law, or are logically connected to prohibited characteristics and thus potential “proxies.”
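One way to operationalize the first and third steps is a simple inventory check. The sketch below uses hypothetical factor names and watch lists; the lists themselves would come from counsel’s reading of the applicable statutes.

```python
# Minimal sketch (hypothetical factor names and lists): flag model inputs that are
# expressly prohibited or that sit on a watch list of possible proxies.
PROHIBITED = {"race", "gender", "age", "genetic_test_result"}
PROXY_WATCHLIST = {"zip_code", "first_name", "years_since_graduation",
                   "family_medical_history"}

model_factors = ["zip_code", "income", "sat_score", "first_name", "loan_amount"]

for factor in model_factors:
    if factor in PROHIBITED:
        print(f"{factor}: prohibited -- remove and document")
    elif factor in PROXY_WATCHLIST:
        print(f"{factor}: possible proxy -- route to compliance for review")
    else:
        print(f"{factor}: no flag -- record the business justification")
```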

Determining whether your decisioning factors could be proxies for discrimination requires human intervention and some sleuthing. Gender offers a useful illustration: it is illegal under fair lending laws to use gender, or any proxy for gender, in allocating credit. Thus, look for factors that approximate gender, such as height, weight, first name, Netflix viewing habits and purchasing habits (e.g., what scent of shampoo you buy). In many cases, publicly available statistics can confirm whether a factor is highly correlated with a protected characteristic.
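For instance, a compliance team could run a quick correlation check like the sketch below, which uses simulated data and an illustrative factor (height); a real review would rely on the organization’s own records or published statistics.

```python
# Minimal sketch (simulated data): measure how strongly a facially neutral factor
# (height) tracks a protected characteristic (gender). With a 0/1 group indicator,
# the Pearson correlation is the point-biserial correlation.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

gender = rng.integers(0, 2, n)                   # 0/1 group indicator (illustrative)
height_cm = rng.normal(162 + 14 * gender, 7, n)  # average height differs by group

r = np.corrcoef(height_cm, gender)[0, 1]
print(f"correlation between height and gender: {r:.2f}")
# A strong correlation suggests the factor could stand in for the protected
# characteristic and deserves the same scrutiny.
```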

Another example comes from the health care industry. While health insurers are legally prohibited from using genetic tests under GINA, companies would be well advised to ensure their algorithms are not using proxies such as family medical history or visits to specific websites (e.g., a disease support group). Age offers yet another example: a facially neutral data point such as years since graduation is a clear proxy for age.

Once a proxy is suspected, the next step is to determine its impact on decisioning, its utility in the model and whether it causes or contributes to a discriminatory impact. This can be done through statistical methods and file reviews. Once the factor is identified and its impact is quantified, it is time for human judgment to take over and decide if the factor is truly necessary to achieve a legitimate goal, or if a less discriminatory substitute can have the same utility.
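One common first-pass screen for quantifying discriminatory impact is the adverse impact ratio, sometimes discussed alongside the “four-fifths rule” from employment analysis. The sketch below shows the arithmetic with hypothetical approval counts; the 0.80 threshold is a rule of thumb, not a legal bright line.

```python
# Minimal sketch (hypothetical counts): adverse impact ratio as a first-pass
# disparate impact screen. Ratios below roughly 0.80 are often treated as a
# signal for closer statistical and legal review.
approved = {"protected_class": 310, "control_group": 540}
applied = {"protected_class": 600, "control_group": 800}

rates = {g: approved[g] / applied[g] for g in approved}
ratio = rates["protected_class"] / rates["control_group"]

print(f"approval rates: {rates}")
print(f"adverse impact ratio: {ratio:.2f}")  # 0.77 here -- below the 0.80 rule of thumb
```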

The takeaway is this: any data point (e.g., deodorant type) can be taken out of context by an algorithm and lead to proxy discrimination. And this discrimination, regardless of intent, can lead to lawsuits and regulatory risk. Systematic and robust human monitoring is one step in the right direction toward avoiding proxy discrimination and the accompanying liability.

Please reach out to Jason Downs or Sarah Auchterlonie with any questions or concerns. We can help you work with statisticians and economists in the event you think you may have a proxy problem.



This document is intended to provide you with general information regarding algorithmic discrimination. The contents of this document are not intended to provide specific legal advice. If you have any questions about the contents of this document or if you need legal advice as to an issue, please contact the attorneys listed or your regular Brownstein Hyatt Farber Schreck, LLP attorney. This communication may be considered advertising in some jurisdictions. The information in this article is accurate as of the publication date. Because the law in this area is changing rapidly, and insights are not automatically updated, continued accuracy cannot be guaranteed.
