
NCRC Response to Advance Notice of Proposed Rulemaking on the Trade Regulation Rule on Commercial Surveillance and Data Security, “Commercial Surveillance, R111004.”


Office of the Secretary
Federal Trade Commission
600 Pennsylvania Avenue, NW
Suite CC-5610 (Annex B)
Washington, DC 20580 

Re: Response to Advance Notice of Proposed Rulemaking on the Trade Regulation Rule on Commercial Surveillance and Data Security, “Commercial Surveillance, R111004.”

Dear Secretary:

The National Community Reinvestment Coalition (NCRC) welcomes the opportunity to comment on the Federal Trade Commission’s (FTC) advance notice of proposed rulemaking on the prevalence of commercial surveillance and data security (“ANPR”). The FTC plays an essential role in safeguarding privacy rights and protecting consumers from unfair and discriminatory acts and practices, given its unique expertise on consumer protection and antitrust issues.

The National Community Reinvestment Coalition and its grassroots member organizations create opportunities for people to build wealth. We work with community leaders, policymakers, and financial institutions to champion fairness and end discrimination in lending, housing, and business. Our 600 members include community reinvestment organizations; community development corporations; local and state government agencies; faith-based institutions; community organizing and civil rights groups; minority and women-owned business associations; and local and social service providers nationwide.

Our comments respond to the ANPR’s questions related to Automated Decision-Making Systems (Question 56), Discrimination Based Upon Protected Categories (Questions 65-72), and selected questions related to Consumer Consent and Notice, Transparency, and Disclosure (Questions 77 and 81) and Competition (Question 27).

Question 56: To what extent, if at all, should new rules require companies to take specific steps to prevent algorithmic errors? If so, which steps? To what extent, if at all, should the Commission require firms to evaluate and certify that their reliance on automated decision-making meets clear standards concerning accuracy, validity, reliability, or error? If so, how? Who should set those standards, the FTC or a third-party entity? Or should new rules require businesses to evaluate and certify that the accuracy, validity, or reliability of their commercial surveillance practices are in accordance with their own published business policies?

Companies relying on automated decision-making are responsible for evaluating and certifying that their algorithms, models, and data are accurate, valid, reliable, and compliant with governing laws, including fair lending laws and UDAP.   The standards for compliance should be set by federal agencies with jurisdiction over the companies, including the FTC. Companies are also responsible for overseeing any third parties they rely upon to support this decision-making. The prudential regulators and the CFPB require risk management oversight of third parties.[1]  If a company is not in compliance with its own published business policies, it could be subject to a claim of deceptive and unfair practices.

Question 65: How prevalent is algorithmic discrimination based on protected categories such as race, sex, and age? Is such discrimination more pronounced in some sectors than others? If so, which ones?

We believe that algorithmic discrimination is widespread across all sectors. The preamble to this ANPR[2] and the White House’s Blueprint for an AI Bill of Rights[3] cite several examples of algorithmic discrimination revealed over the past few years in the employment, lending, and healthcare sectors. Academic articles have documented algorithmic discrimination for at least the past seven years.[4] Legislation[5] has been introduced in Congress since 2019 to try to correct this problem but has not been able to get out of committee.[6] Usually, the public only becomes aware of algorithmic discrimination when media investigations[7] or governmental enforcement actions[8] reveal it. These are both extremely time-consuming and costly endeavors that cannot reveal the true pervasiveness of algorithmic discrimination.

Algorithms are designed and developed by humans, and humans are flawed and biased. Algorithmic developers can introduce explicit and implicit bias at three phases of the algorithm’s creation: input variables, outcome measures, and the construction of the training procedures.[9] Though the most well-known examples of biased algorithms are limited to a few sectors, algorithmic discrimination realistically occurs across all industries because algorithms now play a significant role in all aspects of our daily lives. Their prevalence at places as different as fintechs and soap dispenser manufacturers[10] makes it more likely that discrimination is occurring across all sectors but is not being adequately documented, exposed, or corrected.

One area where we have observed this problem is appraisal bias in the automated valuation models used in home appraisals. Over the past couple of years, more appraisals have been conducted automatically. The shift can be attributed both to the COVID-19 pandemic and to a counter-response to how human behavior in appraisals contributed to the Great Recession.[11] The Urban Institute studied automated valuation models (AVMs) and sales prices in majority-White and majority-Black neighborhoods. Their analysis found that “the percentage magnitude of inaccuracy in majority-Black neighborhoods has consistently been larger than in majority-White neighborhoods.”[12] An inaccurate AVM may be attributable to several factors, such as shortcomings with data inputs, unrepresentative data sets used for model training, or embedded bias within the model’s logic. The flaw in the data stems from the systemic racism of redlining, which continues to affect the valuation of homes in majority-Black neighborhoods.[13] Lack of regulation and accountability allows discriminatory algorithms to exist, which harms all consumers in every sector.

Algorithmic discrimination should trigger the unfairness doctrine because consumers cannot reasonably avoid algorithms. Today, algorithms permeate all aspects of every industry in a manner that did not occur 20 years ago before the invention of smartphones.

The Commission should utilize its authority under the unfairness doctrine to protect consumers in different sectors from discriminatory algorithms. The unfairness doctrine should be applied to both algorithmic inputs and outputs. 

Question 66: How should the Commission evaluate or measure algorithmic discrimination? How does algorithmic discrimination affect consumers, directly and indirectly? To what extent, if at all, does algorithmic discrimination stifle innovation or competition?

i. Established techniques can be implemented by the Commission to measure algorithmic discrimination.

We are concerned that many financial institutions may adopt “fairness through unawareness”[14] strategies, where they contend that because they do not collect demographic data, discrimination cannot happen. These views ignore the possibility that data can serve as proxies for protected class status. The same blind spot appears when institutions choose not to measure outcomes. In both cases, practitioners operate under the misguided assumption that discrimination can only occur through face-to-face interactions. The Commission must clarify that financial institutions are responsible for ensuring that their products and services are non-discriminatory regardless of the channel through which they are marketed and delivered.

Other parties already use these techniques in various policy and regulatory contexts. Contract vendors can deploy fair lending testing systems that reveal the possible presence of disparate impacts and modify algorithms accordingly. These services may be used by regulators or financial institutions.

Consensus exists on metrics, not just for financial services but in many other policy areas. For example, adverse impact ratios (AIRs) were initially utilized to measure employment discrimination, but their structure is easily applied to the supervision of financial services. An AIR measures nominal “yes-no” decision-making: an applicant is approved for a loan or denied. Observers divide the percentage of protected class members (PCMs) approved by the percentage of non-protected class members (NPCMs) approved to get an approval ratio. The closer the ratio is to 1:1 (parity), the less negative impact the underwriting process has on PCMs. Similar techniques exist for outcomes expressed in continuous variables. In financial services, that primarily relates to cost. Supervisors may compare interest rates offered to PCMs vs. NPCMs. Alternatively, they may focus on outliers, where the share of PCMs who pay a rate more than a standard deviation above the mean is compared to the share of NPCMs whose interest rates also fall above that level. Many advocates, when measuring price discrimination in mortgage lending, use Home Mortgage Disclosure Act data to create AIRs comparing the share of PCMs who paid an interest rate 300 basis points above the prevailing 10-year Treasury bond during the month when the lender originated the loan against similar experiences of NPCMs.[15]
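
To make these measures concrete, the following sketch in Python computes an AIR for binary approve/deny decisions and a standardized mean difference (a metric we reference below) for a continuous outcome such as interest rates. All figures are hypothetical and invented for illustration only.

```python
import statistics

# Hypothetical underwriting outcomes, for illustration only (1 = approved).
pcm_decisions = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]    # protected class members
npcm_decisions = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # non-protected class members

def adverse_impact_ratio(pcm, npcm):
    """Approval rate of protected class members divided by that of non-members.
    A ratio near 1.0 indicates parity; lower values indicate adverse impact."""
    return (sum(pcm) / len(pcm)) / (sum(npcm) / len(npcm))

# Hypothetical interest rates (percent) for the continuous-outcome comparison.
pcm_rates = [7.9, 8.4, 8.1, 9.0, 8.6]
npcm_rates = [7.1, 7.4, 7.0, 7.6, 7.3]

def standardized_mean_difference(pcm, npcm):
    """Difference in group means divided by a combined-sample standard deviation
    (a simplified stand-in for the pooled standard deviation)."""
    combined_sd = statistics.stdev(pcm + npcm)
    return (statistics.mean(pcm) - statistics.mean(npcm)) / combined_sd

print(f"AIR: {adverse_impact_ratio(pcm_decisions, npcm_decisions):.2f}")
print(f"SMD: {standardized_mean_difference(pcm_rates, npcm_rates):.2f}")
```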

Reviewers sometimes seek to calibrate a fairness estimation using more than one metric. A “calibration” metric can refine a fairness test by examining approved populations’ outcomes. If repayment rates for loans made to members of protected classes differ significantly from the rates for non-protected classes, it adds insights into model fairness. Ideally, loan performance rates will be similar, but problems arise if performance among protected class members is significantly higher or lower. When rates are higher, it highlights that the underwriting model may be unfairly conservative and could potentially allow for more approvals to protected classes. When performance is lower, it raises the question of loan suitability, as underwriting standards could put borrowers into loans they cannot afford to repay. Either outcome raises concerns, but the broader point is that fairness evaluations may improve when more than one metric is considered.
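
A minimal sketch of the calibration check described above, using hypothetical repayment outcomes for approved applicants:

```python
# Hypothetical loan performance for approved applicants: 1 = repaid, 0 = default.
pcm_performance = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
npcm_performance = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]

def repayment_rate(outcomes):
    return sum(outcomes) / len(outcomes)

pcm_rate = repayment_rate(pcm_performance)
npcm_rate = repayment_rate(npcm_performance)
gap = pcm_rate - npcm_rate

# A positive gap (protected class members repay at a higher rate) suggests the model
# may be unfairly conservative toward them; a negative gap raises suitability
# concerns. Either way, it is a signal to investigate, not a verdict.
print(f"PCM repayment: {pcm_rate:.0%}, NPCM repayment: {npcm_rate:.0%}, gap: {gap:+.0%}")
```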

A necessary step for the Commission is to give ecosystem observers some clarity on when an AIR or a Standardized Mean Difference (SMD) measure becomes concerning enough to merit further regulatory intervention. However, the best guidance will stop short of a simple threshold: it should indicate how quickly a measure might merit concern while refraining from drawing a “line in the sand” that institutions with less concern about disparate impacts might push to the last mile inside the accepted threshold. Indeed, a simple line could produce unintended consequences and be blind to nuance across financial services. For example, an acceptable AIR could differ for underwriting at different credit buckets.

The same approach extends to the categorical or binary outcomes produced by classification models. For example, a property and casualty company could use a classification model to estimate the likelihood that an insurance claim falls within one of two binary outcomes, either fraudulent or not fraudulent, and those classifications can be tested for disparities across groups.

ii. The Commission should clarify how it will measure disparate impacts. Measuring disparate impacts is best done through post-hoc “outcomes-based” analysis. To facilitate that approach, the Commission must define when evidence of outcomes demonstrating disparate impacts constitutes grounds for review.

Under the Equal Credit Opportunity Act (ECOA), lenders must search for “less discriminatory alternatives” and adopt them unless the same business interest cannot be achieved using an alternative and fairer practice. The ECOA framework provides an example of how the Commission could police algorithmic discrimination in a manner that meets the necessary tests in Section 5.

While ensuring that inputs do not introduce bias into algorithmic decision-making is important, post-hoc methods provide greater insights. Shapley value techniques can reveal which inputs led to a disparate outcome on a decision-by-decision basis. A Shapley value identifies the contribution that each variable, or even variables in combination, made to a decision.[16] Shapley-based methodologies offer a precision that analyses evaluating the impact of eliminating one variable at a time from an algorithm (“drop-one” techniques) do not. Indeed, the answers may lead to solutions that a human reviewer would not have otherwise considered. Manual re-iteration may never lead to the best models. One study noted that finding a new model that could significantly improve the accuracy and explainability of two baseline lending models required 62 and 137 attempts, respectively.[17]
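
As a minimal sketch of this kind of post-hoc review, the Python example below trains a gradient-boosted model on synthetic data (the feature names are hypothetical) and uses the open-source shap and xgboost packages to rank which inputs drove an individual decision:

```python
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)

# Synthetic applicant features and approval labels, for illustration only.
feature_names = ["debt_to_income", "months_on_job", "utilization", "recent_inquiries"]
X = rng.normal(size=(500, len(feature_names)))
y = (-1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = xgboost.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

# Shapley values decompose each individual prediction into per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# For one applicant (index 0), rank the features by how much they pushed the score.
applicant = 0
contributions = sorted(
    zip(feature_names, shap_values[applicant]),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for name, value in contributions:
    print(f"{name}: {value:+.3f}")
```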

Our support for using Shapley values is made with some caveats. For one, post-hoc review using mathematical approaches should include conducting real-time front-end and midstream human review. People should always be a part of model reviews.

Second, the Commission should recognize that when models use large amounts of alternative data, they can produce nonsensical formulas. Some data scientists add monotonic limits to models to protect against illogical constructs. Monotonic models control directionality, so a monotonic constraint would say that a debt service ratio cannot be inversely correlated to loan approval. Because humans use their personal experience to apply these constraints, monotonic structures support explainability as well.[18] Because any training data set can fail to reflect the broader diversity of populations, logic derived from a model may be illogical in the real world. Human review can address some of these problems, especially when policymakers ensure that model decisions can be explained. The sensibility of such an approach will be clearest in regulatory spheres where explainability is mandatory, such as in lending, where all adverse credit decisions must be explained.
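
As a minimal sketch of how such a directional constraint can be imposed in practice, the example below (synthetic data, hypothetical feature names, and an assumed direction chosen purely for illustration) uses the monotone_constraints parameter of the open-source xgboost package to require that raising a debt-to-income ratio can never raise the model’s approval score:

```python
import numpy as np
import xgboost

rng = np.random.default_rng(1)

# Synthetic applicants: two hypothetical features, invented for illustration only.
feature_names = ["debt_to_income", "months_on_job"]
X = rng.normal(size=(500, 2))
y = (-2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# -1: approval score must be non-increasing in debt_to_income (assumed direction).
# +1: approval score must be non-decreasing in months_on_job (assumed direction).
model = xgboost.XGBClassifier(
    n_estimators=50,
    max_depth=3,
    monotone_constraints="(-1,1)",
).fit(X, y)

# Sanity check: raising only the debt-to-income feature should never raise the score.
applicant = X[:1].copy()
score_before = model.predict_proba(applicant)[0, 1]
applicant[0, 0] += 1.0
score_after = model.predict_proba(applicant)[0, 1]
assert score_after <= score_before, "monotone constraint violated"
print(f"score before: {score_before:.3f}, after raising debt-to-income: {score_after:.3f}")
```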

iii. It may be helpful to clarify how testing can estimate the demographic composition of applicants.

In financial services, lenders are prohibited from soliciting applicants for their demographic information except for mortgage and small business lending. Financial services may not be unique in this regard. It is also possible that collecting demographic information may be legal but not a common practice in some activities, such as self-testing.[19]  Concerns that asking could cause discouragement sound logical, and as a result, market participants may need clarification on how to solicit demographic information without incurring liability.

In June 2022, NCRC and a group of industry partners asked the CFPB to issue guidelines for how lenders could solicit demographic information to support fair lending testing.[20] The collaboration asked the CFPB for specific information, including proper disclosure forms, suggestions on sample sizes, and how discouragement could be avoided.

Some AI model testing firms have developed underwriting techniques that apply slightly different iterations of underwriting models to satisfy dual aims of accuracy and fairness. However, in lending products outside of mortgage and small business, these models must use Bayesian Improved Surname Geocoding (BISG) to estimate the demographic composition of applicant pools. A report from the CFPB confirmed the limits to the accuracy of BISG estimations and also noted that BISG’s predictive power was weaker with certain PCM groups.[21]
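
For context, the sketch below illustrates the Bayesian updating at the core of BISG: a surname-based prior over race and ethnicity is updated using the demographic composition of the applicant’s census geography. The probabilities are invented for illustration; the CFPB’s published methodology relies on Census surname lists and block-group demographics.

```python
groups = ["white", "black", "hispanic", "api", "other"]

# P(race | surname): hypothetical distribution implied by the applicant's surname.
p_race_given_surname = {"white": 0.55, "black": 0.10, "hispanic": 0.25, "api": 0.05, "other": 0.05}

# P(geography | race), up to a constant: approximated here by the share of each
# group's national population that lives in the applicant's block group.
p_geo_given_race = {"white": 0.00001, "black": 0.00008, "hispanic": 0.00002, "api": 0.00001, "other": 0.00001}

# Bayes' rule: posterior is proportional to prior times likelihood, then normalized.
unnormalized = {g: p_race_given_surname[g] * p_geo_given_race[g] for g in groups}
total = sum(unnormalized.values())
posterior = {g: value / total for g, value in unnormalized.items()}

for g in groups:
    print(f"{g:9s} prior {p_race_given_surname[g]:.2f} -> posterior {posterior[g]:.2f}")
```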

However, overcoming hesitancy requires more than an understanding that soliciting this information is permissible. While the CFPB has primary responsibility for ECOA, the Commission could play a part. We suggest it work with the Bureau on focus group testing for how creditors can best solicit applicants for their demographic information. Important questions include user-testing of channel-specific forms in multiple languages, identifying where solicitation could unintentionally lead to discouragement, guidelines for sample sizes, and when information should be sought within the application process.

iv. The Commission should clarify that lenders will not be vulnerable to enforcement actions if they change a model that leads to fairer outcomes for protected class members. By doing so, the Commission will support efforts by lenders to improve models.

Providers should search for less discriminatory alternatives (LDAs). Nonetheless, applying an LDA approach will require clarity on the interpretation of a business justification. In financial services, there are widely accepted quantitative metrics to evaluate disparate impacts, but techniques like AIR and SMD will not always fit the parameters of other AI use cases.

Question 67:  How should the Commission address such algorithmic discrimination? Should it consider new trade regulation rules that bar or somehow limit the deployment of any system that produces discrimination, irrespective of the data or processes on which those outcomes are based? If so, which standards should the Commission use to measure or evaluate disparate outcomes? How should the Commission analyze discrimination based on proxies for protected categories? How should the Commission analyze discrimination when more than one protected category is implicated (e.g., pregnant veteran or Black woman)? Which standards should the Commission consider?

The Commission could adopt some practices other regulatory bodies have already implemented to address algorithmic discrimination.

i. Following the example set by the European Union, the Commission should establish standards to determine “high-risk” areas where algorithmic discrimination can cause “substantial harm.” These areas should include access to financial services and areas adjacent to financial services.

For example, determining if a practice is unfair under the “substantial harm” prong could draw from the European Union’s framework for evaluating the use of artificial intelligence in lending. First, the EU labeled lending as a “high-risk” activity. That designation elevates credit decisions and factors adjacent to credit decisions as worthy of heightened scrutiny because of the significant impacts they can bring to a person’s life. Second, it stated that because algorithms and private data played a greater role in allocating credit, their use deserved inclusion in policymaking. In 2021, it wrote that: 

“AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example, based on racial or ethnic origins, disabilities, age, and sexual orientation, or create new forms of discriminatory impacts.”[22]

Several aspects of the EU’s framework hold relevance for the Commission. First, the EU has specifically identified artificial intelligence as a new activity whose complexity and novelty call for action. Second, it links these challenges to concerns about discrimination. Finally, it notes that credit discrimination may carry over into separate areas that are adjacent to creditworthiness determinations.

The EU identified a list of high-risk systems: employment, access to credit, law enforcement, border control, access to education, employee management, access to government services, and judicial decision-making.[23]

The November 2020 guidance from the Office of Management and Budget (OMB) takes a different approach that does not serve consumers or the financial service industry. The north star of the OMB’s memorandum emphasizes the need to support innovation and growth in AI-based systems through an anti-regulatory barrier-reducing philosophy. The memorandum states that all regulatory steps should include an assessment of the impact on innovation and growth. Notably, the memorandum does not call for similar respect for human rights, except when it applies its unquestioning optimism to opine that innovation drives human welfare and autonomy gains.[24]

ii. The Commission should establish a standard requiring that outputs derived from algorithmic models be explainable. We applaud the Commission for publishing guidelines on fairness in AI, but it should push further and require explainability.

 In 2021, the Commission published guidance outlining expectations for businesses that use AI. Those principles moved in the right direction and, when considered against the innovation-first approach of the prior administration, reframed AI policy in the interest of consumers. Key points included using representative populations for model training, a cautionary approach to examining models for discriminatory outcomes, using transparency standards, and avoiding deceptions.[25]

Nonetheless, the principles should have called out the rights of consumers to receive accurate and comprehensible explanations of how a model came to its decisions. The Commission should state that explainability is a necessity. It should apply greater scrutiny to ensure that decisions are explained in important applications. The communications should be timely, expressed in ways that all consumers easily understand, and in channels consistent with the interaction.

For high-risk decisions, consumers deserve to understand the reasons behind a decision. In lending, rules exist to require explanations for adverse decisions. Other fields may not have such protections. When outcomes cannot be explained, consumers cannot determine how they could alter their behaviors to increase their qualification for services. 

Returning to the example set by the EU, consider how it emphasized explainability as a standard for its supervision. Its proposal calls for transparency obligations to flag the use of an AI system when interacting with humans, and states: “For high-risk AI systems, the requirements of high-quality data, documentation, traceability, transparency, human oversight, accuracy, and robustness are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI and that are not covered by other existing legal frameworks.”[26]

The Commission should ensure that there are “humans in the loop”[27] to review the outcomes of algorithmic decisions.

Concerns about the framework for fulfilling adverse-action requirements in financial services highlight problems that must be addressed to realize explainability. Creditors have thirty days to report the grounds for an adverse decision and are expected to do so through regular mail. There is a finite set of reason codes. As a result, consumers get an answer well after the time when they were seeking credit, on paper, and potentially in truncated form. AI decisions are made in seconds, not in a month. By the nature of its digital structure, explanations of AI could be expressed digitally. If a consumer applies for a service using a web browser or a phone, it is a missed opportunity not to use the same platform. Finally, because algorithms may deploy thousands of inputs through decision trees and other dynamic models, accurately explaining a decision may require more than nine model reason codes. As a result, participants in many AI-driven systems would benefit from clarification by the Commission on how decisions should be explained.

The Commission should evaluate if explanatory methods are equally comprehensible for all demographic groups. Differences may exist when explanations are given in foreign languages, for example, or in different regions.

iii. Explanations attributing decisions to immutable applicant characteristics unfairly lock people out of services. The Commission should review explanations related to ‘adverse decisions’ to ensure that models leave consumers with the opportunity to improve their qualification for services in the future.

If an algorithm relies on human behavior to allocate access to services or goods, then it should only use variables that give consumers a chance to improve if they were previously denied because of their prior performance. Several years ago, a regulator discovered that a leading provider of algorithms used for lending relied on the average standardized test score of an applicant’s freshman class. The model qualified applicants for refinances of their student loans. The lender ultimately changed its model to one that used post-graduation incomes and addressed income-related disparities by applying a normalization technique for applicants from schools with high rates of students of color.[28] In the student loan example, the original approach led to disparate impacts, as students of color are more likely to attend schools whose student bodies have lower standardized test scores.[29] Other regulators have concluded that cohort default rates can also lead to disparate impacts.[30]

We want to highlight our concern regarding the permanence of certain variables that reflect consumer histories or attributes that cannot be changed or are difficult to change, such as where a person went to school. The Commission should also be wary of models that leave applicants without the means to improve their qualifications. That aspect stands apart from disparate impacts. An algorithm that used an immutable variable as an input locked out students from lower-scored schools. The choice of an educational variable made it likely, and possibly guaranteed, that a person who was denied once would always be denied, even if they improved their financial health. That is wrong and unfair because consumers could not reasonably avoid the injury.

A related concern is that using a variable to describe a group or region to ascertain an individual’s qualifications poses a risk akin to “redlining.”

Regarding methodology, the Commission should consider the use of Shapley techniques to examine models for these concerns. Shapley values will identify the variables that contributed the most to a decision. When algorithms produce explanations, a benefit is the clarity they provide on model logic. If reviews of explanations can reveal when an algorithm has a “rejected once, rejected forever” profile, it will facilitate fairness. The Commission should prevent algorithms from including immutable inputs.

iv. Rather than wait for Congress to pass legislation, the Commission should address the uncertainty created by the diverse set of laws and rules that may apply to the use of AI.

The EU’s statement also touched on the question of politics. According to the EU, addressing AI did not require changes to regulations but only a clarification that AI should be added to the sets of activities worthy of supervision under existing laws and regulations. Those rules were largely drawn from the universe of rules relating to data. In the US, similar opportunities exist: the Health Insurance Portability and Accountability Act, the Fair Credit Reporting Act, the Gramm-Leach-Bliley Act, the Family Educational Rights and Privacy Act, the Electronic Communications Privacy Act, the Children’s Online Privacy Protection Act, and the Video Privacy Protection Act all seek to address privacy. Various state laws, most notably the California Consumer Privacy Act, also weigh in on these areas.[31] While such a variety of rules leaves markets with uncertainties, it also speaks to the need for an all-of-government synthesis.

We affirm the recommendations in the FTC’s April 2020 memorandum on the use of AI and call on it to follow up those views with enforcement actions. We commend the FTC’s June 2022 report, which identified flaws in accuracy and design, bias and discrimination, and incentives to conduct surveillance as key policy issues. However, the consensus of the Commission’s Directors was to call on Congress to develop legislation to protect consumers from harm.[32]

Question 68: Should the Commission focus on harms based on protected classes? Should the Commission consider harms to other underserved groups that current law does not recognize as protected from discrimination (e.g., unhoused people or residents of rural communities)?

The Commission should focus on populations most impacted by discriminatory and anti-consumer practices based on the characteristics protected under anti-discrimination laws (such as federal fair lending laws). However, as data emerges demonstrating new vulnerable populations, the Commission should use its existing UDAP authority to protect them.

The Commission already has the authority to protect vulnerable populations through its UDAP authority. More specifically, defining additional protected classes, such as unhoused people or residents of rural communities, will not expand that authority. However, to the extent the FTC wishes to clarify that additional populations are also covered by an existing protected class status in sector-oriented fair lending laws such as ECOA, that would be appropriate where there is judicial support. See, e.g., Bostock v. Clayton Co., 140 S.Ct. 1731, 590 US ___ (2020) (gender discrimination also includes discrimination based on sexual orientation or gender identity).

Question 69: Should the Commission consider new rules on algorithmic discrimination in areas where Congress has already explicitly legislated, such as housing, employment, labor, and consumer finance? Or should the Commission consider such rules addressing all sectors?

There currently exists a legislative and regulatory framework that applies to discriminatory algorithms. The legislative framework is the disparate impact standard of proof under the civil rights laws. The regulatory framework includes the CFPB’s recent guidance recognizing that discrimination is inherently unfair and that this discriminatory unfairness is an element of UDAAP.

Disparate impact theory, available under the fair lending laws, recognizes that specific actions or facially neutral policies, like the choice of data used in an algorithm’s input variables, can have a disproportionately negative effect on a protected class. Algorithmic discrimination falls within the scope of these current laws, which provide grounds for enforcement actions.

The enforcement action HUD v. Facebook applied the Fair Housing Act to algorithms that resulted in discriminatory advertising. In the complaint, HUD asserted that Facebook collects data on its users and employs an algorithm that allows advertisers to limit who can view advertisements, both through the explicit use of listed protected classes and through proxy information that has the same effect as if a protected class had been explicitly listed.[33]

An important limitation of the current legislative framework is that these laws are specific in their jurisdiction to the lending, housing, and employment arenas and do not apply to all sectors.

In March of 2022, the CFPB provided guidance that the UDAAP rule applies to discrimination in all financial services.[34] This guidance, issued as a revision to the CFPB’s examination manual, clarifies that the CFPB’s authority includes rooting out discrimination in financial products that are not covered under the traditional fair lending laws of the Fair Housing Act or the Equal Credit Opportunity Act. The guidance states that discrimination is inherently unfair and harms consumers.

The FTC also has the authority to protect consumers based on the unfairness doctrine, which can apply to many areas of the economy that are not covered by sector-specific fair lending laws. 15 U.S.C. § 45. Consumers are regularly and unknowingly harmed by algorithms that they are not reasonably able to avoid, as technology’s role in our lives continues to grow in areas where it never existed before. As the use of technology increases, so does the data that it creates. For example, consumers can buy a washing machine linked to their smartphone. Some consumers may see this technology as a help, enabling them to start their machine from anywhere. However, the appliance can now collect a significant amount of data on consumer behavior, like the geographic location of the smartphone when the person used their machine. Much of this data is collected without consumers ever being aware of it.

The FTC recently exercised its UDAP authority consistent with the CFPB’s March guidance in FTC v. Passport Automotive Group, Inc., where the FTC filed a complaint and settled with a group of auto dealers, finding their discriminatory practices to be unfair.[35]  We believe that the FTC and CFPB’s application of the unfairness doctrine to discrimination is appropriate. We address the unfairness doctrine more fully below in our response to Question 71.

Question 70: How, if at all, would restrictions on discrimination by automated decision-making systems based on protected categories affect all consumers?

Methods exist to ensure that financial institutions amend their models to address disparate impacts in ways that do not undermine the interests of other consumers. Through machine learning, modelers can iterate many times to find an algorithm that eliminates disparate impacts without harming others. By making automated decision-making systems fairer for protected classes of consumers, the FTC aids all consumers by ensuring that these systems have integrity. Tipping the scales adversely against certain sub-populations does not aid those who are unaffected. It only calls into question the fairness of the entire system.

Question 71: To what extent, if at all, may the Commission rely on its unfairness authority under Section 5 to promulgate anti-discrimination rules? Should it? How, if at all, should anti-discrimination doctrine in other sectors or federal statutes relate to new rules?

Discrimination is unfair by its very nature. It typically meets all of the criteria of section 5(n) of the Federal Trade Commission Act for an unfair act or practice in that it causes “substantial injury,” it is not “reasonably avoidable” by the consumer, and it is not outweighed by “countervailing benefits” to consumers or to competition. See 15 U.S.C. § 45(n). When Congress enacted the unfairness doctrine, it did not limit the statute by confining it to specific acts or practices, wisely applying instead a set of criteria that could apply to any business practice, present or future, that was unfair.[36] As the FTC stated in its 1980 Policy Statement on Unfairness, “the statute was deliberately framed in general terms since Congress recognized the impossibility of drafting a complete list of unfair trade practices that would not quickly become outdated or leave loopholes for easy evasion.” Nor did Congress limit the statute to certain sectors of the economy, such as lending or housing, or to certain populations, such as military servicemembers, as it has in other statutes that address anti-consumer behavior.[37]

Fifty years ago, the Supreme Court acknowledged the decades of cases applying the FTC’s broad criteria for unfairness to different business practices. In FTC v. Sperry & Hutchinson Co., 405 U.S. 233, 244-45 n.5 (1972), the Court held that even where business conduct did not violate antitrust law but did cause consumer injury and violate public policy, it could be unfair if the FTC found the facts of the case warranted such a finding. The Court found that the appellate court below had erred in attempting to constrain the FTC from applying its unfairness doctrine to a business practice that arguably violated public policy and remanded the case for more specific fact-finding as to unfairness.

 The FTC can apply its existing authority under Section 5 to discrimination to complement its statutory authority under ECOA and other laws. Like the Truth In Lending Act, other consumer protection statutes do not limit the scope of the unfairness doctrine, nor do anti-discrimination laws. A policy or practice can violate both ECOA and Section 5, just as it can violate both TILA and Section 5. Just as the CFPB has updated its exam manuals to provide greater clarity when a practice can be both unfair and discriminatory,[38] an FTC rule would also ensure clearer rules of the road for financial institutions in navigating this intersection in areas other than lending. 

 Question 72: How can the Commission’s expertise and authorities complement those of other civil rights agencies? How might a new rule ensure space for interagency collaboration?

The FTC has decades of experience enforcing UDAP in conjunction with looking at marketplace competition. Given its Section 5 authority and antitrust experience, it is uniquely qualified to look at business trends and conditions. However, the FTC does not have a robust history of enforcing ECOA, having focused instead on studies and reports related to discrimination.[39] The DOJ and CFPB have been far more active in filing discrimination cases. A new rule addressing discriminatory acts and practices would align the FTC with other agencies and facilitate collaborative interagency efforts. For example, the CFPB has already issued a circular that announced changes in its exam manuals and provided guidance on applying the unfairness doctrine to discrimination.[40] A complementary rule by the FTC would avoid consumer and marketplace confusion by ensuring that the two agencies enforce fair lending laws similarly.

A new rulemaking complementing the FTC’s existing authority to enforce ECOA would also make it clear that the FTC is willing to bring cases alleging unfair and deceptive practices and, where appropriate, additional ECOA claims that address instances where discrimination occurred in the lending sector. The FTC, on occasion, has effectively filed claims alleging unfairness and discrimination.[41]  The FTC should do more to address discrimination by using the wider array of statutes it has the authority to enforce. 

Question 77: To what extent should new trade regulation rules require firms to give consumers the choice of whether to be subject to commercial surveillance? To what extent should new trade regulation rules give consumers the choice of withdrawing their duly given prior consent? How demonstrable or substantial must consumer consent be if it is to remain a useful way of evaluating whether a commercial surveillance practice is unfair or deceptive? How should the Commission evaluate whether consumer consent is meaningful enough?

The practice of screen scraping puts consumers at risk of harm. Screen scraping occurs inside and outside of financial services. Even though a superior technology exists in the marketplace, most participants have yet to adopt it. The Commission must consider how it can intervene to protect consumers from this out-of-date technology.

Today, data collectors deploy two main methods to gather information about consumers on the web: screen scraping and open application programming interfaces (APIs). While both are prevalent, the first is far more harmful. The lack of technical sophistication of screen scraping (compared to an API) harms consumers. Screen scraping is an “all-or-nothing” proposition: a consumer cannot limit how their data is accessed. In most environments, information is collected without a person’s awareness or consent.
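
A minimal sketch of the distinction, with hypothetical token and endpoint names: an API request carries a token limited to the scopes a consumer has granted, so the data provider can refuse anything else, whereas a scraping bot holding the consumer’s full login credentials sees whatever the consumer can see.

```python
from dataclasses import dataclass, field

@dataclass
class AccessToken:
    consumer_id: str
    granted_scopes: set = field(default_factory=set)  # e.g., {"balances:read"}

def handle_api_request(token: AccessToken, requested_scope: str) -> str:
    """The data provider enforces the consumer's grant on every request."""
    if requested_scope not in token.granted_scopes:
        return "403 Forbidden: scope not granted by consumer"
    return f"200 OK: returning {requested_scope} for {token.consumer_id}"

token = AccessToken(consumer_id="acct-123", granted_scopes={"balances:read"})
print(handle_api_request(token, "balances:read"))      # permitted by the consumer
print(handle_api_request(token, "transactions:read"))  # refused; no such grant

# With screen scraping there is no scope to check: the bot authenticates as the
# consumer, and the provider cannot distinguish it from the consumer themselves.
```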

While regulators can address the harms of screen scraping, industry must play a role as well. Many financial institutions have resisted making the investments needed to build APIs. Nonetheless, consumers will want to share their information with third parties for purposes that provide benefits, such as when they give account access to a personal financial management tool. Thus, it may be the wrong path to ban the use of screen scraping outright. Still, regulators should acknowledge that the choice to use screen scraping does not rest with the consumer, so consumers cannot prevent the harms associated with it.

The issue must be resolved. Screen scraping is prevalent, consumers have little or no control to prevent the harms that may result from its use, and its use by one company can create negative externalities. The benefit to consumers is the same under either technology, but the harms associated with the two primary data-sharing methods differ greatly.

i. The Commission should develop standards for consumers to control how their information is used. Consumers should be able to control the use of their data, not just when it is first solicited but throughout the duration of the relationship.

Control of data should reside with consumers. This principle is already established and legally enforced in the United Kingdom. In the UK, default control over data rests with the consumer. Only once the data user has secured permission, through a method that requires a justification for its use, can the data be collected and utilized.[42] At any time, consumers should be able to tell a data provider to cease sharing their information. A data provider, such as a financial institution where a consumer has a bank account, should be responsible for providing a portal where consumers can see how their information is being shared. Some financial institutions offer these services already,[43] but they do so voluntarily. The portals should allow consumers to control how information is shared or block access with a simple toggle.

Systems for governing consent should give consumers the right to withdraw or edit what information they share. The benefit to a consumer of sharing information may not be constant over time. If an app utilizes bank account information to enhance a consumer’s creditworthiness, the information will be most valuable when the consumer applies for credit. Consumers should have the right to limit data collection to the extent that it matches their interests. So, for example, if a consumer permitted an underwriter to access their bank account, the permission should be easily revocable after the loan decision has been made. If a consumer ceases to use a personal financial management app, the app should make it simple for the consumer to request an end to scraping of their bank account.
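
As a minimal sketch of what such a revocable grant could look like, assuming a hypothetical data structure that a data provider’s portal might maintain:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DataSharingGrant:
    recipient: str            # e.g., a personal financial management app
    purpose: str              # e.g., "underwriting" or "budgeting"
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.revoked_at is None

    def revoke(self) -> None:
        """The consumer-facing toggle: sharing stops from this moment on."""
        self.revoked_at = datetime.now(timezone.utc)

grant = DataSharingGrant("pfm-app", "budgeting", datetime.now(timezone.utc))
print(grant.active)   # True while sharing is permitted
grant.revoke()
print(grant.active)   # False once the consumer flips the toggle
```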

ii. In financial services, when data users, such as personal financial management tools, gain the right to access consumer data, the shortcomings of screen scraping create conflicting obligations for data providers.

Consumers and data providers alike acknowledge that screen scraping brings harm. In financial services, banks resent the activities of screen scrapers because they constrain server capacity[44] and pose risks to banks’ ability to protect consumer data privacy. The responsibilities of banks to protect consumer information remain in place even though a third party now has the ability to gather, store, and re-purpose the information.[45] In the UK and the European Union, screen scraping has been banned, whereas Australia established a “Consumer Data Rights” regime that still accommodates the practice.[46] Unbeknownst to most people, once permissioned to do so, a screen scraping bot can sign in as frequently as its programmers desire. There are no restrictions in place to prevent a bot from ordering log-ins on an hourly basis. While some banks no longer permit screen scraping, most banks have yet to build APIs. The market is moving toward the goal of keeping data inside permissioned spaces,[47] but there are laggards. Even now, industry estimates suggest that less than twenty percent of bank accounts are set up to accept APIs.[48] For various reasons, the rest still consign their account holders to screen scraping.[49]

In response to concerns of US banks and fintechs, the Financial Data Exchange (FDX) has initiated a set of standards for data permissioning.[50] The scope of FDX’s activities centers on common open APIs that its members can use if they comply with FDX’s standards. However, in other countries, policymakers have not ceded leadership to industry groups. Rather, the EU and UK governments have led mandates toward using APIs.[51]

iii. Screen scraping presents additional and even greater problems outside financial services.

While the privacy concerns seen in banking remain relevant in other sectors, screen scraping can present additional problems of even greater scope elsewhere. It may pose fundamental challenges to a business’s viability in certain markets. For example, screen scraping can undercut the revenue of a digital publisher by extracting consumer information from the website without the permission of the website or the site visitors and then reselling it.[52] Similarly, an airfare ticket seller may purchase scraped data to determine a consumer’s willingness to pay.[53] The Commission should ensure that consumers receive a benefit for granting access to their personal information.

Screen scrapers secure credentialed access to consumer bank accounts, frequently as a means to provide the account information needed to transfer money or to facilitate a personal financial management (PFM) tool. For these use cases, there is an exchange of value.

One yardstick for the Commission to consider would be to ensure that consumers receive value in return for their information. Such a framework would allow the Commission to address practices where companies use consumer information to justify price increases. It should protect consumers from price optimization and other price-maximizing practices. In certain online sales contexts, companies change prices based on algorithms that estimate a consumer’s price elasticity. For example, knowing that business travelers may be less sensitive to higher airfares, an airline may raise prices when it suspects that the shopper is a business traveler. As a principle, consumers should benefit from granting access to their information. Price optimization violates that standard.

iv. Some screen scraping occurs without any consumer consent. In these cases, the Commission should strengthen how it holds data collectors liable for data security. Collectors should tell consumers what information they hold about them and give them an opportunity to correct false information.

Bots conduct surveillance without consumer consent. Whereas scraping in financial services occurs when a consumer reveals their login credentials to provide access to an otherwise password-protected site, most screen scraping occurs on the open internet. This information is collected by “bots.” By some estimations, bots now make up more than one-fourth of all internet traffic.[54] The Commission should supervise the activities of internet bots that collect consumer data. Because some internet bots may automate tasks that realize socially positive aims,[55] the ideal approach would not eliminate their use.

The widespread adoption of involuntary screen scraping means that information about consumers is held without their knowledge, in places they cannot find, and under regimes that do not provide consumers with the ability to verify its accuracy. Consumers have no ability to defend themselves against these systems. As a result, if incorrect information has been added to a data aggregation system, consumers may suffer harm. Without some means of knowing what has been collected, they cannot rectify inaccuracies. The Commission should create a system that requires data aggregators to report back to consumers on what information they have stored. Aggregators should also give consumers a simple way to correct wrong information.

Question 81: Should new trade regulation rules require companies to give consumers the choice of opting out of all or certain limited commercial surveillance practices? If so, for which practices or purposes should the provision of an opt-out choice be required? For example, to what extent should new rules require that consumers have the choice of opting out of all personalized or targeted advertising?

With few or no exceptions, opt-out choices are imposed as a unilateral condition of service provision. As a result, consumers face the risk of harm. All too often, data monetizers do not provide users with an equal exchange of value. The FTC should restore consumers’ power to control the use of their data. While the industry may contend that agencies lack the authority to limit surveillance, the Commission should counter by pointing out that at no point did Congress grant Big Tech firms the right to embark on systematic surveillance.[56]

Consumers have come to accept that they have little or no control over how their data is used and no ability to prevent it from being collected. In a recent survey, more than 8 of 10 US adults felt that the risks of data collection outweighed its benefits. Almost as many were worried about how companies were using it, and most felt they had no control over its use.[57] Absent intervention, consumers have experienced regular and palpable harm.

First-order harm describes the risk that results from a compromise of data security. For example, a data breach at Equifax compromised the private information of 147 million Americans. Consumers may qualify for small payments, and all were able to ask for identity-monitoring services.[58] Still, it is entirely unacceptable that the business model that permitted this and other privacy breaches remains a de facto standard. In reality, consumers have no ability to shield themselves from these kinds of data collection systems.

The FTC should address “negative option” systems. Consumers face a “take it or leave it” scenario in these formats. Unless a consumer concedes to unfettered surveillance, they may not be able to use a product.[59] These restrictions undermine the interests of consumers. A bill currently in the legislative process in the UK would obligate service providers to accept liability for the activities of internet bots operating on their platforms.

Question 27: Would any given new trade regulation rule on data security or commercial surveillance impede or enhance competition? Would any given rule entrench the potential dominance of one company or set of companies in ways that impede competition? If so, how and to what extent?

In part (b) of the Commission’s blanket request for cost-benefit analysis, the ANPR states, “the balance of costs and countervailing benefits of such practices for consumers and competition….” Additionally, in the footnotes of the ANPR, remarks attributed to Commissioner Noah Joshua Phillips note that the Commission seeks comment on how surveillance harms competition.[60]

A lack of regulation surrounding commercial surveillance has allowed a small set of “Big Tech” firms to develop market power that stifles competition. Although Google initially started with an intention not to compromise the privacy of its users, it shifted to data monetization when it introduced AdWords in October 2000.[61] AdWords allowed Google to derive revenue from clicks. To meet the interests of advertisers (and to maximize their revenues), Google had to develop analytics to gauge user intent. Today, data monetization is the model. Data collection has created winners and losers in commerce, leading to anti-competitive effects among businesses and reducing new job creation.

Digital surveillance capitalism tipped the scales between businesses to the benefit of a handful of large firms and at the expense of most small and medium-sized enterprises (SMEs). Only a few firms can avail themselves of the advantages conferred by consumer surveillance. Fewer have the capability to deploy algorithmic modeling techniques. The ability to collect data creates a long-term moat due to network effects, first-mover benefits, and sizeable capital investment requirements. Given the narrow range of companies in search, payment wallets, social platforms, and smartphone data services, first-movers have long-term market power.[62] 

Data inequality has a related impact on entrepreneurialism. Google, Facebook (now Meta), Instagram, and other platforms are gatekeepers to customers.[63] Google has captured 60 percent of the US search market. The portfolio of businesses owned by Meta and Alphabet accounts for fifty percent of global online advertising spending.[64] Because most Big Tech firms enjoy monopoly power, they can set prices. They can even use the sales data gleaned from goods sold by small businesses on their platforms to introduce rivalrous house-label products.[65] Large retailers can use their power over suppliers to build accounts payable balances and reduce their accounts receivables, a dynamic that essentially forces smaller businesses to provide interest-free advances.[66]

The emergence of data monetization may explain the significant drop in the number of new businesses in the United States. The rate at which new firms were created fell by 20 percent between 1982 and 2018. Similarly, the share of jobs created by new firms fell by more than one-third.[67] Several factors, some of which are outside of the Commission’s ability to affect, are in play. Experts attribute some of the fall-off to mounting student debt and population decline in certain regions, but anti-competitive forces owing to Big Tech play a role as well. The Commission can address how the use of data has led to an uneven playing field in business.

The Commission should consider addressing the market power enjoyed by Big Tech, Big Retail, and other platforms. It should ensure that platforms do not cannibalize the intellectual property of sellers. While we do not have a specific recommendation, we highlight the impact on competition because it projects itself across many aspects of our economy. Addressing the threat of surveillance on competition falls squarely within the directive of the Commission.

CONCLUSION

Thank you for the opportunity to comment on these issues. We commend the Commission for moving forward with these efforts to address the impacts of consumer surveillance and for its emphasis on the potential for these technologies to institutionalize discrimination.

Please reach out to me, Brad Blower (bblower@ncrc.org), or Adam Rust (arust@ncrc.org) if we can provide further information.

Sincerely,
Jesse Van Tol
Chief Executive Officer
National Community Reinvestment Coalition

 


[1] SR Letter 13-19 / CA Letter 13-21, “Guidance on Managing Outsourcing Risk” (December 5, 2013, updated February 26, 2021) (Federal Reserve Board); FIL-44-2008, “Guidance for Managing Third-Party Risk” (June 6, 2008) (FDIC); OCC Bulletin 2013-29, “Third-Party Relationships: Risk Management Guidance” and OCC Bulletin 2020-10, “Third-Party Relationships: Frequently Asked Questions to Supplement OCC Bulletin 2013-29” (OCC); CFPB Bulletin 2016-012 (CFPB).

[2] Advance Notice of Proposed Rulemaking, “Trade Regulation Rule on Commercial Surveillance and Data Security,” 87 FR 51273 (August 22, 2022).

[3] Blueprint for an AI Bill of Rights. White House. October 2022. https://www.whitehouse.gov/ostp/ai-bill-of-rights/

[4] King, Allan G., and Marko J. Mrkonich. “Big Data and the Risk of Employment Discrimination,” 68 Okla. L. Rev. 555 (2015-2016).

[6] HR 6580 – 117th Congress (2021-2022): Algorithmic Accountability Act of 2022, HR 6580, 117th Cong. (2022). https://www.congress.gov/bill/117th-congress/house-bill/6580/related-bills

[7] Angwin, Julia, Jeff Larson, Surya Mattu, and Laura Kirchner. “Machine Bias.” ProPublica, May 23, 2016. Available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (last accessed October 13, 2022).

[8] United States v. Meta Platforms, Inc., f/k/a Facebook, Inc. (SDNY)    https://www.justice.gov/crt/case/united-states-v-meta-platforms-inc-fka-facebook-inc-sdny

[9] Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan, and Cass R. Sunstein. “Algorithms as Discrimination Detectors,” PNAS, December 1, 2020, vol. 117, no. 48.

[10] Maya C. Jackson, Artificial Intelligence & Algorithmic Bias: The Issues With Technology Reflecting History & Humans, 16 J. Bus. & Tech. L. 299 (2021)

Available at: https://digitalcommons.law.umaryland.edu/jbtl/vol16/iss2/5

[11] Neal, Michael, Sarah Strochak, Linna Zhu, and Caitlin Young. “How Automated Valuation Models Can Disproportionately Affect Majority-Black Neighborhoods.” The Urban Institute. December 2020. https://www.urban.org/sites/default/files/publication/103429/how-automated-valuation-models-can-disproportionately-affect-majority-black-neighborhoods_1.pdf

[12] Id.

[13] Mitchell, Bruce and Juan Franco. “HOLC ‘Redlining’ Maps: The Persistent Structure of Segregation and Economic Inequality.” NCRC, March 20, 2018. https://ncrc.org/holc/

[14] Protected Attributes and “Fairness through Unawareness” (Exploring Fairness in Machine Learning for International Development). (2022). Massachusetts Institute of Technology. https://ocw.mit.edu/courses/res-ec-001-exploring-fairness-in-machine-learning-for-international-development-spring-2020/pages/module-three-framework/protected-attributes/

[15] Getter, D. E. (2008). Reporting Issues Under the Home Mortgage Disclosure Act (No. RL34720; p. 10). Congressional Research Service. https://crsreports.congress.gov/product/pdf/RL/RL34720/3

[16] Mazzanti, S. (2021, April 21). SHAP explained the way I wish someone explained it to me. Towards Data Science. https://towardsdatascience.com/shap-explained-the-way-i-wish-someone-explained-it-to-me-ab81cc69ef30

[17] Laura Blattner, Jann Spiess, & P-R Stark. (2022). Machine Learning Explainability & Fairness: Insights from Consumer Lending [Working Paper]. FinRegLab and Stanford University. https://finreglab.org/wp-content/uploads/2022/04/FinRegLab_Stanford_ML-Explainability-and-Fairness_Insights-from-Consumer-Lending-April-2022.pdf

[18] Chen, C., Lin, K., Rudin, C., Shaposhnik, Y., Wang, S., & Wang, T. (2021). A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations. Decision Support Systems, forthcoming. http://arxiv.org/abs/2106.02605

[19] Regulation B, 12 CFR 1002.5(b)(1) allows for the collection of demographic data for self-testing.

[20] National Community Reinvestment Coalition. (2022, June 27). NCRC, Innovation Council Call For CFPB To Clarify Lender Demographic Data Guidance. https://ncrc.org/ncrc-innovation-council-call-for-cfpb-to-clarify-lender-demographic-data-guidance/

[21] Consumer Financial Protection Bureau. (2014). Using publicly available information to proxy for unidentified race and ethnicity: A methodology and assessment (p. 37). https://files.consumerfinance.gov/f/201409_cfpb_report_proxy-methodology.pdf

[22] European Commission. (2021). Laying Down Harmonized Rules on Artificial Intelligence and Amending Certain Union Legislative Acts (Proposal for a Regulation Laying down Harmonised Rules on Artificial Intelligence). European Union. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence

[23] Engler, A. (2022). The EU AI Act will have a global impact, but a limited Brussels Effect (Governance Studies). Brookings Institution Center for Technology Innovation. https://www.brookings.edu/research/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/

[24] Russell T. Vought. (2020). Guidance for Regulation of Artificial Intelligence Applications [Memorandum for the Heads of Executive Departments and Agencies]. Executive Office of the President Office of Management and Budget. https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf

[25] Elisa Jillson. (2021, April 19). Aiming for truth, fairness, and equity in your company’s use of AI. Federal Trade Commission. https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai

[26] European Commission. (2021). Laying Down Harmonized Rules on Artificial Intelligence and Amending Certain Union Legislative Acts (Proposal for a Regulation Laying down Harmonised Rules on Artificial Intelligence). European Union. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence

[27] Federal Trade Commission. (2022). FTC Report Warns About Using Artificial Intelligence to Combat Online Problems [Report to Congress]. https://www.ftc.gov/system/files/ftc_gov/pdf/Combatting%20Online%20Harms%20Through%20Innovation%3B%20Federal%20Trade%20Commission%20Report%20to%20Congress.pdf

[28] NAACP Legal Defense Fund, Student Borrower Protection Center, Upstart Network, & Relman Colfax PLLC. (2021). Fair Lending Monitorship of Upstart Network’s Lending Model (Monitorship Second Report). https://www.relmanlaw.com/media/news/1182_PUBLIC%20Upstart%20Monitorship_2nd%20Report_FINAL.pdf

[29] Ember Smith & Richard Reeves. (2020). SAT math scores mirror and maintain racial inequity. Brookings Institution. https://www.brookings.edu/blog/up-front/2020/12/01/sat-math-scores-mirror-and-maintain-racial-inequity/

[30] Consumer Financial Protection Bureau. (2012). Private Student Loans Report (p. 131) [Report to the Senate Committee on Banking, Housing, and Urban Affairs, the Senate Committee on Health, Education, Labor, and Pensions, the House of Representatives Committee on Financial Services, and the House of Representatives Committee on Education and the Workforce.]. https://files.consumerfinance.gov/f/201207_cfpb_Reports_Private-Student-Loans.pdf

[31] Thorin Klosowski. (2021, September 6). The State of Consumer Data Privacy Laws in the US (And Why It Matters). New York Times. https://www.nytimes.com/wirecutter/blog/state-of-privacy-laws-in-us/

[32] Federal Trade Commission. (2022). FTC Report Warns About Using Artificial Intelligence to Combat Online Problems [Report to Congress]. https://www.ftc.gov/system/files/ftc_gov/pdf/Combatting%20Online%20Harms%20Through%20Innovation%3B%20Federal%20Trade%20Commission%20Report%20to%20Congress.pdf

[33] HUD v. Facebook, Inc., HUD ALJ, FHEO No. 01-18-0323-8, Charge of Discrimination (March 28, 2019), https://www.hud.gov/sites/dfiles/Main/documents/HUD_v_Facebook.pdf

[34] Consumer Financial Protection Bureau. (2021, March 9). CFPB Clarifies That Discrimination by Lenders on the Basis of Sexual Orientation and Gender Identity Is Illegal. Consumer Financial Protection Bureau. https://www.consumerfinance.gov/about-us/newsroom/cfpb-clarifies-discrimination-by-lenders-on-basis-of-sexual-orientation-and-gender-identity-is-illegal/

[35] FTC v. Passport Automotive Group, Inc., Case No. TDC-22-2070, Complaint for Permanent Injunction, Monetary Relief and Other Relief (D. Md. October 18, 2022).

[36] See HR Conf. Rep. No. 1142, 63d Cong., 2d Sess., at 19 (1914) (If Congress “were to adopt the method of definition, it would undertake an endless task”).

[37] See, e.g., the Fair Housing Act, the Equal Credit Opportunity Act, and the Military Lending Act.

[38] See CFPB Supervision and Examination Manual, Unfair, Deceptive, or Abusive Acts or Practices Section at 11, 13, 14, 17 (revised Mar. 16, 2022)

[39] See, e.g., FTC Enforcement Activities under the ECOA and Regulation B in 2021: Report to the CFPB (Mithal, Feb. 23, 2022); FTC Enforcement Activities under the ECOA and Regulation B in 2020: Report to the CFPB (Mithal, February 13, 2021).

[40] Consumer Financial Protection Bureau. (2021, March 9). CFPB Clarifies That Discrimination by Lenders on the Basis of Sexual Orientation and Gender Identity Is Illegal. Consumer Financial Protection Bureau. https://www.consumerfinance.gov/about-us/newsroom/cfpb-clarifies-discrimination-by-lenders-on-basis-of-sexual-orientation-and-gender-identity-is-illegal/

[41] Capital City Mortgage Corp. v. Nash et al., Second Amended Complaint for Preliminary Injunction and Other Equitable Relief and Monetary Civil Penalties (April 17, 2002) (in this predatory lending case, the FTC’s complaint included claims under the FTC Act for unfair acts and practices, TILA, and ECOA).

[42] Information Commissioner’s Office. (2018). Guide to the General Data Protection Regulation. Information Commissioner’s Office. https://ico.org.uk/media/for-organisations/guide-to-the-general-data-protection-regulation-gdpr-1-0.pdf

[43] Wells Fargo. (2018, October 1). Wells Fargo Launches Control Tower SM, New Digital Experience for Customers Nationwide. Newsroom. https://newsroom.wf.com/English/news-releases/news-release-details/2018/Wells-Fargo-Launches-Control-Tower-SM-New-Digital-Experience-for-Customers-Nationwide/default.aspx

[44] Hirschey, J. (2014). Symbiotic Relationships: Pragmatic Acceptance of Data Scraping. Berkeley Technology Law Journal, 29, 38. https://doi.org/10.2139/ssrn.2419167

[45] Basel Committee on Banking Supervision. (2019). Report on open banking and application programming interfaces. Bank for International Settlements. https://www.bis.org/press/p191119.htm

[46] Liu, H.-W. (2020). Two Decades of Laws and Practice Around Screen Scraping in the Common Law World and Its Open Banking Watershed Moment [SSRN Scholarly Paper]. https://papers.ssrn.com/abstract=3756093

[47] Pimentel, B. (2021, October 5). Banks and fintechs agree: It’s time for screen scraping to go. Protocol. https://www.protocol.com/fintech/fdx-financial-data

[48] Financial Data Exchange. (2021). Financial Data Exchange Comments Docket No. CFPB-2020-0034—Consumer Access to Financial Records Consumer Financial Protection Bureau (CFPB). https://finledger.com/wp-content/uploads/sites/5/2021/03/Financial-Data-Exchange-Comments-to-CFPB.pdf

[49] Rebecca Ayers & Suman Bhattacharyya. (2021, March 10). Why screen scraping still rules the roost on data connectivity. FinLedger. https://finledger.com/articles/why-screen-scraping-still-rules-the-roost-on-data-connectivity/

[50] Financial Data Exchange. (2020, December 8). Financial Data Exchange Releases New Open Finance Standards. https://www.financialdataexchange.org/FDX/FDX/News/Press-Releases/FDX_Launches_Open_Finance_Standards_And_FDX_API_4.5.aspx

[51] Rebecca Ayers & Suman Bhattacharyya. (2021, March 10). Why screen scraping still rules the roost on data connectivity. FinLedger. https://finledger.com/articles/why-screen-scraping-still-rules-the-roost-on-data-connectivity/

[52] Liu, H.-W. (2020). Two Decades of Laws and Practice Around Screen Scraping in the Common Law World and Its Open Banking Watershed Moment [SSRN Scholarly Paper]. https://papers.ssrn.com/abstract=3756093

[53] Goldenberg, A. (2019, June 11). Top five pricing trends in airline revenue management. FairFly. https://www.fairfly.com/insights/top-five-pricing-trends-in-airline-revenue-management/

[54] Nabeel Hassan. (2016, April 18). Good or Evil? What Web Scraping Bots Mean for Your Site. Imperva Blog. https://www.imperva.com/blog/web-scraping-bots/

[55] For example, the Housing Law Bot scans the web to alert housing lawyers of potential violations of housing laws. @housing_law_bot. (n.d.). Housing Case Law Bot. Twitter. Retrieved November 10, 2022, from https://twitter.com/housing_law_bot

[56] Shoshana Zuboff. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs. https://www.publicaffairsbooks.com/titles/shoshana-zuboff/the-age-of-surveillance-capitalism/9781610395694/

[57] Auxier, B., Rainie, L., Anderson, M., Perrin, A., Kumar, M., & Turner, E. (2019, November 15). Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/

[58] Zara, C. (2022). Equifax breach settlement email: What to know about the upcoming payments. Fast Company. https://www.fastcompany.com/90794363/equifax-breach-settlement-email-payment

[60] Non-Compete Clauses in the Workplace: Examining Antitrust and Consumer Protection Issues, (2020) (testimony of Commissioner Noah Joshua Phillips). https://www.ftc.gov/system/files/documents/public_statements/1561697/phillips_-_remarks_at_ftc_nca_workshop_1-9-20.pdf

[61] Google. (2000, October 23). Google Launches Self-Service Advertising Program [News Announcements]. http://googlepress.blogspot.com/2000/10/google-launches-self-service.html

[62] CB Insights Research. (2021). 25 Business Moats That Helped Shape The World’s Most Massive Companies. https://www.cbinsights.com/research/report/business-moats-competitive-advantage/

[63] Chamath Palihapitiya. (2018, October 31). Social Capital Interim Annual Letter. https://www.socialcapital.com/annual-letters/2018.pdf

[64] Beard, A. (2022, February). Can Big Tech Be Disrupted? Harvard Business Review. https://hbr.org/2022/01/can-big-tech-be-disrupted

[65] Dana Mattioli. (2020, April 23). Amazon Scooped Up Data From Its Own Sellers to Launch Competing Products. Wall Street Journal. https://www.wsj.com/articles/amazon-scooped-up-data-from-its-own-sellers-to-launch-competing-products-11587650015

[66] Trefis Team. (2010, July 9). Putting Screws To Suppliers Means Big Cash For Wal-Mart. Forbes. https://www.forbes.com/sites/greatspeculations/2010/07/09/putting-screws-to-suppliers-means-big-cash-for-wal-mart/

[67] Congressional Budget Office. (2020, December 29). Federal Policies in Response to Declining Entrepreneurship [Report for Congress]. https://www.cbo.gov/publication/56945
