
“Major EU AI Banking Ruling Will Reverberate Across Sectors,” Law360, January 12, 2024.

Extracted from Law360

On Dec. 7, 2023, the European Court of Justice, or CJEU, issued an important decision in OQ v. Land Hessen, commonly referred to as the Schufa case, on how the General Data Protection Regulation governs artificial intelligence-assisted decisions.[1]

The case arose in the financial services context, with the court holding that the GDPR's AI rules apply when banks use credit scores to make consumer credit decisions. But the decision's effects will likely not be confined to financial services.

European regulators have already indicated that it may apply to other industries and business processes in which AI increasingly plays a role, such as employment, healthcare or housing. This article summarizes the case and offers key takeaways for companies.

What happened?

Schutzgemeinschaft für Kreditsicherung Holding AG, or Schufa, which translates to the protective association for the securing of credit, is Germany's largest consumer credit rating agency. Like all credit agencies, Schufa collects information about consumers and processes it with algorithms to generate scores predicting whether consumers will meet financial commitments, such as whether a consumer is likely to repay a loan.

When consumers apply for loans or other financial products at German banks, it is common for the bank to obtain the consumer's Schufa score. A bank employee may manually review the Schufa score in deciding whether to make the loan to the consumer. Still, according to the CJEU, "in almost all cases," the Schufa score determines whether the consumer is granted or denied credit.

Article 22 of the GDPR states that individuals "have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects … or similarly significantly affects him or her." Such AI-facilitated decisions are permitted only (1) with the individual's explicit consent, (2) where "necessary" for the individual to enter a contract, or (3) where expressly permitted by a local enabling statute.

A German consumer whose loan application had been denied by a bank challenged Schufa's credit-scoring practices in the German courts as a violation of Article 22 of the GDPR.

The courts faced a complex situation: Schufa uses AI to generate the credit score but does not make the actual credit decision. The bank, by contrast, does not use AI to generate the score but does make the credit decision, and that decision is made by an employee who reviews the score.

So, did Schufa make an automated decision that triggered Article 22 of the GDPR, even though it didn't make the ultimate decision about whether to extend credit? Did the bank make an automated decision by using an AI-generated score, even though its employees were the final deciders? Or did neither make the kind of automated decision that triggers Article 22 of the GDPR? The German courts ultimately referred the case to the CJEU for clarification.

The CJEU held that Schufa's scoring triggered Article 22 of the GDPR.

The CJEU began by making clear that it would not read Article 22 so narrowly as to leave no one responsible for complying with its rules on automated decision making:

[I]n circumstances such as [these], in which three stakeholders are involved [i.e., Schufa, the bank and the consumer], there would be a risk of circumventing Article 22 … GDPR and, consequently, a lacuna in legal protection if [Schufa's] establishment of the probability value must only be considered as a preparatory act and only the act adopted by the [bank] can … be classified as a 'decision' within the meaning of Article 22(1) [of the GDPR].

Instead, the CJEU noted that "an insufficient probability value [from Schufa] leads, in almost all cases, to the refusal of that bank to grant the loan applied for." Thus, Schufa scores constitute automated decision making that triggers Article 22 of the GDPR when a bank draws strongly on them "to establish, implement or terminate a contractual relationship" with a consumer.

In short, the CJEU held it didn't matter that Schufa wasn't making credit decisions using its AI-derived credit scores. Banks were making decisions based on those scores, and even if humans were involved in the decision-making process, they were following the Schufa scores for almost all credit decisions.

This was enough for the CJEU to conclude that Schufa scores, as used by banks, were "decision[s] based solely on automated processing … which produces legal effects … or similarly significantly affects [an individual]" under Article 22 of the GDPR.

Interestingly, Germany has a national enabling statute, Section 31 of the Federal Data Protection Act, or FDPA, that was enacted specifically to allow Schufa to process personal data to create consumer credit scores. It was adopted in 2017 as part of Germany's GDPR implementation statute, in part to preserve the pre-GDPR rules that had enabled Schufa's credit-scoring business.

But in the lower court proceedings, the German courts expressed doubts about whether Section 31 of the FDPA complies with European Union law. The CJEU therefore returned the case to the German courts to determine whether Section 31 of the FDPA is sufficient to enable Schufa scoring, including banks' use of Schufa scores for credit decisions.

What are the main takeaways from the decision?

This case has the potential to significantly affect a number of industries. In the near term, businesses may want to consider the following points.

The first takeaway is that a financial institution that uses consumer credit scores for credit decisions may not be in compliance with the GDPR. Financial services providers with EU business should promptly review and reassess their consumer financial products for compliance.

Prior to this case, financial institutions may have taken the position that including human intervention in the process of granting or denying consumer applications — even if the human reviewer typically followed the Schufa score — prevented Article 22 of the GDPR's automated decision rules from applying. That position may no longer be tenable.

When making AI-powered decisions subject to Article 22 of the GDPR, financial services providers must (1) obtain consent to make an AI-powered decision, (2) permit consumers to contest the decision and express their point of view, and (3) enable consumers to obtain human intervention. All of this can require significant updates to business processes and resources.

Under Article 15 of the GDPR, consumers can also request information about how AI-powered decisions are reached, including "meaningful information about the logic involved." This may require disclosing information about how scores are calculated, and it raises questions about whether financial services companies can provide such information and whether vendors can be involved in fulfilling requests.

This case arose in the context of consumers applying for loans with banks, but its reasoning could potentially apply to other common consumer financial products and services, such as:

  • Insurance policies;
  • Leases, e.g., automobile leases;
  • Buy-now-pay-later arrangements or other microloans; and
  • Consumer installment contracts, e.g., for appliances.

The decision is not limited to financial services and will affect numerous industries.

The same day that the CJEU issued its decision, the Hamburg Data Protection Commissioner, or HDPC, issued a press release titled "Impacts of the SCHUFA Case on AI Applications."[2] It noted that employers are using AI to "pre-sort job applications," and that medical institutions are using AI to "analyze which patients are particularly suited for a study."

The HDPC expressly views these types of nonfinancial algorithms as subject to the Schufa decision. It further noted that, after Schufa, the scores such AI applications generate can no longer be treated as mere suggestions for human reviewers.

Instead, businesses will need to show that any human reviewer engages in meaningful independent review of AI output. In the HDPC's words, "the person making the final decision needs expertise and enough time to question the machine-made initial decision." Otherwise, it remains an automated decision subject to Article 22 of the GDPR's rules on consent, opt-out and human review.

What has been clarified?

European regulators will likely start viewing all AI applications as potentially subject to the Schufa decision, irrespective of industry or use case. Fields where scrutiny can be expected include:

  • Employment;
  • Housing;
  • Financial services, as outlined above;
  • Healthcare;
  • Communications, e.g., credit checks required for internet access; and
  • Any other service that can be deemed essential or important.

As a result, companies with business in the EU should identify where AI is used in their organizations. Any AI applications that trigger Article 22 of the GDPR will require the same compliance measures outlined above in the financial services context.

As an example, a company that uses AI-assisted recruiting tools in the EU would need to assess whether it must (1) obtain explicit consent from applicants, (2) enable applicants to contest AI scoring of their applications, and (3) enable applicants to obtain human review of their applications.

What remains to be clarified?

It is not yet clear how impactful a decision needs to be for Article 22 of the GDPR to apply. For example, many companies use algorithms to segment their customers into interest groups so they can personalize the advertising those customers receive.

Would inferring that a consumer likes coffee more than tea — so he or she can receive coffee coupons, not tea coupons — be important enough to trigger the GDPR's automated decision making rules?

Similarly, if a taxi company uses an automated routing service, does Article 22 of the GDPR apply if the routing service occasionally routes riders on slightly longer or slightly shorter routes? Decisions at this level seem to be addressable by other means, e.g., simple customer service, without having to break out the big guns of Article 22 of the GDPR.

Furthermore, it remains to be seen when an AI-powered decision should be considered necessary for an individual to enter a contract with a company. European regulators traditionally take a narrow view of necessity, and the HDPC noted it should be viewed as an exceptional case.

The only example the HDPC provided was "online platforms where an immediate and binding response is required," which still seems to leave open the question of when such a response is required.

Other European regulators suggested several years ago that AI could be necessary in the human resources context if a company receives massive numbers of applications for a job posting: "automated decision-making may be necessary in order to make a short list of possible candidates."[3] Companies should in any case expect to have to justify any claim that AI is necessary for an agreement.

Other open questions include those of controllership, responsibility and liability with respect to AI applications. Market positions will likely develop before regulators or courts provide binding guidance.

For the moment, both AI customers and AI providers may need to work together to ensure that, when compliance with Article 22 of the GDPR needs to be built, it can be deployed as needed in customer-facing interactions.

All of the above is yet another reason companies should start inventorying the AI applications they use and building out AI governance alongside privacy compliance, particularly now that a political agreement has been reached on the long-awaited EU AI Act.[4]

The U.S. already has rules similar to those at issue in the Schufa case.

A number of U.S. state privacy statutes already contain rules permitting consumers to opt out of "decisions that produce legal or similarly significant effects" on them. This language originated in the GDPR. Thus, the Schufa decision, and the regulatory practice it fosters, may be relevant to how U.S. regulators go about enforcing AI rules in the U.S.

For example, in Colorado, privacy regulations enacted in 2023 introduce concepts similar to where the CJEU landed in Schufa. Under the Colorado regulations, if a human reviews the output of "solely automated processing," like AI, but does not engage in meaningful consideration of how the AI reached its result, the processing continues to be treated as solely automated. This resembles the CJEU's holding that, if human reviewers almost always follow an AI-generated score, the decision remains "solely automated" under the GDPR.

Similarly, New York City's ordinance governing "automated employment decision tools" considers recruiting tools to be fully automated if they replace human decision making — or if they substantially assist it.

Conclusion

AI rules have been part of EU data protection law since 1995, when rules on automated decision making were first included in the EU Data Protection Directive. But enforcement has been limited, in part because, as in the Schufa case, algorithms have not yet completely removed human involvement from many business processes.

The CJEU's Schufa decision makes clear that pro forma human involvement will not exempt AI-driven processes from the GDPR's AI rules. Companies should begin preparing for increased scrutiny of their AI usage.

Interestingly, the U.S. government's own approach may be instructive: federal agencies are now required by executive order to (1) inventory their AI use cases, (2) conduct risk assessments, and (3) install AI governance. Companies would be well advised to consider similar steps, particularly as Europe prepares to enact the world's first comprehensive AI regulation.


[1] OQ v. Land Hessen, Case C-634/21 (7 December 2023), https://curia.europa.eu/juris/document/document.jsf?text=&docid=280426&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=202647.

[2] Hamburg Data Protection Commissioner, Auswirkungen des Schufa-Urteils auf KI-Anwendungen [Impacts of the SCHUFA Case on AI Applications] (7 December 2023), https://datenschutz-hamburg.de/news/auswirkungen-des-schufa-urteil-auf-ki-anwendungen.

[3] European Data Protection Board, Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679 (22 August 2018), https://ec.europa.eu/newsroom/article29/items/612053.

[4] European Parliament, Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI (9 December 2023), https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai.
