Yesterday, Europe agreed on the Artificial Intelligence Act, shaping the landscape of AI applications. Amid the discussions, Human Rights in Finance (EU), as an advocate for the protection of human rights in finance, seeks to highlight the crucial need for appropriate oversight of AI use within the financial sector. Parliaments in EU Member States should assign oversight of AI in the financial sector to their national Data Protection Authorities in order to protect human rights.
Right now, the AI Act designates, as a starting point, financial supervisory authorities rather than DPAs to oversee AI in financial services. Fortunately, Member States can still deviate from this default, and we believe doing so is crucial for the protection of human rights. We therefore urgently call on national parliaments across Europe to proactively designate their DPAs as the primary authority for regulating AI in the financial sector, including AI use by the financial supervisors themselves.
Why are we concerned?
At the core of our concern is the recognition that social scoring, now classified as high-risk under the AI Act, is already carried out by financial institutions when executing the anti-money laundering rules mandated by the AMLD directive. The risk factors and evaluation criteria outlined in Annexes 1-3 of that directive lead to the profiling of customers into varying risk levels. As reliance on Artificial Intelligence for these tasks grows, a careful and ethical approach is crucial to protect individual rights.
Financial supervisors and financial institutions are already invested in this profiling and AI usage, relying on the catchphrase ‘but the GDPR applies’ to sidestep human rights considerations. Yet practice and legal advice already indicate that AI use for anti-money laundering monitoring is not inherently beneficial or legitimate. In fact, HRIF.EU has observed GDPR violations by both financial institutions and the financial supervisors themselves, highlighting a significant blind spot in human rights considerations.
Find the human rights blind spot yourself
Don’t believe us? Try finding the words ‘privacy’ or ‘human rights’ in this recent European Banking Authority consultation, which details how Europe should implement a mass personal data distribution obligation by the end of 2024 (under Regulation 2023/1113). Or read up on the court case in which an overzealous Dutch supervisor had to retract requirements that forced mass monitoring without a proper legal basis. Once you start looking for this blind spot, you see it everywhere in financial legislation (apart from lip service to the GDPR). And you will conclude with us: in today’s supervisory state of affairs for finance, relying on financial supervisors for AI oversight is untenable.
This fundamental lack of respect for, and knowledge of, the protection of human rights is also why our foundation initiated an annulment procedure against this Regulation (case T-555/23 AJ) as well as an infringement procedure concerning the erroneous Dutch implementation of the AMLD-5 directive.
Call to action to Member State parliaments: designate the national data protection authority for AI oversight in finance to truly protect the human rights at stake
It is crucial to recognize that a regulatory framework must not place conflicting objectives, such as combating money laundering and overseeing AI with respect to human rights, in the same hands. Let us build a regulatory framework that separates and prioritizes these objectives, upholding the principles of fairness, accountability, and respect for individual rights in the deployment of AI technologies in the financial sector.
Thus, we reiterate our call to the parliaments of EU Member States to assign oversight of AI in the financial sector to their national Data Protection Authorities, which are already experienced in supervising profiling practices in big tech companies and other sectors.