Research paper: ‘I am you. Are you ‘you’?’ – combating financial crime in the age of generative AI
Ashima Chopra | Insights, News, Women In Open Banking
30 May 2024
The philosopher Thomas Hobbes’s comments on the “Ship of Theseus”, a thought experiment and paradox, were:
“For if that Ship of Theseus (concerning the Difference whereof, made by continual reparation, in taking out the old Planks, and putting in new, the sophisters of Athens were wont to dispute) were, after all the Planks were changed, the same Numerical Ship it was at the beginning; and if some Man had kept the Old Planks as they were taken out, and by putting them afterward together in the same order, had again made a Ship of them, this, without doubt, had also been the same Numerical Ship with that which was at the beginning; and so there would have been two Ships Numerically the same, which is absurd… But we must consider by what name anything is called when we inquire concerning the Identity of it… so that a Ship, which signifies Matter so figured, will be the same, as long as the Matter remains the same; but if no part of the Matter is the same, then it is Numerically another Ship; and if part of the Matter remains, and part is changed, then the Ship will be partly the same, and partly not the same.”
— Hobbes, “Of Identity and Difference” [1]
Generative AI threat
An example: Deepfakes
“The last time Generative AI loomed this large, the breakthroughs were in computer vision. Selfies transformed into Renaissance-style portraits and prematurely aged faces filled social media feeds. Five years later, it’s the leap forward in natural language processing, and the ability of large language models to riff on just about any theme, that has seized the popular imagination.”[2] Key takeaways from an earlier AI Summit noted that multimodal models will supercharge generative AI by aligning “… different modalities such as image, video, audio along with language”, which can be put to many applications.[3]
For the purpose of illustrating the threat of generative AI multimodal models, we will use deepfakes as one example throughout this article.
The manipulation or generation of content by multimodal AI models, which members of the public can now accomplish with ease, is, at the very least, a disturbing reality and, at worst, a safety threat to entire sectors, extending well beyond financial crime. The threat of deepfakes, which are multimodal (voice and image), exists in public and personal spaces, in homes and offices, in daily unplanned interactions and in planned transactions. Deepfakes can be used to coerce, manipulate, misinform, damage reputations, commit fraud and enable other crimes, among a plethora of other abuses.
Safeguards around AI
The opportunities and the threats that AI offers are widely discussed across various media. Those creating advances in AI have been vocal about the risks these advances pose by their very availability, and about the damaging outcomes possible when they fall into the hands of nefarious actors. Warnings about AI’s capacity for misuse are plentiful.
Within the efforts to safeguard advances in AI are a number of proposed frameworks, international and regional. In addition, there are efforts at organising summits to gather leaders in the field and those in government, to move towards formulating regulations that ensure the technology does not resort to rogue acts, whether inadvertently or in the hands of the wrong people. Some of these safeguards are to be voluntary, while others are to be imposed. Given that AI is set to touch all our lives in some manner, regulating it is a behemoth undertaking.
The undertaking of regulation, and of other efforts to mitigate the negative effects of AI, is twofold. On the one hand, it is imperative to understand the advances in AI, stay ahead of them and then counter them, likely while combating them with those same advances. On the other hand, it involves the art, insight, deftness, agility and speed of determining where and how this AI can strike, how it can be applied and what safeguards to install, all while going back to the regulations or frameworks to plug holes, close loopholes, and redefine and refine the understanding of threats, at times redefining the very meaning of a threat itself.
Regulations and shortfalls – static snapshots versus dynamic leaps
There are plenty of regulations about to impact the digital world that bear on the potential creation and use of AI. One is the Data Protection and Digital Information (DPDI) Bill, which had almost reached Royal Assent but, with a general election having been called, will now need to be reintroduced in the next Parliament; it is heavily quoted in documents prepared by the Joint Regulatory Oversight Committee, which is overseeing the future of Open Banking in the UK.
The Financial Services and Markets Act 2023 targets Authorised Push Payment (APP) fraud, making reimbursement of up to £415,000 mandatory for those who have lost money to APP fraud. Another Act, the Economic Crime and Corporate Transparency Act 2023, introduces a new “failure to prevent fraud” offence.[4]
The assumption in the regulations above is that there is a “living” person who is identifiable and who may commit the fraud or be its victim. The DPDI Bill states that the living person should be identifiable either directly or indirectly.[5] With deepfakes, however, the person is not living but synthetic, and may or may not exist. With deepfakes that manipulate an existing likeness, the identity may be identifiable, but the person is not living: it is a manipulated likeness. The very definition of identity, what it entails and within which boundaries, thus comes under scrutiny: the basis of “identity” is now much wider, can be stretched considerably, and goes beyond what was factored in before.
Redeveloping an understanding of identity, in which there is, for example, another category of identity, a synthetic one, freely able to interact with unsuspecting individuals and to hijack one’s own identity, is a tedious and truly mind-bending exercise, one that human comprehension must grasp at its own pace and that sits uneasily with the natural understanding of “identity”. While this new “snapshot” is evaluated and absorbed into our psyche over time, it stays static until we understand it and act upon it as individuals. AI, and generative AI in particular, is dynamic: learning rapidly, thinking rapidly and enhancing itself rapidly. Regulations and human efforts to stymie the negative impacts of AI will therefore always be at a disadvantage, moving much more slowly than the threat that prompted their creation.
The international, collaborative, consultative and human-interfaced effort involved in churning out regulations and frameworks will lag far behind the speed of technological advances. These efforts risk being lost in an iterative process of rectifying obsolete assumptions, based on obsolete features, aimed at minimising AI risks even as new capabilities develop. Amendments would then rectify the body of these regulations and frameworks, updating them only for them to be upstaged by another novel advance in AI that no one had factored into the work.
Outsmarting the smart – dynamic thinking and dynamic frameworks
Dynamic Systems Theory “is a theoretical framework that is used to understand and predict self-organising phenomena in complex systems that are constantly changing, reorganizing, and progressing over time … Dynamic [Systems] aim to study the complex processes driving … changes. They are complex because whether occurring in a single system/organism, or a group of individuals, change occurs as the product of multileveled interactions between the various elements constituting these systems”.[6] “The value of dynamic systems is that it provides theoretical principles for conceptualising, operationalising, and formalising … complex interrelations of time, substance, and process. It is a meta-theory in the sense that it may be (and has been) applied to different … grains of analysis.”[7] This is but one example of how inter-relations between many dimensions can be analysed.
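To make the contrast between a static snapshot and a dynamic system concrete, consider the toy Python sketch below. It is entirely illustrative: the variables “threat” and “safeguard”, the growth and review rates, and the simulate function are assumptions introduced here for the purpose of this sketch, not part of Dynamic Systems Theory or of any cited work. It simply iterates two coupled quantities, a capability that compounds at every step and a rule set that closes only a fraction of the gap at each review cycle.

# A minimal, hypothetical sketch of two interacting quantities evolving over time.
# "threat" stands for generative AI capability available to bad actors;
# "safeguard" stands for the coverage of periodically reviewed rules.

def simulate(steps=10, threat=1.0, safeguard=1.0,
             threat_growth=0.30, review_rate=0.10):
    """Iterate two coupled variables: the threat compounds each step,
    while the safeguard closes only a fraction of the gap per review."""
    history = []
    for t in range(steps):
        gap = threat - safeguard                # unmitigated exposure at time t
        history.append((t, round(threat, 2), round(safeguard, 2), round(gap, 2)))
        threat *= (1 + threat_growth)           # capability advances every step
        safeguard += review_rate * gap          # rules catch up only partially
    return history

for step, threat, safeguard, gap in simulate():
    print(f"t={step}: threat={threat}, safeguard={safeguard}, gap={gap}")

The numbers are arbitrary; the structural point is that any framework updated more slowly than the system it describes accumulates an ever-widening gap, which is the lag described above.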
The route to combating the dynamic threat of generative AI needs to be a dynamic process, with dynamic tools, dynamic findings and a dynamic canvas. Only when this is achieved will it become possible to capture, understand and negotiate the dynamic landscape of generative AI. Dynamic Systems Theory is just one tool in the toolbox for achieving such a framework. The question remains how this can be done so that the points of infliction by AI, across the billions of recurring events that impact our lives, are captured, restrained, encapsulated and tamed. And, importantly, analysed.
It is this dynamic framework, involving dynamic actors, dynamic events and dynamic processes, that will be the most difficult yet to forge into a “system” that realistically and accurately captures the progress and impact of AI. Only then will we be able to grasp the real impacts of generative AI, its masterplan and, potentially, its next landing. Only then will we know how to reshape, from the chaos, an ordered reality different from all we have known, one in which we must concern ourselves with inquiring into and considering the reshaped meaning not only of identity and the individual, but of all manner of thinking – for, in generative AI, the Ship of Theseus is all there is.
Ashima Chopra, Ph.D., is chief executive officer and co-founder of Datambit Limited
[1] Thomas Hobbes, 1656, “Of Identity and Difference”
[2] “What is Generative AI”, IBM Report, 20 April 2023
[3] Deborah Yao, “5 Takeaways from the AI Summit NY 2023”, AI Business, December 2023
[4] “Economic Crime and Corporate Transparency Act 2023”, Law Society, 2023
[5] Data Protection and Digital Information (DPDI) Bill, 2023
[6] Connell, P., DiMercurio, A., Corbetta, D., 2017, “Dynamic Systems Theory”, SpringerLink
[7] Thelen, E., Smith, L.B., 2005, “Dynamic Systems Theories” in “Dynamic Systems Approach to the Development of Cognition and Action”, MIT Press
Further reading: Open Banking and AI – a new era of personalised banking?