8 Ethical Dilemmas of Using AI in Finance

We know that in the end AI will be a good thing for us humans. But that doesn't mean we don't need to be careful as we innovate along the way.


As you know, we're big fans of AI. Advanced algorithms have revolutionized how entire industries operate, including finance, which uses AI to:

  • Improve the customer service experience
  • Enhance fraud and money laundering detection
  • Automate mundane, data-rich tasks
  • Provide unique insights into financial decisions

If you’re not excited about these perks, then just know that they help companies like Cleo help you.

But we're not blind optimists about all this. There can be some very real unintended consequences when applying AI to finance, which we keep top of mind as we build Cleo.

Let’s look at some of them up close and personal.

1. Algorithmic bias and discrimination

All modern AIs “train” on historical data scraped from the internet or fed to them by creators. That’s how an AI learns. But it's also how it picks up inaccurate information and discriminatory biases. Without correction, algorithms can easily incorporate these biases into their day-to-day functions. 

What does that mean for finance? Just the potential for discriminatory business, investing, lending, and hiring decisions. 

For instance, say a bank’s historical data suggests a certain minority group statistically has a lower credit score. The bank’s AI may automatically assign higher interest rates to members of that group, regardless of each individual’s credit score. 

Or, if the data indicates one gender is more likely to default, that gender may be automatically less likely to receive a loan. 

Without the ability to fully understand nuance, AI can easily act in discriminatory or even illegal ways that marginalize underrepresented, under-banked groups. 
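To make the idea concrete, here's a deliberately oversimplified sketch (nothing like a real credit model, and not Cleo's code) of how a system that "learns" from biased historical data ends up scoring people by group membership instead of individual merit:

```python
# Toy illustration: a "model" that learns approval rates from biased
# historical lending data will reproduce that bias going forward.
historical_loans = [
    # (group, approved) -- group "B" was historically under-approved
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learned_approval_rate(group):
    """Return the historical approval rate the model 'learned' for a group."""
    outcomes = [approved for g, approved in historical_loans if g == group]
    return sum(outcomes) / len(outcomes)

# The "trained" model now scores applicants by group alone, regardless
# of each individual's actual creditworthiness.
print(learned_approval_rate("A"))  # 0.75
print(learned_approval_rate("B"))  # 0.25
```

Real credit models are far more complex, but the failure mode is the same: if the training data encodes discrimination, the model inherits it unless someone actively corrects for it.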

2. The “Black Box” problem

The “black box” problem describes how many AIs (and their creators) can’t explain how they reach conclusions. Since they can’t always name sources or track their “thoughts,” it’s harder to hold AI systems accountable for their results. 

Unfortunately, this also compounds AI’s discrimination concerns. 

Continuing the example above, say a bank historically engaged in redlining. (Redlining is the illegal practice of preventing certain groups from purchasing property in certain areas.) If an AI trains on that data, it might start rejecting minority-submitted mortgage applications for homes in those areas.

And if the algorithm is a “black box,” finding and unlearning those biases becomes much harder. 

At Cleo we try to be as open as we can about how we use AI.

3. AI’s inability to be accountable

Another ethical dilemma of using AI in finance is determining who’s responsible for AI’s misdeeds. 

You may have seen this dilemma play out with AI-assisted cars. The technology generally comes with disclaimers that drivers remain responsible for the car. Yet self-driving cars can steer themselves and run on autopilot to some extent. So if the car crashes, do we blame the computer, the driver, or the automaker?

For some, the intuitive answer might be that whoever implemented the AI should be responsible, but as AI learns faster and makes more human-like predictions, can those who use the AI be blamed? More broadly, what controls should be required before AI launches to prevent such occurrences?

Without clear human accountability, assigning blame and fixing the problem grows more difficult.  

4. Privacy and data concerns

As part of their operation, AI systems may collect tons of data about and from individuals. Think bank cameras that use facial recognition to deter theft or monitor employee activities. Finance apps that collect data on your financial goals. Websites that sell your browsing and usage data to third-party marketers. 

If all this data isn’t properly encrypted, stored, and/or anonymized, it could easily turn up in a data leak or be siphoned off by bad actors. 

Businesses that use AI don’t just have the onus to protect users’ data. They must also be transparent about their data collection and privacy practices. And new regulations must consider the need for explicit consent in collecting and using customer information. 

Important side note: Cleo’s AI is not a snitch. She’ll never sell your data 👀

5. Advanced tech leads to advanced problems 

It’s inevitable: as technology evolves, someone comes along with a new way to abuse it. And as AI systems grow, some worry that a rogue system (or, more likely, a criminal using AI) could untraceably gather or even alter data. 

Finance in particular deals with all kinds of personal, juicy tidbits. Credit scores. Loan and credit card numbers. Social Security numbers. Banking data. 

Pair AI with finance-based data systems, and the potential to commit fraud, theft, insider trading, and money laundering grows exponentially. 

Imagine if someone could use AI to hack into a bank system and near-instantly move your money to a scammer’s international account. Or if a scammer used AI to con millions of people out of their life savings with realistic, human-sounding phone calls. (As opposed to Cleo, for example, which always uses industry-leading security measures to protect your information and prevent data leaks.) 

For now, thankfully, AI is successfully used to do the opposite: protecting client information and enforcing anti-fraud laws. But all it might take is the right tweak to take AI from savior to siphon. 

6. Machine algorithms may manipulate markets

Financial risks from AI aren’t limited to biases, discrimination, and bad actors – its very nature makes it a natural disruptor. 

Consider the stock markets. Or more specifically, the nervous investors and automated trading algorithms that drive the stock markets. All it takes is a snippet of bad news or a rash of large trades to spook investors and move prices. 

Many firms already use machine learning algorithms to trade. 

Using AI algorithms to make high-frequency, high-dollar trades has high profit potential. However, the right (or wrong) trade could accidentally manipulate markets or cause “flash crashes” even when traders act in good faith. 

And if a person intentionally used AI to create and profit from “flash crashes” or long-term manipulation, the nature of automated trading could make it harder to detect.  

7. Job displacement and outsourcing

Let’s be blunt: properly trained AI is faster and more efficient than humans, especially in data-intensive jobs. (Read: most jobs in finance.) 

As AI advances, that puts millions in danger of being outsourced to cheaper, better machine labor. Worldwide, over 300 million full-time jobs could fall to generative AI over the coming years. In the U.S. alone, AI-driven financial technology could displace and outsource 200,000 banking jobs in the next decade. 

Naturally, that raises major ethical questions about outsourced labor, taxing automated “workers,” and even instituting a universal basic income so displaced humans don’t wind up homeless.  

We’ll note, though, that the future is hard to predict. And this gloomy scenario might not play out the way a lot of doomers see it.

8. Inconsistent (or nonexistent) regulation 

The major problem underlying these dilemmas is the yawning need for practical, timely regulation that ensures transparent, ethical AI development without unnecessarily impeding innovation. 

This will require AI creators, businesses, regulatory organizations, and governments to collaborate on a massive scale. Nations will also have to grapple with unfavorable outcomes, like rising unemployment and falling labor tax revenues as jobs are outsourced to machines.

But until then, we’ll have to make do with patchwork, inconsistent, and nonexistent regulations that have failed to keep pace with technology’s evolution. 

Building an ethical AI requires vigilance

The growing use of AI in finance presents fantastic opportunities and less-than-fantastic ethical concerns. 

Each problem on this list highlights the need for consistent regulation, collaboration, and a watchful eye on AI’s development. 

We all know Cleo gives you some super personal and helpful money advice. But we also know that all AI, even our dear Cleo, can make mistakes in the learning process. 

So that's why we keep an eye on things. Unless she’s roasting last month’s fast food expenses. Then you’re on your own.

Want to try the world’s first AI assistant dedicated to personal finance? Download Cleo for free 💙

