AI’s Mid-Life Crisis: A Heated Love Affair with Discrimination in Banking

Greetings, dearest readers. I hope you’re all enjoying this bewildering ballet of high-tech development and societal confusion, where artificial intelligence (AI) has become the lovechild of innovation and potential chaos. In today’s thrilling episode of ‘Humans playing god’, we delve into the wild, wacky world of AI and its tempestuous romance with the banking sector.

AI, it appears, has developed a nasty habit of picking up our human biases like a kleptomaniac at a flea market. Its new playground? Banking and financial services. So if you thought finance was complicated, frustrating, and a touch discriminatory already – congratulations, it’s about to become even more enthralling.

Deloitte – the financial wizards that they are – made the earth-shattering revelation that AI is only as good as the data it’s given. Profound, right? Because who would’ve guessed that a machine learning from flawed humans would end up being flawed itself? Your toaster might be more objective at this point.

According to Nabil Manji, head of crypto and Web3 at Worldpay by FIS, understanding AI is like deciphering your grandma’s spaghetti recipe – it’s all about the ingredients (data) and how well you can stir the pot (the language model). However, the secret sauce, uniform and modern data systems, is what banking’s archaic backends seem to lack. Like trying to run a marathon in clogs, it’s not going to end well.

And then there’s Rumman Chowdhury, Twitter’s former head of machine learning ethics. Yes, ethics in AI – a concept as laughably oxymoronic as jumbo shrimp. Chowdhury points out that lending is where AI’s bias likes to flaunt itself, winking cheekily at marginalized communities. Take Chicago’s history of “redlining” in the 1930s as an example. The AI might as well be sporting a fedora and smoking a cigar as it replicates these human prejudices.
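For the code-curious, here’s a toy sketch of how that happens – entirely synthetic data, made-up numbers, and emphatically not any actual bank’s model. The point: even when nobody hands the model a protected attribute, a correlated proxy like a ZIP-code feature will happily do the winking on its behalf.

```python
# A toy sketch (synthetic data, not a real lending model): train a "lender" on
# historically biased approvals WITHOUT the protected attribute, and watch the
# bias sneak back in through a correlated proxy such as ZIP code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical data: group membership (protected), a ZIP-derived feature that
# correlates with group, and an income feature that does not.
group = rng.integers(0, 2, n)                       # 0 = majority, 1 = marginalized
zip_score = group * 0.8 + rng.normal(0, 0.3, n)     # proxy: neighborhood tracks group
income = rng.normal(50, 10, n)                      # legitimate, group-neutral signal

# Historical ("redlined") approvals: income matters, but group 1 was penalized.
approved = (0.05 * income - 1.5 * group + rng.normal(0, 1, n)) > 2.5

# The model never sees `group` -- only income and the ZIP proxy.
X = np.column_stack([income, zip_score])
model = LogisticRegression(max_iter=1_000).fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2%}")
# The approval rates still diverge sharply by group, because the ZIP proxy
# quietly carries the same information the model was never "given".
```

No fedora required: the model simply learns that the proxy predicts the old, prejudiced outcomes and reproduces them.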

AI might have a shiny, new-tech veneer, but underneath it’s just an old-school bigot. Frost Li, an AI developer, further drives the point home by highlighting that it’s harder to identify biases when they’re all mashed together in the mysterious calculations of banking AI. It’s like trying to find a particular noodle in a bowl of ramen – good luck with that!
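If you want to see how the noodle hides, here’s a minimal, entirely invented sketch: two features that each look only weakly related to group membership can, once the model mashes them together, reconstruct it almost perfectly.

```python
# A minimal sketch of why "mashed together" features hide bias: all data here
# is made up purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)

# Two "neutral-looking" features: each is group plus heavy, opposing noise...
noise = rng.normal(0, 3, n)
feat_a = group + noise          # e.g. a spending-pattern score
feat_b = group - noise          # e.g. a merchant-category mix

# ...so individually they barely correlate with group membership,
print("corr(feat_a, group):", round(np.corrcoef(feat_a, group)[0, 1], 2))
print("corr(feat_b, group):", round(np.corrcoef(feat_b, group)[0, 1], 2))

# but together they pin it down exactly (feat_a + feat_b == 2 * group).
X = np.column_stack([feat_a, feat_b])
clf = LogisticRegression(max_iter=1_000).fit(X, group)
print("group recovered from the pair:", round(clf.score(X, group), 2))
```

Auditing one feature at a time tells you nothing; the bias lives in the combination.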

But wait, it gets better. Niklas Guske of Taktile suggests that generative AI – AI’s fun, creative cousin – isn’t really used for things like creating credit scores. Instead, it’s more into pre-processing unstructured data like a frantic librarian categorizing books by their ISBNs. So, the dream of AI doing your taxes might just remain a dream.
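For the skeptics, here’s a hedged sketch of that librarian act. The `generate()` function below is a stand-in for whichever LLM you would actually call, and the fields are purely illustrative – the point is that the generative model tidies the text, and a boring traditional model downstream still makes the credit decision.

```python
# A hedged sketch of the pre-processing role Guske describes: the generative
# model doesn't score credit, it just turns messy statement text into tidy
# fields. `generate()` is a placeholder for a real LLM call.
import json

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just returns a canned answer."""
    return '{"merchant": "Joes Pizza", "amount": 18.75, "category": "dining"}'

def structure_transaction(raw_line: str) -> dict:
    """Ask the model to map one unstructured statement line to fixed fields."""
    prompt = (
        "Extract merchant, amount, and category as JSON from this "
        f"bank-statement line: {raw_line!r}"
    )
    return json.loads(generate(prompt))

# The structured output is what a conventional (non-generative) scoring model
# would consume -- the LLM never decides who gets the loan.
row = structure_transaction("POS 04/12 JOES PIZZA #112 NYC $18.75")
print(row)   # {'merchant': 'Joes Pizza', 'amount': 18.75, 'category': 'dining'}
```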

Proving AI-based discrimination is about as easy as locating the Loch Ness Monster. The New York Department of Financial Services found this out when it investigated accusations against Apple and Goldman Sachs and found nada. Kim Smouter of the European Network Against Racism pointed out that the problem with AI is the opaque, labyrinth-like paths it takes to reach decisions, making any discrimination as elusive as a well-behaved cat.

Chowdhury’s solution? A global regulatory body, something like the UN, but for AI. Imagine that: diplomats arguing over algorithmic biases instead of trade deals and peace treaties. Despite the immediate need for regulation, it seems that bureaucracy, in its typical snail-paced fashion, will take its sweet time catching up.

So, dear readers, as we navigate this enthralling drama of artificial intelligence, remember to grab some popcorn. The stage is set, the actors are flawed, and the plot is riddled with bias and discrimination. The good news is, if you’ve ever been victimized by the peculiar prejudices of a toaster, there’s now a precedent for your personal vendetta against inanimate objects.

Smouter and others are calling for transparency, accountability, and audits of AI. And although I appreciate the sentiment, I can’t help but chuckle at the thought of a drone appearing before a committee to testify about its racially charged loan approvals. But hey, in a world where we’re talking about artificial intelligence having a racial bias, perhaps it’s not such a far-fetched idea after all.
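To be fair to the would-be auditors, the arithmetic itself isn’t the hard part. Here’s a back-of-the-envelope sketch, with invented numbers, of one thing such an audit might compute: the disparate impact ratio, checked against the familiar four-fifths rule of thumb.

```python
# A back-of-the-envelope audit sketch: the disparate impact ratio compares the
# approval rate of the marginalized group to that of the reference group.
# The 0.8 threshold is the common four-fifths rule of thumb; all counts below
# are invented for illustration.
def disparate_impact(approved_a: int, total_a: int,
                     approved_b: int, total_b: int) -> float:
    """Ratio of group A's approval rate to group B's (B = reference group)."""
    return (approved_a / total_a) / (approved_b / total_b)

ratio = disparate_impact(approved_a=180, total_a=1_000,   # marginalized group
                         approved_b=300, total_b=1_000)   # reference group
print(f"disparate impact ratio: {ratio:.2f}")             # 0.60
print("flag for review" if ratio < 0.8 else "within the four-fifths rule")
```

The hard part, as Smouter notes, is getting the opaque, labyrinth-like system to cough up those numbers in the first place.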

Until our tech overlords make some progress on the issue, remember to keep your wits about you, because apparently, even your bank’s AI might judge you for your penchant for cheap coffee and take-out pizza.

We’ll have to wait and see whether AI can learn some manners, or if it continues to throw its tantrums in the banking sector. I, for one, am preparing to welcome our robot overlords with open arms, as long as they don’t discriminate against my love for novelty socks.

Regulation, it seems, can’t come fast enough. The European Union’s AI Act may well be the light at the end of this bias-infested tunnel, promising a fundamental rights approach and concepts like redress. In the interim, however, we can only hope that our AI friends learn to play nice, at least in the banking sector.

I guess we’ll have to hold tight and enjoy the ride, after all, the future waits for no one. Until next time, folks – when hopefully the AI world will have fewer biases and more just… intelligence. I’m Git Adam, signing off and reminding you to laugh in the face of AI, because after all, it’s just another human creation that can’t quite figure out how to behave.

—Git Adam,
Chief Sarcasm Officer, Financial Genius, and your guide through the hilarious dystopia we call life.


