It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple’s newly launched credit card, calling it “sexist” for offering his wife a credit limit 20 times lower than his own.
The allegations spread like wildfire, with Heinemeier stressing that artificial intelligence – now widely used to make lending decisions – was to blame. “It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they’ve placed their complete faith in does. And what it does is discriminate. This is fucked up.”
While Apple and its underwriter, Goldman Sachs, were ultimately cleared by US regulators of violating fair lending rules last year, the episode rekindled a wider debate about the use of AI across public and private industries.
Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ultimately cut costs.
That legislation, known as the Artificial Intelligence Act, will have consequences beyond EU borders, and like the EU’s General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. “The impact of the act, once adopted, cannot be overstated,” said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute.
Depending on the EU’s final list of “high risk” uses, the act is expected to impose strict rules on how AI is used to filter job, university or welfare applications, or, in the case of lenders, to assess the creditworthiness of potential borrowers.
EU officials hope that with extra oversight and restrictions on the type of AI models that