
Artificial Intelligence and the importance of good conduct

11/09/2020


Sonia Dupuy
Head of Conduct
BNP Paribas Securities Services
Benoit Strek
Programme Director, Artificial Intelligence
BNP Paribas Securities Services

Financial services organisations around the world are racing to explore and exploit the possibilities that artificial intelligence can bring. AI accelerates and automates time-consuming and repetitive processes, and new use cases are emerging all the time. Yet with the opportunities comes a new challenge: ensuring AI treats clients fairly and that proper “conduct” is followed at all times.

Why conduct matters

Artificial intelligence applications introduce a host of ethical, legal and social issues. Do AI tools discriminate against certain clients? Are clients’ needs and interests being met? Can the tools properly take account of a client’s knowledge and experience of the product or service? Do they build or undermine trust? Do actions and decisions comply with legal and regulatory rules?

Incorporating rigorous conduct principles into the technology is essential in addressing these concerns. “Within BNP Paribas, the objective of our code of conduct is to properly manage our risk and to avoid taking inappropriate decisions,” says Sonia Dupuy, Head of Conduct with BNP Paribas Securities Services.

“That includes treating our clients fairly, and communicating with them in the most honest and transparent way.”

Managing regulatory risk is a major consideration. In Europe, for instance, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence in April 2019. The Guidelines put forward a set of seven requirements that AI systems should meet to ensure, inter alia:

  • Full respect for privacy and data protection.
  • Transparency of the data, system and AI business models.
  • Avoidance of unfair bias.
  • Implementation of proper oversight mechanisms.

More than regulatory compliance though, the guiding principle behind conduct in AI should be to maintain client trust. Both clients and organisations can reap huge benefits from AI applications – but only if they are seen to work in clients’ interests at each stage of the value chain. As Dupuy points out:

“Any hint of cognitive bias or unfair treatment gives a bad impression. And an organisation’s reputation has no price.”

The boom in chatbots

One of the biggest fields of AI potential and application is the use of chatbots – an area where BNP Paribas, like many other banks, has a dynamic development programme.

“Virtual agents can add significant value,” says Benoit Strek, Programme Director – Artificial Intelligence at BNP Paribas Securities Services. “They offer automation, robustness, help minimise human error and, in our case, provide answers to clients 24/7.” Such operational flexibility and resilience have proven particularly valuable during the COVID-19 crisis, he adds.

“We’ve seen a strong increase in volumes, and our virtual agent has helped ensure we can reply to all queries during a difficult period when operational teams have been stretched.”

But the benefits virtual agents can bring mean nothing without adoption. And key to adoption is acceptance.

Acceptance comes in part from being transparent with the client, says Dupuy.

“With chatbots, for example, it is important clients know whether they are dealing with a virtual or a human agent.”

Managing client expectations – so they are clear about the system’s capabilities and limitations, and how the tool works – is also crucial. As is taking account of clients’ needs, and their knowledge and experience of the product in question. “Because the question clients may have is, ‘do I possess enough knowledge on the issue I am facing to explain it properly to the virtual agent, to ensure I’ll get the right answer at the end?’” says Dupuy.

“For certain activities, the target users are not always financial experts, and may not be familiar with the vocabulary around things like dividends and stock options,” Strek explains. “We developed the virtual agent so it can adapt to various types of vocabulary. Also, it doesn’t discriminate if the user has one share or a million. It’s important it can be used by everyone, that no client is given priority and they have access to the same functionality.”
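In practice, that kind of vocabulary adaptation can start with something as simple as mapping everyday phrasing onto the canonical terms an intent model was trained on. The Python sketch below is purely illustrative: the synonym map and example query are invented for this article, not BNP Paribas’ actual virtual-agent logic.

# Illustrative sketch only: rewriting lay phrasing into the canonical
# financial terms an intent model was trained on. The synonym map and
# example query are invented, not the bank's actual vocabulary.
SYNONYMS = {
    "payout": "dividend",
    "share options": "stock options",
    "my shares": "my shareholding",
}

def normalise(utterance: str) -> str:
    """Rewrite known lay terms so one intent model serves expert and lay users alike."""
    text = utterance.lower()
    for lay_term, canonical in SYNONYMS.items():
        text = text.replace(lay_term, canonical)
    return text

print(normalise("When is the next payout on my shares?"))
# -> "when is the next dividend on my shareholding?"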

Protecting client data

Another vital element in building trust, and complying with regulations, is data protection.

“Whatever the process or tool we are using within the bank, our conduct policies focus on preserving the client’s data confidentiality,”

says Dupuy.

With BNP Paribas’ virtual agent, that means anonymising and encrypting the conversations.

“Clients must be comfortable that their data and conversations with the virtual agent cannot be misused,”

notes Strek.
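As a rough illustration of those two safeguards, the Python sketch below redacts direct identifiers from a transcript and then encrypts it at rest using the cryptography library. The redaction patterns are deliberately simple, and the whole example is an assumption for illustration, far short of a production anonymisation pipeline.

import re
from cryptography.fernet import Fernet

# Toy redaction patterns; a real pipeline would use proper entity recognition.
NAME_PATTERN = re.compile(r"\b(Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b")
ACCOUNT_PATTERN = re.compile(r"\b\d{8,}\b")  # long digit runs, e.g. account numbers

def anonymise(transcript: str) -> str:
    """Strip direct identifiers before the transcript is stored or reused."""
    transcript = NAME_PATTERN.sub("[NAME]", transcript)
    return ACCOUNT_PATTERN.sub("[ACCOUNT]", transcript)

key = Fernet.generate_key()  # in practice, held in a key-management service
fernet = Fernet(key)

raw = "Mr Smith asked about account 12345678 and his dividend payment."
token = fernet.encrypt(anonymise(raw).encode())  # what gets stored at rest
print(fernet.decrypt(token).decode())
# -> "[NAME] asked about account [ACCOUNT] and his dividend payment."

In a real deployment the key would live in a key-management service rather than in the application, and redaction would rely on far richer entity recognition than two regular expressions.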

How to integrate conduct into AI

So how can organisations embed robust conduct safeguards into their AI programmes?

Data, and access to the relevant data science expertise, are crucial, says Strek.

“We use historical data and conversations between clients and the call centre service to train our virtual agent to recognise the values, questions and intent of the clients. It involves spending a lot of time ensuring the dataset is clean by anonymising the data and removing all contextual aspects of the conversation like name, gender or situation, as that can create a bias in the machine learning. We also make sure the weight of each population is equally represented and there is no statistical bias in the dataset.”
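As a hedged sketch of the two steps Strek describes, the Python below drops contextual fields from historical conversations and then downsamples so every client segment carries equal weight in the training set. The field names and segments are invented for the example, not the bank’s actual pipeline.

import random

def clean(record: dict) -> dict:
    """Keep only what the model needs; drop name, gender and other context."""
    return {"text": record["text"], "intent": record["intent"],
            "segment": record["segment"]}  # segment retained only for rebalancing

def rebalance(records: list[dict]) -> list[dict]:
    """Downsample every client segment to the size of the smallest one."""
    by_segment: dict[str, list[dict]] = {}
    for r in records:
        by_segment.setdefault(r["segment"], []).append(r)
    floor = min(len(group) for group in by_segment.values())
    balanced: list[dict] = []
    for group in by_segment.values():
        balanced.extend(random.sample(group, floor))
    return balanced

# Hypothetical historical call-centre records used for training.
history = [
    {"text": "When is my dividend paid?", "intent": "dividend_date",
     "segment": "retail", "name": "A. Client", "gender": "F"},
    {"text": "How do I exercise my stock options?", "intent": "exercise_options",
     "segment": "employee", "name": "B. Client", "gender": "M"},
]
training_set = rebalance([clean(r) for r in history])

Downsampling to the smallest segment is just one way to equalise weights; reweighting the loss function or oversampling minority segments are common alternatives.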

Collaborating with clients is another important element. “With our virtual agent programme, we asked some of our clients what they wanted to see and how they wanted it to work,” says Strek. “They were able to test it early in the process, which built trust when the product was fully available.”

Interdepartmental collaboration is also vital. Moving from proof of concept to an industrialised product requires validation from every function and department, not least compliance and legal, each of which brings a different awareness of the risks involved.

“And then training staff so they are well-informed about conduct topics, and can take those into consideration when designing and working on the virtual agent,”

adds Dupuy.

Staying relevant

Enabling fair conduct in AI systems is not a fire-and-forget activity either. Ongoing oversight and auditability are needed to detect if an AI tool isn’t operating in line with the organisation’s conduct principles. Plus the rules and social mores around good conduct are evolving.

“The European Commission is trying to strike the right balance between regulation and allowing room for innovation, but clearly regulation will play a bigger role in the future,”

says Strek.

Public opinion about what types of conduct are acceptable or not is shifting too, adds Dupuy. AI tools need the flexibility to learn and adapt alongside those societal changes.
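One way to picture such ongoing oversight, offered here purely as an assumption rather than a documented control, is a periodic audit that compares how well the virtual agent resolves queries for each client segment and flags laggards for human review:

def audit(resolution_rates: dict[str, float], tolerance: float = 0.05) -> list[str]:
    """Flag segments whose resolution rate trails the best segment by more than `tolerance`."""
    best = max(resolution_rates.values())
    return [seg for seg, rate in resolution_rates.items() if best - rate > tolerance]

# Hypothetical monthly resolution rates per client segment.
flags = audit({"retail": 0.91, "employee": 0.84, "institutional": 0.93})
print(flags)  # -> ['employee'], i.e. this segment is referred for human review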

Artificial intelligence is a fast-developing space that offers financial services organisations a plethora of opportunities. But to capture those opportunities, its capabilities must be applied with care. Integrating the highest levels of conduct into the technology will be central to AI’s acceptance and long-term success.
