Financial Services | Rulex
https://www.rulex.ai
The platform for smart data management

Exploring the AI ACT: Transparency as the Key for Future Technology
https://www.rulex.ai/exploring-the-ai-act-transparency-as-the-key-for-future-technology/
Thu, 29 Feb 2024

Nowadays, artificial intelligence (AI) systems are seamlessly integrated into our daily lives, making tailored suggestions and influencing our decisions. While artificial intelligence offers incredible advantages, it is necessary to address the potential issues of bias, discrimination, and privacy associated with its use.

In response to its pervasive influence in contemporary society, various initiatives have been pursued, notably those headed by the European Union (EU). The EU’s landmark Artificial Intelligence Act (AIA) represents a robust regulatory architecture designed to address the challenges posed by AI.

The Artificial Intelligence Act is the first law to harmonize, regulate, and restrict the use of artificial intelligence in Europe. It is expected to enter into force in 2024.

A risk-based approach

A crucial aspect of the EU AI Act is its risk-based methodology. The greater the risk associated with the use of a specific artificial intelligence system, the greater the responsibilities for those who use or provide it. This can extend to a prohibition on the use of systems deemed to have an unacceptable level of risk, thereby emphasizing individual rights and model transparency.

Broadly, the classification includes the following risk levels:

  • Minimum/low risk: systems with minimal risk to people’s safety and fundamental rights should be subject to transparency obligations, ensuring a basic level of clarity and comprehension.
  • High-risk AI: systems whose application could have substantial implications, potentially leading to harm. Consequently, they are subject to stringent regulations aimed at mitigating bias and discrimination. Identifying risks and implementing corresponding mitigation strategies is imperative across the entire life cycle of these AI systems. Thus, ensuring transparency becomes essential for interpreting results and facilitating proper oversight of the decision-making process. In fact, the Artificial Intelligence Act stipulates that high-risk AI systems are subject to a number of requirements and obligations, such as the adoption of necessary technical documentation, transparency of information, adequate levels of cybersecurity, etc.
  • Unacceptable risk: any AI system which is considered a direct threat to fundamental human rights, and is consequently prohibited.

Furthermore, guidelines and standards have been implemented both for basic AI systems, which must clearly disclose when individuals are interacting with them, and for general purpose AI systems (GPAI), whose capability to operate across market sectors carries the risk of a systemic negative effect on society as a whole.

The importance of transparency

Meeting the stringent transparency requirements of the new AI Act could be extremely challenging – if not impossible – with traditional AI technologies.

For example, one of the crucial applications of AI is in the realm of credit rating systems, where it empowers banks to examine vast sets of customer data for accurate evaluations of creditworthiness. Considering that these systems provide a perspective on an individual’s financial standing by examining not only financial indicators, but also spending habits and behavioral patterns from diverse sources, ensuring fairness in the process is of paramount importance.

Explainable AI (XAI) is a facet of artificial intelligence that can produce clear results and provide the rationale for its predictions and subsequent decisions, consequently enhancing accountability and acting as a safeguard against the influence of bias and discrimination.

Rulex’s XAI vision

Rulex’s journey began in the 1990s, fueled by a singular mission: to make AI explainable while maintaining its accuracy and speed. For the past two decades, its ground-breaking eXplainable AI has remained focused on addressing these very challenges within the data management process.

Central to this achievement is the Logic Learning Machine (LLM), an algorithm developed by Rulex’s founder. The innovation lies in its ability to articulate explicit and straightforward rules, presented in a logical if-then structure. This approach mirrors the cognitive processes of the human brain, ensuring a transparent and traceable workflow.

This commitment not only ensures compliance with GDPR and other privacy regulations but also lays a solid groundwork for the impending implementation of the AI Act.

Benefits of XAI

  • Trust: Establishing transparency in decision-making is essential to cultivate a trusting relationship with all stakeholders. Business experts can effectively grasp and articulate the decision-making process, utilizing eXplainable AI systems to reassure the involved parties.
  • Compliance: XAI can assist companies in identifying and utilizing only the strictly necessary and crucial information from extensive datasets, thereby reducing certain risks associated with their management. In this way, actions are taken in compliance with regulations and in respect of individuals’ privacy.
  • Responsibility: The transparency and traceability of XAI ensure decisions are made without relying on discriminatory influence, thereby imposing a greater sense of accountability and responsibility on users.

A transparent credit rating solution

Over the years, Rulex has applied eXplainable AI principles within the financial services sector, developing numerous solutions ranging from fraud detection to NPL management and churn prevention.

Among these, Rulex’s credit rating solution serves as a prime example of the improved comprehension that our native XAI can offer regarding the underlying process logic.

This solution integrates a decision-making workflow that comprehensively covers every stage of the product lifecycle, from automated score calculation to rating assignment and continuous performance monitoring.

Rulex’s XAI algorithm generates intuitive if-then rules identifying the distinctive features of each rating class, enabling the classification of new cases.
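As an illustration, classification by explicit if-then rules can be sketched in a few lines of Python. The rules, thresholds, and field names below are hypothetical examples of the if-then structure, not actual output from Rulex’s LLM algorithm:

```python
# Illustrative sketch only: the rules, thresholds, and field names below are
# hypothetical examples of the if-then structure, not actual Rulex output.

def assign_rating(applicant):
    """Classify an applicant into a rating class using explicit if-then rules."""
    if applicant["debt_to_income"] <= 0.20 and applicant["late_payments"] == 0:
        return "A"  # low risk
    if applicant["debt_to_income"] <= 0.40 and applicant["late_payments"] <= 1:
        return "B"  # moderate risk
    return "C"      # higher risk: route to manual review

print(assign_rating({"debt_to_income": 0.15, "late_payments": 0}))  # prints "A"
```

Because each rule is human-readable, an expert can state exactly why a given case was assigned to its class.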

These clear predictions allow experts to confidently make well-informed decisions and effectively communicate them to clients, all while mitigating bias and promoting fairness.

*At the time of writing this article, the final text of the Artificial Intelligence Act is still awaiting approval.

Discover more about Rulex for financial services

Rulex Platform
Predicting customer churn using machine learning
https://www.rulex.ai/predicting-customer-churn-using-machine-learning/
Tue, 04 Oct 2022

Predicting customer churn using machine learning has proved effective in identifying potential churners and developing successful retention strategies for the financial sector.

The process of digital transformation has opened up great new possibilities for banks and credit institutions. Online and mobile banking make it possible to reach every customer with an electronic device, no matter where they are in the world.

But as the market broadens, competition increases: any digital bank may potentially be a competitor. And since they are no longer obliged to stick to local banks, customers have a huge range of options when deciding where to open a bank account.

Furthermore, if a bank’s portfolio is uncompetitive in terms of offer and price, or its services are run poorly, it is likely to lose customers. This phenomenon is called churn.

Customer churn: facts and stats

In a nutshell, customer churn refers to customers who stop using a company’s products or services within a certain timeframe. If it happens over a short period, from several days to a month, we call it “hard churn”. It is known as “soft churn” if it happens between a couple of months and a year.
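The distinction can be expressed as a simple rule of thumb. The 30-day and one-year cut-offs below follow the article’s rough timeframes; in practice, each institution tunes its own thresholds:

```python
# A minimal sketch of the hard/soft churn distinction described above. The
# 30-day and one-year cut-offs follow the article's rough timeframes; in
# practice each institution tunes its own thresholds.

def classify_churn(days_to_disengage):
    """Label a churn event by how quickly the customer disengaged."""
    if days_to_disengage <= 30:        # several days up to a month
        return "hard churn"
    if days_to_disengage <= 365:       # a couple of months up to a year
        return "soft churn"
    return "gradual attrition"         # beyond a year

print(classify_churn(10))    # prints "hard churn"
print(classify_churn(120))   # prints "soft churn"
```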

Customer churn has become a real struggle for international banks. According to recent stats, the annual attrition rate for banks in North America is about 11%. This means that banks spend considerable amounts of money and energy signing up new customers just to balance their books. Moreover, attracting and winning over new customers is extremely expensive – roughly 6 to 7 times more than retaining existing ones.

In the digital era, banks must therefore focus their energies on developing an effective retention strategy to keep as many customers as possible. In this article, we’ll discuss what causes customers to leave and how to calculate churn prediction using machine learning.

Why customer churn happens

Customers may leave their bank for various reasons. Unfortunately, according to 1st Financial Training Services, 96% of unhappy customers don’t complain, and 91% simply leave without explaining why they are unhappy. Based on our clients’ experience, we have drawn up a list of the main reasons why people decide to switch banks.

• Poor service

Good quality service is the basis for any solid business. But some businesses don’t understand how important it is until they start losing customers. A recent study reported that almost 9 in 10 customers abandon a firm because they experience poor customer service.

• Poor product-market fit

In the age of online banking, customers are constantly looking for better options. This means that if a bank can’t offer a good range of innovative, affordable products, not only will it be unlikely to find new customers, it will certainly lose existing ones.

• Slightly off-key product offers

Even when banks have a competitive product portfolio, they may not know their customers well enough and end up offering slightly off-key products, thereby lowering customer engagement.

• Difficult user experiences

Online banking websites that aren’t user-friendly and are difficult to navigate are a real pain for customers. Not to mention mobile banking apps that crash frequently, interrupting money transfers and online payments.

Predicting customer churn using machine learning

In today’s highly competitive market, successful banks are those which fully embrace digital transformation. Not only do they provide digital services and products, but they also use data-driven technology to enhance their decision-making process.

By exploiting the full potential of customer data, banks can better understand client behaviors and learn churn patterns from past records. This allows them to predict their customers’ future movements and respond accordingly.

Let’s see how.

• Managing customer data with advanced analysis tools

Banks need to smarten up their analytics if they want quick, accurate insights on their customers. But how? Spreadsheets alone aren’t enough to prevent banks from losing valuable pieces of information. They are ineffective when big volumes of data, from different sources and in multiple formats, are involved.

Equipped with an effective data analysis tool like Rulex Platform, banks can easily merge all their customer data from a wide range of databases into a single place in the same format. This facilitates data analysis, allowing banks to learn client behaviors and implement an effective retention strategy.

• Tracking churn patterns from historical records

Many different drivers cause customers to leave their banks, and they vary across life stages and demographics. Drawing on a bank’s historical data, machine learning models can generate insight on churn patterns. This helps business experts make predictions on possible future churners.

Rulex’s eXplainable AI (XAI) has proven particularly effective in the financial services sector. Rulex’s XAI quickly analyzes historical records and produces explainable outcomes regarding which customers are more likely to churn and why. On high-confidence predictions, Rulex technology’s success rate is between 90 and 99%.
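To give a flavor of the idea, here is a deliberately simplified stand-in: it estimates churn rates per feature value from historical records and flags the values associated with high churn. Rulex’s actual XAI algorithm is far more sophisticated, and the field names below are hypothetical:

```python
from collections import defaultdict

# Hypothetical historical records: did the customer churn, and how often
# did they call support? This is a simplified stand-in, not Rulex's XAI.
history = [
    {"support_calls": "many", "churned": True},
    {"support_calls": "many", "churned": True},
    {"support_calls": "few",  "churned": False},
    {"support_calls": "few",  "churned": False},
    {"support_calls": "few",  "churned": True},
]

# Churn rate per feature value: value -> [churned count, total count].
counts = defaultdict(lambda: [0, 0])
for record in history:
    value = record["support_calls"]
    counts[value][0] += record["churned"]
    counts[value][1] += 1

rates = {v: churned / total for v, (churned, total) in counts.items()}

# Values associated with a high churn rate form the "likely churner" pattern.
likely_churners = [v for v, rate in rates.items() if rate >= 0.5]
print(likely_churners)  # prints ['many']
```

A pattern like this is explainable by construction: the business expert can see both which customers are flagged and why.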

• Increasing customer satisfaction with AI

Banks need to engage with their customers on a deeper level to improve customer retention. Knowing your clients means, for example, being able to offer them on-key products at the right time or enhance their experience. Once again, AI technologies like Rulex’s XAI can come in handy by suggesting the best ways to improve customer satisfaction.

After pinpointing customers who are likely to churn, Rulex’s XAI proposes corrective actions to prevent them from leaving. For example, for customers dissatisfied with service, Rulex’s XAI will suggest improvements to its quality. The corrective action will be to place customers who contact the support center twice or more into a special priority queue, so their calls are handled more quickly.
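That corrective action is easy to picture in code. The field names below are illustrative, not Rulex’s actual output format:

```python
# Sketch of the corrective action described above: customers who contact
# the support center twice or more are routed to a priority queue. Field
# names are illustrative, not Rulex's actual output format.

def route_to_queue(customer):
    return "priority" if customer["support_contacts"] >= 2 else "standard"

customers = [
    {"id": "C1", "support_contacts": 3},
    {"id": "C2", "support_contacts": 1},
]
print([(c["id"], route_to_queue(c)) for c in customers])
# prints [('C1', 'priority'), ('C2', 'standard')]
```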

By using historical records and applying similar types of corrective action for all their customers, one of our clients was able to reduce churn rate by almost 3%, equal to a revenue increase of 11%.

If you feel it’s time for your bank to tackle the issue of customer churn, visit our Financial services page and get in touch with our experts for a free consultation.

Ethical and responsible AI – the future of data-driven technology
https://www.rulex.ai/ethical-and-responsible-ai/
Thu, 03 Mar 2022

As the applications for data-driven technology increase in everyday life, questions around ethical and responsible AI are emerging in the public arena.

Every time we play a song on Spotify, watch a video on YouTube, or order a takeaway online, we leave data footprints. Companies collect and use them to feed AI tools, which empower their business by helping them make better decisions. By using customer data, for example, companies can understand people’s behaviors and run better targeted marketing campaigns.

But “with great power there must also come great responsibility”. When AI-based decisions have a major impact on people’s lives, as in the case of receiving a bank loan or extra medical care, companies have a responsibility towards customers in terms of fairness and transparency. In other words, AI-based decisions should not be biased and, as required under the GDPR, they must be explainable – meaning accompanied by clear explanations.

In fact, if AI has the potential to help organizations make better decisions, why not make them fairer?

This is the question we’ll explore in this article, as we discuss cases of bad and good AI technology.

Discriminating data

AI solutions haven’t always been proven to work for the public good. They have been found to generate biased and unfair decisions in many circumstances. But how can data-driven technology discriminate against people? The reason lies in the principle of machine learning itself. AI systems learn how to make decisions by looking at historical data, so they can perpetuate existing biases. In other words, if the data contain biases, then the output will do too, unless appropriate precautions are taken.

In 2015, Amazon’s AI recruiting tool turned out to be biased against female applicants. It penalized resumes where the word “women” appeared, such as in “women’s chess club captain”. Since the tech industry is historically a male-dominated world, by learning from historical data, Amazon’s tool was preferring male candidates over women.

But that is not the only case. In 2019, an algorithm widely adopted in the U.S. healthcare system was proven to be biased against black people. The algorithm was used to guide health decisions, predicting which patients would benefit from extra medical care. Learning from historical data, the tool perpetuated long-standing racial disparities in medicine; its results favored white patients over black patients.

The designers of these algorithms did not make the AI-based decisions explainable, and failed to grasp the gravity of their mistakes, ultimately compromising brand credibility.

Biases can also occur if there is a lack of complete data when building an AI tool. In fact, if data are not complete, they may not be representative, and the tool may therefore include biases. This is exactly what has happened to many facial recognition tools. Because they weren’t built on complete data, the tools encountered issues with recognizing non-white faces. Particularly notorious is the case of the iPhone X, whose facial recognition feature was branded racist after it repeatedly failed to distinguish between Chinese users.

Responsibilizing AI

As more cases of biased AI appear, responsible AI becomes a real necessity. In 2019, the European Union started tackling the problem by publishing a series of guidelines for achieving ethical AI, The Ethics Guidelines for Trustworthy Artificial Intelligence. Major tech companies such as Google and Microsoft have already moved in this direction by releasing their responsible AI manifestos. The road to responsibilizing AI is still long, but every business can play its part. Companies can adopt different approaches to enforce fairness constraints on AI models:

  1. FIXING THE ROOT PROBLEM – IMPROVING DATA PREPARATION
    Since most cases of bias in AI are produced by biased historical data, improving the data preparation phase can fix the root problem. In this phase, human operators can identify both clear and hidden discriminatory data, and evaluate if the data are representative of the group taken into consideration. It’s important that these operations are performed by domain experts, as they have a better understanding of the problem. Since business experts might not have a data science background, simple, no-code data preparation tools like Rulex Platform become a must.
  2. OPENING AI – MAKING OUTPUT EXPLAINABLE
    Adopting eXplainable AI (XAI) over black box AI does make a huge difference. XAI tools produce explainable and transparent outcomes. This means business experts can understand and evaluate the outcomes, and detect and delete possible biases from automated decisions.
    “A good decision could improve your business today, but an explained decision could bring you to better understand and improve your processes in the future”, says Enrico Ferrari, Head of R&D Projects at Rulex. He has worked side-by-side with firms for many years now, striving to innovate their decision-making process.
    “We were working with eXplainable AI when the concept was still unknown to the wider public, creating solutions with a very high level of explainability and transparency like Logic Learning Machine (LLM) – an algorithm that produces outcomes in the form of IF-THEN rules. In 2016, our commitment to explainable technology was recognized by MIT Sloan, which honored us for having one of the most disruptive technologies.”
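The representativeness check described in point 1 can be sketched as follows: compare each group’s share of the dataset with its outcome rate, and flag groups that look under-represented or unusually treated. The thresholds and field names below are illustrative, not a prescribed method:

```python
from collections import Counter

# Hypothetical records: which group a person belongs to, and the decision
# outcome. Thresholds and field names are illustrative only.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
]

totals = Counter(r["group"] for r in records)
approved = Counter(r["group"] for r in records if r["approved"])

for group, total in totals.items():
    share = total / len(records)        # group's share of the dataset
    rate = approved[group] / total      # group's approval rate
    flag = " <- check" if share < 0.30 or rate < 0.50 else ""
    print(f"{group}: share={share:.0%}, approval rate={rate:.0%}{flag}")
```

Here group B would be flagged for review: it is under-represented and receives markedly different outcomes, which a domain expert should investigate before the data are used for training.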

The path towards ethical and responsible AI may not be easy, but it is vital for companies who want to grow their customers’ trust and safeguard their rights and privacy, avoiding risky gaffes which may affect their credibility.

Digital transformation in action – 2 financial services case studies
https://www.rulex.ai/digital-transformation-financial-services/
Tue, 30 Nov 2021

The truth is, not all digital transformations are the same. This makes it particularly difficult for financial institutions to identify the AI-based solution that best fits their business. In a previous article, we listed 3 key points which, in our experience, make a difference to the digital transformation of financial services: 1. valuing domain expertise, 2. staying human-centric, 3. respecting privacy regulations. These are the principles we follow every time we help a business approach and integrate eXplainable AI into its decision-making process:

1. Rulex Platform seamlessly integrates data-driven insight with human expertise expressed through heuristic rules, to reach optimum results.

2. It supports users through its clear box technology, providing human understandable reasons for each prediction, so users are always fully in control of the process.

3. It employs eXplainable AI to produce predictive models in the form of self-explanatory logical if-then rules (see here for more details), which can be fully understood and explained by business users, making it GDPR compliant by design.

Digital transformation in action: ad-hoc financial services solutions 

Rulex offers a suite of advanced solutions for the financial sector, from compliance and data governance to lending and credit risk, and marketing & commerce. In this article, we see Rulex technology in action, considering two case studies.

How we manage false positives in fraud detection

Detecting fraud is statistically complex, due to the limited sample of actual fraud cases, which account for approximately 1% of transactions overall. Consequently, the risk of creating false positives is very high.

Rulex technology found a solution to this problem and was key to building a successful anti-fraud solution together with official partner GFT, for the banking and insurance sector. How does this solution work?

Rulex’s innovative LLM algorithm (characterized by a high level of explainability) is able to identify patterns from the limited sample data of actual fraud and build reliable and accurate predictive models. The output is a fraud shortlist, which can subsequently be tailored to the anti-fraud team’s needs, integrating their domain business knowledge. Finally, a score is added to each record in the shortlist, ranking the likelihood of fraud.
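The shape of this output can be sketched as follows. The scores and threshold are hypothetical; in practice, the threshold would be tuned together with the anti-fraud team:

```python
# Illustrative only: the LLM's actual scoring is proprietary. This sketch
# shows the shape of the output described above -- each flagged transaction
# carries a fraud-likelihood score, and the shortlist is ranked by it.

transactions = [
    {"id": "T1", "score": 0.91},
    {"id": "T2", "score": 0.12},
    {"id": "T3", "score": 0.67},
]

THRESHOLD = 0.5  # hypothetical; tuned with the anti-fraud team's knowledge
shortlist = sorted(
    (t for t in transactions if t["score"] >= THRESHOLD),
    key=lambda t: t["score"],
    reverse=True,
)
print([t["id"] for t in shortlist])  # prints ['T1', 'T3']
```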

How to evaluate the best course of action for NPLs?

The logical approach of many credit institutes is to pursue legal action to recover funds from non-performing loans. However, this is an expensive and time-consuming strategy: the highest cost derives from initiating and pursuing legal action to force repayment, especially considering that many litigated loans are lost to default nonetheless.

Rulex’s NPL solution is able to distinguish between cases where it is worth starting a legal action to retrieve funds, and cases where a legal action would simply represent a further loss of money.

For those credits where collection is recommended, Rulex’s NPL solution also suggests the most efficient course of action in terms of success rate and resource cost. It provides a list of operational actions that guide the operator through the credit recovery decision process, offering not only tactical but also strategic information to empower decision-making.
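At its core, the decision rests on a cost/benefit comparison: litigation is only worth pursuing when the expected recovery exceeds its cost. The sketch below illustrates this logic with hypothetical inputs; it is not Rulex’s actual model:

```python
# A hedged sketch of the cost/benefit logic behind the NPL decision
# described above. Probabilities and costs are hypothetical inputs,
# not Rulex's actual model.

def recommend_action(loan_value, recovery_probability, legal_cost):
    expected_recovery = loan_value * recovery_probability
    if expected_recovery > legal_cost:
        return "pursue legal action"
    return "write off or negotiate"

print(recommend_action(100_000, 0.60, 25_000))  # expected 60,000 > 25,000
print(recommend_action(100_000, 0.15, 25_000))  # expected 15,000 < 25,000
```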

The next step: how Rulex can help your business

The use of eXplainable artificial intelligence can help organizations combat fraud, estimate risks, and recover funds. But what can Rulex technology do for your business? Visit our dedicated Financial Services page to explore all our case studies, find more information regarding our solutions, and get in touch with our experts.

3 key points that make a difference to the digital transformation of financial services
https://www.rulex.ai/digital-transformation-of-financial-services/
Wed, 03 Nov 2021

The financial services sector is constantly changing, adapting to new policies, regulatory compliance, and the ongoing digital transformation. Numerous decisions must be taken every day to manage business efficiently, making it extremely complicated to pinpoint the right decisions in a shifting landscape.

Increasingly often, financial institutions are turning to eXplainable AI (XAI) software to help them approach current and future challenges. By providing clear and understandable data insight (for instance via If-Then rules, see here for more details), eXplainable AI can dramatically improve the whole decision-making process, making it more efficient and informed.

But not all digital transformations are the same, and many financial institutions still find it difficult to decide which solution to adopt and how to integrate it within their organization. In other words, eXplainable AI still raises doubts, especially in the financial sector, which is so sensitive to industry-specific regulations and privacy law.

How can eXplainable AI help financial institutions approach the digital transformation in the best possible way?

In our experience, there are 3 key points that make a difference to the digital transformation of financial services.

1. Valuing domain expertise

While accurately applied machine learning algorithms produce important data-driven insight, the value of domain expert know-how and company policies should never be overlooked. Any digital transformation software should integrate both machine outcomes and human understanding.

Domain experts, such as credit specialists and banking professionals, will continue to play a vital role in business decisions. Technology “augments” their know-how, rather than replacing it. Explainable AI plays a fundamental role in this sense, since it provides understandable outputs that can be “integrated” into domain experts’ knowledge.

2. Staying human-centric

Relying blindly on a new technology is extremely risky, particularly when dealing with sensitive data. The risk increases when we rely on black box algorithms, i.e., systems producing output that is not interpretable by humans.

There may be biases in input data, for example, gender, racial or ideological biases, as well as incomplete or unrepresentative datasets. The issue is particularly relevant when dealing with sensitive decisions, such as granting loans or detecting cases of fraud.

The transition to data-driven decision-processes should be transparent and human-centric. By providing clear, transparent output, eXplainable AI makes it possible to identify biases in advance and thereby avoid them. 

3. Respecting privacy regulations

According to GDPR and other international regulations, algorithms applied to personal information must process data transparently and provide a clear explanation of any predictions made. This is impossible with black box systems, which, although they may provide general explanations of their decision-making methods, do not allow an individual decision to be interpreted. This is unacceptable in cases where clients are entitled to a proper explanation, such as when they are refused a loan.

By contrast, eXplainable AI guarantees individual explanations by using If-Then rules. While XAI provides explanations, it requires an expert to outline the final decision to the client. 

Conclusion – future perspectives for financial institutions

In recent years, the banking and insurance market has witnessed a significant increase in the use of artificial intelligence to combat fraud, estimate risks, and recover funds.

The increasing use of these disruptive technologies has, however, been accompanied by concerns over sensitive data. Rulex’s eXplainable AI is a practical example of trustworthy, GDPR compliant, human-centric AI, which can be confidently adopted in the financial services field. 


To find out more about Rulex’s eXplainable AI and how it can help your organization, visit our dedicated Financial Services page.
