Exploring the AI Act: Transparency as the Key for Future Technology
By Francesca Manca | Rulex | 29 Feb 2024
https://www.rulex.ai/exploring-the-ai-act-transparency-as-the-key-for-future-technology/

Nowadays, artificial intelligence (AI) systems are seamlessly integrated into our daily lives, making tailored suggestions and influencing our decisions. While AI offers incredible advantages, it is necessary to address the potential issues of bias, discrimination, and privacy associated with its use.

In response to its pervasive influence in contemporary society, various initiatives have been pursued, notably those headed by the European Union (EU). The EU’s landmark Artificial Intelligence Act (AIA) represents a robust regulatory architecture designed to address the challenges posed by AI.

The Artificial Intelligence Act is the first law to harmonize, regulate, and restrict the use of artificial intelligence in Europe. It is expected to enter into force in 2024.

A risk-based approach

A crucial aspect of the EU AI Act is its risk-based methodology. The greater the risk associated with the use of a specific artificial intelligence system, the greater the responsibilities for those who use or provide it. This can extend to a prohibition on the use of systems deemed to have an unacceptable level of risk, thereby emphasizing individual rights and model transparency.

Indicatively, the classification includes the following risk levels:

  • Minimum/low risk: systems with minimal risk to people’s safety and fundamental rights should be subject to transparency obligations, ensuring a basic level of clarity and comprehension.
  • High-risk AI: systems whose application could have substantial implications, potentially leading to harm. Consequently, they are subject to stringent regulations aimed at mitigating bias and discrimination. Identifying risks and implementing corresponding mitigation strategies is imperative across the entire life cycle of these AI systems. Thus, ensuring transparency becomes essential for interpreting results and facilitating proper oversight of the decision-making process. In fact, the Artificial Intelligence Act stipulates that high-risk AI systems are subject to a number of requirements and obligations, such as the adoption of necessary technical documentation, transparency of information, adequate levels of cybersecurity, etc.
  • Unacceptable risk: any AI system that is considered a direct threat to fundamental human rights and is consequently prohibited.

Furthermore, guidelines and standards have been implemented both for basic AI systems, which must clearly disclose when individuals are interacting with them, and for general-purpose AI systems (GPAI), whose capability to operate across market sectors carries the risk of a systemic negative effect on society as a whole.
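The risk tiers above can be thought of as a lookup from risk level to obligations. The sketch below is illustrative only: the AI Act defines no machine-readable schema, and the tiers and obligation lists here are a simplified summary of this article, not of the legal text.

```python
# Simplified, illustrative mapping of AI Act risk tiers to obligations.
# These labels and lists paraphrase the article, not the regulation itself.
RISK_OBLIGATIONS = {
    "minimal": ["basic transparency obligations"],
    "high": [
        "technical documentation",
        "transparency of information",
        "risk mitigation across the life cycle",
        "adequate cybersecurity",
    ],
    "unacceptable": ["prohibited"],
}

def obligations_for(risk_level: str) -> list[str]:
    """Return the simplified list of obligations for a given risk tier."""
    if risk_level not in RISK_OBLIGATIONS:
        raise ValueError(f"unknown risk level: {risk_level!r}")
    return RISK_OBLIGATIONS[risk_level]
```

The key property the Act encodes is monotonicity: as the tier rises, obligations accumulate until, at the unacceptable tier, the system is simply banned.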

The importance of transparency

Meeting the stringent transparency requirements of the new AI Act could be extremely challenging – if not impossible – with traditional AI technologies.

For example, one of the crucial applications of AI is in the realm of credit rating systems, where it empowers banks to examine vast sets of customer data for accurate evaluations of creditworthiness. Considering that these systems provide a perspective on an individual’s financial standing by examining not only financial indicators, but also spending habits and behavioral patterns from diverse sources, ensuring fairness in the process is of paramount importance.

Explainable AI (XAI) is a facet of artificial intelligence that can produce clear results and provide the rationale for its predictions and subsequent decisions, consequently enhancing accountability and acting as a safeguard against the influence of bias and discrimination.

Rulex’s XAI vision

Rulex’s journey began in the 1990s, fueled by a singular mission: to make AI explainable while maintaining its accuracy and speed. For the past two decades, its ground-breaking eXplainable AI has remained focused on addressing these very challenges within the data management process.

Central to this achievement is the Logic Learning Machine (LLM), an algorithm developed by Rulex’s founder. The innovation lies in its ability to articulate explicit and straightforward rules, presented in a logical if-then structure. This approach mirrors the cognitive processes of the human brain, ensuring a transparent and traceable workflow.
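To make the if-then idea concrete, here is a minimal sketch of how such transparent rules can be represented and applied. This is NOT Rulex's Logic Learning Machine: the rules, feature names, and thresholds below are invented purely to illustrate how a rule-based prediction carries its own explanation.

```python
# Hypothetical if-then rule classifier; rules and thresholds are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    text: str                          # human-readable if-then statement
    condition: Callable[[dict], bool]  # machine-checkable premise
    outcome: str                       # predicted class

RULES = [
    Rule("IF missed_payments >= 3 THEN rating = C",
         lambda r: r["missed_payments"] >= 3, "C"),
    Rule("IF income > 50000 AND missed_payments == 0 THEN rating = A",
         lambda r: r["income"] > 50_000 and r["missed_payments"] == 0, "A"),
]

def classify(record: dict, default: str = "B") -> tuple[str, str]:
    """Apply rules in order; return the class AND the rule that fired,
    so every decision comes with its own rationale."""
    for rule in RULES:
        if rule.condition(record):
            return rule.outcome, rule.text
    return default, "no rule fired; default class assigned"
```

Because each prediction returns the exact rule that produced it, an analyst can quote that rule verbatim when explaining a decision: this traceability is precisely what the transparency requirements discussed above call for.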

This commitment not only ensures compliance with GDPR and other privacy regulations but also lays a solid groundwork for the impending implementation of the AI Act.

Benefits of XAI

  • Trust: Establishing transparency in decision-making is essential to cultivate a trusting relationship with all stakeholders. Business experts can effectively grasp and articulate the decision-making process, utilizing eXplainable AI systems to reassure the involved parties.
  • Compliance: XAI can assist companies in identifying and utilizing only the strictly necessary and crucial information from extensive datasets, thereby reducing certain risks associated with their management. In this way, actions are taken in compliance with regulations and in respect of individuals’ privacy.
  • Responsibility: The transparency and traceability of XAI ensure decisions are made without relying on discriminatory influence, thereby imposing a greater sense of accountability and responsibility on users.

A transparent credit rating solution

Over the years, Rulex has applied eXplainable AI principles within the financial services sector, developing numerous solutions ranging from fraud detection to NPL management and churn prevention.

Among these, Rulex’s credit rating solution is a prime example of the clarity our native XAI brings to the underlying process logic.

This solution integrates a decision-making workflow that comprehensively covers every stage of the product lifecycle, from automated score calculation to rating assignment and continuous performance monitoring.

Rulex’s XAI algorithm generates intuitive if-then rules identifying the distinctive features of each rating class, enabling the classification of new cases.

These clear predictions allow experts to confidently make well-informed decisions and effectively communicate them to clients, all while mitigating bias and promoting fairness.

*At the time of writing this article, the final text of the Artificial Intelligence Act is still awaiting approval.

Discover more about Rulex for financial services

Rulex Platform