XAI(eXplainable AI) Seminar
Feb. 21, 2020
The need to trust AI, especially in the financial services and healthcare industries, is paramount. With advances in algorithms and modeling techniques, companies are increasingly using machine learning to enhance decision-making.
However, many algorithms, particularly deep neural networks, cannot be examined after the fact to understand specifically how and why a decision was made. This poses inherent challenges to the trust and accountability of AI models.
Last week, our CEO(JS) was invited by the Educational Center of Future Technology(ECFT) to share how we address black-box AI models in the banking, insurance, and payment industries. ABACUS's patented XAI(eXplainable AI) offers a multi-dimensional view of individual customers and provides logical reasons behind each prediction.
Our clients can get answers to questions like: Why is this transaction likely to be fraudulent? Which specific factors affect this customer's level of delinquency? We also provide intuitive visualizations to help clients better understand their data, models, and customer behavior. Explainability is pivotal to fostering trust in AI, and we are advancing XAI to help clients develop interpretable models and deploy them with confidence.
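To make the idea of per-prediction explanations concrete, here is a minimal sketch (not ABACUS's patented method) of how a single risk score can be decomposed into per-feature contributions. For a linear model the log-odds split exactly into one term per feature, so the factors driving one prediction can be ranked; the feature names and synthetic data below are purely illustrative.

```python
# Minimal sketch of a per-prediction explanation: for a logistic regression,
# the log-odds decompose exactly as sum_j(coef_j * x_j) + intercept, so each
# feature's term shows how much it pushes this one score up or down.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for transaction data; feature names are hypothetical.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["amount", "hour", "merchant_risk", "velocity"]

model = LogisticRegression().fit(X, y)

def explain(x):
    """Rank features by their contribution to the log-odds of the positive class."""
    contrib = model.coef_[0] * x
    order = np.argsort(-np.abs(contrib))
    return [(feature_names[i], float(contrib[i])) for i in order]

# Explain one transaction: largest absolute contributions first.
for name, c in explain(X[0]):
    print(f"{name:>14}: {c:+.3f}")
print(f"{'intercept':>14}: {float(model.intercept_[0]):+.3f}")
```

Real explainability methods (e.g. SHAP or LIME) generalize this additive-decomposition idea to non-linear models such as deep networks, where no exact per-feature split exists.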