For AI to gain mass adoption, it must be trusted by governments, enterprises, and the users directly or indirectly affected by its applications. A trusted AI framework is therefore imperative. A failed AI program can lead not only to customer churn, reduced profitability, and litigation, but also to reputational damage and regulatory scrutiny.
So, how do we make AI systems more trustworthy?
Subex, ADL, Bolgaiten, Brytlyst, and AWS have partnered with our champions Axiata, Ncell Axiata, and Dialog to address trust issues in AI systems by building a comprehensive framework to measure trust. We have selected three use cases for measuring trust:
1. Churn Prediction
2. Maintaining Model Trust in Real-Time Analytics
3. Credit Rating
This Catalyst project aims to build a system of trust within the Artificial Intelligence (AI) environment that adheres to the ethics and governance policies of organizations. The system will be based on a comprehensive framework that uses qualitative and quantitative measures to assess trust factors throughout the AI lifecycle (i.e., explainability, fairness, privacy, reliability and security).
Establishing a system of trust will ensure that an AI system is safe, technically robust, transparent, accountable, and non-discriminatory, and that it can mitigate bias. A trusted ecosystem that covers both internal and external model development will be able to expose potential risks as early as possible.
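To make the idea of quantitative trust measurement concrete, the sketch below aggregates per-dimension scores (using the five lifecycle factors named above) into an overall trust score and flags weak dimensions. This is a minimal illustration only: the dimension names come from the framework description, but the equal weighting and the 0.5 risk threshold are assumptions for demonstration, not the Catalyst's actual scoring method.

```python
# Illustrative trust-score aggregation. Dimension names are taken from the
# framework text; equal weights and the 0.5 threshold are assumed values.

TRUST_DIMENSIONS = ["explainability", "fairness", "privacy", "reliability", "security"]

def trust_score(scores: dict, threshold: float = 0.5):
    """Return (overall score, dimensions below the risk threshold).

    `scores` maps each dimension name to a value in [0, 1].
    """
    missing = [d for d in TRUST_DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    overall = sum(scores[d] for d in TRUST_DIMENSIONS) / len(TRUST_DIMENSIONS)
    at_risk = [d for d in TRUST_DIMENSIONS if scores[d] < threshold]
    return overall, at_risk

# Example: a hypothetical churn-prediction model that is reliable and secure
# but scores poorly on fairness, so it is flagged for review.
score, flags = trust_score({
    "explainability": 0.8,
    "fairness": 0.4,
    "privacy": 0.9,
    "reliability": 0.95,
    "security": 0.85,
})
```

In a real deployment the per-dimension scores would come from concrete measurements (e.g. fairness metrics on model predictions, privacy audits, reliability tests), and a single low dimension would typically block promotion of the model regardless of the average.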
Communications service providers (CSPs) have started to experience some benefits from early AI implementations, such as reducing costs and introducing revenue-generation schemes. But there remains a large trust gap between AI and its beneficiaries that could deter further adoption and limit its potential. CSPs need certainty that AI models and algorithms will not expose their brands to negative or legal risks by violating their own corporate ethics and governance policies.
The communications industry must take adequate measures to mitigate AI technology’s negative impact.