This Catalyst project aims to build a system of trust within the Artificial Intelligence (AI) environment that adheres to organizations' ethics and governance policies. The system will be based on a comprehensive framework that uses qualitative and quantitative measures to assess trust factors throughout the AI lifecycle: explainability, fairness, privacy, reliability, and security.
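As a rough illustration of what quantitatively scoring these trust factors could look like, the sketch below aggregates per-factor scores and flags any factor below a governance threshold. The factor names come from the list above; the equal weighting, the 0-to-1 scale, and the 0.7 threshold are purely hypothetical assumptions, not part of any published framework.

```python
from dataclasses import dataclass

# The five trust factors named in the project description.
# Scores on a 0.0-1.0 scale are an assumption for illustration.
TRUST_FACTORS = ("explainability", "fairness", "privacy", "reliability", "security")

@dataclass
class TrustAssessment:
    scores: dict  # factor name -> quantitative score in [0, 1]

    def overall(self) -> float:
        """Equal-weight aggregate across all five factors (assumed weighting)."""
        return sum(self.scores[f] for f in TRUST_FACTORS) / len(TRUST_FACTORS)

    def gaps(self, threshold: float = 0.7) -> list:
        """Factors below a governance threshold, exposing risk early."""
        return [f for f in TRUST_FACTORS if self.scores[f] < threshold]

assessment = TrustAssessment(scores={
    "explainability": 0.8, "fairness": 0.6, "privacy": 0.9,
    "reliability": 0.75, "security": 0.85,
})
print(round(assessment.overall(), 2))  # 0.78
print(assessment.gaps())               # ['fairness']
```

In practice a framework like this would combine such quantitative scores with qualitative review (e.g., documentation audits) at each lifecycle stage, but the per-factor scorecard shows how a single weak dimension can be surfaced as a risk before deployment.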
Establishing a system of trust will ensure any AI system is safe, technically robust, transparent, accountable, non-discriminatory, and able to mitigate bias. A trusted ecosystem that covers both internal and external model development will be able to expose potential risks as early as possible.
Communications service providers (CSPs) have started to see benefits from early AI implementations, such as reduced costs and new revenue-generation schemes. But a large trust gap remains between AI and its beneficiaries that could deter further adoption and limit its potential. CSPs need certainty that AI models and algorithms will not expose their brands to reputational or legal risks by violating their own corporate ethics and governance policies.
The communications industry must take adequate measures to mitigate the potential negative impacts of AI technology.