A large body of explainable Artificial Intelligence (XAI) literature is emerging on feature relevance techniques that explain a deep neural network (DNN) output or explain models …
M Arashpour - Journal of Environmental Management, 2023 - Elsevier
Deep learning networks powered by AI are essential predictive tools that rely on the availability of image data and advances in processing hardware. However, little attention has been paid …
To provide higher data rates, as well as better coverage, cost efficiency, security, adaptability, and scalability, 5G and beyond-5G networks are being developed with various …
M Taddeo, T McCutcheon, L Floridi - Nature Machine Intelligence, 2019 - nature.com
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in …
Ensuring both transparency and safety is critical when deploying Deep Neural Networks (DNNs) in high-risk applications such as medicine. The field of explainable AI (XAI) has …
YS Lin, WC Lee, ZB Celik - Proceedings of the 27th ACM SIGKDD …, 2021 - dl.acm.org
Explainable AI (XAI) methods have been proposed to interpret how a deep neural network predicts inputs through model saliency explanations that highlight the input parts deemed …
Despite the significant growth of artificial intelligence (AI), its "black box" nature creates challenges in establishing adequate trust. Thus, it is seldom utilized as a standalone unit in …
While 5G is well known for network cloudification with a microservice-based architecture, next-generation networks, or the 6G era, are closely coupled with intelligent network …
To design and develop AI-based systems that users and the larger public can justifiably trust, one needs to understand how machine learning technologies impact trust. To guide …