Applying the Explainability Approach to Guide Cloud Implementation


Overview/Description
Expected Duration
Lesson Objectives
Course Number
Expertise Level



Overview/Description

In this course, you'll explore the concept of AI Explainability, the role of CloudOps Explainability in managing multi-cloud solutions, how to evaluate explanatory systems, and the properties used to define systems to accommodate explainability approaches. You'll look at how users interact with explainable systems and the effect of explainability on the robustness, security, and privacy aspects of predictive systems.

Next, you'll learn about the use of qualitative and quantitative validation approaches and the explainability techniques for defining operational and functional derivatives of cloud operation. You'll examine how to apply explainability throughout the process of operating cloud environments and infrastructures, the methodologies involved in the three stages of AI Explainability in deriving the right CloudOps model for implementation guidance, and the role of explainability in defining AI-assisted Cloud Managed Services. Finally, you'll learn about the architectures that can be derived using Explainable Models, the role of Explainable AI reasoning paths in building trustable CloudOps workflows, and the need for management and governance of AI frameworks in CloudOps architectures.



Expected Duration (hours)
1.1

Lesson Objectives

Applying the Explainability Approach to Guide Cloud Implementation

  • discover the key concepts covered in this course
  • describe the concept of AI Explainability and differentiate between AI Explainability and CloudOps Explainability
  • describe CloudOps Explainability and the role it plays in CloudOps implementation for managing multi-cloud solutions
  • define explanatory systems and evaluate them from functional, operational, usability, security, and validation perspectives
  • identify properties that are used to define systems to accommodate explainability approaches and recognize how users interact with explainable systems and what is expected of them
  • recall the effect of explainability on the robustness, security, and privacy aspects of predictive systems and describe approaches for evaluating how well an explanation is understood using qualitative and quantitative validation methods
  • list explainability techniques that can be used to define operational and functional derivatives of CloudOps, including leave-one-column-out (LOCO), permutation importance, and local interpretable model-agnostic explanations (LIME)
  • recognize the role of explainability and how it can be applied throughout the process of operating cloud environments and infrastructures to ensure efficient service delivery following the CloudOps paradigm
  • describe the three stages of AI Explainability along with the methodologies that are used in each stage to derive the right CloudOps model for implementation guidance
  • recognize the role of explainability in defining AI-assisted Cloud Managed Services that can be used to manage large cloud enterprise distributed applications
  • list the architectures that can be derived using Explainable Models and that can help share CloudOps or DevOps Model Explainability with the stakeholders to establish better collaboration
  • recognize the role of Explainable AI reasoning paths in building CloudOps workflows that can be trusted by customers, employees, regulators, and other key stakeholders
  • describe the role of CloudOps and DevOps Explainability in mitigating challenges along with the need for management and governance of AI frameworks in CloudOps architectures
  • summarize the key concepts covered in this course
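As a taste of one of the techniques named in the objectives, the sketch below illustrates permutation importance on a toy model: score the model, shuffle one feature column, and measure how much the score degrades. All names and the toy model here are illustrative assumptions, not material from the course itself.

```python
import random

def model(row):
    # Toy "model" (an assumption for illustration): depends only on feature 0.
    return row[0] * 2.0

def mse(rows, targets, predict):
    # Mean squared error of the model's predictions against the targets.
    return sum((predict(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, predict, col, seed=0):
    # Shuffle the values in one column while keeping everything else fixed.
    rng = random.Random(seed)
    shuffled = [r[col] for r in rows]
    rng.shuffle(shuffled)
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, shuffled):
        r[col] = v
    # Importance = increase in error caused by breaking the column's link
    # to the target; ~0 means the model does not rely on that column.
    return mse(permuted, targets, predict) - mse(rows, targets, predict)

# Feature 0 drives the target; feature 1 is a constant the model ignores.
rows = [(float(x), 9.9) for x in range(10)]
targets = [x * 2.0 for x in range(10)]

relevant = permutation_importance(rows, targets, model, col=0)
irrelevant = permutation_importance(rows, targets, model, col=1)
```

Permuting the relevant column raises the error, while permuting the ignored column leaves it unchanged; ranking columns by this score drop is the essence of the technique. LOCO follows the same recipe but drops (or retrains without) the column instead of shuffling it.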
Course Number
it_cocoexdj_01_enus

Expertise Level
Intermediate