Explainable AI workshop


From the lab to production: explainable, reliable, trustable AI

About the workshop:

AI in production requires explainability and accountability. There is a lot of buzz around explainable AI (XAI) today. While there are widely adopted methods such as LIME, SHAP, LOCO, and Integrated Gradients (IG), some of these methods face criticism for being vague, producing approximations rather than faithful attributions, being compute-intensive, or being complex to apply.
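For readers who have not tried these libraries, here is a minimal sketch of applying one of them (SHAP) to a toy tabular model. The dataset and model are illustrative choices, not part of the workshop material:

```python
# Minimal sketch: SHAP attributions for a toy tabular classifier.
# Assumes the `shap` and `scikit-learn` packages are installed; the
# dataset and model here are illustrative stand-ins.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
# Model-agnostic variants (e.g. KernelExplainer) must approximate them
# by sampling, which is where much of the compute cost and approximation
# error criticised above comes from.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:10])  # per-feature attributions

print(np.shape(shap_values))  # attributions for 10 rows, per feature and class
```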

Arya.ai built ‘AryaXAI’, a new framework so that responsible AI can be adopted as part of the design process. We introduced a new patent-pending approach called ‘Back-trace’ to explain deep learning systems. It can generate true-to-model local and global explanations by assessing the model directly.

Our workshop on explainable AI covers best practices in XAI, common challenges with current XAI approaches, how the AryaXAI framework works, a hands-on session implementing the AryaXAI API on an image classification use case, and how to validate the explanations AryaXAI produces.
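Validating an explanation can be approached in a model-agnostic way before touching any particular framework. One common sanity check is the deletion test: remove pixels in order of attributed importance and watch the model's confidence fall; a faithful attribution map should make it drop quickly. The sketch below assumes a generic `predict` function and `attribution` map and is illustrative only, not AryaXAI's validation method:

```python
# Generic "deletion" test for an image attribution map. Everything here
# (the predict function, the attribution array) is a stand-in.
import numpy as np

def deletion_curve(predict, image, attribution, steps=20, fill=0.0):
    """predict: maps an (N, H, W, C) image batch to class probabilities.
    image: (H, W, C) float array. attribution: (H, W) importance map."""
    h, w = attribution.shape
    order = np.argsort(attribution.ravel())[::-1]  # most important first
    target = predict(image[None])[0].argmax()      # class being explained
    perturbed = image.copy()
    scores = []
    chunk = max(1, (h * w) // steps)
    for i in range(0, h * w, chunk):
        # Blank out the next chunk of most-attributed pixels.
        ys, xs = np.unravel_index(order[i:i + chunk], (h, w))
        perturbed[ys, xs, :] = fill
        scores.append(predict(perturbed[None])[0][target])
    # Lower area under this curve = faster confidence drop = more faithful map.
    return np.array(scores)
```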

Topics discussed:

  • About Arya.ai
  • Introducing Explainable AI
  • XAI: Current Methods for Deep Learning and brief comparisons
  • Back-trace: Arya.ai’s patent-pending framework that addresses XAI in a simple, interpretable, and true-to-model manner, with details on the algorithm and comparisons to existing methods
  • Implementation of the AryaXAI API on an image classification use case (see the sketch after this list)
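To give a feel for the hands-on portion, below is a purely hypothetical sketch of what calling a hosted explanation API for an image classifier might look like. The endpoint, credentials, and JSON fields are invented for illustration and are not AryaXAI's actual interface; the workshop walks through the real API:

```python
# Hypothetical sketch of calling a hosted explanation API on an image.
# The URL, headers, and payload/response fields below are invented for
# illustration only; they are NOT AryaXAI's real interface.
import base64
import requests

API_URL = "https://api.example.com/v1/explain"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                         # placeholder credential

with open("sample.png", "rb") as f:
    payload = {
        "model_id": "image-classifier-demo",                        # hypothetical
        "image": base64.b64encode(f.read()).decode(),
        "explanation": {"scope": "local", "method": "back-trace"},  # hypothetical
    }

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()
# Hypothetical response: a predicted class plus a per-pixel relevance map.
print(result.get("prediction"), len(result.get("relevance_map", [])))
```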
