    Can I Make My Deep Network Somewhat Explainable?
    (Indian Statistical Institute, Kolkata, 2019-07) Dangi, Mohan Kumar
Deep neural networks (DNNs), as well as shallow networks, are usually black boxes due to their nested non-linear structure. In other words, they provide no information about what exactly makes them arrive at their predictions/decisions. This lack of transparency can be a major drawback, particularly in critical applications such as medicine, the judiciary, and defense. Apart from this, almost all DNNs make a decision even when the test input is not from one of the classes for which they were trained, or even when the test point is far from the training data used to design the system. In other words, such systems cannot say "don't know" when they should. In this work, we develop systems that can provide some explanations for their decisions and can also indicate when they should not make a decision. For this, we design DNNs for classification that can classify an object and provide us with some explanation. For instance, if the network classifies an image as a bird of the kind Albatross, the network should provide some explanatory notes on why it has classified the image as an instance of Albatross. The explanation could be pieces of information that are distinguishing characteristics of Albatross. The system also detects situations when the inputs are not from the trained classes. To realize all this, we use four networks in an integrated manner: a pre-trained convolutional neural network (we use it because we do not have adequate computing power to train from scratch), two multilayer perceptron networks, and a self-organizing (feature) map. Each of these networks serves a distinctive purpose.
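The "don't know" behaviour described in the abstract can be illustrated with a self-organizing map trained on in-distribution feature vectors: a test point whose quantization error (distance to its best-matching unit) exceeds the largest error seen on the training data is flagged as novel. The sketch below is a minimal illustration of that idea, not the thesis's actual design; the grid size, Gaussian-neighborhood schedule, thresholding rule, and toy 2-D features standing in for CNN embeddings are all assumptions.

```python
import numpy as np

class SOMNoveltyDetector:
    """Tiny self-organizing map used only for 'don't know' detection.

    A sketch: train on in-distribution features, then reject test points
    whose quantization error exceeds the worst error seen in training.
    """

    def __init__(self, grid=(5, 5), dim=2, seed=0):
        rng = np.random.default_rng(seed)
        # 2-D grid coordinates of the units, used for the neighborhood function
        self.grid = np.array(
            [(i, j) for i in range(grid[0]) for j in range(grid[1])], dtype=float
        )
        self.weights = rng.normal(size=(len(self.grid), dim))
        self.threshold = np.inf

    def fit(self, X, epochs=20, lr0=0.5, sigma0=2.0):
        for epoch in range(epochs):
            frac = 1.0 - epoch / epochs
            lr = lr0 * frac
            sigma = max(sigma0 * frac, 0.5)  # shrinking neighborhood radius
            for x in X:
                # best-matching unit in feature space
                bmu = np.argmin(np.linalg.norm(self.weights - x, axis=1))
                # Gaussian neighborhood on the grid around the BMU
                d = np.linalg.norm(self.grid - self.grid[bmu], axis=1)
                h = np.exp(-(d ** 2) / (2 * sigma ** 2))
                self.weights += lr * h[:, None] * (x - self.weights)
        # threshold: worst quantization error over the training set
        self.threshold = max(
            np.min(np.linalg.norm(self.weights - x, axis=1)) for x in X
        )

    def is_novel(self, x):
        return np.min(np.linalg.norm(self.weights - x, axis=1)) > self.threshold

# Toy in-distribution features (stand-ins for CNN embeddings)
rng = np.random.default_rng(1)
train = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
som = SOMNoveltyDetector(dim=2)
som.fit(train)

som.is_novel(train[0])                 # a training point: within threshold
som.is_novel(np.array([25.0, 25.0]))   # far from the training cluster
```

In the full system sketched in the abstract, the role of the SOM would be exactly this gate: the classifier's prediction (and its explanation) is reported only when the input's feature vector is not flagged as novel.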