Please use this identifier to cite or link to this item: http://hdl.handle.net/10263/7506
Full metadata record
DC Field | Value | Language
dc.contributor.author | A, Patrick Jeeva | -
dc.date.accessioned | 2025-02-05T10:59:13Z | -
dc.date.available | 2025-02-05T10:59:13Z | -
dc.date.issued | 2024-06 | -
dc.identifier.citation | 65p. | en_US
dc.identifier.uri | http://hdl.handle.net/10263/7506 | -
dc.description | Dissertation under the supervision of Dr. Swarup Mohalik and Dr. Ansuman Banerjee. | en_US
dc.description.abstract | In recent years, with advances in reinforcement learning (RL), using neural-network-based models to make decisions in dynamic and complex environments has emerged as a powerful paradigm. Model-based reinforcement learning, in particular, is widely used for its ability to improve learning efficiency and performance: by constructing an environment model beforehand, the agent gains prior knowledge of the environment dynamics, enabling it to make informed decisions and converge quickly to optimal policies. Real-world environments, however, are often intricate and subject to external disturbances, posing substantial challenges for accurate modeling. Addressing these challenges requires sophisticated neural-network-based models that can effectively approximate the underlying environment dynamics. In this work, we develop and evaluate several neural network models, focusing on Gaussian ensemble models, Bayesian neural networks, and Monte Carlo Dropout techniques, to approximate various standard Gym environments. These models are trained on different numbers of samples to assess their efficiency and accuracy in capturing environment dynamics. Once trained, the neural network models are used to construct Markov Decision Processes (MDPs) under various discretization strategies, and the constructed MDPs are analyzed and compared to evaluate the performance of each neural network approach. This thesis thus presents a comprehensive study of the construction of environment models using advanced neural network techniques: we approximate standard environments in the reinforcement learning setup with a variety of neural networks and compare their efficiency based on the reconstruction of MDPs. | en_US
dc.language.iso | en | en_US
dc.publisher | Indian Statistical Institute, Kolkata | en_US
dc.relation.ispartofseries | MTech(CS) Dissertation;22-20 | -
dc.subject | Gaussian Ensemble Model | en_US
dc.subject | Bayesian Neural Network | en_US
dc.subject | Monte Carlo Dropout Model | en_US
dc.subject | Markov Decision Processes | en_US
dc.title | Verification of Reinforcement Learning Models: | en_US
dc.title.alternative | Comparing Construction of Environment Models | en_US
dc.type | Other | en_US
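The Monte Carlo Dropout technique named in the abstract can be sketched in a few lines: dropout is kept active at prediction time, and the spread across repeated stochastic forward passes serves as an uncertainty estimate over the predicted environment dynamics. The sketch below is a minimal NumPy illustration under assumed, hypothetical dimensions and untrained random weights; it is not the thesis's actual model or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny one-hidden-layer dynamics model with random (untrained)
# weights. In the thesis setting, the weights would be learned from
# environment transitions (state, action) -> next state.
W1 = rng.normal(size=(3, 16))   # assumed input dim 3: (state, action) features
W2 = rng.normal(size=(16, 2))   # assumed output dim 2: predicted next state

def forward(x, p_drop=0.2):
    """One stochastic forward pass with dropout kept active at test time."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > p_drop   # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout rescaling
    return h @ W2

def mc_dropout_predict(x, T=200):
    """Monte Carlo Dropout: T stochastic passes give a predictive mean
    and a per-dimension variance used as an uncertainty estimate."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)

x = np.array([0.1, -0.3, 0.5])        # illustrative (state, action) input
mean, var = mc_dropout_predict(x)     # prediction and its uncertainty
```

High-variance predictions flag regions of the state space where the learned dynamics model is unreliable, which is relevant when the model is later discretized into an MDP for analysis.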
Appears in Collections:Dissertations - M Tech (CS)

Files in This Item:
File | Description | Size | Format
Patrick_jeeva-Cs2220-Mtech-CS-2024.pdf | Dissertations - M Tech (CS) | 2.69 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.