
Please use this identifier to cite or link to this item: http://142.54.178.187:9060/xmlui/handle/123456789/5304
Title: Learning to learn: An Automated and Continuous Approach to Learning in Imperfect Environments
Authors: Mujtaba, Hasan
Keywords: Computer science, information & general works
Issue Date: 2010
Publisher: FAST National University of Computer & Emerging Sciences, Islamabad, Pakistan.
Abstract: Our quest to understand, model, and reproduce natural intelligence has opened new avenues of research. One such area is artificial intelligence (AI). AI is the branch of computer science that aims to create machines able to engage in activities that humans consider intelligent. The ability to create intelligence in a machine has intrigued humans ever since the advent of computers. With recent advancements in computer science we are coming closer every day to realizing our dream of smarter, intelligent machines. New algorithms and methods are constantly being designed by researchers; however, these techniques must be evaluated and their performance compared before they can be accepted. For this purpose games have caught the attention of AI researchers, and gaming environments have proven to be excellent test beds for such evaluation.

Although games have proven valuable to AI research, one limitation most researchers have imposed is that of perfect information. Perfect-information environments imply that the information available to the agents in the environment does not change. Essentially, this means that agents can detect entities they have been trained for but will ignore entities for which no training has taken place. This limitation results in agents that do not gain a single iota of learning while they are in the environment: whatever learning has taken place during their training, they will not build upon it. This would all be fine if we were living in a static world of perfect information, but we do not! Learning in such an unpredictable and changing environment is a continuous process for the agents. For this reason we developed a "Continuous Learning Framework" (CLF). CLF enables each agent to detect changes in the environment and take the necessary action accordingly; agents who fail to do so die out during the evolutionary process. CLF-based learning is triggered by stimuli from the environment. We have intentionally kept CLF independent of this environment and of the underlying evolutionary approaches, allowing our CLF to be ported to other environments of a dynamic nature.

Learning new abilities and adapting successful strategies is crucial to the survival of a species. Results of our experimentation show that CLF not only enables agents to learn new strategies suited to their current environmental state but also ensures proper dissemination of information within a species. Forgetfulness is an inherent feature of co-evolutionary processes. Keeping this in view, we have also explored the integration of historical information and the ability to retain and recall past learning experiences. We have tested a social-learning-based flavor of our CLF to see whether learning from the past is profitable for agents. Each species was allowed to maintain a social pool of successful strategies. Results from these experiments show that a strategy drawn from the pool yields a significant boost to performance in cases where the environmental conditions are similar to those under which the strategy was established. This social pool acts like a general reservoir of knowledge, similar in nature to the knowledge we humans have inherited from ancient civilizations. This historical information also boosts performance by eliminating the "reinvention of the wheel" phenomenon common to evolutionary strategies.
This research not only presents a new way of learning within a dynamic and uncertain medium but also aims to establish the importance of learning in such an imperfect environment. Much work still needs to be undertaken along this path. Possible future directions for this research include designing better criteria for evaluating the performance of agents residing in different locations of the environment, and establishing individual archives for learning based on personal experience.
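Note: The abstract describes the CLF only at a conceptual level: learning is triggered by a stimulus from the environment, agents that fail to adapt die out during the evolutionary process, and each species maintains a social pool of successful strategies that can be reused when similar conditions recur. The Python sketch below is a minimal illustration of how such a loop might be organized; all names (Agent, generation, the fitness threshold, the dictionary-based pool) are assumptions of this sketch, not the thesis's implementation.

    import random

    class Agent:
        """Hypothetical agent; structure is an assumption, not the thesis's code."""
        def __init__(self):
            self.known_signature = None   # environment state the agent was trained for
            self.strategy = None
            self.fitness = 0.0

        def senses_change(self, signature):
            # Stimulus: the observed environment differs from what the agent knows.
            return signature != self.known_signature

        def relearn(self, signature, social_pool):
            # Reuse a strategy archived under similar conditions if one exists;
            # otherwise this stands in for a fresh evolutionary learning run.
            if signature in social_pool:
                self.strategy = social_pool[signature]
            else:
                self.strategy = f"evolved-for-{signature}"
                social_pool[signature] = self.strategy
            self.known_signature = signature

    def generation(agents, environment_signature, social_pool):
        """One generation: change-triggered learning followed by selection."""
        survivors = []
        for agent in agents:
            if agent.senses_change(environment_signature):
                agent.relearn(environment_signature, social_pool)
            agent.fitness = random.random()   # placeholder for task performance
            if agent.fitness > 0.2:           # agents that fail to adapt die out
                survivors.append(agent)
        return survivors

    if __name__ == "__main__":
        agents = [Agent() for _ in range(10)]
        pool = {}                             # the species' shared "social pool"
        for signature in ["grassland", "grassland", "desert"]:  # changing environment
            agents = generation(agents, signature, pool)
            print(signature, len(agents), "survivors")

Here the selection against non-adapting agents is reduced to a simple fitness cutoff; the thesis instead relies on a full co-evolutionary process.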
URI: http://142.54.178.187:9060/xmlui/handle/123456789/5304
Appears in Collections:Thesis

Files in This Item:
File: 1063.htm | Size: 128 B | Format: HTML


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.