Please use this identifier to cite or link to this item: http://localhost:80/xmlui/handle/123456789/4975
Full metadata record
DC Field (Language): Value
dc.contributor.author: Khanam, Assia
dc.date.accessioned: 2017-11-28T09:01:42Z
dc.date.accessioned: 2020-04-11T15:34:28Z
dc.date.available: 2020-04-11T15:34:28Z
dc.date.issued: 2008
dc.identifier.uri: http://142.54.178.187:9060/xmlui/handle/123456789/4975
dc.description.abstract (en_US):
Animated facial avatars are being increasingly incorporated into many applications, including teleconferencing, entertainment, education, and web-commerce. In many cases there is a tendency to replace human performers with these virtual actors, owing to the benefits they offer in terms of cost and flexibility. However, these animated face models are under a strong constraint to behave convincingly in order to be truly acceptable to users. One of the most important aspects of this believability is the avatar's use of facial expressions. An animated facial model should be able, on the one hand, to understand the facial expressions shown by its user and, on the other, to show facial expressions to the user and to adapt its own expressions in response to any change in the situation in which it is immersed.

Motion-capture based Performance Driven Facial Animation (PDFA) provides a cost-effective and intuitive means of creating expressive facial avatars. In PDFA, facial actions in an input facial video are tracked by various methods and re-targeted to any desired facial model, which may or may not be the same as the input face. However, a few limitations are preventing PDFA from progressing at the pace it deserves, and most of them arise from a lack of editing facilities: once an input video has been shot, the captured motion is all that can be generated for the target model. Existing PDFA systems have to resort to frequent re-capture sessions in order to incorporate any deviations in the output animation vis-à-vis the input motion-capture data.

This thesis presents a new approach that adds flexibility and intelligence to PDFA by means of context-sensitive facial expression blending. The proposed approach uses a Fuzzy Logic based framework to automatically decide the facial changes that must be blended with the captured facial motion in the light of the present context. The required changes are translated into spatial movements of the facial features, which the synthesis module of the system uses to generate the enhanced facial expressions of the avatar. Within this scope, the work presented in this thesis covers the following areas:

• Expression Analysis
• Intelligent Systems
• Digital Image Processing
• Expression Synthesis

The presented approach lends PDFA a flexibility it has lacked so far. It has several potential applications in diverse areas including, for example, virtual tutoring, deaf communication, person identification, and the entertainment industry. Experimental results indicate very good analysis and synthesis performance.
dc.description.sponsorship (en_US): Higher Education Commission, Pakistan.
dc.language.iso (en_US): en
dc.publisher (en_US): NATIONAL UNIVERSITY OF SCIENCES & TECHNOLOGY, PAKISTAN
dc.subject (en_US): Computer science, information & general works
dc.title (en_US): INTELLIGENT EXPRESSION BLENDING FOR PERFORMANCE DRIVEN FACIAL ANIMATION
dc.type (en_US): Thesis
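
As a rough illustration of the context-sensitive blending described in the abstract, the sketch below shows how a small fuzzy rule base might map a context score to a blend weight and apply it to captured feature displacements. This is a minimal sketch under assumed names (context_positivity, blend_weight, the triangular memberships, and the smile target offsets are all illustrative), not the thesis's actual framework.

    # Toy fuzzy-logic blending sketch (illustrative only, not the thesis's rule base).
    # A hypothetical context score in [0, 1] decides how strongly a "smile" target
    # expression is blended into captured facial feature displacements.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b over the interval [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def blend_weight(context_positivity):
        """Two fuzzy rules: IF context is negative THEN weight is low;
        IF context is positive THEN weight is high. Defuzzify by a
        membership-weighted average of representative output weights."""
        negative = tri(context_positivity, -0.01, 0.0, 0.6)   # membership in "negative context"
        positive = tri(context_positivity, 0.4, 1.0, 1.01)    # membership in "positive context"
        low, high = 0.1, 0.9                                   # representative blend weights
        total = negative + positive
        return (negative * low + positive * high) / total if total else low

    def blend(captured_offsets, smile_offsets, context_positivity):
        """Linearly blend captured feature-point offsets with a target expression."""
        w = blend_weight(context_positivity)
        return [(1 - w) * c + w * s for c, s in zip(captured_offsets, smile_offsets)]

    # Example: three mouth-corner displacements from capture vs. an assumed smile target.
    print(blend([0.0, 0.1, -0.05], [0.3, 0.4, 0.2], context_positivity=0.8))

In this toy version, a strongly positive context pushes the weight toward 0.9, so the smile target dominates the blended displacements; a real system would use richer context variables and a larger rule set.
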
Appears in Collections: Thesis

Files in This Item:
File        Description    Size     Format
451.htm     -              127 B    HTML


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.