Statistical Task Modeling of Activities of Daily Living for Rehabilitation
191 pages · 2016 · English
Statistical Task Modeling of Activities of Daily Living for Rehabilitation

Émilie Michèle Déborah Jean-Baptiste
School of Engineering
University of Birmingham

A thesis submitted for the degree of Doctor of Philosophy
February 2016

University of Birmingham Research Archive, e-theses repository

This unpublished thesis/dissertation is copyright of the author and/or third parties. The intellectual property rights of the author or third parties in respect of this work are as defined by the Copyright, Designs and Patents Act 1988 or as modified by any successor legislation. Any use made of information contained in this thesis/dissertation must be in accordance with that legislation and must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the permission of the copyright holder.

Acknowledgements

I would like to thank my supervisor, Prof. Martin Russell, for his supervision throughout my PhD. His guidance has been essential to my research. He has always found the time to meet me and to answer my questions. I am very grateful to him. I am also grateful to Dr. Pia Rotshtein for her support and her insights on cognitive decision models. I would also like to thank Prof. Alan Wing for his excellent team management. The success of the final CogWatch prototype was the result of a joint effort. For that I would like to thank the CogWatch team, which has provided a highly motivating environment for me to carry out this research. Finally, my thanks go to my family and my unwavering source of inspiration, for their continuous encouragement and support.

Abstract

Stroke survivors suffering from cognitive deficits experience difficulty completing their daily self-care activities. The latter are referred to as activities of daily living (ADL) [54]. The resulting loss of independence makes them rely on caregivers to help them go through their daily routine.
However, such reliance on caregivers may conflict with their need for privacy and their wish to keep control over their lives. A possible solution to this issue is the development of an assistive or rehabilitation system. Ideally, the aim of such a system would be to deliver the same services as a human caregiver. For example, the system could provide meaningful recommendations or hints to stroke survivors during a task, so that they have a higher probability of successfully continuing or completing it. To fulfill such an aim, an assistive or rehabilitation system would need to monitor stroke survivors' behavior, constantly keep track of what they do during the task, and plan the strategies they should follow to increase their chances of completing it.

The module in charge of planning is central to this process. Indeed, this module interacts with stroke survivors, or any users, during the task, analyzes how far they might be in the completion of this task, and infers what they should do to succeed at it. To do so, the planning module needs to receive information about users' behavior, and be trained to "learn" how to take decisions that could guide them. When the information it receives is incorrect, the main challenge for the planning module is to cope with the uncertainty in its inputs and still take the right decisions as far as users are concerned.

Different decision theory models exist and could be implemented, for example cognitive models [22; 23] or statistical models such as the Markov Decision Process (MDP) [86] or the Partially Observable Markov Decision Process (POMDP) [52]. The MDP assumes full observability of the system's environment, while the POMDP provides a rich and natural framework for modeling sequential decision-making problems under uncertainty.
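The difference the abstract draws between an MDP and a POMDP can be illustrated with a Bayesian belief update. The sketch below is not code from the thesis: the task states, the transition model, and the observation reliabilities are all invented for the illustration. It shows how a POMDP-style planner keeps a probability distribution over states when its input (here, an unreliable action recognizer) may be wrong, rather than trusting the observation outright as an MDP would.

```python
# Illustrative POMDP belief update (all states and probabilities invented).
# The planner never observes the user's true state; it keeps a belief state
# b(s) and updates it with Bayes' rule after each recommendation/observation.

STATES = ["kettle_filled", "kettle_empty"]

# P(next_state | state, recommendation) -- hypothetical transition model:
# a prompted user fills the kettle with probability 0.8.
TRANSITION = {
    ("kettle_empty", "prompt_fill"): {"kettle_filled": 0.8, "kettle_empty": 0.2},
    ("kettle_filled", "prompt_fill"): {"kettle_filled": 1.0, "kettle_empty": 0.0},
}

# P(observation | state) -- the action recognizer is assumed 85% reliable.
OBSERVATION = {
    "kettle_filled": {"saw_fill": 0.85, "saw_nothing": 0.15},
    "kettle_empty": {"saw_fill": 0.10, "saw_nothing": 0.90},
}

def belief_update(belief, recommendation, observation):
    """Return b'(s') proportional to P(o|s') * sum_s P(s'|s,mu) * b(s)."""
    new_belief = {}
    for s2 in STATES:
        predicted = sum(TRANSITION[(s, recommendation)][s2] * belief[s]
                        for s in STATES)
        new_belief[s2] = OBSERVATION[s2][observation] * predicted
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Start maximally uncertain, prompt the user, then see a noisy "fill" report.
belief = {"kettle_empty": 0.5, "kettle_filled": 0.5}
belief = belief_update(belief, "prompt_fill", "saw_fill")
```

After the update the belief concentrates on "kettle_filled" without ever committing to it fully, which is exactly the behavior that lets the planner recover later if the observation turns out to have been wrong.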
Hence, the POMDP is potentially a good candidate for a system whose aim is to guide stroke survivors during ADL, even if the information it receives is potentially erroneous. Since a POMDP-based system acknowledges that the information it receives about a user may be incorrect, it maintains a probability distribution over all potential situations the user might be in. These probability distributions are referred to as "belief states", and the belief state space containing all belief states is infinite.

Many methods can be implemented to solve a POMDP. For a system in charge of guiding users, solving a POMDP means finding the optimal recommendations to send to a user during a task. Exact POMDP solution methods are known to be intractable, because they aim to compute the optimal recommendation for all possible belief states contained in the belief state space [103]. A way to sidestep this intractability is to implement approximation algorithms that consider only a finite set of belief points, referred to as a "belief subspace".

In the work presented in this thesis, a belief state representation based on the reduced MDP state space is explained. We will show how restricting the growth of the MDP state space helps keep the belief state's dimensionality relatively small. The thesis also analyzes the potential for improving the strategy selection process during execution. Since a POMDP-based system finds strategies only for a subspace of belief states, it may face the challenge of deciding which strategy to take in a situation it has not been trained for. In this case, we investigated the effect of different methods that can be used during execution to approximate an unknown belief state by a belief state the system has seen during training.
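One simple family of execution-time approximation methods of the kind described above is nearest-neighbor search (NNS, listed in the nomenclature). The sketch below is a minimal illustration under invented assumptions: the trained belief subspace, the recommendation names, and the use of Euclidean distance are all hypothetical, not the thesis's actual method. It maps a belief state never seen during training onto the closest trained belief point and reuses that point's strategy.

```python
# Illustrative nearest-neighbor strategy lookup (all belief points and
# recommendation names invented).  An approximate POMDP solver produces
# strategies only for a finite belief subspace; at execution time an
# unseen belief is approximated by its nearest trained belief point.
import math

# Hypothetical trained belief subspace: belief point -> best recommendation.
TRAINED_POLICY = {
    (0.9, 0.1, 0.0): "prompt_add_water",
    (0.1, 0.8, 0.1): "prompt_add_teabag",
    (0.0, 0.2, 0.8): "prompt_stir",
}

def euclidean(b1, b2):
    """Euclidean distance between two belief vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(b1, b2)))

def select_strategy(belief):
    """Approximate an unknown belief by its nearest trained belief point."""
    nearest = min(TRAINED_POLICY, key=lambda point: euclidean(point, belief))
    return TRAINED_POLICY[nearest]

# A belief the system was never trained on is resolved to the closest
# trained point, here (0.9, 0.1, 0.0).
recommendation = select_strategy((0.7, 0.25, 0.05))
```

Other distance measures (e.g., KL divergence between belief distributions) could be substituted for the Euclidean metric; the choice of measure is precisely the kind of execution-time design decision the thesis says it investigates.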
Overall, this work represents an important step forward in the development of an artificial intelligent planning system designed to guide users suffering from cognitive deficits during their activities of daily living.

Nomenclature

General

• AI - Artificial intelligence
• MDP - Markov Decision Process
• POMDP - Partially Observable Markov Decision Process
• MC - Monte Carlo
• NL - Numerical label
• NNS - Nearest Neighbor Search
• N - Set of natural numbers
• P(.) - Probability
• P(.|.) - Conditional probability

Rehabilitation

• AADS - Apraxia or action disorganization syndrome
• ADL - Activity of daily living
• EF - Errorful
• EL - Errorless

CogWatch system

• CW - CogWatch
• SimU - Simulated User
• ARS - Action recognition system
• TM - Task Manager
• APM - Action policy module
• ERM - Error recognition module
• - SimU's compliance probability
• - SimU's probability to forget
• au - User's action
• o - ARS's output
• ω - Task Manager's prompt
• µ - Task Manager's recommendation (i.e., system's action)
• e - Task Manager's interpretation of user's error
• Θ - Signal from virtual Cue Selector
• ζ - Cue from Cue Selector
• rs - User's state representation
• sd - User's history of actions

Task formalism

• BT - Black tea
• BTS - Black tea with sugar
• WT - White tea
• WTS - White tea with sugar
• BTr - Button trigger
• AD - Addition error
• AN - Anticipation error
• OM - Omission error
• PE - Perplexity error
• PsE - Perseveration error
• QT - Quantity error
• FE - Fatal error
• NFE - Non-fatal error
• NE - Not an error

Markov Decision Process

• A - Set of recommendations (i.e., set of system's actions)
• µt - TM's recommendation at step t (i.e., system's action at step t)
• S - Set of states
• st - State at step t
• c(s, µ) - Cost incurred when taking µ in state s
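The MDP quantities listed in the nomenclature (a state set S, a recommendation set A, and a cost c(s, µ) incurred when taking µ in state s) fit together through value iteration, the standard way to compute a cost-minimizing policy. The sketch below is a toy example, not the thesis's tea-making model: the states, costs, deterministic transitions, and discount factor are all invented for illustration.

```python
# Minimal value iteration for a cost-based MDP (toy model; states, costs,
# and transitions are invented and are not the thesis's task model).
STATES = ["s0", "s1", "done"]
ACTIONS = ["mu_a", "mu_b"]

# c(s, mu): immediate cost of recommending mu in state s.
COST = {("s0", "mu_a"): 1.0, ("s0", "mu_b"): 2.0,
        ("s1", "mu_a"): 2.0, ("s1", "mu_b"): 1.0,
        ("done", "mu_a"): 0.0, ("done", "mu_b"): 0.0}

# Deterministic toy transitions: next state reached by taking mu in s.
NEXT = {("s0", "mu_a"): "s1", ("s0", "mu_b"): "done",
        ("s1", "mu_a"): "s0", ("s1", "mu_b"): "done",
        ("done", "mu_a"): "done", ("done", "mu_b"): "done"}

GAMMA = 0.9  # discount factor on future costs

def value_iteration(iterations=100):
    """Iterate the Bellman backup, then read off the greedy policy."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iterations):
        V = {s: min(COST[(s, a)] + GAMMA * V[NEXT[(s, a)]] for a in ACTIONS)
             for s in STATES}
    # Optimal policy: the recommendation minimizing expected total cost.
    return {s: min(ACTIONS, key=lambda a: COST[(s, a)] + GAMMA * V[NEXT[(s, a)]])
            for s in STATES}

policy = value_iteration()
```

Because the MDP assumes the state s is fully observed, this policy is a direct lookup table from states to recommendations; the POMDP case replaces s with a belief state, which is what makes exact solution intractable.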
