Learning, inference, and decision making with probabilistic user models,
including considerations of preferences about outcomes under uncertainty,
may be infeasible on portable devices. The subject invention provides
systems and methods for pre-computing and storing policies based on
offline preference assessment, learning, and reasoning about ideal
actions and interactions, taking into consideration uncertainties,
preferences, and/or future states of the world. Actions include ideal
real-time inquiries about a state, using pre-computed
value-of-information analyses. In one specific example, such
pre-computation can be applied to automatically generate and distribute
call-handling policies for cell phones. The methods can employ learned
Bayesian network user models to predict whether users will attend
meetings on their calendar and to estimate the cost of being interrupted
by incoming calls should a meeting be attended.
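As a rough illustration of how such a pre-computed policy might work, the sketch below builds a call-handling lookup table offline: a (hypothetical) pre-computed probability of meeting attendance is combined with assumed interruption and deferral costs to select the lower expected-cost action, so the device itself performs only a table lookup at call time. All feature names, probabilities, and cost values here are illustrative assumptions, not taken from the invention.

```python
# Illustrative sketch only: probabilities, costs, and feature names are
# hypothetical assumptions, not drawn from the subject invention.

# Pre-computed (offline) probability that the user attends a calendar
# meeting, given simple meeting features (user's role, meeting kind).
P_ATTEND = {
    ("organizer", "recurring"): 0.95,
    ("organizer", "one_off"):   0.90,
    ("invitee",   "recurring"): 0.60,
    ("invitee",   "one_off"):   0.25,
}

COST_INTERRUPT_IN_MEETING = 10.0  # assumed cost of ringing during an attended meeting
COST_DEFER_CALL = 3.0             # assumed cost of sending the caller to voicemail

def call_policy(role: str, kind: str) -> str:
    """Choose the action with the lower expected cost for one context."""
    p_attend = P_ATTEND[(role, kind)]
    expected_cost_ring = p_attend * COST_INTERRUPT_IN_MEETING
    return "defer" if expected_cost_ring > COST_DEFER_CALL else "ring"

# The whole policy table is generated offline and distributed to the
# phone, so no inference runs on the portable device at call time.
POLICY_TABLE = {
    (role, kind): call_policy(role, kind)
    for role in ("organizer", "invitee")
    for kind in ("recurring", "one_off")
}
```

At call time the device looks up `POLICY_TABLE[(role, kind)]` for the currently scheduled meeting; the expensive preference assessment and expected-cost reasoning all happened offline when the table was built.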