DSpace Collection: http://hdl.handle.net/2440/1078
2015-05-23T02:43:32Z

Title: LP-based approximation algorithms for reliable resource allocation
Handle: http://hdl.handle.net/2440/91255
Date: 2013-12-31T13:30:00Z
Author: Liao, K.; Shen, H.
Abstract: We initiate the study of the reliable resource allocation (RRA) problem. In this problem, we are given a set of sites ℱ, each with an unconstrained number of facilities as resources. Every facility at site i ∈ ℱ has an opening cost and a service reliability p_i. There is also a set of clients to be allocated to facilities. Every client j accesses a facility at site i with a connection cost and reliability l_ij. In addition, every client j has a minimum reliability requirement (MRR) r_j for accessing facilities. The objective of the problem is to decide the number of facilities to open at each site and connect these facilities to clients such that all clients' MRRs are satisfied at a minimum total cost. The unconstrained fault-tolerant resource allocation problem studied in Liao and Shen [(2011) Unconstrained and Constrained Fault-Tolerant Resource Allocation. Proceedings of the 17th Annual International Conference on Computing and Combinatorics (COCOON), Dallas, Texas, USA, August 14–16, pp. 555–566. Springer, Berlin] is a special case of RRA. Both of these resource allocation problems are derived from classical facility location theory. In this paper, for solving the general RRA problem, we develop two equivalent primal-dual algorithms, where the second is an acceleration of the first and runs in quasi-quadratic time. In the ratio analysis of the algorithms, we first obtain a constant approximation factor of 2+2√2 and then a reduced ratio of 3.722 using a factor-revealing program, when the l_ij's are uniform over i (partially uniform) and the r_j's are uniform above the threshold reliability that a single access to a facility can provide. The analysis further elaborates and generalizes the inverse dual-fitting technique introduced in Xu and Shen [(2009) The Fault-Tolerant Facility Allocation Problem. Proceedings of the 20th International Symposium on Algorithms and Computation (ISAAC), Honolulu, HI, USA, December 16–18, pp. 689–698. Springer, Berlin].
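As background only (this is a generic illustration, not the paper's algorithm): dual-fitting-style analyses of the kind referenced above are classically demonstrated on the greedy heuristic for minimum set cover, which can be sketched as:

```python
# Illustrative sketch of the classical greedy set-cover heuristic.
# All names here are our own; the paper analyses a different (primal-dual)
# algorithm for RRA, and only formalizes its technique on set cover.

def greedy_set_cover(universe, subsets):
    """Pick subsets until every element of `universe` is covered.

    `subsets` maps a subset name to the frozenset of elements it covers.
    Greedy rule: always take the subset covering the most uncovered
    elements. Dual fitting shows this is an H_n-approximation, where
    H_n = 1 + 1/2 + ... + 1/n is the n-th harmonic number.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Subset covering the largest number of still-uncovered elements.
        best = max(subsets, key=lambda s: len(subsets[s] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe is not coverable by the given subsets")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```

In a dual-fitting argument, the price each covered element pays is interpreted as a (scaled-down) feasible solution to the LP dual of the set-cover relaxation, which bounds the greedy cost against the LP optimum.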
Moreover, we formalize this technique for analyzing the minimum set cover problem. For a special case of RRA, where all r_j's and l_ij's are uniform, we derive its approximation ratio through a novel reduction to the uncapacitated facility location problem. The reduction demonstrates some useful and generic linear programming techniques.

Title: Characterness: an indicator of text in the wild
Handle: http://hdl.handle.net/2440/91185
Date: 2013-12-31T13:30:00Z
Author: Li, Y.; Jia, W.; Shen, C.; van den Hengel, A.
Abstract: Text in an image provides vital information for interpreting its contents, and text in a scene can aid a variety of tasks from navigation to obstacle avoidance and odometry. Despite its value, however, detecting general text in images remains a challenging research problem. Motivated by the need to consider the widely varying forms of natural text, we propose a bottom-up approach to the problem, which reflects the characterness of an image region. In this sense, our approach mirrors the move from saliency detection methods to measures of objectness. In order to measure the characterness, we develop three novel cues that are tailored for character detection and a Bayesian method for their integration. Because text is made up of sets of characters, we then design a Markov random field model so as to exploit the inherent dependencies between characters. We experimentally demonstrate the effectiveness of our characterness cues as well as the advantage of Bayesian multicue integration. The proposed text detector outperforms state-of-the-art methods on a few benchmark scene text detection data sets. We also show that our measurement of characterness is superior to state-of-the-art saliency detection models when applied to the same task.

Title: Artistic image analysis using the composition of human figures
Handle: http://hdl.handle.net/2440/91027
Date: 2014-12-31T13:30:00Z
Author: Chen, Q.; Carneiro, G.
Abstract: Artistic image understanding is an interdisciplinary research field of increasing importance for the computer vision and art history communities. One of the goals of this field is the implementation of a system that can automatically retrieve and annotate artistic images. The best approach in the field explores the artistic influence among different artistic images using graph-based learning methodologies that take into consideration appearance and label similarities, but the current state-of-the-art results indicate that there is still considerable room for improvement in retrieval and annotation accuracy. In order to improve those results, we introduce novel human figure composition features that can compute the similarity between artistic images based on the location and number, i.e., composition, of human figures. Our main motivation for developing such features lies in the importance that composition, particularly the composition of human figures, has in the analysis of artistic images when defining the visual classes present in those images. We show that the introduction of such features in the current dominant methodology of the field significantly improves the state-of-the-art retrieval and annotation accuracies on the PRINTART database, which is a public database exclusively composed of artistic images.
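As a purely illustrative simplification (the paper's actual features are not specified here, and all names below are our own), a composition feature based on figure count and figure locations could be sketched as a coarse spatial histogram of figure centres, compared by histogram intersection:

```python
# Illustrative sketch only, not the paper's feature: encode the composition
# of human figures in an image as (figure count, coarse spatial histogram
# of normalised figure centres), compared by histogram intersection.

def composition_feature(figure_centres, grid=3):
    """figure_centres: list of (x, y) in [0, 1] x [0, 1] normalised coords."""
    hist = [0] * (grid * grid)
    for x, y in figure_centres:
        col = min(int(x * grid), grid - 1)
        row = min(int(y * grid), grid - 1)
        hist[row * grid + col] += 1
    return len(figure_centres), hist

def composition_similarity(feat_a, feat_b):
    """Histogram intersection, normalised by the larger figure count."""
    count_a, hist_a = feat_a
    count_b, hist_b = feat_b
    if max(count_a, count_b) == 0:
        return 1.0  # two figure-free images are trivially alike
    overlap = sum(min(a, b) for a, b in zip(hist_a, hist_b))
    return overlap / max(count_a, count_b)
```

Normalising by the larger count makes the score penalise both mismatched figure positions and mismatched figure numbers, the two ingredients of composition the abstract highlights.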
Description: see http://link.springer.com/content/pdf/10.1007/978-3-319-16178-5.pdf

Title: KSM based machine learning for markerless motion capture
Handle: http://hdl.handle.net/2440/91016
Date: 2008-12-31T13:30:00Z
Author: Tangkuampien, T.; Suter, D.
Abstract: A marker-less motion capture system, based on machine learning, is proposed and tested. Pose information is inferred from images captured by multiple (as few as two) synchronized cameras. The central concept of the system is what we call Kernel Subspace Mapping (KSM). The images-to-pose learning could be done with large numbers of images of a large variety of people (with the ground-truth poses accurately known). Of course, obtaining the ground-truth poses could be problematic; here we choose to use synthetic data (both for learning and for at least some of the testing). The system needs to generalize well to novel inputs: unseen poses (not in the training database) and unseen actors. For learning we use a generic and relatively low-fidelity computer-graphics model, and for testing we sometimes use a more accurate model (made to resemble the first author). What makes machine learning viable for human motion capture is that a high percentage of human motion is coordinated. Indeed, it is now relatively well known that there is large redundancy both in the set of possible images of a human (these images form some sort of relatively smooth low-dimensional manifold in the huge-dimensional space of all possible images) and in the set of pose angles (again, a low-dimensional and smooth sub-manifold of the moderately high-dimensional space of all possible joint angles). KSM is based on the Kernel PCA (KPCA) algorithm, which is costly. We show that the Greedy Kernel PCA (GKPCA) algorithm can be used to speed up KSM with relatively minor modifications. At the core, then, are two KPCAs (or two GKPCAs): one for learning the pose manifold and one for learning the image manifold. We then use a modification of Locally Linear Embedding (LLE) to bridge between the pose and image manifolds.
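The KPCA step at the core of KSM can be illustrated with a minimal sketch (our own pure-Python, one-component version; the paper's actual pipeline, including GKPCA and the LLE bridge, is more involved):

```python
import math
import random

# Minimal one-component Kernel PCA sketch (an illustration, not the paper's
# implementation): build an RBF kernel matrix, double-centre it, then run
# power iteration to recover the leading kernel principal component.

def rbf_kernel_matrix(points, gamma=1.0):
    """K[i][j] = exp(-gamma * ||points[i] - points[j]||^2)."""
    n = len(points)
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [[math.exp(-gamma * sqdist(points[i], points[j])) for j in range(n)]
            for i in range(n)]

def center_kernel(K):
    """Double-centre K so the implicit feature vectors have zero mean."""
    n = len(K)
    row = [sum(r) / n for r in K]
    total = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + total for j in range(n)]
            for i in range(n)]

def leading_eigenvector(K, iters=200, seed=0):
    """Power iteration for the top eigenvector of the (PSD) centred kernel."""
    n = len(K)
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n)]
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v  # entries give each sample's coordinate on the first kernel PC

# Two tight clusters: the first kernel PC should separate them by sign.
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
v = leading_eigenvector(center_kernel(rbf_kernel_matrix(points)))
```

In KSM two such decompositions are learned, one on images and one on poses; GKPCA replaces the full eigendecomposition with a greedy subset selection to cut the cost, and an LLE-style map links the two low-dimensional representations.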