We study the problem of determining the complexity of the lower envelope of a collection of n geometric objects. For collections of rays, unit-length line segments, and unit squares to which we apply at most two transformations from among translation, rotation, and scaling, we prove a complexity of Θ(n). If all three transformations are applied to unit squares, then we show the complexity becomes Θ(nα(n)), where α(n) is the slowly growing inverse of Ackermann's function.
We give a brief history of distance learning in Sweden. We then present the main electronically available resources that have been established in recent years for learning purposes. We divide the resources into two categories. Administrative resources: these include LMS systems and other systems to administer applications, student results, course material, etc. We present studera.nu, the Swedish application website for all university courses and programs, as well as Ladok and Skola24, the nationwide student result databases for universities and schools. Educational resources: these are mainly web-based systems containing material that can be used for education. We present LIBRIS Uppsök and SwePub, the two Swedish National Library websites containing all theses and essays published at Swedish universities. We also look at the electronic library search system for Malmö University as an example of an online library system. Finally, we discuss internationally available resources such as Wikipedia, Google Search and Google Scholar, as well as the educational aspects of social networking systems such as Facebook, Myspace and Classroom 2.0.
A k-transmitter g in a polygon P with n vertices k-sees a point p ∈ P if the line segment gp intersects P's boundary at most k times. In the k-Transmitter Watchman Route Problem we aim to minimize the length of a k-transmitter watchman route along which every point in the polygon, or a discrete set of points in the interior of the polygon, is k-seen. We show that the k-Transmitter Watchman Route Problem for a discrete set of points is NP-hard for histograms, uni-monotone polygons, and star-shaped polygons given a fixed starting point. For histograms and uni-monotone polygons it is also NP-hard without a fixed starting point. Moreover, none of these versions can be approximated to within a factor c · log n, for any constant c > 0.
Online routing in a planar embedded graph is central to a number of fields and has been studied extensively in the literature. For most planar graphs no O(1)-competitive online routing algorithm exists. A notable exception is the Delaunay triangulation for which Bose and Morin (SIAM J Comput 33(4):937-951, 2004) showed that there exists an online routing algorithm that is O(1)-competitive. However, a Delaunay triangulation can have Omega(n) vertex degree and a total weight that is a linear factor greater than the weight of a minimum spanning tree. We show a simple construction, given a set V of n points in the Euclidean plane, of a planar geometric graph on V that has small weight (within a constant factor of the weight of a minimum spanning tree on V), constant degree, and that admits a local routing strategy that is O(1)-competitive. Moreover, the technique used to bound the weight works generally for any planar geometric graph whilst preserving the admission of an O(1)-competitive routing strategy.
Online routing in a planar embedded graph is central to a number of fields and has been studied extensively in the literature. For most planar graphs no O(1)-competitive online routing algorithm exists. A notable exception is the Delaunay triangulation for which Bose and Morin showed that there exists an online routing algorithm that is O(1)-competitive. However, a Delaunay triangulation can have Omega(n) vertex degree and a total weight that is a linear factor greater than the weight of a minimum spanning tree. We show a simple construction, given a set V of n points in the Euclidean plane, of a planar geometric graph on V that has small weight (within a constant factor of the weight of a minimum spanning tree on V), constant degree, and that admits a local routing strategy that is O(1)-competitive. Moreover, the technique used to bound the weight works generally for any planar geometric graph whilst preserving the admission of an O(1)-competitive routing strategy.
We present an O(n^{2} · min{m, n}) time and O(n · min{m, n}) storage algorithm to compute the minsum set of m watchman routes, given their starting points, in a Stiegl polygon (a staircase polygon whose floor consists of a single horizontal edge and a single vertical edge) with n vertices.
We prove that the minimum line covering problem and the minimum guard covering problem restricted to 2-link polygons are APX-hard.
The time dependent traveling salesman problem (TDTSP) is a variant of TSP with time dependent edge costs. We study some restrictions of TDTSP where the number of edge cost changes is limited. We find competitive ratios for online versions of TDTSP. From these we derive polynomial time approximation algorithms for graphs with edge costs one and two. In addition, we present an approximation algorithm for the orienteering problem with edge costs one and two.
This work presents an extension of the Thompson Sampling bandit policy for orchestrating a collection of base recommendation algorithms for e-commerce. We focus on the problem of item-to-item recommendations, for which multiple behavioral and attribute-based predictors are provided to an ensemble learner. In addition, we detail the construction of a personalized predictor based on k-Nearest Neighbors (kNN), with temporal decay capabilities and event weighting. We show how to adapt Thompson Sampling to realistic situations when neither action availability nor reward stationarity is guaranteed. Furthermore, we investigate the effects of priming the sampler with pre-set parameters of reward probability distributions by utilizing the product catalog and/or event history, when such information is available. We report our experimental results based on the analysis of three real-world e-commerce datasets.
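As an illustrative sketch only (not the paper's implementation), Bernoulli Thompson Sampling with Beta priors can be restricted to currently available arms and primed with pre-set distribution parameters roughly as follows; the class and arm names are hypothetical:

```python
import random

class ThompsonSampler:
    """Bernoulli Thompson Sampling over a set of base recommenders (arms)."""

    def __init__(self, arms, priors=None):
        # priors: optional dict arm -> (alpha, beta), e.g. estimated from
        # the product catalog or event history to prime the sampler
        self.params = {a: (priors or {}).get(a, (1.0, 1.0)) for a in arms}

    def select(self, available):
        # sample reward beliefs only for the currently available arms
        return max(available, key=lambda a: random.betavariate(*self.params[a]))

    def update(self, arm, reward):
        # reward in {0, 1}: increment the matching Beta parameter
        alpha, beta = self.params[arm]
        self.params[arm] = (alpha + reward, beta + (1 - reward))
```

With two arms whose true click probabilities differ, the sampler concentrates its pulls on the better arm after a few hundred rounds.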
We study bandit algorithms for e-commerce recommender systems. The question we pose is whether it is necessary to consider reinforcement learning effects in recommender systems. A key reason to introduce a recommender system for a product page on an e-commerce site is to increase the order value by improving the chance of making an upsale. If the recommender system merely predicts the next purchase, there might be no positive effect at all on the order value, since the recommender system predicts sales that would have happened independently of the recommender system. What we are really looking for are the false negatives, i.e., purchases that happen as a consequence of the recommender system. These purchases constitute the entire uplift and should be present as reinforcement learning effects. This effect cannot be observed in a simulation of the site, since there are no reinforcement learning effects present in a simulation. The attribution model must capture the uplift to guarantee an increased order value. However, such an attribution model is not practical, due to data sparsity. Given this starting point, we study some standard attribution models for e-commerce recommender systems, and describe how these fare when applied in a reinforcement learning algorithm, both in a simulation and on live sites.
This work presents an extension of the Thompson Sampling bandit policy for orchestrating a collection of base recommendation algorithms for e-commerce. We focus on the problem of item-to-item recommendations, for which multiple behavioral and attribute-based predictors are provided to an ensemble learner. We show how to adapt Thompson Sampling to realistic situations when neither action availability nor reward stationarity is guaranteed. Furthermore, we investigate the effects of priming the sampler with pre-set parameters of reward probability distributions by utilizing the product catalog and/or event history, when such information is available. We report our experimental results based on the analysis of three real-world e-commerce datasets.
Clique clustering is the problem of partitioning the vertices of a graph into disjoint clusters, where each cluster forms a clique in the graph, while optimizing some objective function. In online clustering, the input graph is given one vertex at a time, and any vertices that have previously been clustered together are not allowed to be separated. The goal is to maintain a clustering with an objective value close to the optimal solution. For the variant where we want to maximize the number of edges in the clusters, we propose an online algorithm based on the doubling technique. It has an asymptotic competitive ratio at most 15.646 and a strict competitive ratio at most 22.641. We also show that no deterministic algorithm can have an asymptotic competitive ratio better than 6. For the variant where we want to minimize the number of edges between clusters, we show that the deterministic competitive ratio of the problem is n−ω(1), where n is the number of vertices in the graph.
A clique clustering of a graph is a partitioning of its vertices into disjoint cliques. The quality of a clique clustering is measured by the total number of edges in its cliques. We consider the online variant of the clique clustering problem, where the vertices of the input graph arrive one at a time. At each step, the newly arrived vertex forms a singleton clique, and the algorithm can merge any existing cliques in its partitioning into larger cliques, but splitting cliques is not allowed. We give an online strategy with competitive ratio 15.645 and we prove a lower bound of 6 on the competitive ratio, improving the previous respective bounds of 31 and 2.
Several techniques are available for constraint satisfaction problems. Most of them lack real-time capability and distributed computational properties. Configuration of complex systems has today become a competitive edge, e.g., within e-business. The other side of the same coin can be used for diagnosis and simulation of safety-critical applications. This paper gives a scientific background and comparison of methods for constraint satisfaction problems implemented in real-time environments. In particular, the technique and features of array-based logic are illustrated by three examples: 1. Modelling and simulation of system-wide protection applications of power systems. 2. Modelling, simulation and implementation of safety-critical switching in power systems. 3. Modelling, simulation and implementation of safety-critical systems for automatic train control. Array-based logic is a novel technology founded on a geometrical representation of logic in terms of nested data arrays. In this representation, logical inferences are executed on finite-domain systems by a few standardised array operations. In computational practice, array-based logic makes it possible to solve large-scale combinatorial constraint problems very efficiently. The major advantages of the technology are completeness, compactness, and speed of simulation (real-time capability), all of which derive from intrinsic properties of the underlying geometrical model. A system in array-based logic is considered analogously to electrical networks and is thus described by the constraints of the isolated elements as well as the constraints of the interconnected elements (the system topology). All system constraints are eliminated in accordance with the methods known from electrical network theory and other engineering disciplines rooted in classical physics.
The configuration space is represented explicitly in terms of nested data arrays and the system can therefore be simulated in real time by coordinate indexing on these arrays. This data-driven approach makes it suitable for parallel processing, which is an important quality for distributed systems.
Clique clustering is the problem of partitioning a graph into cliques so that some objective function is optimized. In online clustering, the input graph is given one vertex at a time, and any vertices that have previously been clustered together are not allowed to be separated. The objective here is to maintain a clustering that never deviates too far in the objective function from the optimal solution. We give a constant competitive upper bound for online clique clustering, where the objective function is to maximize the number of edges inside the clusters. We also give almost matching upper and lower bounds on the competitive ratio for online clique clustering, where we want to minimize the number of edges between clusters. In addition, we prove that the greedy method gives only a linear competitive ratio for these problems.
We study the ART GALLERY problem under k-hop visibility in polyominoes. In this visibility model, two unit squares of a polyomino can see each other if and only if the shortest path between the respective vertices in the dual graph of the polyomino has length at most k. In this paper, we show that the VC dimension of this problem is 3 in simple polyominoes, and 4 in polyominoes with holes. Furthermore, we provide a reduction from PLANAR MONOTONE 3SAT, thereby showing that the problem is NP-complete even in thin polyominoes (i.e., polyominoes that do not contain a 2 × 2 block of cells). Complementarily, we present a linear-time 4-approximation algorithm for simple 2-thin polyominoes (which do not contain a 3 × 3 block of cells) for all k ∈ ℕ.
We exhibit two linear time approximation algorithms for guarding rectilinear staircase polygons, both having approximation factor 2. The first algorithm benefits from its simplicity, whereas the second provides more insight into the problem.
In this paper, we consider the variant of the art gallery problem where the input polygon is a staircase polygon. Previous works on this problem gave a 2-approximation for point guarding a staircase polygon (where guards can be placed anywhere in the interior of the polygon and we wish to guard the entire polygon). It is still unknown whether this point guarding variant is NP-hard. In this paper we consider the vertex guarding problem, where guards are only allowed to be placed at the vertices of the polygon, and we wish to guard only the vertices of the polygon. We show that this problem is NP-hard, and we give a polynomial-time 2-approximation algorithm.
We study the problem of optimizing recommendation systems for e-commerce sites. We consider in particular a combinatorial solution to this optimization based on the well known Maximum Coverage problem that asks for the k sets (products) that cover the most elements from a ground set (consumers). This formulation provides an abstract model for which k products should be recommended to maximize the probability of consumer purchase. Unfortunately, Maximum Coverage is NP-complete, but an efficient approximation algorithm exists based on the Greedy methodology. We exhibit test results from the Greedy method on real data sets showing a 3-8% increase in sales using the Maximum Coverage optimization method in comparison to the standard best-seller list. A secondary effect that our Greedy algorithm exhibits on the tested data is increased diversification in presented products over the best-seller list.
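As a sketch of the Greedy methodology referenced above (the product and consumer data are hypothetical), each round picks the product covering the most not-yet-covered consumers, which yields the classic (1 − 1/e)-approximation to Maximum Coverage:

```python
def greedy_max_coverage(products, k):
    """products: dict mapping product -> set of consumers it would convert.
    Greedily pick up to k products, each maximizing newly covered consumers."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(products, key=lambda p: len(products[p] - covered))
        if not products[best] - covered:
            break                      # nothing new can be covered
        chosen.append(best)
        covered |= products[best]
    return chosen, covered

# hypothetical data: the consumers each product appeals to
catalog = {"A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {6}, "D": {1, 6}}
chosen, covered = greedy_max_coverage(catalog, 2)
# chosen == ["A", "B"], covered == {1, 2, 3, 4, 5}
```

Note the diversification effect: after "A" is picked, products overlapping with it score low, so the second pick favors products reaching different consumers.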
We study the approximation complexity of certain kinetic variants of the Traveling Salesman Problem, restricted to instances where points move with fixed constant velocity in a fixed direction. We prove the following results: 1. If the points all move with the same velocity and in the same direction, then there is a PTAS for the Kinetic TSP. 2. The Kinetic TSP cannot be approximated better than by a factor of two by a polynomial time algorithm unless P=NP, even if there are only two moving points in the instance. 3. The Kinetic TSP cannot be approximated better than by a factor of $2^{\Omega(\sqrt{n})}$ by a polynomial time algorithm unless P=NP, where $n$ is the size of the input instance, even if the maximum velocity is bounded.
Exploring a polygon with robots, when the robots do not have knowledge of the surroundings can be viewed as an online problem. Typical for online problems is that decisions must be made based on past events without complete information about the future. In our case the robots do not have complete information about the environment. Competitive analysis can be used to measure the performance of methods solving online problems. The competitive ratio of such a method is the ratio between the method's performance and the performance of the best method having full knowledge of the future. We are interested in obtaining good bounds on the competitive ratio of exploring polygons and prove constant competitive strategies and lower bounds for exploring a simple rectilinear polygon in the $L_1$ metric.
We consider the following problem from swarm robotics: given one or more “awake” robots in some metric space M, wake up a set of “asleep” robots. A robot awakens a sleeping robot by moving to the sleeping robot's position. When a robot awakens, it is available to assist in awakening other slumbering robots. We investigate offline and online versions of this problem and give a 2-competitive strategy and a lower bound of 2 in the case when M is discrete and the objective is to minimize the total movement cost. We also study the case when M is continuous and show a lower bound of 7/3 when the objective is to minimize the time when the last robot awakens.
We prove a 5/3-competitive strategy for exploring a simple rectilinear polygon in the $L_1$ metric. This improves the previous factor two bound of Deng, Kameda and Papadimitriou.
We investigate parallel searching on $m$ concurrent rays. We assume that a target $t$ is located somewhere on one of the rays; we are given a group of $m$ point robots each of which has to reach $t$. Furthermore, we assume that the robots have no way of communicating over distance. Given a strategy $S$ we are interested in the competitive ratio defined as the ratio of the time needed by the robots to reach $t$ using $S$ and the time needed to reach $t$ if the location of $t$ is known in advance. If a lower bound on the distance to the target is known, then there is a simple strategy which achieves a competitive ratio of~9 --- independent of $m$. We show that 9 is a lower bound on the competitive ratio for two large classes of strategies if $m\geq 2$. If the minimum distance to the target is not known in advance, we show a lower bound on the competitive ratio of $1 + 2 (k + 1)^{k + 1} / k^k$ where $k = \left\lceil\log m\right\rceil$ where $\log$ is used to denote the base 2 logarithm. We also give a strategy that obtains this ratio.
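For intuition, the ratio 9 already appears in the classic doubling strategy for the special case $m = 2$ (one robot, two rays); the following sketch, assuming a known lower bound of 1 on the target distance, simulates that strategy and shows its competitive ratio approaching but never exceeding 9:

```python
def doubling_search_cost(d, target_ray):
    """Distance traveled by one robot that explores ray i % 2 out to 2**i,
    returns to the origin, and repeats, until it reaches the target at
    distance d >= 1 on ray target_ray (0 or 1)."""
    cost, i = 0.0, 0
    while True:
        turn, ray = 2.0 ** i, i % 2
        if ray == target_ray and turn >= d:
            return cost + d            # target reached on this excursion
        cost += 2 * turn               # out and back to the origin
        i += 1

# worst cases: target just beyond a turn point of matching parity
ratios = [doubling_search_cost(2 ** j + 0.01, r) / (2 ** j + 0.01)
          for j in range(1, 12) for r in (0, 1)]
```

In the worst case the robot overshoots the target's turn point, travels the full next excursion on the wrong ray, and only then returns: total cost about 2(2^{j+2} − 1) + 2^j ≈ 9 · 2^j for a target at distance slightly above 2^j.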
We show that vertex guarding a monotone polygon is NP-hard and construct a constant factor approximation algorithm for interior guarding monotone polygons. Using this algorithm we obtain an approximation algorithm for interior guarding rectilinear polygons that has an approximation factor independent of the number of vertices of the polygon. If the size of the smallest interior guard cover is $OPT$ for a rectilinear polygon, our algorithm produces a guard set of size $O(OPT^2)$.
A polygon P is x-monotone if any line orthogonal to the x-axis has a simply connected intersection with P. A set G of points inside P or on the boundary of P is said to guard the polygon if every point inside P or on the boundary of P is seen by a point in G. An interior guard can lie anywhere inside or on the boundary of the polygon. Using a reduction from Monotone 3SAT, we prove that the decision version of this problem is NP-hard. Because interior guards can be placed anywhere inside the polygon, a clever gadget is introduced that forces interior guards to be placed at very specific locations.
We study the art gallery problem for opposing half-guards: guards that can see either to their left or to their right only. We present art gallery theorems, show that the problem is NP-hard in monotone polygons, present approximation algorithms for spiral and staircase polygons, and show that the location of half-guards in 2-guardable polygons is not restricted to extensions.
It is shown that a minimum spanning tree of $n$ points in $R^d$ under any fixed $L_p$-metric, with $p=1,2,\ldots,\infty$, can be computed in optimal $O(T_d(n,n) )$ time in the algebraic computational tree model. $T_d(n,m)$ denotes the time to find a bichromatic closest pair between $n$ red points and $m$ blue points. The previous bound in the model was $O( T_d(n,n) \log n )$ and it was proved only for the $L_2$ (Euclidean) metric. Furthermore, for $d = 3$ it is shown that a minimum spanning tree can be found in $O(n \log n)$ time under the $L_1$ and $L_\infty$-metrics. This is optimal in the algebraic computation tree model. The previous bound was $O(n \log n \log \log n)$.
The watchman route of a polygon is a closed tour that sees all points of the polygon. Computing the shortest such tour is a well-studied problem. Another reasonable optimization criterion is to require that the tour minimizes the hiding time of the points in the polygon, i.e., the maximum time during which any point is not seen by the agent following the tour at unit speed. We call such tours surveillance routes. We show a linear time 3/2-approximation algorithm for the optimum surveillance tour problem in rectilinear polygons using the L_1-metric. We also present a polynomial time O(polylog w_max)-approximation algorithm for the optimum weighted discrete surveillance route in a simple polygon with weight values in the range [1, w_max].
We study the problem of clearing connections by agents placed at some vertices in a directed graph. The agents can move only along directed paths. The objective is to minimize the number of agents guaranteeing that any pair of vertices can be connected by an underlying undirected path that can be cleared by the agents. We provide several results on the hardness, approximability and parameterized complexity of the problem. In particular, we show it to be: NP-hard, 2-approximable in polynomial time, and solvable exactly in O(α n^3 2^{2α}) time, where α is the number of agents in the solution. In addition, we give a simple linear-time algorithm optimally solving the problem in digraphs whose underlying graphs are trees. Finally, we discuss a related problem, where the task is to clear with a minimum number of agents a subgraph of the underlying graph containing its spanning tree. We show that this problem also admits a 2-approximation in polynomial time.
We show a constant factor approximation algorithm for interior guarding of monotone polygons. Using this algorithm we obtain an approximation algorithm for interior guarding rectilinear polygons that has an approximation factor independent of the number of vertices of the polygon. If the size of the smallest interior guard cover is OPT for a rectilinear polygon, our algorithm produces a guard set of size O(OPT^2).
We present a fast algorithm for computing a watchman route in a simple polygon that is at most a constant factor longer than the shortest watchman route. The algorithm runs in $O(n\log n)$ time as compared to the best known algorithm that computes a shortest watchman route which runs in $O(n^6)$ time.
We present a university course in "Web Development" that has been taught at Malmö University since 2002 as a distance and e-learning course. We discuss course content, objectives and student results as a function of web-based course material and e-learning set-up. We study the positive and negative effects of "availability." Many of the solutions and methods taught in the course are available on the web. How does this affect learning, and how does one move from copying as a way of cheating to copying as a means of learning? Over time the outside parameters of the course have changed, e.g., the "Learning Management Systems" (LMS) have changed three times since the conception of the course. We present a short list of requirements for LMS-systems to be able to function well in e-learning courses in higher education.
We present an O(nrG) time algorithm for computing and maintaining a shortest watchman tour that sees a simple polygon under monotone visibility in direction θ, as θ varies in [0°, 180°), obtaining the directions for which the tour is shortest over all tours, where n is the number of vertices, r is the number of reflex vertices, and G ≤ r is the maximum number of gates of the polygon used at any time in the algorithm.
Given a set S of regions with piece-wise linear boundary and a positive angle α < 90°, we consider the problem of computing the locations and orientations of the minimum number of α-floodlights positioned at points in S which suffice to illuminate the entire x-axis. We show that the problem can be solved in O(n log n) time and O(n) space, where n is the number of vertices of the set S.
The two-watchman route problem is that of computing a pair of closed tours in an environment so that the two tours together see the whole environment and some length measure on the two tours is minimized. Two standard measures are: the minmax measure, where we want the pair of tours in which the longer tour has minimal length, and the minsum measure, where we want the pair of tours for which the sum of their lengths is smallest. It is known that computing the minmax two-watchman route is NP-hard for simple rectilinear polygons and thus also for simple polygons. We exhibit a polynomial time 7.1416-factor approximation algorithm for computing the minmax two-watchman route in simple polygons.
The two-watchman route problem is that of computing a pair of closed tours in an environment so that the two tours together see the whole environment and some length measure on the two tours is minimized. Two standard measures are: the minmax measure, where we want the pair of tours in which the longer tour has smallest length, and the minsum measure, where we want the pair of tours for which the sum of their lengths is the smallest. It is known that computing a minmax two-watchman route is NP-hard for simple rectilinear polygons and thus also for simple polygons. Also, any c-approximation algorithm for the minmax two-watchman route is automatically a 2c-approximation algorithm for the minsum two-watchman route. We exhibit two constant factor approximation algorithms for computing minmax two-watchman routes in simple polygons with approximation factors 5.969 and 11.939, having running times O(n^8) and O(n^4) respectively, where n is the number of vertices of the polygon. We also use the same techniques to obtain a 6.922-approximation for the fixed two-watchman route problem running in O(n^2) time, i.e., when two starting points of the two tours are given as input.
The watchman route problem is a well investigated problem in which a closed tour inside a polygon that satisfies visibility constraints is optimized. We explore a new variant in which a tour needs to see a set of locations inside a polygon so that the maximum time interval in which those locations are not guarded (the delay) is minimized. We call such tours surveillance tours. We show that an algorithm that follows the ideas of the classic watchman problem provides a 2-approximation to our problem. We then investigate the case where each location is associated with a weight that represents the importance of not leaving the location unguarded for long periods. Thus, the tour is required to see locations with larger weights more frequently. We show that this variant is NP-hard and present two approximation algorithms.
We consider the watchman route problem for a k-transmitter watchman: standing at point p in a polygon P, the watchman can see a point q ∈ P if the line segment pq intersects P's boundary at most k times; q is then k-visible to p. Traveling along the k-transmitter watchman route, either all points in P or a discrete set of points S ⊂ P must be k-visible to the watchman. We aim to minimize the length of the k-transmitter watchman route.
We show that even in simple polygons the shortest k-transmitter watchman route problem for a discrete set of points S ⊂ P is NP-complete and cannot be approximated to within a logarithmic factor (unless P=NP), both with and without a given starting point. Moreover, we present a polylogarithmic approximation for the k-transmitter watchman route problem for a given starting point and S ⊂ P, with approximation ratio O(log²(|S| · n) log log(|S| · n) log |S|) (with |S| = m).
We consider the online two-dimensional vector packing problem, showing a lower bound of 11/5 on the competitive ratio of any AnyFit strategy for the problem. We provide a strategy with competitive ratio max{2, 6/(1+3tan(π/4−γ/2)) + ε} and logarithmic advice, for any instance where all the input vectors are restricted to have angles in the range [π/4−γ/2, π/4+γ/2], for 0 ≤ γ < π/3. In addition, we give a 5/2-competitive strategy, also using logarithmic advice, for the unrestricted vectors case. These results should be contrasted to the currently best competitive strategy, FirstFit, having competitive ratio 27/10.
We present a (k+h)-FPT algorithm for computing a shortest tour that sees k specified points in a polygon with h holes. We also present a k-FPT approximation algorithm for this problem having approximation factor √2. In addition, we prove that the general problem cannot be polynomially approximated better than by a factor of c log n, for some constant c > 0, unless P=NP, where n is the total number of edges of the polygon.
We study the computational properties of placing a minimum number of lighthouses in different geometric domains and under different notions of visibility, enabling a vehicle placed anywhere in the domain to navigate to a given specific target. This problem shares common elements with the art gallery problem in that the whole domain must be covered with as few lighthouses as possible. Our main result is an algorithm that places a minimum set of strip lighthouses in a simple rectilinear polygon. These correspond to sliding cameras in art gallery vernacular.
We present new results on two types of guarding problems for polygons. For the first problem, we present an optimal linear time algorithm for computing a smallest set of points that guard a given shortest path in a simple polygon having n edges. We also prove that in polygons with holes, there is a constant c > 0 such that no polynomial-time algorithm can solve the problem within an approximation factor of c log n, unless P=NP. For the second problem, we present a (k+h)-FPT algorithm for computing a shortest tour that sees k specified points in a polygon with h holes. We also present a k-FPT approximation algorithm for this problem having approximation factor √2. In addition, we prove that the general problem cannot be polynomially approximated better than by a factor of c log n, for some constant c > 0, unless P=NP.
We experiment on two real e-commerce datasets and survey more than 30 popular e-commerce platforms to reveal what methods work best for product recommendations in industrial settings. Despite recent academic advances in the field, we observe that simple methods such as best-seller lists dominate deployed recommendation engines in e-commerce. Our empirical findings are well aligned with those of the survey: in both cases simple personalized recommenders achieve higher rankings than more advanced techniques. We also compare the traditional random evaluation protocol to our proposed chronological sampling method, which can be used for determining the optimal time-span of the training history for optimizing the performance of algorithms. This performance is also affected by proper hyperparameter tuning, for which we propose golden section search as a fast alternative to other optimization techniques.
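As a minimal sketch of golden section search for one-dimensional hyperparameter tuning (assuming a unimodal objective; the function and parameter names are illustrative, not from the paper):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f over [a, b] by shrinking the
    bracket by the inverse golden ratio in each iteration."""
    invphi = (math.sqrt(5) - 1) / 2    # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                          # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# e.g., tune a temporal-decay hyperparameter against a validation-loss proxy
best = golden_section_search(lambda x: (x - 0.25) ** 2, 0.0, 1.0)
```

Each iteration needs only two function evaluations and no gradients, which makes it attractive when each evaluation means retraining a recommender (a cached-endpoint variant reuses one evaluation per iteration).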
Despite the increasing interest towards session-based and streaming recommender systems, there is still a lack of publicly available evaluation frameworks supporting both these paradigms. To address the gap, we propose FlowRec — an extension of the streaming framework Scikit-Multiflow, which opens plentiful possibilities for prototyping recommender systems operating on sessionized data streams, thanks to the underlying collection of incremental learners and support for real-time performance tracking. We describe the extended functionalities of the adapted prequential evaluation protocol, and develop a competitive recommendation algorithm on top of Scikit-Multiflow’s implementation of a Hoeffding Tree. We compare our algorithm to other known baselines for the next-item prediction task across three different domains.
Reported evidence of biased matchmaking calls into question the ethicality of recommendations generated by a machine learning algorithm. In the context of dating services, the failure of an automated matchmaker to respect the user’s expressed sensitive preferences (racial, religious, etc.) may lead to biased decisions perceived by users as unfair. To address the issue, we introduce the notion of preferential fairness, and propose two algorithmic approaches for re-ranking the recommendations under preferential fairness constraints. Our experimental results demonstrate that the state of fairness can be reached with minimal accuracy compromises for both binary and non-binary attributes.