Cheating-Resilient Incentive Scheme for Mobile Crowdsensing Systems

Cong Zhao*, Xinyu Yang*, Wei Yu†, Xianghua Yao*, Jie Lin* and Xin Li*
*Xi'an Jiaotong University   †Towson University

Abstract—Mobile Crowdsensing is a promising paradigm for ubiquitous sensing, which explores the tremendous data collected by mobile smart devices with prominent spatial-temporal coverage. As a fundamental property of Mobile Crowdsensing Systems, temporally recruited mobile users can provide agile, fine-grained, and economical sensing labor; however, their self-interest cannot guarantee the quality of the sensing data, even when there is a fair return. Therefore, a mechanism is required for the system server to recruit well-behaving users for credible sensing, and to stimulate and reward more contributive users based on sensing truth discovery to further increase credible reporting. In this paper, we develop a novel Cheating-Resilient Incentive (CRI) scheme for Mobile Crowdsensing Systems, which achieves credibility-driven user recruitment and payback maximization for honest users with quality data. Via theoretical analysis, we demonstrate the correctness of our design. The performance of our scheme is evaluated based on extensive real-world trace-driven simulations. Our evaluation results show that our scheme is effective in terms of both guaranteeing sensing accuracy and resisting potential cheating behaviors, in practical scenarios as well as intentionally harsher ones.

Index Terms—Mobile Crowdsensing; Cheating-Resilient Incentive Scheme; Mobile Applications

I. INTRODUCTION

The proliferation of mobile smart devices has promoted the development of Mobile Crowdsensing Systems (MCSs), a promising paradigm for agile, fine-grained, and economical sensing with prominent spatial-temporal coverage [1]. Exploring people-centric data collected by smart devices with enriched sensors (e.g., Global Positioning System, gyroscope, and microphone), a growing number of MCS prototypes have been developed to support applications including urban sensing [2], environmental monitoring [3], and mobile social networking [4].

Observing that the data source of MCSs is a set of temporally recruited personal mobile devices, the self-interested nature of mobile users needs to be taken into account in MCS implementations. From the perspective of mobile users, considering the potential costs (e.g., physical labor, device battery life, and network bandwidth usage), participation in an MCS sensing task is unlikely unless there is a considerable payback, under the rational person hypothesis. From the perspective of the MCS server, the credibility of observations reported by temporally recruited users is not guaranteed (i.e., users may cheat in sensing tasks just for the payback without reporting quality data, which is referred to as Cheating Behavior in this paper), even when it pays fairly for data acquisition. Intuitively, we can see that: (i) the MCS server needs to recruit credible users for sensing tasks, and (ii) honest responses deserve substantial reward while dishonest reporting requires reprimand. A mechanism satisfying these two requirements simultaneously is necessary for MCS implementations.

User incentive schemes [5] aim at encouraging self-interested users to participate in system tasks by rewarding monetary or tradable paybacks. Generally speaking, existing schemes in participatory systems model the user incentive process as an optimization problem for either the system server [6], [7] or users [8], [9] by designing mechanisms based on auction or game theory. Nonetheless, research on incentive mechanisms that considers active cheating behaviors from self-interested users is relatively limited. Schemes in [10] and [11] aim to guarantee the "bidder's truthfulness" in designed auctions. Also, incentive schemes were proposed to evaluate the quality of user reports [12], [13]. However, these schemes cannot be directly deployed in highly dynamic and opportunistic MCSs. To stimulate the service time of MCS participants, a Stackelberg game-based incentive mechanism is proposed in [14], which maximizes the utility of the MCS platform and proves that a best strategy for all self-interested participants can be centrally determined. However, since the purpose of that work is to stimulate user participation and no sensing-data-quality-related factor is considered, it cannot solve the dishonest user reporting issue.
An incentive mechanism that encourages quality data reporting is necessary to guarantee the usability of MCSs.

In this paper, we develop a novel Cheating-Resilient Incentive (CRI) scheme for MCSs, which guarantees the accuracy of crowdsensing tasks while encouraging mobile users to provide quality data without cheating for maximum paybacks. Our contributions are summarized as follows:

• Based on the participation-driven incentivization in [14], we develop a reputation-driven method for the MCS server to autonomously recruit the most credible users according to their historical behaviors. Meanwhile, recruited users can obtain maximum paybacks only when they contribute no less than expected. We demonstrate the correctness of our design with theoretical analysis.

• We introduce the truth discovery technique [15] into the user incentive issue to evaluate the actual contribution of recruited users in MCS tasks. The adaptive truth discovery guarantees the accuracy of crowdsensing while providing a baseline for user contribution evaluation.

• Through extensive trace-driven simulations, we evaluate the performance of CRI. The simulation results validate the effectiveness of CRI with respect to both quality-driven user stimulation and cheating behavior resistance.

The remainder of the paper is organized as follows: In Section II, we present the MCS model and the description of the user incentive problem. In Section III, we present the CRI scheme in detail. In Section IV, we present the evaluation results of our designed scheme via extensive trace-driven simulations. Finally, we conclude this paper in Section V.

II. SYSTEM MODEL AND PROBLEM DEFINITION

A. MCS Architecture

[Fig. 1: System Architecture]

As shown in Figure 1, a general MCS consists of a cloud server s and a set of registered mobile users P = {1, 2, ..., M}, where M > 2. Any user i ∈ P can communicate with s via either cellular or WiFi access points. For a specific sensing target, s can announce a task τ to all users in P. Receiving the announcement, any user i ∈ P who is interested in τ can reply with its reputation r_i as an application, which reflects its behavior in historical tasks. According to all applications received, s determines the total reward R and the set of applicants E = {1, 2, ..., N} ⊆ P to be the final employees of τ. All employees in E then conduct the sensing obligations required by τ and report their observations O = {o_1, o_2, ..., o_N} to s. Based on all reports received, s can discover τ's sensing truth o_τ and evaluate the contributions C = {c_1, c_2, ..., c_N} of all employees separately, which determine their paybacks G = {g_1, g_2, ..., g_N} for τ and the corresponding reputation adjustments.
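For orientation, the notation of this subsection can be collected into a small data model. The following Python sketch uses names of our own choosing (the paper prescribes no implementation); it only mirrors the artifacts exchanged in one task round.

# Minimal sketch (hypothetical names): artifacts exchanged in one task round.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Task:
    """A sensing task tau announced by the cloud server s."""
    task_id: int
    total_reward: float                                             # R > 0

@dataclass
class TaskRound:
    """What s collects and produces for one task of the architecture above."""
    task: Task
    applications: Dict[int, float] = field(default_factory=dict)   # user id -> reputation r_i
    employees: List[int] = field(default_factory=list)             # recruited set E
    observations: Dict[int, float] = field(default_factory=dict)   # O = {o_i}, one per employee
    contributions: Dict[int, float] = field(default_factory=dict)  # C = {c_i}, evaluated by s
    paybacks: Dict[int, float] = field(default_factory=dict)       # G = {g_i}

# Example: user 7 (reputation 0.8) applies for task 1 with total reward R = 10.
rnd = TaskRound(Task(task_id=1, total_reward=10.0))
rnd.applications[7] = 0.8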
B. User Incentive Problem

The purpose of incentive mechanisms in MCSs is to stimulate mobile user activity through paybacks for participating in sensing tasks. As the decision maker, s prefers to maximize the paybacks of the most contributive users as stimulation, while minimizing the effect of potential cheating behaviors (e.g., intentionally reporting random data just for the payback), to achieve accurate sensing.

To guarantee the quality of sensing reports, s needs to consider two issues: (i) the selection of employees for a given task should be based on the applicant's reputation, and (ii) the reputation adjustment and payback of an employee should be based on its contribution to the current task. Therefore, our user incentive scheme should solve the following problems:

1) How to define and manage the user reputation?
2) How to recruit employees based on their reputations?
3) How to evaluate an employee's contribution?
4) How to quantify and maximize an employee's payback according to its contribution?

III. THE CHEATING-RESILIENT INCENTIVE SCHEME

In this section, we develop the Cheating-Resilient Incentive (CRI) scheme to address the four problems outlined in Subsection II-B. Before the construction of CRI, we first provide formal definitions of both user reputation and employee payback. Then, we present a mechanism for reputation-driven employee recruitment. After that, we address how to evaluate an employee's contribution and how to quantify and maximize an employee's payback according to its contribution.

A. Definitions

In this paper, we formally define user reputation and employee payback as follows.

User Reputation: Intuitively, we treat the reputation as a user's credibility for offering quality reports. In fact, once recruited, a user is responsible for providing an actual contribution to the sensing task in proportion to its reputation. For any mobile user i ∈ P, the reputation of i, denoted as r_i (0 ≤ r_i ≤ 1), is adjusted by s whenever i participates in a sensing task τ by

    r_i = \alpha r_i' + (1 - \alpha) \frac{c_i}{\bar{c}_i},    (1)

where r_i' denotes the reputation of i before it participates in τ, c̄_i denotes i's expected contribution to τ (estimated by s considering r_i', discussed later), c_i denotes i's actual contribution to τ (evaluated by s, discussed later), and 0 ≤ α ≤ 1 determines the sensitivity of i's reputation adjustment. Upon a user's registration, s issues any i ∈ P a value r_0 as its initial reputation.

Employee Payback: Intuitively, it is natural to determine an employee's payback according to its contribution. Also, considering that the reputation of an employee only denotes the probability of its good behavior, we take the potential quality risk into account in the payback determination. Inspired by the sensing time-driven model in [14], for an employee i ∈ E of task τ, we define its reputation-driven payback g_i as

    g_i = \frac{c_i}{\sum_{j \in E} c_j} R - \frac{1 - r_i}{r_i} c_i,    (2)

where R > 0 denotes the total reward of τ. For a fixed R, g_i is subject to both the ratio of i's contribution, \frac{c_i}{\sum_{j \in E} c_j} R, and the potential quality risk of recruiting i, \frac{1 - r_i}{r_i} c_i.
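The two definitions can be sketched directly in code. The listing below is an illustrative rendering of Equations (1) and (2) only; the function names and the scalar interface are ours, and it assumes c̄_i > 0.

# Illustrative sketch of Equations (1) and (2); function names are ours.

def update_reputation(r_prev: float, c_actual: float, c_expected: float,
                      alpha: float = 0.5) -> float:
    """Equation (1): r_i = alpha * r'_i + (1 - alpha) * c_i / c_bar_i.
    Assumes c_expected > 0; Section III-D later caps the ratio at 1."""
    return alpha * r_prev + (1.0 - alpha) * (c_actual / c_expected)

def reputation_driven_payback(c_i: float, contributions_sum: float,
                              r_i: float, total_reward: float) -> float:
    """Equation (2): contribution share of the reward R minus the
    quality-risk term (1 - r_i) / r_i * c_i."""
    return c_i / contributions_sum * total_reward - (1.0 - r_i) / r_i * c_i

# Example: an employee with reputation 0.8 contributing 0.3 out of a total
# contribution mass of 1.0, under a reward budget R = 2.0.
g = reputation_driven_payback(0.3, 1.0, 0.8, 2.0)          # 0.6 - 0.075 = 0.525
r = update_reputation(0.8, c_actual=0.3, c_expected=0.3)   # 0.5*0.8 + 0.5*1.0 = 0.9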
Based on the definitions above, we develop CRI, which consists of three components: (i) Employee Recruitment, (ii) Contribution Evaluation, and (iii) Payback Determination.

B. Employee Recruitment

We now address the issue of how to recruit employees based on their reputations. After the announcement of task τ, receiving all applications R = {r_1, r_2, ..., r_M} (see Subsection II-A), s prefers to recruit the employees with the top reputations. Nonetheless, according to Equation (2), it is possible that an applicant could obtain no payback if the total reward budget is low. Therefore, s needs to determine the final employees considering their expected paybacks Ḡ = {ḡ_1, ḡ_2, ..., ḡ_M}, which are determined by their expected contributions C̄ = {c̄_1, c̄_2, ..., c̄_M}.

For effective stimulation, it is necessary for s to maximize the expected payback that a well-behaving employee can obtain. According to Equation (2), g_i is second-order continuously differentiable in c̄_i, and the second derivative of g_i with respect to c̄_i is

    \ddot{g}_i(\bar{c}_i) = -\frac{2R \sum_{j \in P \setminus \{i\}} \bar{c}_j}{\left(\sum_{j \in P} \bar{c}_j\right)^3}.    (3)

When c̄_i > 0, we have \ddot{g}_i(\bar{c}_i) < 0, so g_i(\bar{c}_i) is a concave function. Therefore, g_i(\bar{c}_i) has a unique maximum value when |P| > 2. We call this value ḡ_i i's expected payback, which can be calculated whenever \dot{g}_i(\bar{c}_i) = 0, c̄_i > 0 has a solution. Therefore, from the perspective of s, any i ∈ P that is finally recruited should satisfy the following restrictions:

    \bar{g}_i = \frac{\bar{c}_i}{\sum_{j \in E} \bar{c}_j} R - \frac{1 - r_i}{r_i} \bar{c}_i > 0,
    \bar{c}_i = \sqrt{\frac{r_i}{1 - r_i} R \sum_{j \in E \setminus \{i\}} \bar{c}_j} - \sum_{j \in E \setminus \{i\}} \bar{c}_j > 0.    (4)

To allow s to autonomously recruit as many credible users as possible with a fixed total reward, we develop Algorithm 1, based on the NE computation algorithm of the STD game in [14], to determine the final employees based only on their reputations. Here, all recruited employees will obtain the maximum payback only if they contribute as expected.

Algorithm 1: Reputation-Driven Employee Recruitment
Input:
    R: reputations of ∀i ∈ P;
Output:
    E: recruited employees;
    C̄: expected contributions of ∀i ∈ E.
1   sort ∀r_i ∈ R in descending order;
2   C̄ ← ∅, E ← {1, 2}, i ← 3;
3   while i ≤ |P| and r_i > (|E| - 1) / ( Σ_{j∈E} (1 - r_j)/r_j + |E| - 1 ) do
4       E ← E ∪ {i};
5       i++;
6   for ∀i ∈ P do
7       if i ∈ E then
8           c̄_i = ( (|E| - 1) R / Σ_{j∈E} (1 - r_j)/r_j ) · ( 1 - (|E| - 1)(1 - r_i)/r_i / Σ_{j∈E} (1 - r_j)/r_j );
9       else
10          c̄_i = 0;
11      C̄ ← C̄ ∪ {c̄_i};
12  return E, C̄;

According to [14], we have Propositions 1 and 2 for Algorithm 1 as follows.

Proposition 1. Any i ∈ P that is not recruited by Algorithm 1 gets the maximum payback by not participating in τ.

Proposition 2. Any i ∈ E that is recruited by Algorithm 1 can obtain the maximum payback only if it contributes as expected in τ.

Because of the page limitation, please refer to Theorems 1 and 2 in [14] for the specific proofs.

At this point, s can recruit the most credible employees E and compute their expected contributions C̄ based only on all received applications R, and C̄ is treated as one of the metrics for the payback determination.
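The recruitment step admits a compact sketch. The listing below follows Algorithm 1 under the notation above; the dictionary-based interface, the helper names, and the decision to fold in the reward normalization of Section III-D (so that c̄_i lands in [0, 1]) are our own choices. It assumes at least three applicants with 0 < r_i < 1.

def recruit_employees(reputations: dict) -> tuple:
    """Sketch of Algorithm 1: admit applicants in descending reputation order
    while the inclusion condition holds, then compute expected contributions."""
    def risk(uid):                                   # (1 - r_i) / r_i, the quality-risk factor
        return (1 - reputations[uid]) / reputations[uid]

    ranked = sorted(reputations, key=reputations.get, reverse=True)
    employees = ranked[:2]                           # E starts with the two top applicants
    for uid in ranked[2:]:
        risk_sum = sum(risk(j) for j in employees)
        # inclusion condition: r_i > (|E| - 1) / (sum_{j in E} (1 - r_j)/r_j + |E| - 1)
        if reputations[uid] > (len(employees) - 1) / (risk_sum + len(employees) - 1):
            employees.append(uid)
        else:
            break
    n = len(employees)
    risk_sum = sum(risk(j) for j in employees)
    R = risk_sum / (n - 1)                           # reward choice of Section III-D
    expected = {}
    for uid in reputations:
        if uid in employees:
            expected[uid] = (n - 1) * R / risk_sum * (1 - (n - 1) * risk(uid) / risk_sum)
        else:
            expected[uid] = 0.0
    return employees, expected

# Example with five applicants: only the three most reputable are recruited.
E, c_bar = recruit_employees({1: 0.9, 2: 0.85, 3: 0.8, 4: 0.5, 5: 0.3})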
C. Contribution Evaluation

In the following, we address the issue of how to evaluate an employee's contribution. After employee recruitment, s announces a detailed task description of τ to all i ∈ E. The employees then reply with reports containing their observations O = {o_1, o_2, ..., o_N}.¹ Based on O, s discovers the sensing truth o_τ and evaluates the actual contributions C = {c_1, c_2, ..., c_N} of all i ∈ E.

¹We treat each employee's task observation as a single-dimension value for concision, which can be expanded in different sensing scenarios.

Because there is no ground truth in our MCS scenario, s needs to discover the sensing truth o_τ, based on O, as the baseline for evaluating employees' actual contributions. Considering the potential conflicts in reported observations and the differences in employee reputations, we develop Algorithm 2, based on the general truth discovery framework in [15], to allow s to compute o_τ and to evaluate the actual employee contributions C at the same time.

Algorithm 2: Weighted Sensing Truth Discovery
Input:
    O: observations of ∀i ∈ E;
    R_E: reputations of ∀i ∈ E;
    ε: convergence threshold;
Output:
    o_τ: discovered sensing truth;
    C: actual contributions of ∀i ∈ E.
1   compute the standard deviation of O: std_O;
2   initialize o_τ as a random value;
3   initialize ∀c_i ∈ C as 0;
4   repeat
5       for ∀i ∈ E do
6           c_i = log( [ Σ_{j∈E} (o_j - o_τ)² / (std_O · r_j) ] / [ (o_i - o_τ)² / (std_O · r_i) ] );
7       o_τ' = o_τ;
8       o_τ = Σ_{j∈E} c_j · o_j / Σ_{j∈E} c_j;
9   until |o_τ - o_τ'| < ε;
10  for ∀c_i ∈ C do
11      c_i = c_i / Σ_{c_j∈C} c_j;
12  return o_τ, C;

According to Algorithm 2, each actual contribution c_i ∈ C is subject to both the distance between o_i and o_τ and i's reputation r_i. We treat observations from employees with higher reputations as more credible, and an employee's contribution is higher if its observation is closer to o_τ. C is treated as another metric for the payback determination.
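A runnable sketch of this step is given below. It follows Algorithm 2 for single-dimension observations (footnote 1); the function name, the choice to initialize o_τ from a random reported value, the iteration cap, and the small numerical guards are our own additions, not the authors' implementation.

import math
import random
import statistics

def weighted_truth_discovery(observations: dict, reputations: dict,
                             eps: float = 0.1, max_iter: int = 100) -> tuple:
    """Sketch of Algorithm 2: alternate between updating employee contributions
    (weights) and re-estimating the weighted sensing truth until convergence."""
    ids = list(observations)
    std_o = statistics.pstdev(observations.values()) or 1.0   # guard: identical reports
    o_tau = random.choice(list(observations.values()))        # random initialization
    contributions = {i: 0.0 for i in ids}
    for _ in range(max_iter):
        # contribution update: larger when o_i is close to o_tau and r_i is high
        dists = {i: (observations[i] - o_tau) ** 2 / (std_o * reputations[i]) or 1e-12
                 for i in ids}
        total = sum(dists.values())
        contributions = {i: math.log(total / dists[i]) for i in ids}
        # weighted truth update
        prev = o_tau
        weight_sum = sum(contributions.values()) or 1e-12
        o_tau = sum(contributions[i] * observations[i] for i in ids) / weight_sum
        if abs(o_tau - prev) < eps:
            break
    norm = sum(contributions.values()) or 1e-12
    return o_tau, {i: c / norm for i, c in contributions.items()}

# Example: three employees reporting temperatures with different reputations;
# the outlier from the low-reputation employee barely moves the truth estimate.
truth, contrib = weighted_truth_discovery({1: 18.2, 2: 18.6, 3: 9.0},
                                          {1: 0.9, 2: 0.8, 3: 0.3})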
D. Payback Determination

We now answer the question of how to quantify and maximize an employee's payback according to its contribution. Based on the expected contributions C̄ and the actual contributions C, s can determine the final paybacks G = {g_1, g_2, ..., g_N} for all i ∈ E, and update the employees' reputations R_E according to their behaviors in τ.

For all i ∈ E, to compare c̄_i and c_i, we set the total reward R = \frac{\sum_{j \in E} (1 - r_j)/r_j}{|E| - 1} to guarantee that c̄_i is within the range of [0, 1].

According to Equation (2), we set

    g_i = \begin{cases} \frac{\bar{c}_i}{\sum_{j \in E} \bar{c}_j} R - \frac{1 - r_i}{r_i} \bar{c}_i, & \text{if } c_i \ge \bar{c}_i, \\ \frac{c_i}{\sum_{j \in E} \bar{c}_j} R - \frac{1 - r_i}{r_i} c_i, & \text{if } c_i < \bar{c}_i. \end{cases}    (5)

Also, according to Equation (1), s updates all r_i ∈ R_E as

    r_i = \begin{cases} \alpha r_i' + (1 - \alpha), & \text{if } c_i \ge \bar{c}_i, \\ \alpha r_i' + (1 - \alpha) \frac{c_i}{\bar{c}_i}, & \text{if } c_i < \bar{c}_i. \end{cases}    (6)

As demonstrated, CRI guarantees that (i) s can recruit a proper number of the most credible applicants for sensing tasks, and (ii) recruited users can obtain the maximum paybacks only when they contribute no less than expected. Cheating behaviors reduce their paybacks and their opportunities of being recruited in future tasks.
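Putting the two rules together, the settlement step can be sketched as follows. The names are ours, and the ≥/< split at equality follows our reading of Equations (5) and (6); both branches coincide at c_i = c̄_i, so the choice does not change the result.

def settle_employee(c_actual: float, c_expected: float, expected_sum: float,
                    r_prev: float, total_reward: float, alpha: float = 0.5) -> tuple:
    """Sketch of Equations (5) and (6): final payback and reputation update for
    one employee. `expected_sum` is sum_{j in E} c_bar_j; assumes c_expected > 0."""
    basis = c_expected if c_actual >= c_expected else c_actual    # Equation (5) case split
    payback = basis / expected_sum * total_reward - (1.0 - r_prev) / r_prev * basis
    ratio = 1.0 if c_actual >= c_expected else c_actual / c_expected
    reputation = alpha * r_prev + (1.0 - alpha) * ratio           # Equation (6)
    return payback, reputation

# Example: an employee expected to contribute 0.5 delivers only 0.3, so both
# its payback and its updated reputation are scaled down.
print(settle_employee(0.3, 0.5, expected_sum=1.0, r_prev=0.8, total_reward=1.0))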
IV. EVALUATION

To validate the performance of CRI in real-world MCSs, we conducted extensive trace-driven simulations based on OMNeT++ 4.6, using real-world outdoor temperature data collected by taxis in Rome (hereinafter referred to as the Rome trace) [16]. In the following, we first present the simulation settings, and then show the evaluation results.

A. Simulation Settings

According to the Rome trace, we constructed an MCS with a cloud server and 366 registered users. All users possessed outdoor temperature data opportunistically collected within 24 hours. The server spontaneously announced temperature sensing tasks to the users. After receiving an announcement, a user who possessed data collected within ±60 seconds autonomously applied for the task, and then uploaded the corresponding report if it was recruited. The server provided paybacks and updated employee reputations based on CRI during the simulation.

For the parameter settings, we set the initial reputation r_0 = 0.5 and α = 0.5 in Equation (1) for reasonable reputation bootstrapping and management. In addition, we set ε = 0.1 in Algorithm 2 as the truth discovery convergence threshold. Again, according to the Rome trace, each round of simulation lasts for 86400 simulation seconds.

We collected the following four metrics to evaluate the impact of cheating behaviors² on the MCS performance:

• Discovered Truth (DT) refers to the sensing truth discovered in a task, whose cumulative distribution reflects the sensing accuracy. Ideally, CRI should be able to prevent cheating behaviors from disrupting DT;
• Reputation (REP) refers to the user reputation, whose cumulative distribution reflects the user's behavior in historical tasks. Ideally, CRI should be able to downgrade a cheater's REP in proportion to its cheating intensity;
• Payback (PB)³ refers to what a user receives for accomplishing sensing tasks, which reflects the motivation of the user to participate in future tasks. Ideally, CRI should be able to reduce the PB that a user can get if it cheats;
• Task Count (TC) refers to the number of sensing tasks accomplished by a user, which reflects the popularity of the user. Ideally, CRI should be able to limit the probability of a cheater participating in MCS tasks.

²We considered a user's cheating behavior in simulations as reporting a random observation within the range of [2°C, 24°C] (according to the Rome trace).
³The payback of each task was normalized within the range of [0, 1] for effective comparisons.

For comparison, we ran a round of simulation without any cheating behavior as the baseline (i.e., the no-cheating scenario). Then, we analyzed the impact of cheating behaviors introduced by users with different properties. In the following subsections, we depict the simulation results using either the Cumulative Distribution Figure (CDF) or the Time-Variance Figure (TVF) for a distinct demonstration.
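For reference, the cheating behavior injected in the simulations (footnote 2) amounts to replacing an honest report with a uniformly random temperature. A minimal sketch, with the probability p standing in for the cheating intensity and the function name being ours:

import random

def simulated_report(honest_value_c: float, cheat_probability: float) -> float:
    """Sketch of the simulated reporting behavior: with probability p the user
    cheats by reporting a random value in [2, 24] degrees C (footnote 2),
    otherwise it reports its honestly collected observation."""
    if random.random() < cheat_probability:
        return random.uniform(2.0, 24.0)
    return honest_value_c

# e.g. the 10%, 15%, and 20% general cheating intensities of Section IV-B
reports = [simulated_report(18.5, 0.10) for _ in range(5)]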
B. Impact of General Cheating Intensity

In this set of simulations, to study the impact of general cheating behaviors with different intensities, we set all users in the MCS to introduce cheating behaviors with different probabilities (i.e., 10%, 15%, and 20%)⁴ in all sensing tasks. The simulation result is illustrated in Figure 2.

⁴The setting of these cheating intensities should be reasonable considering the well-accepted fact that the MCS is a relatively good community with a limited ratio of malicious behaviors (e.g., 4% in [17], or 10% in [18]).

[Fig. 2: Impact of General Cheating Intensity — (a) Discovered Truth (CDF), (b) Reputation (CDF), (c) Payback (CDF), (d) Task Count (CDF)]

According to Figure 2(a), compared with the baseline scenario, the cumulative distribution of DT remains almost the same in all cheating scenarios. We can see that a disturbance of no more than 3.65% (i.e., 0.4°C) on the average DT is introduced by general cheating behaviors with an intensity of up to 20%. Such an impact is nearly negligible considering the practical temperature sensing requirement. CRI manages to effectively restrict the impact of general cheating behaviors on DT in both realistic and even harsher scenarios.

According to Figure 2(b), (c), and (d), when the cheating intensity increases, a user's reputation, payback, and task count are correspondingly downgraded by at least 1.64%, 3.44%, and 2.20%, respectively. CRI manages to reduce a cheater's probability of being recruited in future tasks by autonomously reducing its reputation and payback, which inherently restricts the user's cheating intention.

C. Impact of Cheaters with Different Properties

In real-world MCSs, cheating behaviors of more trustworthy or active users may pose deeper impacts on the MCS's performance. In this set of simulations, based on the simulation result of the baseline scenario, we separately set a user that (i) had the highest reputation (referred to as the TopR cheater), (ii) received the most paybacks (referred to as the TopP cheater), and (iii) accomplished the most tasks (referred to as the TopC cheater) to introduce cheating behaviors either consistently (i.e., cheating with a 100% probability) or intermittently (i.e., cheating with a 50% probability). The simulation results are illustrated in Figures 3, 4, and 5.

[Fig. 3: Impact of Cheating Behavior of TopR User — (a) Discovered Truth (CDF), (b) Reputation (CDF), (c) Payback (TVF), (d) Task Count (TVF)]
[Fig. 4: Impact of Cheating Behavior of TopP User — (a) Discovered Truth (CDF), (b) Reputation (CDF), (c) Payback (TVF), (d) Task Count (TVF)]
[Fig. 5: Impact of Cheating Behavior of TopC User — (a) Discovered Truth (CDF), (b) Reputation (CDF), (c) Payback (TVF), (d) Task Count (TVF)]

TopR Cheater: According to Figure 3, neither the consistent nor the intermittent cheating of the TopR cheater caused obvious impact on DT (introduced disturbances of 0.18% and 0.46% on the average DT, respectively). Nonetheless, in comparison with the baseline scenario, the REP of the TopR cheater was downgraded as long as there was cheating behavior (1.04% and 92.71% lower, respectively). Correspondingly, both the PB (1.19% and 99.98% less, respectively) and the TC (8.33% and 33.33% less, respectively) of the TopR cheater decreased.

TopP Cheater: According to Figure 4, neither consistent nor intermittent cheating behaviors of the TopP cheater caused obvious impact on DT (introduced disturbances of 0.73% and 0.82% on the average DT, respectively). Meanwhile, the REP of the TopP cheater was significantly downgraded (10.34% and 14.94% lower, respectively). Similarly, its PB (18.18% and 25.93% less, respectively) and TC (32% and 26% less, respectively) declined dramatically because of cheating.
TopC Cheater: According to Figure 5, DT was not obviously affected by either consistent or intermittent cheating behaviors of the TopC cheater (introduced disturbances of 0.91% and 0.37% on the average DT, respectively). In turn, its REP (58.51% and 42.55% lower, respectively) was severely downgraded in the cheating scenarios. Also, its PB (53.05% and 50.35% less, respectively) and TC (39.22% and 41.18% less, respectively) decreased significantly.

According to the results above, it is well demonstrated that CRI manages to encourage users to report honestly (with their best efforts) in sensing tasks for higher payback, reputation, and recruiting opportunities in practical MCSs.

V. CONCLUSION

In this paper, we developed CRI for MCSs to guarantee crowdsensing accuracy by encouraging mobile users to provide quality data without cheating for the maximum paybacks. To be specific, CRI enables the MCS server to autonomously recruit as many credible users as possible as task employees, and employees will obtain the maximum payback only if they contribute honestly as their reputations indicate. Via theoretical analysis, we demonstrated the feasibility and correctness of our design. To evaluate the performance of CRI in practical MCSs, we conducted extensive simulations based on real-world crowdsensing data. The results show that CRI manages to guarantee the sensing accuracy under realistic cheating intensities (up to 20% of total reports). Meanwhile, cheating behaviors from users with selected advantages (i.e., higher reputations, more received paybacks, and more accomplished tasks) can be effectively resisted as well. Our future work is to develop a privacy-preserving CRI, which encourages honest user behaviors without jeopardizing sensitive user information including identities, locations, and living patterns.

REFERENCES

[1] N. D. Lane, E. Miluzzo, H. Lu, D. Peebles, T. Choudhury, and A. T. Campbell, "A survey of mobile phone sensing," IEEE Commun. Mag., vol. 48, no. 9, pp. 140–150, 2010.
[2] R. Gao, M. Zhao, T. Ye, F. Ye, Y. Wang, K. Bian, T. Wang, and X. Li, "Jigsaw: Indoor floor plan reconstruction via mobile crowdsensing," in Proc. ACM MobiCom, 2014, pp. 249–260.
[3] L. Capezzuto, L. Abbamonte, S. De Vito, and E. Massera, "A maker friendly mobile and social sensing approach to urban air quality monitoring," in Proc. IEEE SENSORS, 2014, pp. 12–16.
[4] M. Bakht, M. Trower, and R. H. Kravets, "Searchlight: won't you be my neighbor?" in Proc. ACM MobiCom, 2012, pp. 185–196.
[5] H. Gao, C. H. Liu, W. Wang, J. Zhao, Z. Song, X. Su, J. Crowcroft, and K. K. Leung, "A survey of incentive mechanisms for participatory sensing," IEEE Commun. Surveys Tuts., vol. 17, no. 2, pp. 1–1, 2015.
[6] J. S. Lee and B. Hoh, "Sell your experiences: a market mechanism based incentive for participatory sensing," in Proc. IEEE PerCom, 2010, pp. 60–68.
[7] H. Wang, Y. Zhao, Y. Li, K. Zhang, N. Thepvilojanapong, and Y. Tobe, "An optimized directional distribution strategy of the incentive mechanism in SenseUtil-based participatory sensing environment," in Proc. IEEE MSN, 2013, pp. 67–71.
[8] T. Luo and C. K. Tham, "Fairness and social welfare in incentivizing participatory sensing," in Proc. IEEE SECON, 2014, pp. 425–433.
[9] J. Sun and H. Ma, "Heterogeneous-belief based incentive schemes for crowd sensing in mobile social networks," Elsevier J. Netw. Comput. Appl., vol. 42, no. 4, pp. 189–196, 2014.
[10] D. Zhao, X. Y. Li, and H. Ma, "How to crowdsource tasks truthfully without sacrificing utility: Online incentive mechanisms with budget constraint," in Proc. IEEE INFOCOM, 2014, pp. 1213–1221.
[11] X. Zhang, Z. Yang, Z. Zhou, H. Cai, L. Chen, and X. Li, "Free market of crowdsourcing: Incentive mechanism design for mobile sensing," IEEE Trans. Parallel Distrib. Syst., vol. 25, no. 12, pp. 3190–3200, 2014.
[12] Y. Gao, Y. Chen, and K. J. R. Liu, "On cost-effective incentive mechanisms in microtask crowdsourcing," IEEE Trans. Comput. Intell. AI in Games, vol. 7, no. 1, pp. 3–15, 2013.
[13] B. Faltings, J. J. Li, and R. Jurca, "Incentive mechanisms for community sensing," IEEE Trans. Comput., vol. 63, no. 1, pp. 115–128, 2014.
[14] D. Yang, G. Xue, X. Fang, and J. Tang, "Crowdsourcing to smartphones: incentive mechanism design for mobile phone sensing," in Proc. ACM MobiCom, 2012, pp. 173–184.
[15] C. Miao, W. Jiang, L. Su, Y. Li, S. Guo, Z. Qin, H. Xiao, J. Gao, and K. Ren, "Cloud-enabled privacy-preserving truth discovery in crowd sensing systems," in Proc. ACM SenSys, 2015, pp. 183–196.
[16] M. A. Alswailim, H. S. Hassanein, and M. Zulkernine, "CRAWDAD dataset queensu/crowd_temperature (v. 2015-11-20)," downloaded from http://crawdad.org/queensu/crowd_temperature/20151120, 2015.
[17] H. Shen, Y. Lin, K. Sapra, and Z. Li, "Enhancing collusion resilience in reputation systems," IEEE Trans. Parallel Distrib. Syst., pp. 1–1, 2015.
[18] X. Li, F. Zhou, and X. Yang, "Scalable feedback aggregating (SFA) overlay for large-scale P2P trust management," IEEE Trans. Parallel Distrib. Syst., vol. 23, no. 10, pp. 1944–1957, 2012.
