North Carolina’s only federally recognized American Indian tribe could soon offer sports and horse wagering to patrons at its two casinos. The General Assembly gave final approval Monday night to a measure that would give the Eastern Band of Cherokee Indians the authority to offer the additional betting. The House voted for the measure that had already cleared the Senate three months ago. The bill now goes to Gov. Roy Cooper’s desk. A Cooper spokesman says the bill will be reviewed before he makes a decision on whether to sign it. The sports-book option took shape after the U.S. Supreme Court struck down a federal law last year that made most sports gambling illegal. State law already lets the Eastern Band offer live poker, slot machines and video-style games.

One aspect of TouringPlans is that we use field-tested methods to minimize your wait in lines around the parks. One side effect of that is that TouringPlans fans don't tolerate waiting in long lines: if a line is too long, they skip it for the moment. Over on our Instagram feed, Angela asked how long you would wait in line for a ride. Here are your comments. Didn't get a chance to participate on Instagram? Weigh in with your thoughts here. Want to be able to participate in real time? Give our Instagram page a follow.

By Curt Hopkins
Tags: #E-Books #web

The definitive dictionary of the English language, the Oxford English Dictionary, may well never see the light of day again, only the light of a monitor. Nigel Portwood, chief executive of Oxford University Press, which publishes the OED, told the London Sunday Times that dictionary sales have been falling at a rate greater than 10% a year for the last few years, so the next edition may be online only. Such a move might be financially reasonable. After all, the current online edition gets two million hits monthly at $400 per user, and more people are favoring compact, universally retrievable sources of information. But is finance all we should consider? Over at GigaOm, Matthew Ingram asks if we should care. His response, if I'm reading him right, is yes. I yearned for an opportunity to disagree, dramatically, just pro forma. But I can't. I'll go into a bit more detail on why a "hardcopy" of the dictionary (I favor the neologism, "book") is still desirable. The idea of small, lightweight, online, retrievable sources of reference materials is fantastic. I use the online version more than I use my print Websters. (Though Websters gets money either way.) So why not the OED? After all, the twenty-volume mega-book is, at almost $1,600, hellishly expensive and, if you're sub-Ferrigno, immovable. Because while some books are repositories of information, others are experiences. Although the OED is not a narrative, not scripture, not poetry, it is, nonetheless, transportive. The idea of flipping from one entry to another, following a line of inquiry (especially etymological inquiry) from one page to another, even one volume to another, is a sensual experience.
I don't mean it's sexy (it is), but rather that it's an experience that encompasses sight, touch and even hearing (the rustle of pages, the thump of the volume hitting the desk) to create the context for comprehension. I agree with Matthew that it doesn't need to be a commercial production, with loads of books run out and sent by plane and truck to bookstores. It may become something of a bespoke tradition: created at user request. Although Oxford University Press said it hadn't made a hard-and-fast decision as to whether it would print again (the next edition probably won't be ready for a decade), it should make a hard-and-fast decision never to stop printing, even if it has to change the way it prints. If sci-fi has been in some way a guide to our future, let's remember that Picard read manifests on a PADD but Shakespeare in a book; further, each new technology does not push all previous technologies out. Nor should it. So come on, Nigel. Make a commitment. Give us that sweet must of a real book when we need the experience of language, not just the data. What do you think? Does the book matter, or is it only a vehicle for the experience of reading? For more discussion of the online reading experience, read Richard MacManus's posts, where he examines the pros and cons, and mine, where I ask whether e-books are the new paperbacks.

In Pt1 of this blog post I looked at a SQL query and data set to run in Hadoop, and in Pt2 wrote the Map function to extract the relevant fields from the data set to satisfy the query. At this point, however, we still have not implemented any of the aggregate functions, and we still have a large intermediate data set of keys and values. The only data eliminated so far has been the lines where the date was not less than or equal to 11-AUG-98. On the test data set, of the initial 600,037,902 lines of data we now have 586,996,074 lines remaining; to complete the query we now need to write the reduce phase. The Reduce method will extend the Reducer class. It needs to accept the intermediate key-value pairs output by the mapper, and therefore receives as input the key, which is fields 9 and 10 concatenated, and the DoubleArrayWritable containing the values. For every key we need to iterate through the values and calculate the totals required for the SUM(), AVG() and COUNT() functions. Once these have been calculated we can format the output as text to be written to a file, giving us exactly the same result as if the query had been processed by a relational database.
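To make the arithmetic of the reduce phase easy to follow outside of Hadoop, here is a minimal plain-Java sketch of what gets computed for a single key. The class and method names are invented for illustration; the field order {quantity, base_price, discount, 1, disc_price, charge} mirrors the array the map phase emits, and the sample rows are made-up values.

```java
// Plain-Java sketch of the per-key aggregation the reduce phase performs.
// Each row stands in for one DoubleWritable[] value emitted by the mapper:
// {quantity, base_price, discount, 1.0 (count), disc_price, charge}.
public class Q1Aggregate {

    // Returns {sum_qty, sum_base_price, sum_disc_price, sum_charge,
    //          avg_qty, avg_price, avg_disc, count_star},
    // matching the column order of the formatted reducer output.
    public static double[] aggregate(double[][] rows) {
        double sumQty = 0, sumBase = 0, sumDisc = 0;
        double count = 0, sumDiscPrice = 0, sumCharge = 0;
        for (double[] r : rows) {
            sumQty += r[0];
            sumBase += r[1];
            sumDisc += r[2];
            count += r[3];
            sumDiscPrice += r[4];
            sumCharge += r[5];
        }
        // AVG() is just SUM() divided by COUNT()
        return new double[] {
            sumQty, sumBase, sumDiscPrice, sumCharge,
            sumQty / count, sumBase / count, sumDisc / count, count
        };
    }

    public static void main(String[] args) {
        double[][] rows = {
            {10, 100.0, 0.05, 1, 95.0, 97.0},
            {20, 200.0, 0.10, 1, 180.0, 185.0}
        };
        double[] out = aggregate(rows);
        System.out.println(out[0] + " " + out[4] + " " + out[7]); // 30.0 15.0 2.0
    }
}
```

The real Reducer does exactly this per key, except that the inputs arrive as DoubleArrayWritable objects and the result is written out through the Hadoop context.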
The reduce phase will look something like the following, simply adding all of the values in the array for the SUM() functions and then dividing by the COUNT() value to calculate the result of the AVG() functions.

```java
/* Running totals (sum_qty, sum_base_price, etc.) are initialised
   earlier in the reduce method */
for (DoubleArrayWritable val : values) {
    x = (DoubleWritable[]) val.toArray();
    sum_qty += x[0].get();
    sum_base_price += x[1].get();
    sum_discount += x[2].get();
    count_star += x[3].get();
    sum_disc_price += x[4].get();
    sum_charge += x[5].get();
}
avg_qty = sum_qty / count_star;
avg_price = sum_base_price / count_star;
avg_disc = sum_discount / count_star;

/* Format and collect the output */
Text tpchq1redval = new Text(" " + sum_qty + " " + sum_base_price + " " + sum_disc_price
        + " " + sum_charge + " " + avg_qty + " " + avg_price + " " + avg_disc + " " + count_star);
context.write(key, tpchq1redval);
```

Coupled with the Map phase and a Job Control section (which will be covered in the next post on running the job), this job is now ready to run. However, as we have noted previously, for our 100GB data set alone the map phase will output over 586 million lines of data, which means a great deal of network traffic and disk writes. We can make this more efficient by writing a Combiner. The Combiner also extends the Reducer class and in simple cases, though not all (as we will cover in a moment), can be exactly the same as the Reducer. The aim of the Combiner is to perform a Reducer-type operation on the subset of data produced by each Mapper, minimising the amount of data that has to be transferred across the cluster from Map to Reduce. The single most important thing about the Combiner is that there is no certainty it will run: it is available as an optimization, but for a particular Map output it might not run at all, and there is no way to force it to run. From a development perspective this has an important consequence. You should be able to comment out the line in the Job Control section that sets the Combiner, and the result produced by the MapReduce job should stay exactly the same.
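The "combiner is optional" contract can be made concrete with a self-contained sketch (plain Java, no Hadoop; the values and the two-way partition split are invented for illustration). For SUM(), the reducer produces the same total whether it sees every raw value directly or only partial sums pre-aggregated by a combiner pass:

```java
import java.util.Arrays;

public class CombinerEquivalence {
    // Sum a slice of values: the work one combiner (or reducer) invocation does.
    public static double sum(double[] values) {
        double s = 0;
        for (double v : values) s += v;
        return s;
    }

    public static void main(String[] args) {
        double[] mapOutput = {5.0, 17.0, 3.0, 41.0, 9.0, 12.0};

        // Path 1: the reducer sees every raw value (combiner never ran).
        double direct = sum(mapOutput);

        // Path 2: two "map tasks" each pre-aggregate their half with a
        // combiner, and the reducer then sums the partial sums.
        double partial1 = sum(Arrays.copyOfRange(mapOutput, 0, 3));
        double partial2 = sum(Arrays.copyOfRange(mapOutput, 3, 6));
        double combined = sum(new double[] {partial1, partial2});

        System.out.println(direct == combined); // true
    }
}
```

Whether Hadoop chooses to run the combiner zero, one or several times, a SUM() computed this way is unaffected, which is exactly the property the job must preserve.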
Additionally, the input types of the Combiner must be exactly the same as those the Reducer expects, so that it can operate on the Map output, and the Combiner output must likewise correspond to the input expected by the Reducer. If your Combiner does not adhere to these restrictions, your job may still compile and run without reporting an error; however, your results may change from run to run due to additional factors such as a change in the input block size. Finally, the Combiner operation must be both commutative and associative. In other words, the Combiner operation must ensure that neither changing the order of the operands nor changing the grouping of the operations changes the result. In our example the SUM() function is both commutative and associative: the numbers can be summed in any order, and we can perform the sum operation on different groups, and the result will always remain the same. AVG(), on the other hand, is commutative but not associative. We can calculate the average with the input data in any order; however, we cannot take averages of smaller groups of values, then take the average of that intermediate data, and expect the result to be the same.
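A tiny worked example makes this distinction concrete (plain Java with invented values): AVG() survives reordering of its inputs, but averaging group averages gives a different answer from averaging the raw values.

```java
public class AvgNotAssociative {
    public static double avg(double[] values) {
        double s = 0;
        for (double v : values) s += v;
        return s / values.length;
    }

    public static void main(String[] args) {
        // Commutative: the order of the inputs does not matter.
        System.out.println(avg(new double[]{1, 2, 6})); // 3.0
        System.out.println(avg(new double[]{6, 1, 2})); // 3.0

        // Not associative: the average of group averages differs.
        double groupA = avg(new double[]{1, 2});   // 1.5
        double groupB = avg(new double[]{6});      // 6.0
        System.out.println(avg(new double[]{groupA, groupB})); // 3.75, not 3.0
    }
}
```

This is why a Combiner that computed AVG() directly could silently corrupt the result whenever Hadoop happened to run it: the groups would be averaged twice.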
For this reason the Combiner can perform the SUM() operation but not the AVG(), and can look as follows, producing only the intermediate sum values for the Reducer.

```java
for (DoubleArrayWritable val : values) {
    x = (DoubleWritable[]) val.toArray();
    sum_qty += x[0].get();
    sum_base_price += x[1].get();
    sum_discount += x[2].get();
    count_star += x[3].get();
    sum_disc_price += x[4].get();
    sum_charge += x[5].get();
}
outArray[0] = new DoubleWritable(sum_qty);
outArray[1] = new DoubleWritable(sum_base_price);
outArray[2] = new DoubleWritable(sum_discount);
outArray[3] = new DoubleWritable(count_star);
outArray[4] = new DoubleWritable(sum_disc_price);
outArray[5] = new DoubleWritable(sum_charge);
DoubleArrayWritable da = new DoubleArrayWritable();
da.set(outArray);
context.write(key, da);
```

At this stage we have written the Mapper, Reducer and Combiner. In Pt4 we will look at adding the Job Control section to produce the completed MapReduce job, and will then consider compiling and running the job and tuning for performance.