Welcome to the May 11, 2012 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
HEADLINES AT A GLANCE
MirageTable: Microsoft Presents Augmented Reality Device
BBC News (05/09/12)
Microsoft is developing an augmented reality system that allows users in different locations to work together on a tabletop, sharing and handling objects. Demonstrated at a conference in Austin, Texas, MirageTable tricks users' eyes into perceiving a seamless three-dimensional (3D) shared task space. The system uses a 3D video projector to beam images onto a sheet of curved white plastic placed in front of the user, while at each end a Microsoft Kinect depth-camera sensor tracks the direction of the user's gaze, as well as the shape and appearance of objects placed on the surface and of the participant sitting behind them. Users must wear shutter glasses to see the projected image in 3D, and the experience is powered by two computers linked by a network connection. Microsoft calls the projector/depth-camera system a significant improvement on current videoconferencing technologies, and notes that it also could be used to create a single-person gaming experience. "The unique benefit of this setup is that two users share not only the 3D image of each other, but also the tabletop task space in front of them," says the team.
Project Moon: One Small Step for a PC, One Giant Leap for Data
Wired News (05/08/12) Robert McMillan
Virginia Tech researchers launched the MapReduce On Opportunistic Environments (Moon) project five years ago with the goal of turning the university's Math Emporium, which contains 550 Apple computers, into a type of supercomputer that is based on the same technology that Google developed to power its search engine. The Project Moon researchers' paper on the system was recently named one of the most important distributed supercomputing papers in the past 20 years. "We're going through technology transfer and trying to figure out how much more we might need to do to package it if people want to license it or to spinoff a company off of it," says Virginia Tech researcher Wu-chun Feng. Project Moon is based on Hadoop, the open source version of Google's MapReduce platform, and it is one of many efforts to apply the platform to more than just Web services. The Project Moon researchers used Hadoop to turn each Apple computer into a node on a supercomputer, with each machine helping to solve complex data-analysis problems. In theory, the 550 Apple computers in the Math Emporium could be transformed into a supercomputer capable of performing 6.6 trillion mathematical operations per second.
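The MapReduce model behind Hadoop splits a job into a map phase that emits key-value pairs on each node, a shuffle that groups values by key, and a reduce phase that combines each group. A minimal single-process word-count sketch in Python illustrates the idea (the example data and function names are illustrative only, not Project Moon's code):

```python
from collections import defaultdict
from itertools import chain

# Map phase: each node emits (key, value) pairs from its slice of the input.
def map_word_count(document):
    return [(word, 1) for word in document.split()]

# Shuffle: group all emitted values by key across the mappers' outputs.
def shuffle(mapped_pairs):
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

# Reduce phase: combine each key's values into a final result.
def reduce_word_count(word, counts):
    return word, sum(counts)

documents = ["the quick brown fox", "the lazy dog"]
mapped = chain.from_iterable(map_word_count(d) for d in documents)
results = dict(reduce_word_count(w, c) for w, c in shuffle(mapped).items())
print(results["the"])  # -> 2
```

In a real Hadoop deployment the map and reduce calls run on different machines and the shuffle moves data over the network; Project Moon's contribution is making that division of labor tolerate nodes (here, lab desktops) that come and go.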
Computer Scientists Develop an Interactive Field Guide App for Birders
UCSD News (CA) (05/08/12) Ioana Patringenaru
University of California, San Diego (UCSD) researchers have developed Visipedia, an iPad app that can identify most North American birds with minimal help from the user. Visipedia is an interactive field guide that uses computer-vision algorithms to analyze user-submitted pictures and provide information about a bird species that is a likely match. "We chose birds for several reasons: There is an abundance of excellent photos of birds available on the Internet, the diversity in appearance across different bird species presents a deep technical challenge and, perhaps most importantly, there is a large community of passionate birders who can put our system to the test," says UCSD's Serge Belongie. The Cornell Lab of Ornithology has contributed its database of images showing the male, female, and juveniles of more than 500 North American birds. The goal is to have 100 images identified by experts for each species. Belongie hopes Visipedia will start a movement among other enthusiasts, who will create similar apps for other organisms, such as flowers, butterflies, lichens and mushrooms. The app will ask users specific questions about the image if it cannot come up with a match based on the original image alone.
Physicists Go Totally Random
Science News (05/06/12) Alexandra Witze
ETH Zurich researchers have developed a method that guarantees complete randomness in a flow of information. The researchers, led by Roger Colbeck and Renato Renner, wanted to better understand the role of randomness in quantum theory. The researchers calculated what might happen to a stream of partially random information, in which some of the bits are correlated with other variables, thus making the bits non-random. The researchers used these bits to choose measurements on pairs of entangled particles sent down two different paths, which produced an outcome that is independent of any other variables, according to Colbeck and Renner. "Imagine an adversary who can guess at what choice you're going to make at a given time," Colbeck says. "Now we've shown there's a procedure where you can make it so the adversary cannot guess at all, within certain limits. In principle, one could use this to make better quality random numbers." The ETH Zurich researchers may next explore precisely how non-random the information flow can be and still be transformed into a purely random stream. "The fact that the quality of randomness can be improved is new and surprising," says University of Geneva physicist Nicolas Gisin.
Google Gets License for Driverless Car
InformationWeek (05/08/12) Thomas Claburn
Nevada has issued a license that permits Google to test its experimental self-driving cars on state roads. Google, which provided demonstrations of its autonomous cars on state freeways, highways, and roads in Carson City and Las Vegas to Nevada's Autonomous Review Committee, also received special red license plates bearing an infinity symbol. "We're excited to receive the first testing license for self-driving vehicles in Nevada," says a Google representative. "We believe the state's framework--the first of its kind--will help speed up the delivery of technology that will make driving safer and more enjoyable." The Nevada Department of Motor Vehicles says automakers also have expressed interest in testing autonomous vehicles in the future. Autonomous vehicle legislation was introduced in the California State Assembly in March, and Arizona, Hawaii, and Florida also are in the process of considering legislation. Meanwhile, Google recently acquired a Federal Communications Commission permit to operate automatic cruise control radar units in the 76.0-77.0 GHz band for driverless car navigation.
Kinect Cameras Watch for Autism
New Scientist (05/08/12) Niall Firth
University of Minnesota researchers are using Microsoft Kinect sensors and computer-vision algorithms to detect behavioral abnormalities and automate the early diagnosis of autism in children. The researchers, led by Guillermo Sapiro and Nikolaos Papanikolopoulos, equipped a nursery with five Kinect depth-sensing cameras to monitor groups of 10 children as they play. The cameras identify and track children based on their shape and the hue of their clothes, and this data is fed to three computers, which run software that logs each child's activity level and plots it against the room's average. "The idea is not that we are going to replace the diagnosis, but we are going to bring diagnosis to everybody," Sapiro says. "The same way a good teacher flags a problem child, the system will do automatic flagging and say, 'Hey, this kid needs to see an expert.' " By studying video footage of children interacting with a psychiatrist, computer-vision algorithms learn to identify behavioral markers as designated on the Autism Observation Scale for Infants. The system measures traits such as a child's ability to follow an object as it passes in front of the eyes, and notes certain mannerisms or postures that are classified as being early signs of a possible Autism Spectrum Disorder.
Georgia Tech/Microsoft Study Shows Bandwidth Caps Create Uncertainty, Risky Decisions
Georgia Tech News (05/07/12) Michael Terrazas
Researchers at Georgia Tech and Microsoft have found that Internet pricing models that cap monthly residential broadband usage trigger uneasy user experiences that could be mitigated by better tools to monitor data usage through their home networks. Home users typically manage their capped broadband access against three uncertainties--invisible balances, mysterious processes, and multiple users--which have predictable impacts on household Internet use and can force difficult choices on users, according to the study. "People's behavior does change when limits are placed on Internet access--just like we've seen happen in the smartphone market--and many complain about usage-based billing, but no one has really studied the effects it has on consumer activity," says Georgia Tech's Marshini Chetty. The researchers focused on South African Internet usage because that country had universal broadband caps until February 2010. "We were surprised to learn that many of the households we studied chose not to perform regular software updates in order to manage their cap," Chetty says. She suggests the frequency of such hazardous behaviors among the broader population of metered/capped Internet users should be evaluated through follow-up scientifically representative surveys.
DARPA System to Blend AI, Machine Learning to Understand Mountain of Text
Network World (05/04/12) Michael Cooney
U.S. Defense Advanced Research Projects Agency (DARPA) researchers are developing an automated system that will enable analysts to better understand large volumes of text documents. The system will use artificial intelligence, computational linguistics, machine learning, and natural language technologies. "Sophisticated artificial intelligence of this nature has the potential to enable defense analysts to efficiently investigate orders of magnitude more documents so they can discover implicitly expressed, actionable information contained within them," DARPA says. The Deep Exploration and Filtering of Text program has developed technology expected to identify and interpret both explicit and implicit information in highly ambiguous and vague narrative text. "We want the ability to mitigate ambiguity in text by stripping away filters that can cloud meaning and by rejecting false information," says DARPA's Bonnie Dorr. DARPA also has developed the Anomaly Detection at Multiple Scales program, the Machine Reading program, the Programming Computation on Encrypted Data research effort, the Video and Image Retrieval and Analysis Tool program, and the XDATA program, all of which aim to make sense of large volumes of data.
Despite State Oversight, Vote-Counting Errors Abound
Palm Beach Post (05/08/12) Pat Beall; Adam Playford
For the past 10 years, Florida's vote-counting systems have been regularly tripped up by different kinds of technology failures, despite a state certification process designed to guard against such problems. In Union County in 2002, voting machines read both Democratic and Republican ballots as Republican. In Broward County in 2004, under certain circumstances, the voting software could not count beyond about 32,000 votes in a precinct; after hitting that total, it started counting backward. In Sarasota County in 2006, approximately 17,800 undervotes triggered allegations of problems with ES&S' iVotronic machines. In Hillsborough County in 2008, the voting system's server crashed when early voting results were fed into it. "The best software written by the wealthiest companies is buggy because software is really hard to get right," says University of Iowa professor Douglas W. Jones. Florida's Division of Elections certifies voting equipment as sound before counties can use it. "I will tell you that Florida is one of the toughest states to get a voting system certified in, and we make no apologies for that," according to former Secretary of State Kurt Browning. However, neither independent testing nor the state's stamp of approval is a surefire way to find problems in the system.
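The Broward County behavior, a counter that stalls near 32,000 and then runs backward, is consistent with a tally stored in a 16-bit signed integer, which wraps from 32,767 to -32,768. The following Python sketch illustrates that wrap (an inference about a plausible cause, not the vendor's actual code):

```python
# Simulate a vote tally stored in a 16-bit signed integer. Past 32,767 the
# value wraps to -32,768, so displayed totals appear to count "backward."
def add_vote_int16(total, votes=1):
    total = (total + votes) & 0xFFFF                      # keep only 16 bits
    return total - 0x10000 if total >= 0x8000 else total  # reinterpret as signed

total = 32767                    # the largest value a signed 16-bit counter holds
total = add_vote_int16(total)    # -> -32768 (the wrap)
total = add_vote_int16(total)    # -> -32767 (magnitude now shrinking)
```

Languages like C perform this wrap silently on fixed-width integers, which is why such bugs can pass casual testing and only surface in large precincts.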
More Search Could Be Crowdsourced
PC World (05/07/12) Joab Jackson
Massachusetts Institute of Technology (MIT) and Microsoft researchers have determined that search engines could use crowdsourcing to expand the range of answers they provide for users. Most Web search engines use computer-run page-ranking algorithms to generate results for user-submitted queries, but this range of answers could be radically expanded, at relatively minimal additional cost, by combining data-mining techniques with crowdsourced editing, according to MIT researcher Michael Bernstein, who presented the group's work at the Association for Computing Machinery's Conference on Human Factors in Computing Systems. "Our findings suggest that search engines can be extended to directly respond to a large new class of queries," Bernstein says. "We are focusing on a set of queries that are somewhat popular." The researchers used data-mining software to analyze 75 million search queries from Microsoft's Bing search engine, looking for queries that resulted in a click through to a single site. They identified the queries that could be quickly answered, then contracted workers through Amazon's Mechanical Turk to craft simple answers and proofread the work. By automating this process as much as possible, search engines can keep their costs minimal.
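The mining step described above, finding queries whose clicks funnel to a single site, can be sketched as follows (the log entries, URLs, and threshold are illustrative assumptions, not the study's actual data or method):

```python
from collections import defaultdict

# Hypothetical log of (query, clicked_url) pairs; contents are illustrative.
click_log = [
    ("convert 5 km to miles", "unitconv.example/km-miles"),
    ("convert 5 km to miles", "unitconv.example/km-miles"),
    ("convert 5 km to miles", "forum.example/thread/9"),
    ("weather radar", "radar.example"),
    ("weather radar", "maps.example/radar"),
]

# Flag queries whose clicks concentrate on one URL: these likely have a single
# short answer that a crowd worker could write once for many future searchers.
def single_answer_queries(log, threshold=0.6):
    per_query = defaultdict(lambda: defaultdict(int))
    for query, url in log:
        per_query[query][url] += 1
    candidates = []
    for query, urls in per_query.items():
        total = sum(urls.values())
        top_url, top_clicks = max(urls.items(), key=lambda kv: kv[1])
        if top_clicks / total >= threshold:
            candidates.append((query, top_url))
    return candidates

print(single_answer_queries(click_log))
# -> [('convert 5 km to miles', 'unitconv.example/km-miles')]
```

Queries passing the filter would then be routed to crowd workers to draft and proofread a direct answer, as the article describes.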
Picking the Brains of Strangers Improves Efforts to Make Sense of Online Information
Carnegie Mellon University (05/07/12)
Researchers from Carnegie Mellon University and Microsoft Research recently presented their findings on distributed sensemaking at CHI 2012. The team recruited 21 Microsoft employees for a study, and found that the quality of their work was better when they used a digital knowledge map that had been created and improved upon by several previous users. Digital knowledge maps provide a means of representing the thought processes used to make sense of information gathered from the Web. "Collectively, people spend more than 70 billion hours a year trying to make sense of information they have gathered online," says Aniket Kittur, a professor in Carnegie Mellon's Human-Computer Interaction Institute. "Yet in most cases, when someone finishes a project, that work is essentially lost, benefiting no one else and perhaps even being forgotten by that person." Distributed sensemaking can save time and lead to a better understanding of information gathered online. Using eye tracking, the team discovered that as knowledge maps are successively modified by multiple users, new users spend less time on specific content elements and shift more of their attention to the maps' organization and structural elements such as labels.
Email 'Vacations' Decrease Stress, Increase Concentration
University of California, Irvine (05/03/12) Janet Wilson
University of California, Irvine (UCI) researchers have shown that eliminating the constant distractions of work email significantly reduces stress and allows users to focus better. The researchers attached heart rate monitors to computer users while software sensors detected how often they switched windows. Users with email changed screens twice as often and worked in a state of steady high alert with more constant heart rates. Those users without email for five days experienced more natural, variable heart rates. "We found that when you remove email from workers' lives, they multitask less and experience less stress," says UCI professor Gloria Mark. The study participants were computer-dependent civilian employees at the Army's Natick Soldier Systems Center. Those with no email reported feeling better able to do their jobs and stay on task, with fewer stressful and time-wasting interruptions. Participants with email switched windows an average of 37 times per hour, while those without email changed screens about 18 times an hour. The findings could be useful for boosting productivity and suggest that controlling email login times, batching messages, or other strategies could be helpful, according to Mark.
Bringing Open, User-Centric Cloud Infrastructure to Research Communities
CORDIS News (05/04/12)
European researchers working on the VENUS-C project have developed an open, scalable, and user-centric cloud computing infrastructure. Cloud computing empowers researchers "in a number of different ways, enabling them not only to do better science by accelerating discovery but also new science they could not have done before," says VENUS-C project director Andrea Manieri. The new infrastructure integrates easily with users' working environments and provides on-demand access to cloud resources as and when needed. "Our approach to the interoperability layer tackles current challenges with our users firmly in mind," Manieri says. The researchers used the VENUS-C infrastructure on Microsoft's Windows Azure platform to run BLAST, a data-intensive tool used by biologists to find regions of local similarity in amino-acid sequences of different proteins. The VENUS-C infrastructure made the experiment cost less than 600 euros and take just a week to process data that normally would have taken more than a year. "The advantage of using VENUS-C BLAST compared with renting cloud resources and deploying high-performance computing or high-throughput versions of BLAST is that deployment efforts are minimized and client impact is also minimal, since users don't have to log-in on a different machine," says VENUS-C's Ignacio Blanquer.
Abstract News © Copyright 2012 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.