Welcome to the August 30, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Please note: In observance of the Labor Day holiday, TechNews will not be published on Monday, Sept. 2. Publication will resume Wednesday, Sept. 4.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
HEADLINES AT A GLANCE
U.S. Appetite for Internet User Data Not Unique
Computerworld (08/30/13) Jaikumar Vijayan
Despite recent concerns about information collection by the United States, law enforcement agencies in Europe and other nations appear to gather equal or greater volumes of data, according to a whitepaper released this week by law firm Hogan Lovells. The whitepaper is based on a review of transparency reports from Google, Microsoft, Twitter, LinkedIn, and Skype that provide details about user data requests from law enforcement agencies worldwide. "Many in Europe right now are under the impression that U.S. law enforcement and intelligence agencies have a greater appetite for data and access more data than anyone else in the world," says whitepaper author Christopher Wolf. However, he notes that when adjusted for population size and the number of Internet users, U.S. demands for customer information are not unusual. Google's transparency reports indicate that between 2010 and 2012, U.S. law enforcement agencies averaged 51.3 requests per 1 million Internet users, while Hong Kong averaged 59.05 requests, France had 50.24 requests, and the United Kingdom had 49.9 requests. Wolf points out that although the U.S. has taken steps to regulate the collection of user data, "we haven't seen similar procedural protections in Europe or elsewhere."
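The per-capita comparison Wolf applies amounts to normalizing raw request counts by the size of the Internet-using population. A minimal sketch of that normalization follows; the raw request and user counts below are hypothetical placeholders (the article reports only the normalized rates):

```python
def requests_per_million(total_requests, internet_users):
    """Normalize a raw request count to a rate per 1 million Internet users."""
    return total_requests / (internet_users / 1_000_000)

# Illustrative (hypothetical) inputs: raw request totals and Internet-user counts.
countries = {
    "Country A": (12_800, 250_000_000),
    "Country B": (410, 7_000_000),
}
for name, (reqs, users) in countries.items():
    rate = requests_per_million(reqs, users)
    print(f"{name}: {rate:.2f} requests per 1M users")
```

Comparing these per-million rates, rather than raw totals, is what allows countries with very different populations to be placed on the same scale.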
ADEPT Emphasizes Energy-Efficient Parallelism
HPC Wire (08/29/13) Tiffany Trader
The European Union-funded ADdressing Energy in Parallel Technologies (ADEPT) project is exploring the energy-efficient use of parallel technologies. ADEPT aims to help high-performance computing (HPC) software developers exploit parallelism for performance, and to assist embedded systems engineers with managing energy usage. The goal is to create a tool that will help users model and predict the power consumption and performance of their code. ADEPT researchers say the project combines the talents of the HPC and embedded communities, drawing on the strengths of each sector. "The strength of the HPC world lies primarily in software application parallelization: concurrent computation is used to speed up the overall time an application requires to run to completion," according to the project website. The project also reflects the emerging energy-aware HPC paradigm, and aims to provide a better understanding of how parallel software and hardware use power. "The strengths of one sector are the relative weakness of the other: power management and power efficiency in HPC are in their infancy, but they are becoming increasingly important with HPC systems requiring more and more power," the website says.
Syria, Iran Capable of Launching a Cyberwar
Washington Times (08/28/13) Shaun Waterman
The United States is concerned about the growing ability of Syria and Iran to launch cyberattacks, which are playing an increasingly large role in modern warfare. Syria has proven cyberattack capabilities and could retaliate against Western military strikes over Syria's suspected chemical weapons attack on civilians. "It's foreseeable that [Syrian] state-sponsored or state-sympathetic hackers could seek to retaliate" against U.S., Israeli, or Western interests, says former Homeland Security secretary Michael Chertoff. The Syrian Electronic Army has claimed credit for hacking networks used by U.S. media outlets, including this week's attack on The New York Times website and an earlier hack of the Twitter account of the Associated Press. In addition, Islamic hackers believed to have ties to Iran have been staging cyberattacks against large U.S. bank websites for almost a year. Beyond such website attacks, hackers could also hijack critical infrastructure by breaking into the computer-control systems that operate transportation networks as well as chemical, electrical, and water and sewage treatment plants. "An aggressor nation or extremist group could gain control of critical switches and derail passenger trains, or trains loaded with lethal chemicals," warns former CIA director Leon E. Panetta. The U.S. Cyber Command says it can access hostile networks to defend against attacks, and experts note U.S. responses to cyberattacks might not be visible.
DARPA Creates Cloud Using Smartphones
InformationWeek (08/28/13) Patience Wait
The U.S. Defense Advanced Research Projects Agency (DARPA) is testing new software-based approaches for creating cloud-like computing networks using smartphones and radios. "With 64 gigabytes of storage in a single smartphone, a squad of nine troops could have more than half a terabyte of cloud storage," says DARPA Content-Based Mobile Edge Networking (CBMEN) program manager Keith Gremban. Using CBMEN software uploaded to smartphones and military radios, soldiers can communicate with each other even when they are unable to reach higher-level headquarters units. The software converts each user's mobile device into a server, so content is generated, maintained, and distributed hyperlocally. "CBMEN puts secure, private collaboration and cloud storage in your pocket," Gremban says. DARPA is currently in the second phase of testing, during which it is developing ways to improve the efficiency of the information exchange and strengthen security. DARPA wants to reduce the number of transmissions and amount of bandwidth needed, which also will save power.
MIT Develops 110-Core Processor for More Power-Efficient Computing
IDG News Service (08/27/13) Agam Shah
Massachusetts Institute of Technology (MIT) researchers have developed the Execution Migration Machine, a 110-core chip that explores power-efficient ways to boost performance in mobile devices, PCs, and servers. The processor is designed to reduce the data traffic inside chips, which enables faster and more power-efficient computing, according to MIT Ph.D. candidate Mieszko Lis. The chip also can predict data-movement trends, which reduces the number of cycles required to transfer and process data. The benefits of power-efficient data transfers could apply to mobile devices and databases, Lis notes. For example, he says data-traffic reduction will help mobile devices efficiently process applications such as video while saving power. During testing, the researchers have seen up to a 14-fold reduction in on-chip traffic, which significantly reduces power dissipation. According to internal benchmarks, performance was 25 percent better than comparable processors, Lis says. He notes the chip is based on a custom architecture designed to handle large data sets and to make data migration easier.
Crowdsourcing Creates a Database of Surfaces
Cornell Chronicle (08/27/13) Bill Steele
Cornell University researchers have developed OpenSurfaces, a database of more than 25,000 annotated images that can be used by architects, designers, and home remodelers to visualize their work. The researchers say the database also could be a valuable resource for computer graphics and computer-vision researchers looking for ways to recognize materials or synthesize images of them. The researchers note that the images were collected from the real world rather than from laboratory samples. "This catches real materials that show up in the world, including wear and tear and weathering," says Cornell professor Kavita Bala. The researchers started by collecting about 100,000 images from Flickr. They then used the Amazon Mechanical Turk service to build a workforce of about 2,000 people from around the world to select surfaces displayed in the photos, identify the material in the selection, and add comments on the context and how the surface reflected light. The researchers are now developing an application to modify images by changing one material into another. Bala says that in the future devices such as Google Glass could use the image database to identify materials in the field.
3-D Mapping in Real Time, Without the Drift
MIT News (08/28/13) Jennifer Chu
Researchers at the Massachusetts Institute of Technology (MIT) and the National University of Ireland at Maynooth have written a mapping algorithm that creates real-time, detailed three-dimensional (3D) maps of indoor and outdoor environments. Using videos taken with a Kinect camera of the halls and stairways of MIT’s Stata Center, the team used the algorithm to build 3D maps of the spaces. Significantly, the algorithm was able to quickly merge images to "close the loop" when the camera returned to its starting point, forming a continuous, realistic 3D map and solving a major problem in robotic mapping known as "loop closure" or "drift." The problem occurs when a camera, for example, pans across a room and introduces slight errors in the estimated path taken, perhaps shifting a doorway to the right or elongating a wall. These errors compound over long distances, creating maps with walls and stairways that fail to line up. The new mapping technique determines how to connect a map by tracking a camera's position in space throughout its route, and when a camera returns to a place it has already been, the algorithm determines which points within the 3D map to adjust. The technique could help guide robots through potentially hazardous or unknown environments.
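The drift correction described above can be illustrated with a toy sketch: when the camera revisits its starting point, the revealed position error is distributed back along the estimated trajectory. The linear redistribution below is a deliberate simplification for illustration, not the MIT/NUI Maynooth algorithm itself:

```python
import numpy as np

def close_loop(poses):
    """Given estimated 2D poses that should start and end at the same place,
    spread the accumulated drift linearly back along the trajectory.
    Toy correction: later poses receive a larger share of the fix."""
    poses = np.asarray(poses, dtype=float)
    drift = poses[-1] - poses[0]               # error revealed by revisiting the start
    weights = np.linspace(0.0, 1.0, len(poses))[:, None]
    return poses - weights * drift

# A square path whose estimate drifted: it should end where it began.
estimated = [(0, 0), (1, 0.1), (1.1, 1.1), (0.1, 1.2), (0.2, 0.2)]
corrected = close_loop(estimated)
# corrected[-1] now coincides with corrected[0], closing the loop
```

Real systems instead formulate loop closure as an optimization over a pose graph, but the core idea is the same: the loop-closure constraint exposes the accumulated error, which is then pushed back through the earlier pose estimates.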
Nissan Plans to Offer Affordable Self-Driving Cars by 2020
Computerworld (08/27/13) Lucas Mearian
Nissan says it is on track to begin selling self-driving cars by 2020. The automaker is working with the Massachusetts Institute of Technology, Stanford University, Carnegie Mellon University, Oxford University, and the University of Tokyo to develop the autonomous driving technology. Self-driving cars use cameras and sensors to detect roadway lanes and objects around them in order to guide themselves without human intervention. Nissan plans to demonstrate the self-driving technologies in a Nissan LEAF later this year. The technologies include laser scanners, artificial intelligence, and actuators. Nissan has installed these features in LEAFs to enable them to negotiate what the automaker calls real-world driving scenarios. "Nissan's autonomous driving will be achieved at realistic prices for consumers," the company says. "The goal is availability across the model range within two vehicle generations." Nissan also reports that it is already constructing an autonomous-driving proving ground in Japan for this purpose. The company plans to demonstrate its autonomous-driving technology for the first time at the Nissan 360 test drive event, as well as on the streets of Orange County, Calif.
A*STAR Researchers Develop Better HDDs
EE Times Asia (08/29/13)
Researchers at the A*STAR Data Storage Institute (DSI) have developed a computational algorithm that should accelerate research into the properties of the slider in a hard disk drive. DSI's Wei Hua says the algorithm enables dynamic simulations of thermal effects to be completed in an hour, compared to days with existing algorithms. Controlling the motion of the slider housing the read/write heads is crucial because the hard disk could be destroyed if the slider crashes. "The slider housing the read/write head flies on the fast-rotating hard drive disk, owing to a very thin layer of air," Hua says. The researchers extended DSI's ABSolution air bearing simulation software for faster and more precise modeling. Instead of dividing the hard drive slider into the structured rectangular mesh typically used to aid calculations, the team used an unstructured triangular mesh that accurately captures the geometry of the read/write head. The algorithm also better implements the dynamic effects that occur in drive heads. Hua says the modeling software should help with the future development of drive heads, and the researchers plan to use the algorithm to model slider properties that were almost impossible to simulate using earlier iterations.
IT Gender Salary Gap Not as Dramatic as You Think
CIO (08/27/13) Sharon Florentine
The wage disparity between men and women in the technology industry might not be as significant as previously thought, according to a new Payscale report. "To the average person, it does look like there's a large wage gap," says Payscale's Katie Bardaro. "But if you take into account other pieces of the puzzle--education, experience, job title, and industry, for example--it's not quite as dramatic as it's currently reported." Examining 150 technology job titles, the Payscale report found no significant wage discrepancies after data was controlled for compensable factors such as education, experience, and job responsibilities. Dice.com's most recent salary survey also found equal pay for men and women with equal titles, experience, and education. In the Dice.com survey, men in technology jobs did earn more, with average annual income of $95,929 compared to $87,527 for women, but this is attributable to the fact that men and women usually hold different job types. Men typically enter more technical positions in software engineering, programming, and architecture, while women lean toward project management and administration. Although technology offers high pay and numerous other benefits such as remote work, there are fewer women with college majors in technology, math, and science than there were 10 years ago, notes Dice.com CEO Scot Melland.
Strands Project Develops Robot Security Guards
BBC News (08/28/13)
The European Union's Seventh Framework Program is funding the Strands project, which is developing software brains for robots that can serve as security guards and care assistants. University of Birmingham researcher Nick Hawes is coordinating work at the eight sites taking part in the project. The project also involves developing robots that can learn from their experiences. In addition, the robots will be able to create four-dimensional maps of their environment and detect changes and unusual situations. "Where a security robot could be really helpful is that it could notice some really small change, some really small detail, a change in the 3D structure of the environment that a human wouldn't necessarily recognize," says University of Lincoln professor Tom Duckett. At the end of the project, the research team will demonstrate the systems at science museums, public events, and trade shows. "We will see this technology come more and more into our everyday environments in the future as time progresses, I'm sure of that," Duckett says.
Carnegie Mellon Developing Driverless Car of the Future Now
Pittsburgh Post-Gazette (08/25/13) Michael A. Fuoco
Engineers at the General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab at Carnegie Mellon University (CMU) are developing an experimental autonomous vehicle they hope will be capable of driving on highways more safely than humans. "Humans are extremely smart but can be rather stupid as well," says CMU professor and co-director of the autonomous driving research lab Raj Rajkumar. "If we can take the basic human emotional and physical problems out of the [driving] equation, we expect injuries and fatalities will go down." In addition to safety benefits, autonomous vehicles will increase mobility for elderly and disabled people and enable passengers to be more productive during the time they spend sitting in traffic, Rajkumar says. The autonomous 2011 Cadillac SRX has the appearance of a typical vehicle, but is equipped with six lasers and six radars providing 360-degree views around the vehicle. The car also has cameras in the front and back, and four computers in a compartment beneath the cargo area to process information. The computers run 500,000 lines of code and can make calculations in 10 milliseconds to safely control the vehicle's speed and direction of travel, and to determine lane markings, traffic light status, and the speed and location of other vehicles.
A Robot to Beat Humans at Foosball
Swiss Federal Institute of Technology in Lausanne (08/26/13)
Students from the Swiss Federal Institute of Technology in Lausanne have developed a foosball-playing robot. The machine uses one computer to control the mechanical movement of its robotic arm and another computer to provide information about the position of the ball. The team used transparent material for the bottom of the foosball table, and placed a high-speed camera on the ground to film the game board. Image-processing algorithms analyze the movement of the ball in real time, and the information is transmitted to the computer that controls the movement and positioning of the arm. Although the robot cannot yet perform complex moves, the students plan to continue development, and they say the robot should eventually become more accurate, faster, and more strategic than any human player. "Potentially, the computer can simultaneously analyze many more parameters than a human and process information faster," says project head Christophe Salzmann. "It could simultaneously analyze the location of all players and the exact trajectory of the ball after it ricocheted off the edges. All that remains is to develop a strategy. Ultimately, we could imagine organizing games between interposed robots."
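The real-time ball-localization step the students describe (a high-speed camera under a transparent table, feeding image-processing algorithms) can be sketched with simple frame differencing; the actual pipeline's details are not given in the article, so everything below is an illustrative assumption:

```python
import numpy as np

def track_ball(prev_frame, frame, threshold=30):
    """Locate the fast-moving ball as the centroid of pixels that changed
    between two consecutive grayscale frames. A toy stand-in for the
    students' real-time image-processing pipeline."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    moved = diff > threshold
    if not moved.any():
        return None                       # no detectable motion
    ys, xs = np.nonzero(moved)
    return (xs.mean(), ys.mean())         # (x, y) centroid of the changed pixels

# Synthetic 100x100 frames: a bright "ball" pixel jumps from (20, 30) to (60, 70).
prev_frame = np.zeros((100, 100), dtype=np.uint8)
frame = np.zeros((100, 100), dtype=np.uint8)
prev_frame[30, 20] = 255
frame[70, 60] = 255
position = track_ball(prev_frame, frame)  # centroid between the two positions
```

A real system would also segment the ball from the moving player figures and fit a trajectory across frames, which is where the strategy layer Salzmann mentions would take over.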
Abstract News © Copyright 2013 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.