Welcome to the February 26, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.
HEADLINES AT A GLANCE
Syria War Stirs New U.S. Debate on Cyberattacks
The New York Times (02/24/14) David E. Sanger
The Obama administration has been debating whether cyberarms should be used like ordinary weapons, or whether they should be rarely used, secretive tools saved for only the most important situations. President Barack Obama has considered the use of cyberattacks in Syria, but has so far decided against the measures. Although cyberattacks could offer a less-expensive and less-destructive alternative to traditional warfare, the tactics also could change the nature of warfare. In 2012, Obama signed a classified presidential directive relating to cyberoperations that established principles and processes so cybertools are integrated with the full arsenal of national security tools, says National Security Council spokesperson Caitlin Hayden. Officials say the United States has used cyberattacks only a handful of times in the past and has tried to keep such efforts covert, for example, with the Stuxnet worm that attacked Iran's nuclear facility. However, a technological glitch in the summer of 2010 publicly revealed Stuxnet to be the work of the U.S. National Security Agency and Israel's Unit 8200, eliminating Obama's hopes of keeping the initiatives clandestine. Some experts contend Syria offers the United States an opportunity to change the global perspective on cyberattacks by showing how they can serve a humanitarian purpose.
MIT News (02/25/14) Larry Hardesty
Massachusetts Institute of Technology (MIT) professor Armando Solar-Lezama is constantly refining his Sketch programming language, which enables programmers to leave out some coding details, with Sketch filling in the blanks. Solar-Lezama and a group of his students this year described a way to enable Sketch to handle complex synthesis tasks much more efficiently. In tests on an automated student code grading system, the new version of Sketch was able to correct code in milliseconds in situations in which the previous version would time out. The new version can handle significantly more complex problems because it asks programmers to specify only the properties the synthesized code must satisfy. The researchers had MIT undergraduates with only a semester's worth of programming experience test Sketch, and the students were able to generate working code. However, the missing code sometimes required an unacceptably long time to synthesize due to student inexperience in describing problems. Solar-Lezama says Sketch requires further refinement before it is useful to commercial software developers.
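The core idea behind hole-filling synthesis can be illustrated in miniature. The sketch below is not the actual Sketch language; it is a hypothetical Python toy in which a synthesizer searches for a value for a single "hole" in a clamp function, given only the properties the programmer requires:

```python
# Illustrative toy (not the actual Sketch system): fill a "hole" in a
# code template by searching for a value that makes all stated
# properties hold. All names and test values here are invented.

def synthesize_hole(candidates, properties):
    """Return the first hole value for which every property passes."""
    for value in candidates:
        if all(prop(value) for prop in properties):
            return value
    return None  # no candidate works (synthesis "times out")

# Template: clamp(x) = min(x, HOLE). The programmer leaves HOLE blank
# and specifies only the required behavior on a few inputs.
properties = [
    lambda h: min(150, h) == 100,  # values above the cap are clamped
    lambda h: min(42, h) == 42,    # small values pass through
    lambda h: min(100, h) == 100,  # the cap itself passes through
]

hole = synthesize_hole(range(0, 256), properties)
print(hole)  # the smallest constant satisfying every property
```

Real synthesizers use constraint solving rather than brute-force enumeration, but the contract is the same: the programmer supplies properties, and the tool supplies the missing detail.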
Cockpit for Apps: Minority Report Meets Google Glass
New Scientist (02/20/14) Paul Marks
A "personal cockpit" would provide a good way for people to interact with their apps in the near future, according to Barrett Ens and colleagues at the University of Manitoba. The researchers say the technology could enable people to immerse themselves in a three-dimensional (3D) sea of virtual screens and control their digital lives, similar to the way a pilot controls a plane. They would be able to pull what they need into view when they need it and move it to the 3D virtual space behind them, as well as push it aside into their peripheral vision or even pin it out of reach on a wall. The team tested the idea as it would appear to someone wearing 3D head-tracking gesture-controlled glasses, and users did not find it to be distracting. Participants could clear the front view, like a cockpit window, and pull task screens in and out of their line of sight as they saw fit. "Each display is set just within reach, half a meter away, which we found to be the ideal distance to balance both viewing and touch," Ens says. The findings will be presented at the ACM CHI Conference on Human Factors in Computing Systems in Toronto in April.
Huge Potential of High-Performance Computing Showcased at HiPEAC Conference
CORDIS News (02/18/14)
HiPEAC, a European Union-funded project designed to facilitate research and innovation in high-performance computing, recently demonstrated how new solutions constantly emerge in the field. Silicon photonics, for example, is a promising technology for building future advanced computing systems. Silicon photonics integrates a photonic layer with electronic circuits and provides low-latency, low-energy on-chip communications, higher bandwidth, and low manufacturing costs. Silicon photonics' importance was confirmed at the recent HiPEAC conference, which brought together scientists, academics, and industry leaders from the European computing systems community. "The main advantages of silicon photonics for computing systems are the use of standard tools and foundry--which means wafer scale co-integration for low manufacturing costs--alongside high integration, low energy consumption and high bandwidth," says HiPEAC organizer Jose M. Garcia. Another emerging technology highlighted at the conference was the memristor, a circuit element that stores information in resistors and holds great potential for high-density storage devices. The HiPEAC conference incorporated a unique model in which the main track only contained presentations based on papers published in ACM Transactions on Architecture and Code Optimization (TACO).
Faster Computer Performance Targeted By Memory Consortium
IDG News Service (02/25/14) Agam Shah
The Hybrid Memory Cube Consortium (HMC), which aims to replace standard DDR3 memory modules found in conventional computers, has proposed a faster and more power-efficient specification, called Gen2, for its emerging Hybrid Memory Cube memory technology. The HMC Gen2 specification doubles the throughput of the previous specification, which could deliver 15 times the bandwidth of a standard DDR3 module, while consuming 70 percent less energy, according to the consortium. The new specification also could accelerate calculations in supercomputers, improve in-memory computing for databases and other applications, and aid in providing faster response times to Web requests. HMC's development was led by Samsung and Micron Technology, and Microsoft, Xilinx, Altera, and ARM also are members of the consortium. "System designers are looking for access to very high memory bandwidth [and Gen2] allows them to get access to memory bandwidth with fewer pins," says Micron's Mike Black. He says HMC also could be used in the networking sector in systems that need more memory to buffer high-volume traffic.
Meet Your Match: Using Algorithms to Spark Collaboration Between Scientists
University of Cambridge (02/21/14)
University of Cambridge scientists have developed an algorithm that matches scientists into research partnerships, using genes as a model. The method matches academics at conferences based on pre-established criteria, resulting in unexpected research opportunities that can encourage renowned scientists to look beyond their usual circles. In addition to conference matches, the algorithm can arrange "would-like-to-meet" matches across disciplines and knowledge areas. "We wanted to avoid the usual pattern that happens at conferences, especially at interdisciplinary meetings, of like sticking with like," says the University of Cambridge's Rafael Carazo Salas. "Then we came up with an idea--what if we treated the delegates like we treat genes, and used mathematical algorithms to build a connectivity picture that could enable new links to be made?" Prior to the conference, attendees submitted information about their research areas and wrote a wish list of areas about which they would like to know more. Attendees then met four other scientists in a speed-dating-like session, with matches between people unknown to one another and with different knowledge bases. In the second round, matches utilized the wish lists to pair people with skills in a particular area with those who wanted to learn more about it. The researchers say their method was very successful, resulting in new collaboration opportunities that otherwise would not have occurred.
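The second-round pairing described above can be sketched as a simple matching problem. The code below is an illustrative assumption, not the Cambridge team's algorithm: it greedily pairs each delegate with someone whose declared expertise overlaps the delegate's wish list, using invented names and fields.

```python
# Hypothetical sketch of wish-list matching: pair each delegate with
# someone whose expertise covers an item on their wish list. The
# greedy strategy, names, and fields are illustrative assumptions.

def match_by_wishlist(delegates):
    """Greedily pair delegates by wish-list/expertise overlap;
    each person appears in at most one pair."""
    pairs, taken = [], set()
    for person, info in delegates.items():
        if person in taken:
            continue
        for other, other_info in delegates.items():
            if other == person or other in taken:
                continue
            if info["wishes"] & other_info["expertise"]:
                pairs.append((person, other))
                taken.update({person, other})
                break
    return pairs

delegates = {
    "alice": {"expertise": {"imaging"},    "wishes": {"statistics"}},
    "bob":   {"expertise": {"statistics"}, "wishes": {"genomics"}},
    "carol": {"expertise": {"genomics"},   "wishes": {"imaging"}},
}
print(match_by_wishlist(delegates))
```

A production matcher would optimize globally (e.g., maximum-weight matching) rather than greedily, which is presumably closer to the gene-network approach the researchers describe.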
Syrian Web Censorship Techniques Revealed
Technology Review (02/25/14)
Researchers at the French Institute for Research in Computer Science and Automation (INRIA) used a leak of Syrian Web censorship data to illustrate for the first time the exact steps an authoritarian regime takes to restrict traffic. Hacker and activist group Telecomix in October revealed logs, comprising 600 GB of data representing 750 million requests, showing the ways in which Syrian authorities were monitoring and filtering Internet traffic. The INRIA team analyzed the logs to show precisely how traffic was filtered, which IP addresses and websites were blocked, and which keywords were targeted for filtering. The data are from seven Blue Coat SG-9000 proxies, which are typically placed between a monitored network and the Internet backbone to intercept traffic. The logs show the Syrian government was actually censoring less than 1 percent of Internet traffic, with 93.28 percent of Web requests allowed and 5.37 percent denied due to network errors. The censored traffic fit four main criteria: URL-based filtering, keyword-based filtering, destination IP address blocking, and custom category-based censorship. The URL-based filtering focused on instant messaging software such as Skype, and content pertaining to Israel also was heavily censored, the analysis shows.
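The four filtering criteria the analysis identifies can be sketched as a simple decision function over a request. The rule sets and request format below are illustrative assumptions for exposition, not the actual Blue Coat proxy configuration or the regime's real blocklists:

```python
# Hedged sketch of the four filtering criteria described above.
# BLOCKED_* contents and the request fields are invented examples.

BLOCKED_IPS = {"203.0.113.7"}           # destination IP blocking
BLOCKED_URL_SUBSTRINGS = ("skype",)     # URL-based filtering
BLOCKED_KEYWORDS = ("proxy",)           # keyword-based filtering
BLOCKED_CATEGORIES = {"custom-blocklist"}  # category-based censorship

def censor_decision(request):
    """Return which criterion (if any) would deny a request."""
    if request["dst_ip"] in BLOCKED_IPS:
        return "ip"
    if any(s in request["url"] for s in BLOCKED_URL_SUBSTRINGS):
        return "url"
    if any(k in request["url"] for k in BLOCKED_KEYWORDS):
        return "keyword"
    if request["category"] in BLOCKED_CATEGORIES:
        return "category"
    return "allow"

req = {"url": "http://example.com/news", "dst_ip": "198.51.100.1",
       "category": "news"}
print(censor_decision(req))  # "allow"
```

The INRIA analysis worked in the opposite direction: given millions of logged allow/deny decisions, the researchers inferred which of these criteria each denial fit.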
From a Distance: New Technique for Repair Work
Saarland University (02/20/14)
Companies that operate globally could use a new platform developed at Saarland University to enable local engineers to collaborate with experts abroad in real time to address any issues with production. The system offers live broadcasting over a computer, making it easier for designers of complex production facilities in other countries to do maintenance or repair work from their local office. The researchers say the approach is comparatively inexpensive, considering the equipment used includes a three-dimensional (3D) camera, a Webcam, and a computer. The camera can capture a machine in need of repair work, enabling the platform to provide several views in parallel on the screen, including a 3D model, which users in both locations can operate interactively. The system also can provide key sensor data, such as temperature or pressure, that can offer clues about the cause of a problem. The locations can be easily connected for videoconferencing via the company network. "We connect sensor and environmental data, video signals and computer graphics within just one application," says the Intel Visual Computing Institute's Thorsten Herfet.
SDSC Team Develops Multi-scale Simulation Software for Chemistry Research
UCSD News (CA) (02/19/14) Jan Zverina
Researchers at the University of California, San Diego's San Diego Supercomputer Center (SDSC) have developed software that expands the types of multi-scale mixed quantum and molecular mechanical (QM/MM) simulations of complex chemical systems that scientists can use to design new drugs and better chemicals. The researchers say multi-scale QM/MM computational methods are crucial to advancing the understanding and solution of problems in the chemical sciences. "Our software enables QM/MM simulations with a variety of advanced quantum mechanical models, and by integrating it with the popular AMBER molecular simulation package, which is used by hundreds of academic and industrial research labs, we can reach a very large user base," says SDSC researcher Andreas W. Goetz. In QM/MM simulations, an accurate but computationally complex quantum mechanical model is used to identify important features of the electronic structure of a chemically relevant region. The researchers relied on SDSC's Trestles and Gordon supercomputers to run many tests to validate the software in various stages. "QM/MM simulations are computationally very demanding compared to purely classical MM simulations," says SDSC professor Ross C. Walker. "Access to SDSC's Trestles and Gordon supercomputers and their fast turnaround times were essential to our work."
DARPA Wants to Automate Big Data Findings to Solve Complicated Problems
Network World (02/21/14) Michael Cooney
U.S. Defense Advanced Research Projects Agency (DARPA) researchers are developing the Big Mechanism program, which seeks to gather all existing data about a specific topic, keep it up to date, and develop new conclusions or research directions. "Having big data about complicated economic, biological, neural, and climate systems isn't the same as understanding the dense webs of causes and effects--what we call the big mechanisms--in these systems," says DARPA program manager Paul Cohen. "Unfortunately, what we know about big mechanisms is contained in enormous, fragmentary, and sometimes contradictory literatures and databases, so no single human can understand a really complicated system in its entirety." He says the Big Mechanism program could lead to new ways of understanding complicated systems. As part of the Big Mechanism program, "every publication would immediately become part of a public, computer-maintained, causal model of a complicated system--a big mechanism--and every aspect of a big mechanism would be tied to the data that supports it or contradicts it," Cohen says. The Big Mechanism program will develop technology to read research abstracts and papers to extract fragments of causal mechanisms, assemble those fragments into more complete causal models, and reason over these models to produce explanations.
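The assembly step the program describes, merging causal fragments from many papers into one model while surfacing contradictions, can be sketched in a few lines. The fragment format and entity names below are invented for illustration; they are not DARPA's actual representation:

```python
# Illustrative sketch of assembling extracted causal fragments into a
# single model: each paper contributes (cause, effect, sign) edges,
# and conflicting signs for the same edge are flagged as the
# "contradictory literature" the program aims to reconcile.
# Entity names are invented examples.

from collections import defaultdict

def assemble(fragments):
    """Merge causal edges from many papers; report sign conflicts."""
    model = defaultdict(set)  # (cause, effect) -> set of signs seen
    for cause, effect, sign in fragments:
        model[(cause, effect)].add(sign)
    conflicts = [edge for edge, signs in model.items() if len(signs) > 1]
    return dict(model), conflicts

fragments = [
    ("geneA", "geneB", "+"),  # paper 1: A activates B
    ("geneB", "geneC", "-"),  # paper 2: B inhibits C
    ("geneA", "geneB", "-"),  # paper 3 contradicts paper 1
]
model, conflicts = assemble(fragments)
print(conflicts)  # [("geneA", "geneB")]
```

The hard research problem, of course, is the extraction step that turns prose in abstracts and papers into such fragments in the first place.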
Consortium Aims to Boost Minority Faculty in STEM Fields
Stanford University (02/13/14) Gretchen Kell
The University of California, Berkeley, Stanford University, the University of California, Los Angeles, and the California Institute of Technology have formed the California Alliance for Graduate Education and Professoriate, a partnership that aims to address the underrepresentation of minority Ph.D. students in the science, technology, engineering, and math (STEM) fields. The group launched the initiative with a $2.2-million U.S. National Science Foundation (NSF) grant to increase diversity in mathematics, physical science, and computer science. Together, the four schools are creating a new, cross-institutional community of underrepresented minority Ph.D. students, postdoctoral scholars, and faculty members. The alliance "draws on the strength of the institutions involved and is developing a model for moving the needle in this area," says NSF program director Mark Leddy. Stanford professor Page Chamberlain also notes "given the importance of faculty in mentoring graduate students, one critical piece for increasing the needed diversity will be to train and promote talented STEM students in science and engineering to move into the professoriate." The California Alliance plans to create a community of practice to unite underrepresented doctoral students from all four universities who share educational backgrounds. In addition, the alliance will nurture social-professional networks of Ph.D. students, faculty, and research scientists.
Want Your Computer to Go Faster? Just Add Light
Northeastern University News (02/19/14) Angela Herring
Two Northeastern University professors say they have developed a series of devices that make computer processors more efficient. Professor Swastik Kar studied graphene, while professor Yung Joon Jung focused on the mechanics of carbon nanotubes. The two researchers combined their efforts to discover a phenomenon that could lead to a new generation of highly efficient electronics. "People believe that the best computer would be one in which the processing is done using electrical signals and the signal transfer is done by optics," Kar says. He says the devices, which are the first to integrate electronic and optical properties on a single electronic chip, represent a critical breakthrough in making this type of computer a reality. The researchers were able to develop three devices based on the technology. The first device, called an AND-gate, requires both an electronic and an optical input to generate an output. The second device, called an OR-gate, can generate an output if either of two optical sensors is engaged. The researchers also built a device that functions like the front-end of a camera sensor. "Jung's method is a world-class technique," Kar says. "It has really enabled us to design a lot of devices that are much more scalable."
Vibration Energy the Secret to Self-Powered Electronics
University of Wisconsin-Madison (02/20/14) Renee Meiller
Researchers at the University of Wisconsin-Madison, the University of Minnesota Duluth, and Sun Yat-sen University say they have developed a promising solution for charging smartphone batteries on the go, without the need for an electrical cord. Their solution involves the use of a nanogenerator that could harvest and convert vibration energy from a surface into power for the phone. "We believe this development could be a new solution for creating self-charged personal electronics," says University of Wisconsin-Madison professor Xudong Wang. The nanogenerator utilizes a common piezoelectric polymer material called polyvinylidene fluoride (PVDF), and the researchers incorporated zinc oxide nanoparticles into a PVDF thin film to start the formation of the piezoelectric phase that enables it to harvest vibration energy. The researchers then etched the nanoparticles off the film, resulting in interconnected pores that cause the material to behave somewhat like a sponge. "The softer the material, the more sensitive it is to small vibrations," Wang says. He says the nanogenerator could become an integrated part of an electronic device and automatically harvest energy from ambient vibrations to power the device directly.
Abstract News © Copyright 2014 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.