Welcome to the January 17, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Please note: In observance of the Martin Luther King, Jr. Day holiday, TechNews will not be published on Monday, Jan. 20. Publication will resume on Wednesday, Jan. 22.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
HEADLINES AT A GLANCE
Researchers Aim to Revolutionize 3D Printing, Global Manufacturing
Computerworld (01/17/14) Sharon Gaudin
Lawrence Livermore National Laboratory (LLNL) researchers are developing new materials for additive manufacturing, also known as 3D printing, and are working on a technique for building multiple materials into the same product. LLNL's Eric Duoss says the research will revolutionize manufacturing because it creates the ability to tailor properties and achieve property combinations that were previously impossible. The researchers want to enable manufacturers to build more using additive manufacturing, including things that are impossible to build with existing methods. "Hopefully, it will be a new way of manufacturing with a lot more possibilities and less cost, time, and real estate needed to manufacture things," Duoss says. The researchers are using data-mining techniques and a computer cluster with 160 processors to develop algorithms and software to study the process at a microscopic level. They also are working to alter the materials used in additive manufacturing. "We can take the same base material...and by changing the architecture of it, we can make it stronger, more lightweight, and make it react differently to heat," Duoss says. Creating a wider array of materials for additive manufacturing would be a key development, says IDC's Robert Parker.
Workshop Report on Opportunities in Robotics, Automation, and Computer Science Released
CCC Blog (01/15/14) Kenneth Hines
The Computing Community Consortium (CCC) on Wednesday released "Workshop on Opportunities in Robotics, Automation, and Computer Science," a report on an October 2013 workshop at the White House Conference Center in Washington, D.C. At the event, 28 participants from industry, academia, and government discussed opportunities in advanced manufacturing for robotics, automation, and computer science. The workshop aimed to outline concrete problems that could direct academic basic and applied research to support advances in manufacturing, possibly through the National Robotics Initiative. The report describes challenges and obstacles, including the difficulty and expense of deploying an automation system, and the lack of standardized perception modules that would allow for faster deployment without customization to individual applications. High-performance end-effector technology, co-robot safety, modularity and standardization, and simulation also present difficulties. In addition, the report identifies three important action areas: the need to "automate the automation" by streamlining the design of assembly lines and robot deployment; middleware that can help replicate solutions across different manufacturing equipment and products; and collaboration models that provide academia with more immediate access to relevant challenges in manufacturing automation.
Cyberwar Surprise Attacks Get a Mathematical Treatment
Science (01/13/14) John Bohannon
University of Michigan professors Robert Axelrod and Rumen Iliev have written a mathematical model that could help predict the timing of cyberattacks. The model reworks Axelrod's 1979 analysis of the prisoner's dilemma game theory problem, which centered on the idea that the element of surprise is a strategic resource, and that modeling its costs and benefits can lead to counterintuitive actions. Like physical attacks, cyberattacks can be optimally timed based on changes in risks, costs, and benefits over time; the target's vulnerabilities; and the element of surprise. Axelrod and Iliev applied the model to several recent cyberattacks, such as Stuxnet, an advanced computer worm allegedly created by the U.S. and Israeli governments to sabotage Iran's nuclear centrifuges. Stuxnet and other recent cyberattacks have been almost optimally timed, likely as the result of hacker intuition, but the model could help plan future attacks, the researchers say. In addition, the model could be used defensively to predict when future cyberattacks might occur. The researchers note the market for information on computer system vulnerabilities, known as "zero-day exploits," is growing, which will accelerate the timing of cyberattacks and prompt hackers to act sooner. "The work provides a solid logical foundation for fresh thinking in the cybersecurity field," says Naval Postgraduate School defense analyst John Arquilla.
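The trade-off such a model captures can be sketched with a toy calculation. The parameter names and equations below are illustrative assumptions, not the authors' actual model: a cyber resource stays usable after an attack with a "stealth" probability, survives while withheld with a "persistence" probability, and the stakes of each period are drawn uniformly at random. The question is the threshold of stakes above which attacking now beats waiting.

```python
def resource_value(threshold, stealth, persistence, discount=0.95):
    """Expected discounted payoff of the policy 'attack whenever the
    current stakes exceed `threshold`', for stakes drawn uniformly
    from [0, 1] each period.  After an attack the resource survives
    (stays secret and usable) with probability `stealth`; while
    withheld it survives with probability `persistence`.

    Solving the recursion
      V = E[stakes + stealth*d*V ; stakes >= T] + persistence*d*V * P(stakes < T)
    gives this closed form.
    """
    d = discount
    expected_gain = (1 - threshold ** 2) / 2        # E[stakes; stakes >= T]
    survive = stealth * (1 - threshold) + persistence * threshold
    return expected_gain / (1 - d * survive)

def best_threshold(stealth, persistence, discount=0.95):
    """Grid-search the attack threshold that maximizes resource value."""
    return max((t / 100 for t in range(100)),
               key=lambda T: resource_value(T, stealth, persistence, discount))

# A stealthy but fragile resource is best used early and often (low
# threshold); a detectable but persistent one is worth saving for
# high-stakes moments (high threshold).
print(best_threshold(stealth=0.9, persistence=0.1))
print(best_threshold(stealth=0.1, persistence=0.9))
```

Under these assumptions the optimal policy shifts toward waiting as stealth falls and persistence rises, which matches the qualitative point of the article: timing depends on how well an exploit survives use versus delay.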
Google Is Developing a Smart Contact Lens
Computerworld (01/16/14) Sharon Gaudin
Google researchers are developing a smart contact lens that uses tiny chips, sensors, and antennas to continuously test diabetics' blood sugar levels. The technology uses wireless chips and miniaturized glucose sensors to measure glucose levels in the user's tears. "At GoogleX, we wondered if miniaturized electronics--think chips and sensors so small they look like bits of glitter, and an antenna thinner than a human hair--might be a way to crack the mystery of tear glucose and measure it with greater accuracy," according to the project's founders. "We're testing prototypes that can generate a reading once per second." The researchers also are studying the potential of the lenses to serve as an early warning for wearers when glucose levels get too low. "This type of 'in-eye' technology is the precursor to having Google Glass directly in our eyes," says analyst Patrick Moorhead. "To many, this is fascinating and inspiring. To others it is creepy and scary." He notes the project could offer insights into the future development of Google Glass. "If you project this forward a few years and add a flexible display, a display controller, and a radio that can talk to your smart watch, then you have Google Glass of the future."
How the Friendship Paradox Makes Your Friends Better Than You Are
Technology Review (01/14/14)
The friendship paradox applies to happiness and wealth, according to researchers at the University of Toulouse and Aalto University. The friendship paradox is the empirical observation that, on average, your friends have more friends than you do. Network scientists Young-Ho Eom and Hang-Hyun Jo have evaluated the properties of different characteristics on networks and worked out the mathematical conditions that determine whether the paradox applies to them or not. Eom and Jo examined two academic networks in which each scientist is a node and links arise between scientists who have co-authored a paper together. They found the paradox in these networks as well, in that a scientist's co-authors will, on average, have more co-authors, more publications, and more citations than he or she has. The researchers term this the generalized friendship paradox, and they say that when it arises from the way nodes are linked together, other properties of those nodes exhibit the same paradoxical nature, as long as they are correlated with node degree in a certain manner.
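The degree version of the paradox can be checked in a few lines of code. The network below is a made-up toy example, not one of the co-authorship datasets from the study; it just shows how a well-connected hub pulls the "average friend's friend count" above the plain average.

```python
from collections import defaultdict

def friendship_paradox(edges):
    """Compare the mean degree of a node with the mean degree of a
    node's neighbors.  In most real networks the second is larger:
    'your friends have more friends than you do.'"""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    nodes = list(neighbors)
    mean_degree = sum(len(neighbors[n]) for n in nodes) / len(nodes)
    # For each node, average its neighbors' degrees, then average over nodes.
    mean_friend_degree = sum(
        sum(len(neighbors[m]) for m in neighbors[n]) / len(neighbors[n])
        for n in nodes
    ) / len(nodes)
    return mean_degree, mean_friend_degree

# One hub (node 0) linked to five others, plus two isolated pairs.
edges = [(0, i) for i in range(1, 6)] + [(6, 7), (8, 9)]
deg, friend_deg = friendship_paradox(edges)
print(deg, friend_deg)  # prints 1.4 3.0
```

Most nodes have degree 1, but everyone attached to the hub has a "friend" with degree 5, so the friend average (3.0) far exceeds the node average (1.4). Eom and Jo's point is that the same sampling bias carries over to any node attribute that correlates with degree.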
Feeling Mad? New Devices Can Sense Your Mood and Tell--or Even Text--Others
The Washington Post (01/13/14) Arthur Allen
Microsoft Research cognitive psychologist Mary Czerwinski is an affective computing expert who creates technology that monitors a person's mood and stress level. For example, Czerwinski and her colleagues, including Microsoft senior research designer Asta Roseway, developed a butterfly-shaped set of wires called Mood Wings that attaches to a sensor wristband and beats at a rate corresponding to the wearer's stress level. The team also created a jacket with bendable, wired "leaves" that flap when the wearer is happy, or elevate in the back when the user is angry. Czerwinski says the technology could especially benefit users with communication difficulties, such as individuals with autism or post-traumatic stress disorder, by alerting family members about emotional states. To help parents of children with attention-deficit hyperactivity disorder respond constructively to challenging situations, Czerwinski and her colleagues created a wrist sensor that sends signals of parental stress to a network, which responds with text messages that suggest helpful behaviors. Another tool tracks a person's mood throughout a day at work using a wristband, chair sensors, facial-recognition technology, and voice recorders, revealing micro-patterns that might not otherwise be apparent. "In the future, we may not even see the sensors we're wearing," Roseway says. "It'll be integrated into our person."
New Patent Mapping System Helps Find Innovation Pathways
Georgia Tech News Center (01/14/14) John Toon
Georgia Institute of Technology researchers have developed a patent-mapping system that considers how patents cite one another and could help researchers better understand the relationships between technologies. "We take data on research and development, such as publications and patents, and we try to elicit some intelligence to help us gain a sense for where things are headed," says Georgia Tech professor Alan Porter. The researchers also found that studying the relationships between different areas can help suggest where innovation is occurring and what technologies are facilitating it. "The goal for this research was to create a new type of global patent map that was not tied into existing patent-classification systems," says former Georgia Tech researcher Luciano Kay. The current International Patent Classification system organizes new technologies into very broad categories, often grouping innovations that have very little in common. The new Patent Overlay Mapping system instead considers the similarity between technologies by noting connections between patents. "We think our map gets closer to measuring the ideas of technological similarity and distance," says Georgia Tech professor Jan Youtie.
ICT Industry Requires More Skills to Take Off
CORDIS News (01/13/14)
The European Union has embarked on several major initiatives to ensure workers have the necessary skills to take information and communications technology (ICT) to the next level. Horizon 2020 will provide 15 million euros for ICT research, and Erasmus+ will enable students to participate in teaching programs across Europe. Meanwhile, the Grand Coalition for Digital Jobs is expected to make ICT careers more attractive by providing training packages co-designed with the industry; offering more aligned degrees and curricula; improving cross-country skill recognition; reducing job market mismatches; and boosting digital entrepreneurship. Still, there are questions about whether this will be enough, considering Microsoft recently highlighted the skills shortage it faces in Ireland, as well as issues with the current ICT certification system. The European Commission (EC) says Europe will soon have a shortfall of up to 900,000 ICT workers if nothing is done, and this year will be key to determining whether the issue can be addressed quickly. The goal is to close the digital skills gap by 2020. "The Commission will do its bit but we can't do it alone; companies, social partners, and education players--including at national and regional level--have to stand with us," says EC commissioner Neelie Kroes.
Coming Soon: Control Your Computer With Your Brain via Open Source
Joel Murphy and Conor Russomanno, a Brooklyn, N.Y.-based designer and engineer, respectively, are developing prototype technology that will enable people to control computers with their brains. As part of the OpenBCI project, they are building an affordable electroencephalography (EEG) headset that provides access to high-quality, raw EEG data with minimal power consumption, without blackbox algorithms or proprietary hardware designs. The team hopes to release the third iteration of the OpenBCI board sometime around April 2014; the latest version includes Bluetooth LE connectivity and other improvements. The open source software used to drive the board will be available with interfaces for multiple languages and several existing open source EEG signal-processing applications. Other key parts of the project include the OpenBCI controller, which uses the Texas Instruments ADS1299 analog front-end IC, and the headgear, an open design that can be configured on a per-user basis.
Computers Slowly Getting in Touch With Our Feelings
San Diego Union Tribune (01/13/14) Gary Robbins
Computer scientists are quickly developing new technology involving sensors, applications, and wearable devices to monitor and interact with people. In an interview, Rajesh Gupta, chair of the University of California, San Diego's (UCSD) department of computer science and engineering, discussed the progress being made in machine learning and artificial intelligence. "Your smartphone knows who you are, where you are, where you've been, and a lot about what you're doing," Gupta says. In addition, he says computers are beginning to elicit emotional responses from users. For example, UCSD researcher Javier Movellan is combining information from voice, images, skin, and device-usage patterns to determine users' moods. However, despite all of these advances, Gupta says operating systems will have to behave more like people if they are going to be widely accepted and used. "That means scientists will have to improve the computer's ability to synthesize verbal and nonverbal emotional cues, like a person's laughter or a shrug, or their posture," he notes. Smartphone technology also still has plenty of room to grow in terms of voice, facial, and gesture recognition. Another key issue is trust, according to Gupta. "We can live with giving away some information, but we do care about who is collecting it," he says.
Spin: The Quantum Twist Coming to a Computer Near You
New Scientist (01/13/14) Jon Cartwright
Although conventional data storage technologies are effective, writing and retrieving data can be very time-consuming, which is why a lot of time and money have been dedicated to developing alternatives. One of the most promising technologies is based on electron spin: magnetoresistive RAM (MRAM) uses tiny spin valves as the magnetic bits themselves. Spin-torque MRAM can fill the roles of both traditional RAM and permanent data storage, saving time and energy, and also can help advance computer processors. Unlike currents, spins do not need sustaining, which makes operating transistors with electron spins rather than electrical currents attractive. Spin technology could even be used to mimic the human brain, according to some researchers. The spin memristor is a device whose resistance can be set to any value with a spin current. A single spin memristor could function as a synapse, with its programmable resistance offering a way to set the strengths of connections between neurons, according to Unité Mixte de Physique CNRS/Thales researcher Julie Grollier. "These spin devices have very nearly a one-to-one connection with biological neurons and synapses," notes Purdue University researcher Kaushik Roy.
The Search for the Lost Cray Supercomputer OS
GigaOm.com (01/14/14) Signe Brewster
In an effort to preserve an important piece of computing history, hobbyists Chris Fenton and Andras Tantos are recreating the renowned Cray-1 supercomputer on a desktop scale. The Cray-1 debuted in 1976 as a 5.5-ton C-shaped tower, designed by computer architect Seymour Cray. The replica began in 2010 when Fenton, an electrical engineer who works on modern supercomputers, decided to physically reconstruct the Cray-1 at one-tenth its original size. Because the Cray-1's hardware was documented in detail online, Fenton was able to replicate the design. He then found a board option that could mimic the original Cray computational architecture. However, to make his replica operational, Fenton needed software. None of the code from the original operating system was available online, and neither the Computer History Museum nor the government had analog copies. Eventually, a former Cray employee heard about the project and offered a disk pack containing the final version of the Cray OS, written for the Cray X-MP. Tantos then wrote recovery tools to turn the information on the disk pack into working software, and devised a simulator for the software and peripheral equipment. The Cray OS is now operational, and Fenton is upgrading his desktop system to be compatible with the Cray X-MP OS.
Abstract News © Copyright 2014 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: email@example.com
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.