Association for Computing Machinery
Welcome to the November 11, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Carbon Nanotubes in a Race Against Time to Replace Silicon
Computerworld (11/10/15) Lamont Wood

IBM Research and Stanford University are rushing to make carbon nanotubes (CNTs) a viable replacement for silicon in computer chips by the time the industry can no longer uphold Moore's Law. "We feel that CNTs have a chance to possibly replace silicon transistors sometime in the future--if critical problems are solved," says IBM Research's Supratik Guha. He cites the still-unaddressed challenge of laying down chip circuits with CNTs that match the size of silicon elements. Five or six parallel CNTs are required for a single link, and they must be laid 6 nm or 7 nm apart to minimize interference. Another issue for Guha is the purity of the CNT fibers, as circuit building demands single-walled CNTs; tubes with two or more walls have different electrical properties and act as impurities. Stanford graduate student Max Shulaker says the chief hindrance to CNT commercialization is the need to reduce contact resistance, along with better doping of the tubes to be used as transistors. Guha says there is a critical time factor in perfecting CNTs, as he expects improvements in silicon to halt within perhaps three or four more chip generations. "We need to demonstrate the practicality of CNT technology in the next two to three years, or the window of opportunity will close and the technology will not be there when needed," Guha says.


System Recognizes Objects Touched by User, Enabling Context-Aware Smartwatch Apps
Carnegie Mellon News (PA) (11/09/15) Byron Spice

Researchers at Carnegie Mellon University and Disney Research have developed EM-Sense, technology designed to enable smartwatches to automatically recognize what kind of objects users interact with and touch. EM-Sense uses the human body as an antenna to detect the electromagnetic noise emitted by electrical and electronic devices. The technology is able to use this noise to identify what object is being touched with a high degree of accuracy. It can tell whether a person is touching a laptop or a food processor, a power tool or an electric door lock, and can even distinguish between different cell-phone models. The technology also works with large, non-powered conductive objects such as doors and ladders. The researchers note the technology is relatively simple, with a proof-of-concept sensor assembled from off-the-shelf components, and could easily be integrated into smartwatches. EM-Sense could enable a smartwatch to track activity with more granularity and could support context-aware applications such as starting timers or activating certain apps when particular appliances are touched, or even acting as an authentication token when using a laptop or other device. The researchers presented EM-Sense Monday at the ACM Symposium on User Interface Software and Technology (UIST 2015) in Charlotte, NC.
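The general recipe described here, capturing a burst of body-coupled electromagnetic noise, turning it into frequency-domain features, and training a classifier to name the touched object, can be sketched roughly as follows. The feature scheme, the random-forest model, and the placeholder training data are illustrative assumptions, not details of the EM-Sense pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def em_features(samples, n_bins=64):
    # Windowed magnitude spectrum of one capture of body-coupled EM noise
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    bins = np.array_split(spectrum, n_bins)           # coarse spectral bins
    feats = np.array([b.mean() for b in bins])
    return feats / (feats.sum() + 1e-9)               # normalize away gain changes

# Placeholder training data: one labeled capture per touch event.
rng = np.random.default_rng(0)
X = np.stack([em_features(rng.normal(size=2048)) for _ in range(200)])
y = rng.choice(["laptop", "door_lock", "power_tool"], size=200)  # placeholder labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:1]))      # name the object for a new touch capture
```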


Amplifying--or Removing--Visual Variation
MIT News (11/05/15) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers presented two studies at last week's ACM SIGGRAPH Asia conference in Kobe, Japan, detailing methods for either amplifying or removing small variations in digital images. The first study presents an algorithm that seeks repeated forms within an image, and then either irons out variations to produce idealized but still natural-looking pictures, or magnifies them so they are more evident. The software compares patches of the source image at different scales, identifies those that appear to be visually similar, averages the visually similar patches, and uses the averages to build a new, highly regular version. The algorithm then identifies a mathematical function that manipulates the source image's pixels to generate the best possible approximation of the target image. A back-and-forth iteration process constructs ever more natural-looking target images and ever more regular mathematical conversions until the two converge, yielding a regular image and a transformation that can be run in reverse when exaggeration rather than smoothing is wanted. The second study describes an algorithm that exaggerates divergences from ideal geometries by identifying the geometric shapes indicated by color gradations in an image. It extracts a narrow band of the image that traces the curve defining each of those shapes, and then straightens out the bands. The software uses local color variations to build a new, more erratic curve, which it can exaggerate and then reinsert into the image.
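A rough sketch of the patch-averaging step, in which visually similar patches are found and averaged to build a more regular target image, might look like the following. The patch size, stride, neighbor count, grayscale NumPy representation, and brute-force similarity search are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def regularize(image, patch=8, stride=4, n_similar=8):
    """Build a more 'regular' target by replacing each patch with the mean of
    its most similar patches; overlapping contributions are averaged back."""
    H, W = image.shape
    coords, patches = [], []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            coords.append((y, x))
            patches.append(image[y:y + patch, x:x + patch].ravel())
    P = np.stack(patches).astype(float)
    target = np.zeros(image.shape, dtype=float)
    weight = np.zeros(image.shape, dtype=float)
    for i, (y, x) in enumerate(coords):
        # Brute-force nearest patches keep the sketch short (slow on big images)
        nearest = np.argsort(np.sum((P - P[i]) ** 2, axis=1))[:n_similar]
        mean_patch = P[nearest].mean(axis=0).reshape(patch, patch)
        target[y:y + patch, x:x + patch] += mean_patch
        weight[y:y + patch, x:x + patch] += 1.0
    covered = weight > 0
    out = image.astype(float).copy()
    out[covered] = target[covered] / weight[covered]
    return out

def amplify(image, k=2.0):
    # k = 0 removes variation (returns the regularized image);
    # k > 1 exaggerates each pixel's deviation from the regularized version.
    target = regularize(image.astype(float))
    return target + k * (image - target)
```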


Markets for Science
Harvard University (11/09/15) Leah Burrows

Harvard University professor Yiling Chen contributed to an international research team that used prediction markets to estimate the reproducibility of 44 experiments published in prestigious psychology journals. "This research shows for the first time that prediction markets can help us estimate the likelihood of whether or not the results of a given experiment are true," Chen says. "This could save institutions and companies time and millions of dollars in costly replication trials and help identify which experiments are a priority to re-test." In prediction markets, investors forecast future events by purchasing shares in an event's outcome, and the market price indicates what the crowd thinks the likelihood of the event is. In partnership with the Reproducibility Project: Psychology, the researchers set up a market for each experiment and provided a pool of participating psychologists with $100 to invest. The prediction markets correctly anticipated the replication outcome in 71 percent of the cases studied; overall, 61 percent of the replications failed to reproduce the original results. "Our research shows that there is some 'wisdom of the crowd' among psychology researchers," says University of Virginia professor Brian Nosek. "Prediction accuracy of 70 percent offers an opportunity for the research community to identify areas to focus reproducibility efforts to improve confidence and credibility of all findings."
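As a concrete illustration of how a market price can be read as a crowd probability, the widely used logarithmic market scoring rule (LMSR) prices a binary "will this result replicate?" contract as sketched below. This is a generic market-maker mechanism, not necessarily the market design used in the study; the liquidity parameter and trade sizes are arbitrary.

```python
import math

class LMSRMarket:
    """Minimal LMSR market maker for a binary replication question
    (illustrative mechanism; the study's market design may differ)."""
    def __init__(self, b=100.0):
        self.b = b              # liquidity parameter
        self.q = [0.0, 0.0]     # outstanding shares: [replicates, fails]

    def cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome=0):
        # Instantaneous share price, interpretable as the crowd's probability
        exps = [math.exp(x / self.b) for x in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        # Amount a trader pays to buy `shares` of the given outcome
        new_q = list(self.q)
        new_q[outcome] += shares
        paid = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return paid

market = LMSRMarket()
market.buy(0, 30)                  # traders accumulate 'replicates' shares
print(round(market.price(0), 2))   # implied probability of replication (~0.57)
```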


Toyota's A.I. Research Efforts Could Mean Cars That Anticipate Traffic, Pedestrian Moves
Computerworld (11/11/15) Sharon Gaudin

Toyota is making high-profile investments in artificial intelligence (AI) research and development that could yield many benefits in human-machine interaction. In a recently announced partnership with Stanford University and the Massachusetts Institute of Technology, Toyota will give each institution $25 million over five years to set up AI research centers. Stanford AI lab executive director Steve Eglash says these efforts could lead to cars that function more safely on city streets and in inclement weather, as well as robotic assistants for the elderly and infirm. He says Toyota contributes not only financial support, but also "a unique perspective on the future of the AI industry and robotics." Data is another important ingredient Toyota brings, which Eglash says can be applied toward making more contextual and human-centered AI. With the car industry having already introduced self-parking autos and other driving-assistance innovations, Eglash thinks in a few years cars will be able to predict traffic and road conditions minutes before the vehicle reaches them. He also expects the research to lead to cars that can anticipate the actions of cyclists and pedestrians and take precautionary measures. Carnegie Mellon University professor Manuela Veloso sees such initiatives as the beginning of "the reality of AI in the physical world."


Dartmouth Researchers Create Automated Tool for Dialect Analysis
Dartmouth College (11/09/15) John Cramer

Dartmouth College researchers have developed Dartmouth Linguistic Automation (DARLA), an automated, open-access Web application that generates transcriptions of uploaded speech using speech recognition, filters out noisy tokens, and measures and plots formant frequencies. Formants are the resonance frequencies of vowels; two people with different accents might produce their vowels with very different formant frequencies, which gives linguists a precise, quantitative way to characterize accents. "Fully automated vowel extraction methods still have a long way to go, but as [automatic speech recognition] technologies continue to improve, we believe the DARLA system will be useful for more and more sociolinguistic research questions," says Dartmouth professor Jim Stanford. DARLA is based on an earlier program called Forced Alignment & Vowel Extraction (FAVE), developed by University of Pennsylvania researchers, which automatically aligns a transcript with the speech and measures the formant values. However, FAVE created a bottleneck because the transcripts still had to be produced by humans. DARLA was developed to alleviate this bottleneck and quickly analyze speech recordings. "We anticipate that a large amount of sociolinguistic research in the future will eventually use fully automated methods like DARLA for measuring vowel data, and so our work helps take a step in that direction," Stanford says.
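Formant measurement of the kind DARLA automates is commonly done by fitting a linear predictive coding (LPC) model to a short vowel frame and reading resonance frequencies off the roots of the prediction polynomial. The sketch below shows that standard textbook technique; the pre-emphasis coefficient, model order, and frequency limits are illustrative choices, not DARLA's actual settings.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def estimate_formants(frame, sr, order=None):
    """Crude LPC-based formant estimate for one short, voiced vowel frame
    (e.g., ~25 ms of audio at sr = 16000); the first two or three returned
    frequencies roughly approximate F1, F2, F3."""
    if order is None:
        order = int(2 + sr / 1000)                    # rule-of-thumb LPC order
    x = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])   # pre-emphasis
    x = x * np.hamming(len(x))
    # Autocorrelation method: solve the Toeplitz (Yule-Walker) system for the
    # predictor coefficients a_1..a_order
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
    # Roots of A(z) = 1 - sum a_k z^{-k} give the vocal-tract resonances
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]
    freqs = np.angle(roots) * sr / (2 * np.pi)
    return sorted(f for f in freqs if 90 < f < sr / 2 - 50)
```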


Can Robots Come to Your Rescue in a Burning Building?
USC News (11/06/15) Sam Corey

Researchers at the University of Southern California Viterbi School of Engineering are developing robots that could help rescue people from fires. The researchers are focusing on the networking and movement of autonomous robots as well as developing algorithms the robots could use to communicate with each other. If successful, the robots could accurately communicate the details of a room with the least human input possible. The researchers are testing whether mobile robots can use radio communication and sensors to help each other move around a room full of obstacles. "The vision is the robots would notify firefighters where to go and where not to go," says Ph.D. student Jason Tran. "These robots could detect a survivor's location or determine if the temperature and atmospheric conditions of a specific room may be too much for a human to handle." Fellow Ph.D. student Pradipta Ghosh says rescue robots may eventually help with firefighting in dangerous situations. Tran and Ghosh hope the mobile robots, along with the wireless networks needed for communication, will be functional by next year, while Ghosh says the biggest hurdle is linking the robots together so they work as a unit. Doctoral student Shangxing Wang and colleagues are focusing on developing algorithms to withstand anticipated wireless communication problems, such as a collapsed ceiling blocking the relay of a signal.


Improving Individual Skills Supported by Big Data
University of Tsukuba (Japan) (11/06/2015)

Running is a popular sport, but most runners do not receive formal training. University of Tsukuba professor Shinichi Yamagiwa and his colleagues have developed a system for improving running skills based on big data analysis. Yamagiwa, together with Osaka University professor Yoshinobu Kawahara and Mizuno Corporation, jointly developed a technology that teaches ideal running motions based on big data collected by sensors and videos. The research team used an artificial intelligence (AI) technique to analyze the running motion data of about 2,000 runners gathered by Mizuno and expressed the motions as numerical skill values. The researchers found the movements of the elbows, knees, and ankles differed between high-ranking marathon runners and beginners. Based on these findings, they developed a technology called "skill grouping" that displays the effects of movements as easy-to-understand scores. The researchers note skill grouping also can be used for time-sequential healthcare and motor-capacity control, such as during conditioning and rehabilitation. Because it converts movements into objective numerical values, skills that have previously been difficult to generalize can now be handled by information devices. The researchers say skill grouping is expected to lead to a new kind of AI system supporting the transmission of traditional skills.
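One generic way to group runners by motion features and assign a numerical skill score, in the spirit of the "skill grouping" described above, is ordinary clustering plus distance to an elite reference group. The synthetic features, cluster count, and scoring rule below are illustrative assumptions, not the Mizuno/Tsukuba method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder motion features: one row per runner, columns such as elbow,
# knee, and ankle angle ranges extracted from sensors or video.
rng = np.random.default_rng(0)
features = rng.normal(size=(2000, 6))

X = StandardScaler().fit_transform(features)
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Assume (for illustration) cluster 0 contains the highest-ranking runners;
# a simple numerical skill score is closeness to that cluster's centroid.
elite_centroid = X[groups == 0].mean(axis=0)
skill_score = -np.linalg.norm(X - elite_centroid, axis=1)
print(skill_score[:5])
```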


Queen's University Professor to Unveil Self-Levitating Displays, Allowing Physical Interactions With Mid-Air Virtual Objects
Queen's University (Canada) (11/05/15) Chris Armes

Queen's University researchers have developed BitDrones, an interactive swarm of flying three-dimensional (3D) pixels they say could revolutionize how people interact with virtual reality. The system enables users to explore virtual 3D information by interacting with physical self-levitating building blocks. The researchers say BitDrones are the first step toward creating interactive self-levitating programmable matter using swarms of nano quadcopters. The researchers created three types of BitDrones, each representing self-levitating displays of distinct resolutions. PixelDrones are equipped with one light-emitting diode and a small dot-matrix display. ShapeDrones are slightly bigger, include a lightweight mesh and a 3D-printed geometric frame, and serve as building blocks for complex 3D models. DisplayDrones have a curved, flexible high-resolution touchscreen, a forward-facing video camera, and an Android smartphone board. All three BitDrones include reflective markers, enabling them to be individually tracked and positioned in real time via motion-capture technology. In addition, the system tracks the user's hand motion and touch, so users can manipulate the voxels in space. "We call this a real reality interface rather than a virtual reality interface," says Queen's University professor Roel Vertegaal.


Vanderbilt's Medical Capsule Robots' Hardware, Software Goes Open Source
Research News @ Vanderbilt (11/05/15) Heidi Hall

Vanderbilt University researchers have made their capsule robot hardware and software open source. The capsule robots are small enough to be swallowed, and could be used for preventative screenings and to diagnose and treat a range of internal diseases. The researchers compare the capsules to Lego bricks. "We wanted to provide the people working in this field with their own Lego bricks for their own capsules," says Vanderbilt professor Pietro Valdastri. Open sourcing the technology means other research groups with hypotheses about how to use the capsules will not have to redesign boards and interfaces from scratch, so they can reach the prototyping stage faster. The medical capsule robots can be manipulated to perform internal tasks instead of just passing through the body and recording video. The hardware modules handle computation, wireless communication, power, sensing, and actuation. In addition, each module is designed to interface with new modules developed by other research groups. "Our focus is the design environment, not the software per se, with the goal of easing the learning curve for new researchers and engineers who start in this field," says Vanderbilt professor Akos Ledeczi.


South Korean Researchers Develop User-Friendly, 3D Printing Tech
ZDNet (11/05/15) Philip Iglauer

Researchers at South Korea's Electronics and Telecommunications Research Institute (ETRI) say they have developed a handheld three-dimensional (3D) scanner that generates the data required for 3D printing. The researchers say the technology is a breakthrough that could lead to new user-friendly applications for the general public. The 3D printing technology includes new features such as simulation tools, 3D scanning, and content creation. The technology relies on graphical user interface design functions and conventional tools such as a scroll bar, height and width attributes, and specified target models. In addition, the handheld 3D scanner is equipped with geometric correction between multiple cameras and line lasers, precise real-time detection of scanner positions, and a real-time preview of scanned results. The device's simulation feature checks an object's durability and stability before it is fabricated. The ETRI researchers also leveraged the technology to develop a low-cost 3D scanner for mobile devices. "We plan to make more mobile apps and cloud services for non-professionals in order to make 3D printing an everyday resource for people at school or work," says an ETRI spokesperson.


Shields Up: CSU Researchers Are Making the Internet More Secure
SOURCE (CO) (11/03/15) Anne Ju Manning

A multi-disciplinary Colorado State University (CSU) research team is developing a new line of defense against distributed denial-of-service (DDoS) attacks, with $2.7 million in support from the U.S. Department of Homeland Security. The team, which consists of researchers in the fields of computer science, statistics, and computer information systems, is developing a defense service that can identify and protect against large-scale DDoS attacks. The Network Membrane (NetBrane) project leverages evolving cybersecurity capabilities with the goal of forming a deployable "shield" against DDoS attacks. NetBrane filters Internet traffic at a speed of 100 Gbps and makes use of rapidly expanding cloud resources that allow for flexibility in diverting traffic when under attack; for example, by sending traffic to virtual machines on the cloud. NetBrane also is using software-defined networking to deploy very fine control of switches and routers across the Internet. CSU co-principal investigators Stephen Hayne and Haonan Wang are designing algorithms for anomaly detection in Internet traffic. By applying statistical analyses and parallel cloud-based analytics, they are forming automated techniques to both predict and detect attacks within seconds.
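A minimal example of the general class of technique the CSU team is applying, statistical anomaly detection on traffic measurements, is an exponentially weighted baseline with a deviation threshold. The parameters and the simple packets-per-second signal below are assumptions for illustration, not NetBrane's actual models.

```python
class RateAnomalyDetector:
    """Illustrative per-second traffic anomaly detector using an exponentially
    weighted moving average and variance (a generic technique, not NetBrane's
    published design)."""
    def __init__(self, alpha=0.05, threshold=4.0):
        self.alpha = alpha          # how quickly the baseline adapts
        self.threshold = threshold  # deviations (in std devs) that trigger an alert
        self.mean = None
        self.var = 0.0

    def observe(self, packets_per_second):
        if self.mean is None:
            self.mean = float(packets_per_second)
            return False
        diff = packets_per_second - self.mean
        # Flag when the new rate falls far outside the learned baseline
        anomalous = self.var > 0 and abs(diff) > self.threshold * self.var ** 0.5
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomalous

detector = RateAnomalyDetector()
for rate in [1000, 1020, 990, 1010, 250000]:   # last sample mimics a flood
    print(detector.observe(rate))
```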


Teaching Machines to Learn on Their Own
Scientific American (11/10/15) Larry Greenemeier; Steve Mirsky

In an interview, Xerox Palo Alto Research Center CEO Stephen Hoover discusses the swift changes machine learning is undergoing. He says computers' growing ability "to understand in much deeper ways what it is that we're asking and trying to do" is starting to be incorporated into products such as the Nest thermostat, which has a built-in agent that learns from user behavior and infers context so it can anticipate how to operate. Hoover says machine learning involves the machine deducing the right answer from data input and programming itself, instead of the programmer breaking down a task into a series of steps. "You're going to show the computer a bunch of instances and you're going to label it, and it's going to learn how to do it," he says. "There's a core code which is that learning algorithm, and then that's applied to multiple contexts." Hoover credits Moore's Law with enabling continued advances in machine learning. "Hardware not only begets the capability to create new kinds of software like machine learning, but also is creating new ways to sense, measure, and control the world," he says. "And that feedback loop is again one of the big changes that we're going to see coming."
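Hoover's "show the computer a bunch of labeled instances" description corresponds to ordinary supervised learning, which a few lines of code make concrete. The dataset and model below are generic illustrations, not anything specific to PARC's work.

```python
# The same learning algorithm, given a labeled dataset, effectively programs
# itself to classify new examples instead of following hand-written rules.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                    # labeled instances
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn from labels
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```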


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]