Association for Computing Machinery
Welcome to the September 30, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE

Computer Algorithm Created to Encode Human Memories
euRathlon 2015 Announces Grand Challenge Winners
Stanford Computer Scientist Christopher Re Named MacArthur Fellow
Google Tries to Make Its Cars Drive More Like Humans
A New Map Traces the Limits of Computation
CCC Whitepaper--Systems Computing Challenges in the Internet of Things
Smaller, Faster, Cheaper, Over: The Future of Computer Chips
NASA, Google Team on Quantum Computing for AI Research
A Light Touch: Embedded Optical Sensors Could Make Robotic Hands More Dexterous
In the Mind of a Student
UAB Research Finds Automated Voice Imitation Can Fool Humans and Machines
Researchers Tout Technology to Make Electronics Out of Old Tires
Three Questions for Facebook CTO Mike Schroepfer

Computer Algorithm Created to Encode Human Memories
Financial Times (09/29/15) Clive Cookson

Researchers at the University of Southern California (USC) and Wake Forest Baptist Medical Center have spent 10 years developing an implant to help a human brain encode memories, using an algorithm to emulate the electrical signaling the brain uses to translate short-term memories into permanent ones. The implant allows signals to bypass a damaged or diseased brain area, which USC's Ted Berger describes as "like being able to translate from Spanish to French without being able to understand either language." The translation algorithm is being tested on nine patients with epilepsy who had electrodes embedded in their brains to treat seizures. As the subjects performed simple tasks, the scientists read the electrical input and output signals in their brains and refined the algorithm until it could predict the correct translation of neural signals with 90-percent accuracy. The next stage of the project will be to transmit the translated signal back to the brain of a subject with hippocampal damage to determine if a bypass of the damaged region is possible. The U.S. Defense Advanced Research Projects Agency is funding the project because of its interest in new methods for helping soldiers recover from memory loss. The researchers also think the results of the trials could eventually help in the treatment of neurodegenerative disorders.


euRathlon 2015 Announces Grand Challenge Winners
Robohub (09/29/15) Marta Palau Franco

The euRathlon consortium has announced the winners of its 2015 Grand Challenge after several days of competition earlier this month in Piombino, Italy. Formed in the wake of the Fukushima nuclear disaster and similar to the Robotics Challenge run by the U.S. Defense Advanced Research Projects Agency, euRathlon offered a combined land, sea, and air challenge to robotics teams from across Europe. Six teams competed in this year's Grand Challenge, which involved three missions with objectives such as finding lost rescue workers, surveying a damaged building, and inspecting pipes both on land and in the sea. The teams had 100 minutes to complete all three missions. First place went to a multi-domain team composed of teams from Cobham Mission Systems, the University of Girona in Spain, and ISEP/INESC TEC in Portugal. The Cobham and ISEP/INESC TEC teams won the Land + Air sub-challenge, while Cobham and the University of Girona won the Sea + Land sub-challenge. The multinational ICARUS team won the IEEE Autonomy and IEEE Multi-Robot Collaboration prizes, while the bebot-team (Bern University of Applied Sciences), the University of Girona, and ICARUS won the Texas Instruments Special Prizes for Innovation. The University of Girona team also won the Marine Trials students prize.


Stanford Computer Scientist Christopher Re Named MacArthur Fellow
Stanford Report (09/30/15) Bjorn Carey

Stanford University computer science professor Christopher Re has been named one of the 2015 recipients of a John D. and Catherine T. MacArthur Foundation fellowship, commonly referred to as a "genius grant." The fellowship includes a five-year, $625,000 stipend. Among the top honors in academia and the creative arts, the fellowship was awarded to Re in recognition of his contributions to data science. The MacArthur Foundation credited Re with "democratizing big-data analytics through open source data-processing products that have the power of machine-learning algorithms but can be integrated into existing and applied database systems." Re helped to develop the DeepDive data-inference system, which has been used to analyze large numbers of genetic and medical studies to help advance drug development. The platform also has been used by the U.S. Defense Advanced Research Projects Agency to sift data on the Dark Web in order to identify and break up human-trafficking networks. Re says he is excited by the award and eager to put it to use pursuing various projects. "Every academic has ideas in his or her drawer that can't get funding because maybe it's too crazy, even though the outcome will be big," Re says.


Google Tries to Make Its Cars Drive More Like Humans
The Wall Street Journal (09/28/15) Alistair Barr; Mike Ramsey

Google is training its driverless cars to behave more like humans, cutting corners, edging into intersections, and crossing double-yellow lines, as it moves toward commercializing its self-driving technology. The vehicles are "a little more cautious than they need to be," says Google's Chris Urmson, who leads the company's effort to develop driverless cars. "We are trying to make them drive more humanistically." Among the hazards the engineers are attempting to address is the cars' tendency to be rear-ended because of their habit of braking to avoid real, but small, risks. Nvidia CEO Jen-Hsun Huang says this problem could be rectified via deep-learning methods that help computers identify images and objects, and then improve over time. To make the cars' turning behavior more instinctive and human-like, Google studied people's turning patterns and built the habit of taking corners more directly and turning earlier into its algorithms. Another enhancement addressed the vehicles' refusal to cross a double-yellow line, in keeping with their rules-based programming. Because this led to Google's autos stopping indefinitely when parked cars were blocking the road, Google revised the program so the cars may drive over double-yellow lines in such scenarios.
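
The double-yellow-line change is essentially an exception grafted onto a rule-based policy. Here is a minimal illustrative sketch; the predicates and names are hypothetical, not Google's code:

```python
# Hypothetical sketch of a rule exception in a rules-based driving policy.
# The predicates and names are invented; they only illustrate the idea.
from dataclasses import dataclass

@dataclass
class RoadState:
    lane_blocked: bool    # e.g., parked cars occupy the travel lane
    oncoming_clear: bool  # opposing lane is free long enough to pass

def may_cross_double_yellow(state: RoadState) -> bool:
    # Base rule: never cross a double-yellow line. Exception: the lane is
    # blocked and the opposing lane is clear, so the car no longer waits
    # indefinitely behind parked vehicles.
    return state.lane_blocked and state.oncoming_clear

print(may_cross_double_yellow(RoadState(lane_blocked=True, oncoming_clear=True)))   # True
print(may_cross_double_yellow(RoadState(lane_blocked=False, oncoming_clear=True)))  # False
```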


A New Map Traces the Limits of Computation
Quanta Magazine (09/29/15) John Pavlus

At a recent symposium, Massachusetts Institute of Technology (MIT) researchers presented a mathematical proof that the current best algorithm for computing edit distance--the minimum number of insertions, deletions, and substitutions needed to turn one string into another--is optimal, meaning it is not mathematically possible to compute edit distance more efficiently; but this holds only if the strong exponential time hypothesis (SETH) is valid. Stanford University professor Ryan Williams thinks SETH is false and is attempting to refute it. However, Williams contends SETH is useful as a tool for plotting the topography of computational complexity, regardless of whether it is true. SETH is a hardness assumption about Boolean satisfiability (SAT), the problem of deciding whether the variables of a logical formula can be set so the formula evaluates to true. Most computer scientists believe the only general-purpose technique for solving a SAT problem is to try every possible setting of the variables individually. SETH's ramification is that a fundamentally better general-purpose SAT algorithm is impossible, and the MIT researchers demonstrated a link between the complexity of edit distance and that of k-SAT: if SETH holds, the quadratic-time edit distance algorithm cannot be beaten, and at quadratic time, real tasks such as comparing genomes, whose lengths run into the billions of characters, could take a millennium. "If I want to refute SETH, I just have to solve edit distance faster," Williams says.
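
For context, the standard edit distance algorithm is the textbook quadratic-time dynamic program; a minimal Python sketch of it, which the MIT result says cannot be substantially beaten unless SETH fails:

```python
# Classic quadratic-time dynamic program for edit distance (Wagner-Fischer).
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    prev = list(range(n + 1))          # distances from a[:0] to every prefix of b
    for i in range(1, m + 1):
        curr = [i] + [0] * n           # distance from a[:i] to b[:0] is i deletions
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete a[i-1]
                          curr[j - 1] + 1,     # insert b[j-1]
                          prev[j - 1] + cost)  # substitute (or match)
        prev = curr
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3
```

The dynamic-programming table has len(a) x len(b) cells, so two billion-character genomes mean on the order of 10^18 operations, which is where the millennium-scale estimate comes from.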


CCC Whitepaper--Systems Computing Challenges in the Internet of Things
CCC Blog (09/28/15) Helen Wright

The Computing Community Consortium's (CCC) Computing in the Physical World Task Force, led by CCC council member Ben Zorn of Microsoft Research, recently released a study examining the core research challenges presented by the Internet of Things (IoT). The study also provides recommendations for addressing the limitations of existing systems, practices, tools, and policies. The study found IoT systems will have dynamic membership and will operate in unknown, unpredictable environments that must be assumed to contain adversarial elements. The study recommends investment in research to facilitate the construction and deployment of multi-component systems with complex and dynamic dependencies. The task force also calls for support of research into the unique challenges and opportunities in IoT security, including minimal operating systems that give IoT devices smaller attack surfaces, new ways to detect and prevent anomalous network traffic, and high-level policy languages for specifying permissible communication patterns. Among other suggestions, the group says there should be investment in research on cyber-human systems that reflects how humans understand and interact with the physical world and with semi-autonomous systems. Architectures and solutions that transcend specific application domains are another recommended area of study cited by the task force.


Smaller, Faster, Cheaper, Over: The Future of Computer Chips
The New York Times (09/26/15) John Markoff

Computer chip speed and energy efficiency upgrades cannot be sustained for much longer, with technologists expecting to reach a physical limit to shrinking semiconductor size and increasing transistor density by 2025. The end of Moore's Law, which these researchers predict, could have a negative impact on the computing industry--or any industry that depends on highly reliable, low-cost electronics. "The most fundamental issue is that we are way past the point in the evolution of computers where people auto-buy the next latest and greatest computer chip, with full confidence that it would be better than what they've got," says former Intel engineer Robert P. Colwell. Postponing the end of Moore's Law--or sidestepping it entirely--may involve chip companies using software or designs that extract more computing power from the same number of transistors. Replacing silicon with more exotic materials is another possibility, which could lead to faster and smaller transistors, new types of memory storage, and optical communications links, according to Efficient Power Conversion Corp. CEO Alex Lidow. Extreme ultraviolet lithography might enable the manufacture of smaller wires and chip features, if its use in commercial production can be demonstrated. The end of this decade should witness the emergence of ultra-low-power chips, some of which may not even need batteries, a development that will likely force product designers to rethink their approaches.


NASA, Google Team on Quantum Computing for AI Research
FierceCIO (09/28/15) Robert Bartley

Google, the U.S. National Aeronautics and Space Administration (NASA), and the Universities Space Research Association (USRA) have reached an agreement with D-Wave giving the three organizations joint access to the latest D-Wave quantum computing technology as it becomes available over the next seven years. Google, NASA, and USRA will use the D-Wave systems to conduct research into artificial intelligence and machine learning at the Quantum Artificial Intelligence Lab at NASA's Ames Research Center. The partner organizations have been working with D-Wave at the Ames center since 2013, using D-Wave hardware to research issues in Web search, speech recognition, planning and scheduling, and air-traffic management, as well as space exploration and support operations. D-Wave was one of the first companies to put quantum computing systems on the market, although some experts question whether these systems represent a meaningful step up from traditional systems. "The new agreement is the largest order in D-Wave's history, and indicative of the importance of quantum computing in its evolution toward solving problems that are difficult for even the largest supercomputers," said D-Wave CEO Vern Brownell.


A Light Touch: Embedded Optical Sensors Could Make Robotic Hands More Dexterous
Carnegie Mellon News (PA) (09/28/15) Byron Spice

Carnegie Mellon University (CMU) researchers have developed a three-fingered soft robotic hand with multiple embedded fiber-optic sensors, as well as a new type of stretchable optical sensor. The researchers used fiber optics to embed 14 strain sensors into each of the fingers of the robotic hand, enabling it to determine where its fingertips contact an object and to detect forces of less than a tenth of a newton. The researchers think the material could eventually be used in a soft robotic skin to provide even more feedback. "If you want robots to work autonomously and to react safely to unexpected forces in everyday environments, you need robotic hands that have more sensors than is typical today," says CMU professor Yong-Lae Park. All of the sensors in each finger are connected by four fibers, although a single fiber could theoretically accomplish the task. Each of the hand's fingers mimics the skeletal structure of a human finger, with a fingertip, middle node, and base node connected by joints. The hand's "bones" are three-dimensionally printed in hard plastic and include eight force-detecting sensors. Each of the three finger sections is covered with a soft silicone rubber skin embedded with six sensors that detect where contact is made.
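
In software terms, the contact-localization step reduces to calibrating and thresholding the per-finger strain readings. A hypothetical sketch follows; only the 14-sensors-per-finger layout and the 0.1-newton sensitivity come from the article, while the calibration constant and readings are invented for illustration:

```python
# Hypothetical sketch of the contact-localization step. The 14-sensor-per-
# finger layout and 0.1 N sensitivity come from the article; the calibration
# constant and the readings below are invented for illustration.
NEWTONS_PER_STRAIN_UNIT = 0.05  # assumed per-sensor calibration factor
CONTACT_THRESHOLD_N = 0.1       # article: forces below ~0.1 N are detectable

def detect_contacts(strain_readings):
    """Return (finger, sensor, force) for every sensor whose calibrated
    reading crosses the contact threshold."""
    contacts = []
    for finger, readings in enumerate(strain_readings):
        for sensor, raw in enumerate(readings):
            force = raw * NEWTONS_PER_STRAIN_UNIT
            if force >= CONTACT_THRESHOLD_N:
                contacts.append((finger, sensor, force))
    return contacts

readings = [[0.0] * 14 for _ in range(3)]  # three fingers, 14 sensors each
readings[1][0] = 4.0                       # ~0.2 N pressing on finger 1's tip
print(detect_contacts(readings))           # [(1, 0, 0.2)]
```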


In the Mind of a Student
Inside Higher Ed (09/25/15) Jacqueline Thomsen

University of Wisconsin-Madison (UW-Madison) researchers are using a combination of psychology and computer science to determine how best to tailor teaching to individual students. The research aims to let teachers and professors immediately see which subjects a student is struggling with and address those needs, rather than teaching a whole class of students whose difficulties vary widely. The researchers call the technique "machine teaching": an equation represents a student's mind and tells the teacher the student's specific learning style and needs in the classroom. "What really happens is we sit on top of [that equation], so anybody willing to come forward and say, 'hey, here's how we think the human mind is computing,' we're going to take that with an educational goal and the actual machine teaching, trying to come up with the best lesson," says UW-Madison professor Jerry Zhu. The researchers are focused on the theoretical side of the technology, examining how to create an optimal lesson for a student given a correct model of how the student learns. To test the machine-teaching technique, the team is using cognitive models created by psychologists of how children add simple numbers. Machine teaching also could be used in higher education, especially in small group discussions or labs, according to the researchers.
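
Machine teaching inverts machine learning: given a model of how the student infers, compute the lesson that drives the student to the target concept fastest. A minimal sketch for a 1-D threshold concept; the learner model here is a stock textbook example, not one of the UW-Madison cognitive models:

```python
# Minimal machine-teaching sketch for a 1-D threshold concept. The learner
# model is a stock textbook example, not one of the UW-Madison models.
def learner(examples):
    """Assumed learner: places its threshold midway between the largest
    negatively labeled point and the smallest positively labeled point."""
    negs = [x for x, y in examples if y == 0]
    poss = [x for x, y in examples if y == 1]
    return (max(negs) + min(poss)) / 2

def optimal_lesson(target: float, eps: float = 0.01):
    """The teacher, knowing the learner's rule, needs only two examples
    straddling the target threshold -- far fewer than random sampling."""
    return [(target - eps, 0), (target + eps, 1)]

lesson = optimal_lesson(0.37)
print(lesson)           # two examples bracketing 0.37
print(learner(lesson))  # recovers ~0.37 from just two examples
```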


UAB Research Finds Automated Voice Imitation Can Fool Humans and Machines
UAB News (09/25/15) Katherine Shonesy

University of Alabama at Birmingham (UAB) researchers have found both automated and human verification for voice-based user authentication systems are vulnerable to voice impersonation attacks. Using an off-the-shelf voice-morphing tool, they built an attack designed to penetrate both kinds of verification. The research explores how an attacker in possession of audio samples of a victim's voice could compromise the victim's security, safety, and privacy. "Because voice is a characteristic unique to each person, it forms the basis of the authentication of the person, giving the attacker the keys to that person's privacy," says UAB professor Nitesh Saxena. As a case study, the researchers investigated the consequences of stolen voices in two applications and contexts that rely on voice as the basis for authentication. The first is a voice-biometrics system, which uses the unique features of an individual's voice to authenticate that individual. The second is human communication itself. "The attacker could post the morphed voice samples on the Internet, leave fake voice messages to the victim's contacts, potentially create fake audio evidence in the court, and even impersonate the victim in real-time phone conversations with someone the victim knows," Saxena says.


Researchers Tout Technology to Make Electronics Out of Old Tires
Network World (09/25/15) Michael Cooney

Researchers at the U.S. Department of Energy's Oak Ridge National Laboratory and Drexel University have developed a process for turning discarded tires into electrodes for supercapacitors. The process involves soaking crumbs of irregularly shaped tire rubber in concentrated sulfuric acid, washing the rubber, placing it in a tubular furnace under a flowing nitrogen atmosphere, and gradually increasing the temperature from 400 degrees Celsius to 1,100 degrees. The material is then mixed with potassium hydroxide, baked, washed with deionized water, and oven-dried. The treatment yields a material that can be mixed with polyaniline, an electrically conductive polymer, to create the finished product: flexible polymer-carbon composite films. "Supercapacitors with this technology in electrodes saw just a 2-percent drop after 10,000 charge/discharge cycles," the researchers note. They say the devices could be used on the electrical grid, in cars, or in other electronics applications. "Tires will eventually need to be discarded, and our supercapacitor applications can consume several tons of this waste," says research team leader Parans Paranthaman. "Combined with the technology we've licensed to two companies to convert scrap tires into carbon powders for batteries, we estimate consuming about 50 tons per day."


Three Questions for Facebook CTO Mike Schroepfer
Technology Review (09/25/15) Rachel Metz

In an interview, Facebook chief technology officer Mike Schroepfer discusses a vision of mainstream virtual reality (VR) technology adoption, to the point where it is used as a social tool. "This is...the thing that will take the longest to develop, because to have a socially engaging product you have to have both people and the technology," he says. "I think you may see the equivalent of [local area network] parties and other things." Schroepfer imagines a slate of VR-enabled entertainment experiences that are mainly "a foil to give something for people to focus on and have a conversation," with more direct and interactive applications coming later. Although the high cost of VR equipment currently is a stumbling block to mass-market adoption, Schroepfer is confident the hardware will become more affordable and more readily accessible over time. For example, he thinks the PC needed to render for the Oculus Rift, which currently costs about $1,000, will likely be half as expensive within two years. It remains uncertain whether Facebook will release a version of Gear VR that works with many different smartphones, instead of only one Samsung smartphone, according to Schroepfer. "It is hard to build something generic that is actually a good experience," he says.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe