Association for Computing Machinery
Welcome to the January 22, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


Completion Rates Aren't the Best Way to Judge MOOCs, Researchers Say
The Chronicle of Higher Education (01/22/14) Jennifer Howard

Researchers at Harvard University and the Massachusetts Institute of Technology (MIT) say that completion rate alone is not the best way to measure the success of massive open online courses (MOOCs). "Course certification rates are misleading and counterproductive indicators of the impact and potential of open online courses," according to the researchers, who recently released the first in a series of papers on MOOCs. The papers draw on data from 17 MOOCs offered by Harvard and MIT in 2012 and 2013. The researchers found that just 5 percent of the more than 840,000 students who signed up for the 17 MOOCs earned a certificate of completion, while 35 percent never viewed any of the course materials. They also found that 66 percent of all registrants already held a bachelor's degree or higher, 29 percent were female, and 3 percent were from underdeveloped countries. However, the researchers caution against using such statistics alone to evaluate MOOCs. Harvard professor Andrew Dean Ho notes that some students who register for MOOCs have no intention of completing them, and some instructors do not emphasize completion as a priority.


Robots as Platforms?
CORDIS News (01/20/14)

The European Union is funding a research project, launched in December 2013, that will develop a platform for enabling robots to deliver smart, user-friendly robotic applications. The Robotic Applications for Delivering Smart User Empowering Applications (RAPP) project will work on the computational and storage capabilities of robots, and enable machine-learning operations, distributed data collection and processing, and knowledge-sharing among robots. The research partners will build infrastructure to help developers create robotic applications (RApps) that meet the individual needs, capabilities, and expectations of the elderly and other people who require support in their daily lives, while respecting their privacy and autonomy. The project will build a repository that will enable robots to download RApps and upload useful information as needed. The partners also will develop ways to improve knowledge transfer and reuse between humans and robots, and among other artificial systems. The initiative could encourage the adoption of small home and service robots as helpers and companions. The three-year RAPP project has nearly 2.5 million euros in funding.


Understanding Collective Animal Behavior May Be in the Eye of the Computer
NYU-Poly News (01/16/14)

New York University Polytechnic School of Engineering professor Maurizio Porfiri led an international team of researchers in a study that applies machine learning to the understanding of social behavior in animal species. The researchers created a framework to apply humans' innate ability to recognize behavior patterns to machine-learning techniques. The team used an existing machine-learning method called isometric mapping (ISOMAP) to determine if the algorithm could analyze video footage of a flock of flying birds, register the aligned motion, and embed the information on a low-dimensional manifold to visually display the properties of the behavior. "We wanted to put ISOMAP to the test alongside human observation," Porfiri says. "If humans and computers could observe social animal species and arrive at similar characterizations of their behavior, we would have a dramatically better quantitative tool for exploring collective animal behavior than anything we've seen." Using video of five social species -- ants, fish, frogs, chickens, and humans -- the team compared human rankings to the ISOMAP manifolds, and found that results were highly similar. The team believes the work represents a breakthrough in understanding social animal behaviors, and will conduct further research on more subtle aspects of collective behavior, such as the chirping of crickets.
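ISOMAP is a standard manifold-learning method, and its core can be sketched in three steps: build a nearest-neighbor graph, approximate geodesic distances with graph shortest paths, and apply classical multidimensional scaling. The sketch below is a minimal illustration on invented trajectory-like data, not the team's actual pipeline or footage.

```python
import numpy as np

def isomap(X, n_neighbors=5, n_components=2):
    """Minimal ISOMAP: kNN graph -> geodesic distances -> classical MDS."""
    n = len(X)
    # Pairwise Euclidean distances between high-dimensional points.
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    # Keep only each point's nearest neighbors; other edges start at infinity.
    G = np.full((n, n), np.inf)
    for i in range(n):
        nbrs = np.argsort(d[i])[:n_neighbors + 1]  # includes the point itself
        G[i, nbrs] = d[i, nbrs]
    G = np.minimum(G, G.T)  # make the neighborhood graph symmetric
    # Floyd-Warshall shortest paths approximate geodesic distances.
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS: double-center the squared distances, take top eigenpairs.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:n_components]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Toy stand-in for frame-by-frame motion descriptors: 60 samples along a
# one-dimensional curve embedded in three dimensions.
t = np.linspace(0.0, 3.0, 60)
X = np.column_stack([t, np.sin(2 * t), np.cos(2 * t)])
embedding = isomap(X)
print(embedding.shape)  # (60, 2): one low-dimensional point per frame
```

A low-dimensional embedding like this is what the researchers compared against human rankings of the footage.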


Google After-School Program Encourages Study of Computer Science
eWeek (01/20/14) Todd R. Weiss

In July 2013, Google launched CS First, a pilot program at its South Carolina data center to work with students to encourage their interest in computer science. The program, which is especially aimed at gaining the interest of minorities and females, aims to help students develop a positive attitude toward computer science and computers, as well as develop the confidence and curiosity to seek out new computing experiences. "With these goals in mind, we began pilot programs in Berkeley, Charleston, and Dorchester counties [in South Carolina], exposing students, with a focus on underrepresented minorities and girls, to the most promising existing content and tools," says CS First program leader JamieSue Goodman. CS First so far has conducted 31 after-school programs for 4th through 12th grades, with more than 450 students participating. "Of those students, 53 percent were girls, and 66 percent qualify for free or reduced [price] lunch," Goodman notes. "The ultimate goal of CS First is to provide proven teaching materials, screencasts, and curricula for after-school programs that will ignite the interest and confidence of underrepresented minorities and girls in CS and to scale these programs through a network of teacher sponsors, volunteers, and national organizations."


The Promise of DNA-Based Storage
HPC Wire (01/16/14) Tiffany Trader

Researchers at Harvard Medical School and the European Bioinformatics Institute (EBI) have shown that DNA-based data storage is both effective and efficient. The major benefits of DNA as a storage mechanism are that it is electricity-free, incredibly dense, and stable, while the technology to read and write DNA has existed since bacteria were first genetically engineered in 1973, according to EBI's Ewan Birney. In 2003, Pacific Northwest National Laboratory researchers transferred encrypted text into DNA by converting each character into a base-4 sequence of numbers, each corresponding to one of the four DNA bases. However, the fast replication rates of live DNA can corrupt data over long periods of time, so the Harvard and EBI researchers propose using naked DNA instead, which remains intact without living cells. The researchers encoded 739 kilobytes of unique data into naked DNA code, synthesized the DNA, sequenced it, and reconstructed the files with more than 99-percent accuracy. Although the technology is not yet ready for mass storage, it is already economically viable for very long-term applications, such as nuclear site location data and other governmental, legal, and scientific archives that must be kept long-term but are infrequently accessed.
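A character-to-base-4 mapping of the kind the PNNL work used takes only a few lines. The particular digit-to-base assignment below is an illustrative choice, not the published scheme, and the Harvard/EBI encoding is more elaborate (it adds error-tolerant redundancy).

```python
# Each byte becomes four base-4 digits, and each digit maps to one DNA base.
BASES = "ACGT"

def encode(data: bytes) -> str:
    dna = []
    for byte in data:
        # Four 2-bit (base-4) digits per byte, most significant first.
        for shift in (6, 4, 2, 0):
            dna.append(BASES[(byte >> shift) & 0b11])
    return "".join(dna)

def decode(dna: str) -> bytes:
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for ch in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(ch)
        out.append(byte)
    return bytes(out)

strand = encode(b"DNA")
print(strand)                   # CACACATGCAAC
assert decode(strand) == b"DNA"
```

At four bases per byte this is the theoretical maximum density for a simple substitution code; real schemes trade some of that density for redundancy so sequencing errors can be corrected.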


Meet the Man Google Hired to Make AI a Reality
Wired News (01/16/14) Daniela Hernandez

Geoffrey Hinton, a pioneer in a branch of artificial intelligence known as deep learning, is working for Google to apply the technology to voice recognition, image tagging, and other online tools. Hinton develops artificial neural networks from interconnected layers of software modeled after neuron columns in the brain's cortex. "I get very excited when we discover a way of making neural networks better--and when that's closely related to how the brain works," he says. Hinton's artificial neural networks can collect information and develop an understanding of a subject. The technology is making advances in understanding what a group of words means when combined, without asking a person for labels. The neural networks are quick, flexible, and efficient, scaling well across a growing number of machines. However, Hinton and his colleagues faced a difficult path to reach this point, and many in the artificial intelligence community turned away from neural networks in the 1980s in favor of shortcuts that did not try to mirror the brain's functioning. In 2004, Hinton founded the Neural Computation and Adaptive Perception program, inviting leading computer scientists, biologists, electrical engineers, neuroscientists, physicists, and psychologists to create computing systems that mimic organic intelligence.
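As a rough, hypothetical illustration of the layered networks described above (not Hinton's actual models), a forward pass through stacked fully connected layers might look like this, with every size and name invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One fully connected layer: small random weights plus zero biases."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

def forward(x, layers):
    """Run inputs through stacked layers with a nonlinearity between them."""
    for w, b in layers[:-1]:
        x = np.maximum(0.0, x @ w + b)           # ReLU hidden activations
    w, b = layers[-1]
    logits = x @ w + b
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)     # softmax over output labels

# A hypothetical two-layer network: 784 inputs (a 28x28 image) -> 10 labels.
net = [layer(784, 128), layer(128, 10)]
probs = forward(rng.standard_normal((5, 784)), net)
print(probs.shape)  # (5, 10): a probability distribution per input
```

Training such a network means adjusting the weight matrices from labeled or unlabeled data; the deep-learning advances described in the article come from doing this at scale across many layers and machines.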


How Information Flows During Emergencies
Technology Review (01/15/14)

Beijing Jiaotong University network scientist Liang Gao used mobile phone records to study human behavior during disasters and found that patterns of communication, and therefore information flows, change in significant ways. Liang and colleagues studied the metadata from voice calls and texts of 10 million people over four years in an unidentified European country. The team identified emergencies that occurred in the region for the given time frame and then examined calls in the proximity of that time. The researchers studied people close enough to the emergency event to be directly influenced by it, as well as the group of people called by those directly involved. Rather than calling others to spread the news of the emergency, the second group of people typically made their next call back to the person close to the emergency. The team concludes that contrary to normal communication patterns, in an emergency the need to correspond with eyewitnesses is more critical than spreading situational awareness. The researchers say the findings reveal the way information spreads during unusual events and could impact authorities' response to emergencies.


Academics Devise Formula to Gauge How Well U.S. Regulators Listen
Reuters (01/15/14) Sarah N. Lynch

Three academics have developed RegRank, an algorithm that measures how well public feedback is received and incorporated into the rules for Wall Street traders. RegRank was recently unveiled in a new paper authored by Andrei Kirilenko, the U.S. Commodity Futures Trading Commission's former chief economist; University of Maryland professor Shawn Mankad; and University of Michigan professor George Michailidis. The academics note that what makes the algorithm unique is how it can measure "regulatory sentiment" and test the impact of the public comment process on rule-making. Kirilenko says they realized that all of the legal jargon often used in the comments could be condensed to clusters of "basic words." RegRank works by mining regulatory text, which can span hundreds of pages, searching for certain clusters of key words that are deemed "pro-regulation" or "anti-regulation." The algorithm keeps track of how often these pro- or anti-regulatory words appear, which enables researchers to compare how a proposed rule evolves into a final one and whether the public's comments were taken into account. "The results show that the government listens, but more research would be needed for deeper insights," Mankad says.
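The keyword-counting core of such an approach can be sketched as follows. The word clusters here are invented for illustration (RegRank's actual clusters are derived from the regulatory text itself), and the scoring is a deliberately crude stand-in for the paper's method.

```python
import re

# Illustrative, hand-picked word clusters -- not the paper's learned ones.
PRO = {"require", "mandate", "prohibit", "enforce", "limit"}
ANTI = {"exempt", "flexibility", "burden", "costly", "delay"}

def reg_sentiment(text: str) -> float:
    """Net regulatory tone: (pro - anti) / matched words; 0.0 if none match."""
    words = re.findall(r"[a-z]+", text.lower())
    pro = sum(w in PRO for w in words)
    anti = sum(w in ANTI for w in words)
    total = pro + anti
    return (pro - anti) / total if total else 0.0

proposed = "The rule would require firms to enforce position limits."
final = "Firms may seek an exemption where compliance is costly."
print(reg_sentiment(proposed), reg_sentiment(final))  # 1.0 -1.0
```

Comparing the score of a proposed rule with that of the final rule, alongside the tone of the public comments in between, is what lets the researchers ask whether the feedback moved the text.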


BYU's Smart Object Recognition Algorithm Doesn't Need Humans
BYU News (UT) (01/15/14) Todd Hollingshead

Brigham Young University (BYU) researchers say they have developed an algorithm that can accurately identify objects in images or video sequences without human calibration. "With our algorithm, we give it a set of images and let the computer decide which features are important," says BYU's Dah-Jye Lee. The algorithm can set its own parameters and it does not need to be reset each time a new object is to be recognized. Instead of telling the computer what to look at to distinguish between two objects, the researchers feed it a set of images and it learns on its own. During testing, the researchers say the algorithm performed as well as or better than other top published object-recognition algorithms. The researchers fed their object-recognition program four image datasets from Caltech and found 100-percent accurate recognition on every dataset. The results show the algorithm could be used for many applications, according to Lee. "It's very comparable to other object-recognition algorithms for accuracy, but, we don't need humans to be involved," he says.


Prof. Ozgur Baris Akan of Koc University Receives 1.8 Million Euro Grant From European Research Council
Newswire.ca (01/02/14)

Koc University professor Ozgur Baris Akan recently received a grant from the European Research Council (ERC) to further the development of theoretical communication models of the nervous system, a nanonetwork simulator to validate the theoretical models, and communication-capable neuro-implants for the information and communication technology-based treatment of spinal cord disorders. The research project, "MINERVA: Communication Theoretical Foundations of Nervous System Towards Bio-Inspired Nanonetworks and ICT-Inspired Neuro-Treatment," is the first to receive ERC's Consolidator Grant. "With the MINERVA project, we aim to understand the fundamentals of the nervous system, the most advanced communication network in the human body, via the elegant theories of information and communication technology (ICT)," says Akan, director of Koc's Next-generation and Wireless Communications Laboratory. The ERC grant is different from other European Union grants because it is awarded to a single researcher, rather than a research group, organization, or consortium. Akan says MINERVA "will enable the development of ICT-inspired solutions to be used in the diagnosis and treatment of neurological disorders caused by communication failures in the nervous nanonetwork...[and] will bridge the gap between communication engineering and life sciences, and create important collaboration opportunities."


Superconducting Spintronics Pave Way for Next-Generation Computing
University of Cambridge (01/15/14)

University of Cambridge researchers say they have achieved a breakthrough in the field of spintronics. The researchers note that spintronics has the potential to create a new generation of super-fast computers capable of processing vast amounts of data in an energy-efficient way. They say their breakthrough provides the first evidence that superconductors could be used as an energy-efficient source for spin-based devices. The research shows the natural spin of electrons can be manipulated and detected within the current flowing from a superconductor. The researchers made both superconductivity and spin possible simultaneously by adding an intervening magnetic layer of the rare earth element holmium. With this layer, the magnetism rotates and forms a non-collinear interface with the magnetic layers. The researchers say the next step is to create a prototype memory element based on superconducting spin currents, and to look for new material combinations that could increase the effectiveness of their method. "Much fundamental research is now required in order to understand the science of this new field, but the results offer a glimpse into a future in which supercomputing could be far more energy-efficient," says Cambridge's Jason Robinson.


Probing Bitcoins
UCSD News (CA) (01/14/14) Ioana Patringenaru

University of California, San Diego (UCSD) researchers, led by Ph.D. student Sarah Meiklejohn, recently conducted a study examining how Bitcoins have been used since their introduction in early 2009. The researchers documented more than 16 million transactions and more than 12 million public keys, the addresses through which Bitcoin users transact, through April 13, 2013. "Once you do something with that currency, we can learn more and more about who you are and who you interact with," says UCSD research scientist Kirill Levchenko. Most users either play games or engage in some form of currency speculation by moving Bitcoins from mining pools, where they are created, to exchanges where they can be converted to dollars, according to the researchers. The researchers also were able to trace back a large number of public keys to specific clusters. UCSD's Marjori Pomarole created a visualization of the Bitcoin user network, including vendors, gambling services, mining pools that create the currency, fixed-rate exchanges that process transactions, wallets where the currency is stored, and investment schemes. After analyzing the virtual network, the researchers conducted 344 transactions, purchasing everything from silver quarters, to coffee, to a calculator and a used CD.
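Clustering public keys of this kind typically rests on the common-input heuristic: keys that jointly sign the inputs of a single transaction are assumed to belong to the same user. A minimal union-find sketch with made-up keys (the study's real pipeline is far more involved):

```python
class UnionFind:
    """Disjoint-set structure for merging keys into ownership clusters."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_keys(transactions):
    """Group public keys that ever co-sign inputs of the same transaction."""
    uf = UnionFind()
    for inputs in transactions:
        for key in inputs:
            uf.find(key)               # register every key, even singletons
        for key in inputs[1:]:
            uf.union(inputs[0], key)   # co-signing keys share one owner
    clusters = {}
    for key in uf.parent:
        clusters.setdefault(uf.find(key), set()).add(key)
    return list(clusters.values())

# Toy transactions: each entry lists the input keys spent together (made up).
txs = [["k1", "k2"], ["k2", "k3"], ["k4"]]
print(cluster_keys(txs))  # two clusters: {k1, k2, k3} and {k4}
```

Because k2 co-signs with both k1 and k3, all three collapse into one cluster; chaining this across millions of transactions is what lets researchers attribute large swaths of the public-key graph to specific services.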


DARPA in 2014: Director Arati Prabhakar Looks Ahead
Federal Computer Week (01/17/14) Amber Corrin

In an interview, U.S. Defense Advanced Research Projects Agency (DARPA) director Arati Prabhakar says the agency has remained focused on its mission of innovating cutting-edge military technology since its inception in 1958. "Every day we live in a world filled with technologies that trace their roots back to DARPA investments," Prabhakar says. "It's an old story, but a good and important story about investments we make because of national security needs and priorities that eventually blossom into the commercial world." Prabhakar also notes that geopolitical context impacts DARPA's execution of its mission. For example, in 2014, DARPA's mission execution is likely to evolve due to the military drawdown following over a decade of war. Although traditional programs will continue to receive attention, other areas, such as traumatic brain injury (TBI) research and treatment, are expected to grow as a focus. DARPA is working on technology to restore active memory damaged by TBI, as well as SUBNETS, a program combining deep brain stimulation with neuropsychological research and potential treatments. Prabhakar says the three factors that will drive DARPA's missions in 2014 include the complicated national security environment, technology globalization and the need for agility, and a financial situation that will require DARPA to maximize its flat budget.


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe