Association for Computing Machinery
Welcome to the March 18, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


JavaScript Most Popular Language
eWeek (03/17/16) Darryl K. Taft

JavaScript is the most popular programming language, according to Stack Overflow's annual survey of 56,033 developers. More than 85 percent of full-stack developers said JavaScript was their most-used programming language, while 32.2 percent of respondents cited Angular as their preferred technology and 27.1 percent cited Node.js. "Technologies that make it easy to program in multiple locations, like JavaScript, are becoming more important," says Sencha's Shikhir Singh. "That's one of the great things about JavaScript is that you can code on the front end and the back end." In addition, the average developer regularly uses four to five major programming languages, frameworks, and technologies, with the most common two-technology combination being JavaScript and SQL. Moreover, Stack Overflow says the Swift programming language's popularity is booming, with its expansion outpacing that of any other technology last year. The survey also found Rust to be the programming language developers love most, while Visual Basic was the one they most disliked using. Another key finding was that female coders have, on average, two years less experience than male coders, which may suggest the percentage of female developers is on the rise.


Secure, User-Controlled Data
MIT News (03/18/16) Larry Hardesty

A joint Massachusetts Institute of Technology (MIT)-Harvard University project has developed Sieve, an application enabling Web users to store all of their personal data in the cloud. The data is encrypted, so any app that wants to use specific data can send a request and get a secret key that decrypts only the requested items; Sieve can re-encrypt the data with a new key if the user wants to rescind access. "There's one type of security and not 10 types of security," notes MIT's Frank Wang. "We're trying to present an alternative model that would be beneficial to both users and applications." Sieve employs attribute-based encryption and key homomorphism, and the researchers envision the app sidestepping the slowness of the former technique by bundling certain types of data together under a single attribute. Sieve also features tables that track where grouped data items are stored in the cloud. Each table is encrypted under a single attribute, but the data it points to is encrypted with standard encryption algorithms, so decryption is more efficient because the size of the encrypted data items remains unchanged. With key homomorphism, the cloud server can re-encrypt the data it is storing without first decrypting it and without transmitting it to the user for decryption, re-encryption, and re-uploading.
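The key-homomorphism idea can be pictured with a toy scheme. The sketch below is illustrative only, with made-up (and insecure) parameters, and is not Sieve's actual construction: because the keystream H(nonce)^k is key-homomorphic, the server can rotate a ciphertext from an old key to a new one given only the key difference, without ever seeing the plaintext.

# Toy key-homomorphic encryption sketch (illustrative only; NOT Sieve's actual
# construction and NOT secure parameters). The keystream H(nonce)^k mod p is
# key-homomorphic, so the server can shift a ciphertext from an old key to a
# new key using only the key difference, never seeing the plaintext.
import hashlib
import secrets

P = 2**521 - 1  # a Mersenne prime, used here purely for illustration

def H(nonce: bytes) -> int:
    return int.from_bytes(hashlib.sha256(nonce).digest(), "big")

def keystream(key: int, nonce: bytes) -> int:
    return pow(H(nonce), key, P)

def encrypt(key: int, m: int, nonce: bytes) -> int:
    return (m * keystream(key, nonce)) % P

def decrypt(key: int, c: int, nonce: bytes) -> int:
    inv = pow(keystream(key, nonce), P - 2, P)  # modular inverse via Fermat
    return (c * inv) % P

def rekey_token(old_key: int, new_key: int) -> int:
    # Given only this difference, the server can rotate ciphertexts.
    return (new_key - old_key) % (P - 1)

def reencrypt(c: int, token: int, nonce: bytes) -> int:
    return (c * pow(H(nonce), token, P)) % P

# The user rescinds access by rotating to a fresh key; the server re-encrypts
# the stored data in place without decrypting it.
old_key, new_key = secrets.randbelow(P - 1), secrets.randbelow(P - 1)
nonce = b"record-42"
c = encrypt(old_key, 123456789, nonce)
c_rotated = reencrypt(c, rekey_token(old_key, new_key), nonce)
assert decrypt(new_key, c_rotated, nonce) == 123456789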


Machines Are Teaching Themselves to Grapple With the Real World
New Scientist (03/16/16) Aviva Rutkin

Breakthroughs in machine-learning applications are accelerating, with a key area of research being interaction with physical objects. Google researchers recently demonstrated a robot claw's ability to grip household objects using rudimentary hand-eye coordination learned by trial and error, while Facebook disclosed how one of its artificial intelligences (AIs) intuitively gained understanding about physical objects by studying videos of wooden towers falling. The same techniques enabling Google's robo-claw to handle objects were used to teach Google's AlphaGo software to master the game of Go in last week's victorious match against a human champion. The method involves using feedback from successes and failures to enhance machines' dexterity. The experiments represent a migration away from the standard supervised learning approach and toward a model in which the algorithm self-learns. "Currently, we need to take the computer by the hand when we teach it and give it a lot of examples," says the University of Montreal's Yoshua Bengio. "But we know that humans are able to learn from massive amounts of data, for which no one tells them what the right thing should be." AIs also will have to learn to be versatile by mastering the ability to perform multiple tasks well, a milestone Allen Institute for Artificial Intelligence CEO Oren Etzioni speculates is decades away.
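The trial-and-error approach described above is reinforcement learning; the toy sketch below (a tabular Q-learning agent on a hypothetical "walk right to the goal" task, far simpler than the deep-learning systems behind robotic grasping or AlphaGo) shows how feedback from successes and failures gradually shapes behavior without labeled examples.

# Minimal trial-and-error learning sketch: tabular Q-learning on a toy
# "walk right to reach the goal" task. Illustrative only.
import random

N_STATES, GOAL = 6, 5            # states 0..5; reward only for reaching state 5
MOVES = (-1, +1)                 # action 0 = step left, action 1 = step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def greedy(values):
    best = max(values)
    return random.choice([i for i, v in enumerate(values) if v == best])

def step(state, action):
    nxt = min(max(state + MOVES[action], 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):                          # episodes of trial and error
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit the current value estimates.
        action = random.randrange(2) if random.random() < EPSILON else greedy(Q[state])
        nxt, reward, done = step(state, action)
        # Feedback from success/failure nudges the value estimate (Q-learning).
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

# After training, the learned policy steps right toward the goal from every state.
print([greedy(q) for q in Q[:GOAL]])          # typically prints [1, 1, 1, 1, 1]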


Building a Brand-New Internet
TechCrunch (03/13/16) Menny Barzilay

A paradigm shift is necessary to counter the trend of vulnerabilities outgrowing cybersecurity capability, and Tel Aviv University's Menny Barzilay offers the concept of the Alternative Global Network (AGN) as a possible solution. He writes AGNs would provide an upgradeable and more trustworthy Internet via a more manageable networking connectivity model enabled by "worldwide wireless Internet access." AGNs would facilitate fast, simple upgrades to the network operating system and protocol stack by the network provider and eliminate the need for new security products, because once an initial attack has occurred and been analyzed, a network-wide update can be undertaken in seconds. Network virtualization is another advantage of AGNs Barzilay cites, enabling any given set of devices to be seamlessly networked regardless of the nature of their physical connections. Making this workable will require the elimination of wide-area and local-area networks, Barzilay notes. A third benefit AGNs offer is an identification-by-default solution that eases user authentication while also enhancing security and trust through refined network interface control. Barzilay says this entails boosting users' accountability via a new kind of Network Interface Controller with a unique private key that enables the network provider to set up a Network Access Prevention solution. Barzilay notes the acceptance of AGNs hinges on the provider deploying a secure gateway.
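The article describes the per-interface private key only at a conceptual level. One hypothetical way to picture identification by default (not a mechanism specified by Barzilay) is each network interface signing its traffic with a device-unique key that the provider verifies before admitting it to the network, sketched here with the Python 'cryptography' package.

# Hypothetical sketch: each network interface controller holds a unique private
# key and signs outgoing frames; the network provider verifies them against the
# registered public key. Illustrative only; the article does not specify a
# concrete mechanism. Requires the 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Generated for the (hypothetical) network interface controller at manufacture.
nic_private_key = Ed25519PrivateKey.generate()
nic_public_key = nic_private_key.public_key()   # registered with the provider

frame = b"payload bytes from this device"
signature = nic_private_key.sign(frame)

# Provider-side check before admitting traffic to the network.
try:
    nic_public_key.verify(signature, frame)
    print("frame accepted: sender identified")
except InvalidSignature:
    print("frame rejected")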


SXSW Highlights Bright and Dark Tech Futures
Computerworld (03/15/16) Lamont Wood

Both hopeful and despairing visions of the future enabled by advancing technology competed at last week's South By Southwest Interactive (SXSW) festival, with panelists in one session seeing a need for clear ethical and regulatory guidelines. "The landscape is rapidly changing, we don't know what to regulate, we don't know how to regulate it, regulation may not be the best tool, we don't know our end goals, and we have no mandate," said former White House policy adviser Nicole Wong. Researcher Ashkan Soltani said current data-collection regulations have no effect on big data, which can infer sensitive information about users from innocuous questions. He suggested firms may be required to attest that their websites offer unbiased outcomes, while Microsoft Research's Kate Crawford said technology is now so powerful that a code of practice should be considered. "A Hippocratic Oath for programmers might be good--and then there is malpractice," agreed ProPublica reporter Julia Angwin. Another session offered a more optimistic outlook on machine-learning innovation, including superhuman machines that acquire knowledge in the overarching context of the universe with benign outcomes for the future of civilization, according to Hanson Robotics founder David Hanson. Cogbotics founder Eric Shuss urged instilling compassion and understanding in machine intelligence, while a third session took a dim view of the sluggish pace of progress with natural-language interfaces.


Machine-Learning Algorithm Identifies Tweets Sent Under the Influence of Alcohol
Technology Review (03/16/16)

Nabil Hossain and colleagues at the University of Rochester have trained a machine to spot alcohol-related tweets, determine whether tweeters are drinking at the time, and ascertain whether those tweeting under the influence were drinking at home or somewhere else. The team collected geolocated tweets sent between July 2013 and July 2014 from New York City and from the Rochester area, filtered them for alcohol-related words, and used Amazon Mechanical Turk crowdsourcing workers to analyze them in more detail. For example, the team drew up a list of words and phrases people are likely to use when tweeting from home, such as "Finally home!" or "bath," and used the crowdsourced workers to confirm the results. The researchers used the data to train a machine-learning algorithm to identify other patterns associated with home-based tweets. The algorithm examined how home location correlates with other indicators, such as the location of the last tweet of the day, a user's most frequent tweeting location, and the percentage of tweets sent from a given location. The algorithm found a lower proportion of tweets associated with alcohol in suburban Monroe County, but a higher proportion of people who drink at home in New York City. The team believes the technique offers an affordable and fast way to identify alcohol-consumption patterns.
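The home-versus-away classification described above can be pictured with a small scikit-learn sketch. The feature names and toy data below are hypothetical stand-ins for the indicators mentioned (location of the day's last tweet, a user's most frequent location, share of tweets from a given spot), not the researchers' actual dataset or model.

# Hypothetical sketch of a feature-based "tweeting from home?" classifier in the
# spirit of the study; features and data are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per (user, location) pair:
# [is_location_of_last_tweet_of_day, is_users_most_frequent_location,
#  fraction_of_users_tweets_from_location]
X = np.array([
    [1, 1, 0.62],   # looks like home
    [1, 1, 0.48],
    [0, 1, 0.55],
    [1, 0, 0.30],
    [0, 0, 0.05],   # looks like a bar, office, etc.
    [0, 0, 0.12],
    [0, 1, 0.20],
    [1, 0, 0.08],
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # 1 = home (labels via crowdworkers)

clf = LogisticRegression().fit(X, y)

# Probability that a new candidate location is the tweeter's home.
candidate = np.array([[1, 1, 0.40]])
print(clf.predict_proba(candidate)[0, 1])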


New Technique Wipes Out Unwanted Data
Lehigh University (03/15/16) Kurt Pfitzer

Researchers at Lehigh University and Columbia University have developed a machine-learning method that makes learning systems forget a piece of data's "lineage," so the data can be removed, its effects undone, and future operations run as if the data never existed. Although the concept of "machine unlearning" is well established, the researchers have developed a way to do it faster and more effectively than current methods allow. Effective machine-unlearning techniques can help improve the privacy and security of raw data. The new machine-unlearning method is based on the fact that most learning systems can be converted into a form that can be updated incrementally without costly retraining from scratch. The approach introduces a layer consisting of a small number of summations between the learning algorithm and the training data, breaking the direct dependency between them; the learning algorithm depends only on the summations and not on individual data items. The method enables machine-learning systems to unlearn a piece of data and its lineage without rebuilding the models and features that predict relationships between pieces of data. Recomputing a small number of summations removes the data and its lineage completely, and is much faster than retraining the system.
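The summation idea can be sketched with a simple example: a least-squares model kept in "summation form," so the fitted weights are derived only from a few running sums and forgetting a sample means subtracting its contribution from those sums. This is a generic illustration of the incremental/decremental idea, not the Lehigh-Columbia system itself.

# Sketch of summation-based "unlearning" for a least-squares model (generic
# illustration, not the authors' system). The model depends only on running
# sums, so a sample can be forgotten without retraining on the remaining data.
import numpy as np

class UnlearnableLeastSquares:
    def __init__(self, n_features):
        self.xtx = np.zeros((n_features, n_features))  # summation of x x^T
        self.xty = np.zeros(n_features)                # summation of y x

    def learn(self, x, y):
        self.xtx += np.outer(x, x)
        self.xty += y * x

    def unlearn(self, x, y):
        # Forget one training sample: subtract its contribution to the sums.
        # No pass over the remaining data, no retraining from scratch.
        self.xtx -= np.outer(x, x)
        self.xty -= y * x

    def weights(self, ridge=1e-6):
        n = self.xtx.shape[0]
        return np.linalg.solve(self.xtx + ridge * np.eye(n), self.xty)

model = UnlearnableLeastSquares(n_features=2)
model.learn(np.array([1.0, 2.0]), 5.0)
model.learn(np.array([3.0, 1.0]), 4.0)
model.learn(np.array([2.0, 2.0]), 6.0)
model.unlearn(np.array([3.0, 1.0]), 4.0)   # its effect on the model is gone
print(model.weights())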


Robot Learning Companion Offers Custom-Tailored Tutoring
National Science Foundation (03/14/16)

Researchers from Tel Aviv University and the Massachusetts Institute of Technology (MIT) have developed a socially assistive robot that can serve as a one-on-one peer for learning in the classroom or for play. The researchers say the Tega robot is unique in that it can interpret emotional responses and create a personalized motivational strategy based on those emotional cues. Developed for long-term interactions with children, the furry, brightly colored robot uses an Android device to process movement, perception, and thinking, and to respond appropriately to children's behaviors. A second Android phone containing software developed by Affectiva enables Tega to interpret the emotional content of facial expressions. The researchers tested Tega in a Boston preschool classroom last year, and the system showed it can learn and improve itself in response to the unique characteristics of students. "What is so fascinating is that children appear to interact with Tega as a peer-like companion in a way that opens up new opportunities to develop next-generation learning technologies that not only address the cognitive aspects of learning, like learning vocabulary, but the social and affective aspects of learning as well," says MIT professor Cynthia Breazeal. The work was supported by a five-year, $10-million Expeditions in Computing award from the U.S. National Science Foundation.


This New Discovery Could Put Quantum Computers Within Closer Reach
IDG News Service (03/16/16) Katherine Noyes

Researchers at Florida State University's National High Magnetic Field Laboratory (MagLab) have found a way to dampen quantum bits' (qubits) susceptibility to magnetic disruptions, or "noise," using atomic clock transitions. The researchers say this breakthrough could help remove a major obstacle to workable quantum computers. The MagLab team used carefully designed tungsten-oxide molecules containing a single magnetic holmium ion, and successfully maintained the coherent operation of a holmium qubit for 8.4 microseconds. MagLab's Dorsa Komijani says that duration is potentially long enough to perform useful computational tasks. Stephen Hill, director of the MagLab's Electron Magnetic Resonance Facility, says the next step is to use the same or similar molecules within devices that enable manipulation and read-out of an individual molecule. Once that is achieved, Hill says schemes must be developed that can address multiple qubits individually and switch the coupling between them on and off so quantum logic operations can be implemented. "It is this same issue of scalability that researchers working on other potential qubit systems are currently facing," he notes.


Mapping the Brain's Cortical Columns to Develop Innovative Brain-Computer Interfaces
CORDIS News (03/11/16)

The European Union's COLUMNARCODECRACKING project has successfully used ultra-high-field (UHF) fMRI scanners to map cortical columns, a process that could lead to new applications such as brain-computer interfaces (BCIs). The project focused on cortical columnar-level fMRI, which has already contributed to a deeper understanding of how the brain and mind work by analyzing the fine-grained functional organization within specialized brain areas. The project also has stimulated a new line of research called "mesoscopic" brain imaging, which is gaining momentum in the field of human cognitive and computational neuroscience. Mesoscopic brain imaging complements conventional macroscopic brain imaging, which measures activity in brain areas and large-scale networks. "On the one hand, this provides a challenging testbed for our newly acquired knowledge about coding principles in brain areas," says Maastricht University professor Rainer Goebel. "On the other hand, this research could lead to novel applications for some patients, such as those suffering locked-in syndrome, despite the limited availability of UHF scanners." The project has conducted several fMRI studies to learn whether it is possible to create BCIs that exploit information at the level of cortical columns. Goebel says the research "could indeed pave the way for highly advanced BCIs that could not only help treat neurological disorders but also significantly upgrade humankind's ability to integrate and connect organically with high-powered computer systems."


Experiments Show Magnetic Chips Could Dramatically Increase Computing's Energy Efficiency
Berkeley News (03/11/16) Sarah Yang

University of California, Berkeley researchers have demonstrated for the first time that magnetic chips can operate at the lowest fundamental energy dissipation theoretically possible under the laws of thermodynamics. The researchers say their findings prove dramatic reductions in power consumption are possible, potentially shrinking to as little as one-millionth of the energy per operation used by conventional transistors. The researchers experimentally tested and confirmed the Landauer limit, using an innovative technique to measure the tiny amount of energy dissipated when they flipped a nanomagnetic bit. They used a laser probe to carefully follow the direction the magnet was pointing as an external magnetic field rotated the magnet from "up" to "down" or vice versa. The researchers determined it takes only 15 millielectron volts of energy to flip a magnetic bit at room temperature, effectively demonstrating the Landauer limit. The significance of the demonstration, the researchers note, is that modern computers are far from this fundamental limit, and dramatic future reductions in power consumption are possible.
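For context, the Landauer bound on the energy dissipated when erasing one bit is k_B * T * ln(2); the quick back-of-the-envelope computation below (not from the article) puts it at roughly 18 millielectron volts at room temperature, the same order of magnitude as the reported measurement.

# Back-of-the-envelope check of the Landauer bound k_B * T * ln(2) at room
# temperature (rounded constants; for orientation only, not from the article).
import math

k_B = 8.617e-5                      # Boltzmann constant, eV per kelvin
T = 300                             # room temperature, kelvin
landauer_meV = k_B * T * math.log(2) * 1000
print(f"Landauer limit ~ {landauer_meV:.1f} meV")   # about 17.9 meV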


The Key to Cybersecurity
The UC Santa Barbara Current (03/09/16) Sonia Fernandez

University of California, Santa Barbara (UCSB) researchers are developing a new layer of certainty for modern encryption standards. The researchers, led by UCSB cryptographer Stefano Tessaro, will use principles from theoretical computer science, applied mathematics, and information theory to develop a foundation from which more secure encryption algorithms may emerge. They will focus on symmetric algorithms, a commonly used type of encryption that relies on both parties having a key to encode and decode communications between them. "From the theory side, we would like to provide validation, and provide proofs that these methods are really sound," Tessaro says. He notes one of the major hurdles is developing algorithms that are secure and also fast enough to be practical; encryption algorithms must run many times per second to secure even the simplest of communications. "The theoretical problems that arise are actually more difficult than the traditional problems we encounter in theoretical cryptography, where usually we have an additional degree of freedom, and if we can't solve problems we can make things slower," Tessaro says. He notes the project aims to solve the security problem while accounting for existing speed requirements.
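As a concrete reminder of the symmetric setting described above, the short example below shows authenticated symmetric encryption with the widely deployed AES-GCM construction, using the Python 'cryptography' package. It is a generic illustration, not code from the UCSB project.

# Minimal symmetric-encryption example: both parties share one secret key that
# both encrypts and decrypts. Generic illustration only; requires the
# 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared in advance by both parties
aead = AESGCM(key)

nonce = os.urandom(12)                      # never reuse a nonce under one key
ciphertext = aead.encrypt(nonce, b"meet at noon", b"header-metadata")

# The recipient, holding the same key, decrypts (and authenticates) the message.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, b"header-metadata")
assert plaintext == b"meet at noon"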


Solving Silicon Valley's Gender Problem
Insights (03/15/16) Lee Simmons

In an interview, former Stanford University Graduate School of Business doctoral students Julie Oberweis and Monica Leas discuss the genesis of a survey detailing Silicon Valley's inhospitable culture toward women. Leas says although a lack of women in the technology-industry pipeline is a legitimate issue, a larger problem lies with recruiting practices and retention. "There's unconscious bias; there's blatant bias and harassment," she notes. The survey also found under-reporting of such bias by women, who saw disclosure as a possible impediment to their careers. Leas says 47 percent of female respondents reported being asked to do menial tasks, 60 percent encountered unwanted sexual advances, 33 percent feared for their safety at some point, and 66 percent felt excluded from networking opportunities. Subtle slights and prejudices, rather than overt discrimination, appear to be the norm, with Oberweis noting that collectively these slights constitute "death by a thousand cuts." Above all, Oberweis and Leas urge an open dialogue about gender discrimination, and Leas sees hope in companies that are targeting specific numbers of female recruits. "I do think there's a tipping point at which the similarity bias in hiring goes away and diversity becomes self-sustaining," she says.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe