Association for Computing Machinery
Welcome to the August 21, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


Standardized Tests May Be Holding Back the Next Generation of Computer Programmers
The Washington Post (08/20/15) Hayley Tsukayama

Educating students to pass standardized tests, which command most school administrators' time, leaves little room for computer science classes to train the next generation of coders and scientists, according to a Google/Gallup study published this week. "It was the number one problem that principals gave," says Gallup's Brandon Busteed. "They're overwhelmed by what they need to be tested on" and do not have the resources to teach subjects outside the core curriculum. The study found about 60 percent of students in grades 7-12 said their schools offer dedicated computer science classes, while 52 percent said computer science is taught as part of other classes. However, those statistics differ significantly by race and income. For example, black students have less exposure to computer science, and only 48 percent of students from households earning less than $54,000 a year said they have access to dedicated computer science classes. Even in schools that do put computer science on the curriculum, the subject is often shortchanged: three out of four principals said most classes focus on computer graphics rather than coding. Google's Hai Hong says educators and industry should reach out to schools more and demonstrate the diversity of computer science professionals' backgrounds.


How Social Bias Creeps Into Web Technology
The Wall Street Journal (08/20/15) Elizabeth Dwoskin

Predictive and decision-making Web technologies are susceptible to the unconscious social biases of their designers and programmers, to the degree that they can reflect those prejudices in how they function. An example is Google's ad-targeting system, which gave male users a higher probability of being shown ads for high-paying jobs than female users, according to a recent study. "Computers aren't magically less biased than people, and people don't know their blind spots," notes data scientist Vivienne Ming. Machine-learning software is especially vulnerable to bias, according to Andrew Selbst, co-author of an upcoming study on the phenomenon. Such programs learn from a limited set of training data and then refine their knowledge based on real-world data and experience, adopting and often amplifying biases in either data set. Selbst says tracing bias back to its source so it can be corrected is made harder by the proprietary nature of most software and by the complexity of the underlying algorithms. Data scientists suggest software bias can be minimized by building more diversity into statistical models. Meanwhile, Selbst and other researchers are leading efforts to localize and mitigate the underlying causes of software bias.
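
To make the feedback loop concrete, here is a minimal sketch in Python, assuming a made-up impression log; it is not Google's ad system or the study's method, only an illustration of how a model fitted to already-skewed data reproduces that skew.

    # Toy illustration of bias inherited from training data -- not Google's ad system.
    # A hypothetical log in which high-paying-job ads were shown to men far more often.
    log = ([("male", "high_pay")] * 90  + [("male", "other")] * 110 +
           [("female", "high_pay")] * 10 + [("female", "other")] * 190)

    def learned_rate(gender):
        """Fraction of past impressions for this group that were high-pay ads."""
        group = [ad for g, ad in log if g == gender]
        return sum(ad == "high_pay" for ad in group) / len(group)

    # A naive model that serves ads at the historical rate perpetuates the skew,
    # and a feedback loop on the resulting clicks can amplify it further.
    print("male  :", learned_rate("male"))    # 0.45
    print("female:", learned_rate("female"))  # 0.05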


Quantum Computer Firm D-Wave Claims Massive Performance Boost
New Scientist (08/21/15) Jacob Aron

D-Wave Systems, which makes computers with some quantum properties, has announced its latest device, the D-Wave 2X, which the company claims is up to 15 times faster than regular PCs. However, as with many of D-Wave's past claims about its computers, experts are skeptical. The company says the 2X features more than 1,000 qubits, double that of its previous model, as well as refinements that have reduced noise and increased performance. D-Wave says it tested the 2X against specialized optimization software running on an ordinary PC, and its new computer found answers between two and 15 times faster. However, some experts have challenged whether this test is an accurate measure of the 2X's capabilities. ETH Zurich professor Matthias Troyer points out D-Wave only tested the 2X against a single core of an eight-core processor and structured the test to favor the 2X. University of California, Berkeley professor Umesh Vazirani says that to truly test the 2X, its performance should be compared to that of top-of-the-line graphics-processing units, which increasingly are used in large-scale parallel computing. Troyer says the way to prove the 2X is really a working quantum computer would be a performance advantage that keeps growing as problems get larger.


CCC Uncertainty in Computation Workshop Report
CCC Blog (08/19/15) Helen Wright

The Computing Community Consortium's "Quantification, Communication, and Interpretation of Uncertainty in Simulation and Data Science" report, based on last year's Uncertainty in Computation Visioning Workshop, underscores substantial shortcomings in the way people process, present, and interpret uncertain data. "There is growing concern that the statistical models currently used to quantify uncertainty in the outputs of simulations won't scale, particularly to large, heterogeneous computational models," the report says. "This leads to a critical need to transition research in uncertainty quantification of computational systems from the analysis of components to the analysis of large-scale systems of interacting components." Participants also recommended the creation of generally applicable, easy-to-use software development tools supporting the representation and analysis of uncertainty, so software developers are less burdened by accounting for uncertainty in decision-support systems. Another recommendation calls for providing generalizable methods for measuring uncertainty in the output of analysis systems, with accompanying curriculum development. Finally, the participants urged better integration of uncertainty quantification and communication to stakeholders as fragmentation in areas such as problem scale, computational resources, and data continues to grow.


Researchers Can Store Data in DNA; the Hard Part Is Retrieving It
Computerworld (08/19/15) Lucas Mearian

ETH Zurich researchers have announced they can encode information within DNA molecules and preserve it for at least two millennia, and they are now developing a filing system to ease retrieval. Among DNA's advantages over electronic storage are durability and size, and the researchers estimate less than an ounce of DNA could theoretically store more than 300,000 terabytes (TB). In comparison, the average external hard drive stores 5 TB of data and lasts about 50 years. The team, led by ETH professor Robert Grass, stored 83 kilobytes of text from several ancient documents in DNA, encapsulated the DNA in silica spheres, and simulated long-term aging by heating it to almost 160 degrees Fahrenheit for seven days; no errors were detected when the information was decoded. A filing system that avoids read/write errors is the next challenge for the ETH researchers, and this entails determining how to label specific pieces of information on DNA strands so they are searchable. "In DNA storage, you have a drop of liquid containing floating molecules encoded with information," Grass notes. "Right now, we can read everything that's in that drop. But I can't point to a specific place within the drop and read only one file."
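
The encoding idea is easier to picture with a toy example. The sketch below is not the ETH team's scheme, which adds redundancy and error correction; it is a minimal illustration, assuming a naive two-bits-per-nucleotide mapping, of how bytes can be written into and read back from a DNA sequence.

    # Toy illustration only: map 2 bits to 1 DNA base (A, C, G, T) and back.
    # The real ETH scheme adds redundancy and error correction; this sketch does not.
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {b: bits for bits, b in BITS_TO_BASE.items()}

    def encode(data: bytes) -> str:
        """Turn raw bytes into a DNA string at 4 bases per byte."""
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

    def decode(dna: str) -> bytes:
        """Recover the original bytes from the DNA string."""
        bits = "".join(BASE_TO_BITS[base] for base in dna)
        return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

    message = b"Federal Charter of 1291"   # stand-in payload for the demo
    strand = encode(message)
    assert decode(strand) == message
    print(strand[:40], "...", len(strand), "bases for", len(message), "bytes")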


Google Reveals How It Scales Its Network
CIO Journal (08/19/15) Rachael King

Google described its network-scaling effort in a paper presented at the ACM SIGCOMM 2015 conference in London. The impetus for the detailed disclosure of its network operations is Google's move to open up its infrastructure and offer Google Cloud Platform services to others, says Google fellow Amin Vahdat. The initiative dates back to 2005, when Google decided to use less expensive custom-built switches instead of vendor-supplied ones, a move that heralded a wave of advances in software-defined networking. Google reports its current network, Jupiter, is powered by off-the-shelf switch silicon scaled to more than 1 petabit per second of total bisection bandwidth. Like Google's five previous network generations, Jupiter employs centralized control software, and it boasts a 100-fold improvement in capacity over the first generation. Vahdat notes Google's scale is such that it faces major networking challenges related to infrastructure availability, configuration, management, and predictability, which it wants to share with academia. He also observes that the increased uptake of big data and analytics by companies challenges them to deliver an appropriately scalable data center network. "While Google might have faced some of these challenges earlier, everyone is faced with these kinds of issues now," Vahdat says.


DARPA Wants Low-Power Chips That Handle High-Impact Applications
Network World (08/18/15) Michael Cooney

The U.S. Defense Advanced Research Projects Agency (DARPA) has launched the Circuit Realization At Faster Timescales (CRAFT) program, which aims to shorten the design cycle for custom integrated circuits by a factor of 10. DARPA wants to use the CRAFT program to reduce the time it takes to design and fabricate a custom integrated circuit from about 2.5 years to about 30 weeks. The CRAFT program also aims to formulate design frameworks that can be readily recast when next-generation fabrication plants come on line. In addition, the CRAFT program will create a repository so methods, documentation, and intellectual property do not need to be reinvented with each design and fabrication cycle. The program also seeks to fabricate customized, technology-specific circuits using commercial fabrication infrastructure designed to produce generic commodity circuits, according to DARPA's Linton Salmon. He notes that if the CRAFT program can remove the massive amounts of circuitry dedicated to conventional functions, the resulting chip area could be devoted to crucial functions. DARPA says it is currently seeking proposals to reduce the barriers facing design teams addressing a range of issues with the technology.


Physicists Unveil First Quantum Interconnect
Technology Review (08/18/15)

A quantum-device linkage method that carries photons and entanglement between devices has been constructed and tested by an international team of physicists led by Mark Thompson at the University of Bristol. The technique uses a simple optical fiber and two silicon chips with two sources of photons that travel along overlapping photonic channels. Photons that meet in the overlapping area become entangled and then convey this entanglement along separate routes through the device. Thompson's team converted the path entanglement into a different kind of entanglement by letting the path-entangled photons interfere with newly created photons in a way that polarizes them. This method also entangles the newly generated photons, which pass into the optical fiber and travel to the second silicon photonic chip. In the second chip, the polarization-entangled photons are transformed back into the path-entangled kind, which then continue into the device as if they had come directly from the first chip. "We demonstrate high-fidelity entanglement throughout the generation, manipulation, interconversion, distribution, and measurement processes, across two integrated photonic circuits, successfully demonstrating the chip-to-chip quantum photonic interconnect," the team says.


Securing Data From Tomorrow's Supercomputers
Queensland University of Technology (08/18/15) Rose Trapnell

A team of cryptographers has developed upgrades to the Internet's core encryption protocol that are intended to help secure Internet communications against future quantum computers. Quantum computers could be a reality within a few decades, and such machines would be able to break much of the public-key cryptography that currently protects electronic communications. Douglas Stebila from the Queensland University of Technology and colleagues have developed a quantum-proof version of the Transport Layer Security (TLS) standard, the protocol underlying most encrypted Internet traffic. The team's version incorporates a technique based on the "ring learning with errors" problem, which mathematicians believe can resist quantum attacks. "We've tested our new protocol to encrypt data moving between two PCs; the new techniques are a little slower than existing ones, but the confidentiality of the data is improved," Stebila says. "The speed of the new protocol is now something we will work on, but this is a big step forward, demonstrating the practicality of these new techniques. We're optimistic this will provide a framework for developing effective ways of future-proofing our data in the world of quantum computers."
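
The flavor of the underlying mathematics can be illustrated without the ring structure or the TLS handshake. The sketch below is emphatically not Stebila's protocol; it is a toy generator of plain "learning with errors" samples, with assumed demo parameters, showing how small random errors hide a secret vector that ordinary linear algebra would otherwise recover.

    # Toy "learning with errors" (LWE) sampler -- not the ring-LWE TLS protocol itself.
    # Given only A and b = A*s + e (mod q), recovering s is believed hard, even for
    # quantum computers, once the dimensions are large and the errors e are random.
    import random
    random.seed(1)

    q, n, m = 97, 8, 16          # tiny demo parameters; real systems use far larger ones
    s = [random.randrange(q) for _ in range(n)]               # the secret vector

    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-2, -1, 0, 1, 2]) for _ in range(m)]  # small random errors
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

    # The public data (A, b) reveals almost nothing about s; without the errors e,
    # plain Gaussian elimination would recover s immediately.
    print("public A[0]:", A[0])
    print("public b[0]:", b[0])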


Inside the Surprisingly High-Stakes Quest to Design a Computer Program That 'Gets' Sarcasm Online
The Washington Post (08/18/15) Caitlin Dewey

A computer program that can reliably detect sarcasm online is highly desired, but building that reliability into software is a formidable challenge: even an 85-percent-accurate Twitter sarcasm detector developed by University of California, Berkeley professor David Bamman is not good enough. A key problem is that most cues signaling sarcasm are non-textual. One approach is to feed massive volumes of "sarcastic" data into self-learning, pattern-seeking programs that search for recurring words, phrases, and topics people tend to use when they are being sarcastic. The flaw in this strategy is that the programs can misinterpret a statement as sarcastic simply because it uses a statistically sarcastic word. Stanford University professor Christopher Manning thinks this impediment can only be overcome when computers truly begin to understand the lived human experience. Bamman's sarcasm-gauging algorithm marks progress because it attempts to understand the speaker, the audience, and the relationship between the two by also absorbing contextual data from past tweets and Twitter bios. Another study, using Reddit, determined the accuracy of sarcasm detection improves significantly if the program knows not just what was said, but where it was said.
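
The "statistically sarcastic word" failure mode can be reproduced with a minimal word-count model. The sketch below is not Bamman's classifier; it is an assumed toy that scores text by how often its words appeared in a handful of invented sarcastic examples, and it duly mislabels a sincere sentence that happens to contain a loaded word.

    # Toy bag-of-words sarcasm scorer -- illustrates the pitfall, not Bamman's method.
    from collections import Counter

    sarcastic = ["oh great another monday", "yeah great job genius", "wow what a great idea"]
    sincere   = ["the meeting is on monday", "thanks for the help", "see you at lunch"]

    def word_rates(texts):
        counts = Counter(w for t in texts for w in t.split())
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    sarc_rates, sinc_rates = word_rates(sarcastic), word_rates(sincere)

    def sarcasm_score(text):
        # Positive score => the words look more "sarcastic" than "sincere" on average.
        words = text.split()
        return sum(sarc_rates.get(w, 0) - sinc_rates.get(w, 0) for w in words) / len(words)

    print(sarcasm_score("oh great another flat tire"))                 # flagged, plausibly sarcastic
    print(sarcasm_score("this is a great restaurant highly recommended"))  # also positive: a false alarm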


European Consortium Develops New Approaches for Dealing With Big Data
Johannes Gutenberg University Mainz (08/14/15)

Researchers at Johannes Gutenberg University Mainz are taking part in the BigStorage project, funded by the European Union's (EU) Horizon 2020 initiative. The project seeks to develop new approaches to dealing with big data, ranging from basic theoretical research to the development of complex infrastructures and software packages. The researchers say big data can only be used efficiently through algorithms for understanding the data and their appropriate application in highly scalable systems. The BigStorage project also will play an important role in training researchers and developers internationally as part of the EU's Innovative Training Network. The work being undertaken at the Data Center at Mainz University will focus on the impact of new storage technologies and the convergence of high-performance computing and big data. Several other entities also are taking part in the project, including the Madrid Technical University, the French National Institute for Computer Science and Applied Mathematics, the Foundation for Research and Technology in Greece, Seagate Systems in the U.K., the German Climate Computing Center, the French Alternative Energies and Atomic Energy Commission, and Fujitsu Technology Solutions.


Software Can Automatically Critique Composition of Digital Photographs
Penn State News (08/13/15) Matt Swayne

Pennsylvania State University (PSU) researchers have developed an analytical algorithm that examines the composition of elements in a digital photograph and offers constructive feedback. PSU professor James Wang says the algorithm also provides feedback about the perceived composition of the photo and offers examples of similarly composed images of high aesthetic value. Co-researcher and professor Jia Li notes the software relies on psychological theories of human vision and on people's opinions to help classify pictures. The algorithm's development was assisted by graduate students who labeled numerous images from a collection on photo.net, categorizing each image's composition as horizontal, vertical, centered, diagonal, or textured. The labeled images serve as the training dataset that teaches the algorithm to classify photos. The software conducts a pixel-by-pixel analysis to extract features from a picture and then applies statistical analysis to classify it and compare it to highly aesthetic images. The software can be placed on a server that is accessible from a mobile phone. Wang says the project's goal from the outset was to aid photographers, and notes "if you are an amateur photographer then, potentially, a computer can analyze your photograph's composition and help you improve it."
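
The pipeline the article describes (extract features, then compare statistically against labeled exemplars) can be sketched at toy scale. The code below is not the PSU software; it is a minimal nearest-neighbor classifier over invented three-number feature vectors that stand in for the real pixel-level features.

    # Toy nearest-neighbor composition classifier -- a stand-in for the PSU pipeline,
    # using made-up 3-number feature vectors instead of real pixel-level features.
    import math

    # Hypothetical training set: (feature vector, composition label) pairs, where the
    # numbers might represent, say, dominant edge angle, symmetry, and texture energy.
    training = [
        ((0.05, 0.90, 0.20), "horizontal"),
        ((0.95, 0.10, 0.20), "vertical"),
        ((0.50, 0.50, 0.10), "centered"),
        ((0.45, 0.40, 0.90), "textured"),
        ((0.70, 0.20, 0.30), "diagonal"),
    ]

    def classify(features):
        """Return the label of the nearest labeled example (Euclidean distance)."""
        dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(training, key=lambda item: dist(features, item[0]))[1]

    # A new photo whose (hypothetical) features lean strongly horizontal:
    print(classify((0.10, 0.85, 0.25)))   # -> "horizontal"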


Making Sense of Cyberinfrastructure
HPC Wire (08/17/15) Jan Zverina

Cyberinfrastructure can be both an integrated resource and a way for researchers to expand the limits of their fields. The U.S. National Science Foundation's (NSF) Jim Kurose sees a need for his agency to treat cyberinfrastructure as a driver of the national economy and of global competitiveness in areas including advanced manufacturing, visualization, drug discovery, and personalized medicine. "There is the notion of this important interplay between industry, federal government, and academia in the area of computing as well as cyberinfrastructure," Kurose told attendees at the 2015 eXtreme Science and Engineering Discovery Environment (XSEDE) conference. He cites reference architectures, or models, as a way to help scientists exploit existing cyberinfrastructure resources. Kurose says the models are designed to expedite science by helping researchers understand their place in the overall scheme, so they know what contributions they need to make and where they can reuse others' previous contributions. "Reference models are really all about talking to other people about what you're doing so they can see their place in the larger picture of what you're doing," Kurose says. He notes NSF should consider reference architectures and existing reusable cyberinfrastructure established by XSEDE and similar communities.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe