Association for Computing Machinery
Welcome to the June 22, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE

The Future of the Internet Is at Risk, Say Global Web Experts
U.S. to Have 200-Petaflop Supercomputer by Early 2018
The Inventors of the Internet Are Trying to Build a Truly Permanent Web
Ethnic, Gender Imbalances Plague Computer Science Education
Stanford and White House Host Experts to Discuss Future Social Benefits of Artificial Intelligence
Analog Computing Returns
New Technique Improves Accuracy of Computer Vision Technologies
World's First 1,000-Processor Chip
DARPA Goes 'Meta' With Machine Learning for Machine Learning
Scientific Gains May Make Electronic Nose the Next Everyday Device
Diverting Redirection Spam
Story Time: ONR Researchers Create 'Human User Manual' for Robots
Research Proves the Improbable Can Be Made Possible


The Future of the Internet Is at Risk, Say Global Web Experts
University of Southampton (United Kingdom) (06/21/16)

The Internet has reached a turning point, and action must be taken quickly to ensure its continued openness, security, transparency, and inclusivity via improved governance, according to the Global Commission on Internet Governance's One Internet final report and recommendations issued on Tuesday. "The fundamental question before all of us who want a future that delivers on the promise of the Internet is this: how do we meet the governance challenges the Internet creates, without undermining the very aspects that make it such a powerful platform for economic and social growth?" says University of Southampton professor and former ACM president Wendy Hall. The Commission recommends governments only intercept communications and gather and analyze data over the Internet for legitimate, open, and legal aims that exclude gaining domestic political advantage, industrial espionage, or repression. Another recommendation is for states to coordinate and furnish mutual assistance to restrict damage and deter cyberattacks, and never to protect those associated with the perpetration of cybercrimes. In addition, the Commission says new technologies should be kept interoperable and based on open standards, adhering to the principles of openness to ensure future innovation. Host governments also should give refugees Internet access, the Commission recommends. "The Commission has built a roadmap towards ensuring the future of the Internet," Hall says. "If the roadmap is adopted, the Internet will continue to be civilization's most important infrastructure."


U.S. to Have 200-Petaflop Supercomputer by Early 2018
Computerworld (06/21/16) Patrick Thibodeau

The U.S. plans to have a supercomputer by early 2018 that will offer about twice the performance of China's Sunway TaihuLight system, which achieved 93 petaflops on the Linpack benchmark and has a theoretical peak speed of 124.5 petaflops. The U.S. Department of Energy's (DOE) Oak Ridge National Laboratory is expecting an IBM system, called Summit, which will be capable of 200 petaflops within the next two years. Summit will pair IBM processors with Nvidia graphics-processing units (GPUs). The DOE has two other major supercomputers planned for 2018: Sierra, a planned 150-petaflop IBM system that will be located at the Lawrence Livermore National Laboratory and is scheduled to be available by mid-2018, and Aurora, a Cray/Intel system due by late 2018 at the Argonne National Laboratory. The U.S. government also is pursuing the National Strategic Computing Initiative, which calls for accelerating the delivery of exascale computing, increasing coherence between the technology base used for modeling and simulation and that used for data-analytic computing, charting a path forward to a post-Moore's Law era, and building the overall capacity and capability of an enduring national high-performance computing ecosystem. "[The] strength of the U.S. program lies not just in hardware capability, but also in the ability to develop software that harnesses high-performance computing for real-world scientific and industrial applications," the DOE says.


The Inventors of the Internet Are Trying to Build a Truly Permanent Web
Wired (06/20/16) Klint Finley

Google chief Internet evangelist and former ACM president Vint Cerf is trying to tackle the challenge of preserving online knowledge in the face of the constant obsolescence of older digital communications formats. "I'm concerned about a coming digital dark ages," Cerf says. He and his fellow Internet inventors are collaborating with a new generation of hackers, archivists, and activists to dramatically reimagine the World Wide Web's core technologies to bolster their long-term resilience. Such a Web would be capable of self-archiving and automatic backups, modeled after innovations such as the open source Interplanetary File System. A decentralized Web will need to back up applications and data in addition to Web pages, and several projects are underway to build sites and apps that do not rely on a single server or firm to keep operating, and that are cross-compatible. One issue Cerf and his collaborators are wrestling with is that it is not always possible to know which online content is worth saving. One workaround Cerf suggests is letting Web publishers specify whether others may automatically archive their sites. Cerf says perhaps the biggest challenge to a decentralized Web is incentivizing people to embrace an open Web by creating sufficiently enticing user experiences.
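
The idea behind IPFS that makes such self-archiving plausible is content addressing: a page is located by a cryptographic hash of its bytes rather than by a server's address, so any peer holding a copy can serve it and readers can verify it. The toy sketch below illustrates only that general idea; it is not IPFS's actual data structure or API.

```python
import hashlib

# Toy content-addressed store: data is stored and retrieved by the SHA-256
# hash of its content rather than by a location, so any mirror can serve a
# page and the copy can be verified against its address.
class ContentStore:
    def __init__(self):
        self._blocks = {}  # address (hex digest) -> bytes

    def put(self, data: bytes) -> str:
        address = hashlib.sha256(data).hexdigest()
        self._blocks[address] = data
        return address

    def get(self, address: str) -> bytes:
        data = self._blocks[address]
        # The address itself lets the reader detect tampering or corruption.
        assert hashlib.sha256(data).hexdigest() == address
        return data

store = ContentStore()
addr = store.put(b"<html>an archived page</html>")
print(addr, store.get(addr))
```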


Ethnic, Gender Imbalances Plague Computer Science Education
CIO (06/20/16) Kenneth Corbin

The U.S. Education Department's Melissa Moritz laments that computer science education is disproportionately unavailable to low-income students, and contends conscious or unconscious prejudices are discouraging many female and minority students from pursuing computer science careers. "Most of the kids who do not get to participate in computer science are kids of color, kids in low-income communities, and girls," Moritz says. "And there's a number of reasons for that. First, it's not offered in their school. Second, we also have to look at who is either encouraged to take it, either explicitly or implicitly." Moritz says the Obama administration's Computer Science for All initiative aims to make computer science education universally available, while the president's latest budget proposal requests two tracks of funding to support state and district initiatives to expand programming courses in schools. New BSA-sponsored research from the Economist Intelligence Unit found such efforts, in conjunction with the work of Code.org and other nonprofits, are economically warranted, with the software industry contributing $1.07 trillion to U.S. gross domestic product in 2014. The study also found the software industry accounts for about 9.8 million jobs, while U.S. Department of Labor data estimated software developers earned an average salary of $108,760 in 2014.


Stanford and White House Host Experts to Discuss Future Social Benefits of Artificial Intelligence
Stanford News (06/20/16) Bjorn Carey

Stanford University and the White House on Thursday will host a panel of artificial intelligence (AI) visionaries from academia, government, and industry to discuss the responsible real-world integration of AI, with Stanford professors Russ Altman and Fei-Fei Li co-hosting the dialogue. In an interview, Altman and Li discuss the social benefits AI promises. Altman cites AI's ability to combine health data from several streams to more accurately diagnose patients and prescribe personalized therapy, but stresses, "we need to make sure that these capabilities are distributed in a just way so that it does not just benefit a small fraction of society." Li also advocates for considering the socioeconomic, ethical, and legal ramifications of AI-facilitated advances. "AI applications will exist within different established industries, and each industry will have to assess its preparedness and the degree to which current operating assumptions and procedures are ready to accept AI technology," Altman says. Li doubts a one-size-fits-all AI policy can be realized, since AI encompasses diverse subfields that are massive in their own right. She also cites the lack of understanding of AI in society at large. Li says both the media and societal leaders should attempt to span the communication gap between AI researchers/academics and the general public.


Analog Computing Returns
MIT News (06/20/16) Larry Hardesty

Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory and Dartmouth College presented a new analog compiler at last week's ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2016) in Santa Barbara, CA. The compiler can translate between high-level instructions written in a language understandable to humans and the low-level specifications of circuit connections in an analog computer. The researchers say the work could clear a path toward efficient and accurate analog simulations of entire organs, and possibly organisms. The program takes differential equations as input and converts them into the voltages and current flows of an analog chip. The compiler was tested on five sets of equations commonly employed in biological research. Producing an analog implementation took from less than 60 seconds to nearly 60 minutes, depending on the complexity of the test set, far faster than manual design. "With a few transistors, cytomorphic analog circuits can solve complicated differential equations...that would take millions of digital transistors and millions of digital clock cycles," says Dartmouth professor Rahul Sarpeshkar. Once it has a promising algebraic redescription of a set of differential equations, the compiler maps elements of the equations onto circuit elements, a step that in experiments took 14 to 40 seconds per equation.
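
To make concrete what the compiler consumes, the sketch below shows the kind of input it accepts, a small system of ordinary differential equations, and integrates it digitally for reference. The equations here are a made-up two-variable example, not one of the paper's five benchmark systems, and the analog chip would represent the variables as voltages and currents evolving continuously rather than stepping through clock cycles.

```python
# A hypothetical two-variable ODE system of the general kind such a compiler
# takes as input (not one of the paper's actual benchmark equation sets).
# It is integrated digitally with forward Euler for reference; on the analog
# chip each state variable would be a voltage or current and each term a
# circuit element.
def step(a, b, dt, k1=1.0, k2=0.5, k3=0.3):
    da = k1 - k2 * a * b           # A is produced at rate k1 and consumed with B
    db = k2 * a * b - k3 * b       # B is produced from A and decays at rate k3
    return a + dt * da, b + dt * db

a, b = 1.0, 0.1
for _ in range(10_000):            # thousands of digital steps per trajectory,
    a, b = step(a, b, dt=1e-3)     # the cost the analog circuit avoids
print(a, b)
```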


New Technique Improves Accuracy of Computer Vision Technologies
NCSU News (06/20/16) Matt Shipman

North Carolina State University (NCSU) researchers say they have developed a new technique that improves the ability of computer-vision technologies to identify and separate objects in an image. Computer-vision technologies use algorithms to segment, or outline, the objects in an image by relying on defined parameters, but even small changes in a parameter can lead to very different results. "Some algorithm parameters may work better than others in any given set of circumstances, and we wanted to know how to combine multiple parameters and algorithms to create better image segmentation by computer-vision programs," says NCSU professor Edgar Lobaton. The researchers' technique compiles segmentation data from multiple algorithms and aggregates it, creating a new version of the image. The new image is then segmented again, based on how persistent any given segment is across all of the original input algorithms. "Visually, the results of this technique look better than any given algorithm on its own," Lobaton says. The technique can be used in real time, processing 30 frames per second, because most of its computational steps can be run in parallel rather than sequentially.
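
The article does not spell out the aggregation step, but the notion of keeping only segment boundaries that persist across many algorithms and parameter settings can be sketched roughly as a vote, as below; this is a simplified majority-style illustration, not NCSU's actual method.

```python
import numpy as np

def aggregate_segmentations(label_maps, threshold=0.5):
    """Keep a boundary pixel only where at least `threshold` of the input
    segmentations also place a boundary, i.e. where it is persistent."""
    votes = np.zeros(label_maps[0].shape, dtype=float)
    for labels in label_maps:
        # A pixel is a boundary if its label differs from the neighbor to the
        # right or below (simple 4-neighborhood check).
        boundary = np.zeros_like(labels, dtype=float)
        boundary[:, :-1] += labels[:, :-1] != labels[:, 1:]
        boundary[:-1, :] += labels[:-1, :] != labels[1:, :]
        votes += boundary > 0
    return (votes / len(label_maps)) >= threshold

# Three toy 4x4 label maps, e.g. from different parameter settings.
maps = [np.array([[0, 0, 1, 1]] * 4),
        np.array([[0, 0, 0, 1]] * 4),
        np.array([[0, 0, 1, 1]] * 4)]
print(aggregate_segmentations(maps).astype(int))
```

Because each input segmentation is processed independently, the per-algorithm work parallelizes naturally, which is consistent with the real-time claim above.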


World's First 1,000-Processor Chip
UC Davis News & Information (06/17/16) Andy Fell

Researchers at the University of California, Davis (UC Davis) have designed a microchip that contains 1,000 independently programmable processors. The idea is to break an application into many small pieces, each of which can run in parallel on a different processor, enabling high throughput with lower energy use, says team leader and UC Davis professor Bevan Baas. The energy-efficient "KiloCore" chip, which contains 621 million transistors, has a maximum computation rate of 1.78 trillion instructions per second. The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 watts, low enough to be powered by a single AA battery, according to Baas. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor. Baas says applications have been developed for wireless coding/decoding, video processing, encryption, and other workloads involving large amounts of parallel data, such as scientific data processing and data-center record processing. The KiloCore chip was fabricated by IBM using its 32-nanometer complementary metal-oxide semiconductor (CMOS) technology. "To the best of our knowledge, it is the world's first 1,000-processor chip and it is the highest clock-rate processor ever designed in a university," Baas says.
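
A quick back-of-the-envelope check of the quoted figures (my arithmetic, not from the paper) puts the average per-processor rate at peak around 1.78 GHz and the energy cost at the 0.7-watt operating point at roughly 6 picojoules per instruction:

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
cores = 1000
peak_instr_per_s = 1.78e12           # 1.78 trillion instructions per second
instr_per_s_low_power = 115e9        # 115 billion instructions per second
power_w = 0.7

print(peak_instr_per_s / cores / 1e9, "GHz average per-processor rate at peak")  # ~1.78
print(power_w / instr_per_s_low_power * 1e12, "pJ per instruction at 0.7 W")     # ~6.1
```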


DARPA Goes 'Meta' With Machine Learning for Machine Learning
U.S. Defense Advanced Research Projects Agency (06/17/16)

The U.S. Defense Advanced Research Projects Agency (DARPA) has launched an initiative designed to address the data-science expertise gap. There is an urgent need to develop machine-based modeling for users with no data-science background, according to Wade Shen, a program manager in DARPA's Information Innovation Office. He says DARPA believes it is possible to automate certain aspects of data science and to have machines learn from prior examples how to construct new models. Shen says the Data-Driven Discovery of Models (D3M) program will free researchers from having to design their own empirical models, noting that the construction of empirical models currently is a largely manual process. "Our ability to understand everything from traffic to the behavior of hostile forces is increasingly possible given the growth in data from sensors and open sources," Shen says. "The hope is that D3M will handle the basics of model development so people can apply their human intelligence to look at data in new ways, and imagine solutions and possibilities that were not obvious or even conceivable before." Shen thinks D3M could speed the pace of scientific discovery, deepen intelligence collection, and improve military planning and government logistics.
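
DARPA's announcement does not describe D3M's techniques, but "handling the basics of model development" is in the spirit of automated model selection; a deliberately generic sketch of that idea (using scikit-learn purely for illustration, with no connection to any D3M tooling) might look like this:

```python
# Generic automated model selection: fit several candidate model families and
# keep the one with the best cross-validated score. Illustration only; this is
# not DARPA's D3M approach, which the announcement does not describe.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "k_nearest_neighbors": KNeighborsClassifier(),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```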


Scientific Gains May Make Electronic Nose the Next Everyday Device
UT Dallas News Center (06/16/16) Brittany Magelssen

Researchers at the Texas Analog Center of Excellence (TxACE) at the University of Texas at Dallas are creating an electronic nose to collect and analyze chemicals present in the human breath. The device uses complementary metal-oxide semiconductor integrated circuits, modeled after the technology used to manufacture smartphones and tablets. "We have demonstrated that you can build an affordable electronic nose that can sense many different kinds of smells," says TxACE researcher Navneet Sharma. "When you're smelling something, you are detecting chemical molecules in the air. Similarly, an electronic nose detects chemical compounds using rotational spectroscopy." The rotational spectrometer generates electromagnetic waves and then analyzes how the waves react in the presence of certain chemicals. The system can detect low concentrations of chemicals in human breath with greater sensitivity and accuracy than Breathalyzers. Researchers envision the device will be used in industrial and medical settings to diagnose abnormalities in the gastrointestinal tract and blood; if the devices become available to the public, TxACE director Kenneth O says diseases could be diagnosed earlier and the need for lab testing could be reduced. The team plans to start testing a prototype of a programmable electronic nose in early 2018.


Diverting Redirection Spam
ScienceDaily (06/16/16)

Fuzzy logic could be used to protect computer users from being scammed or phished, or from unwittingly opening malicious sites, according to researchers in India. They say Web browsers could use fuzzy logic to detect a Web page that has been hacked to redirect visitors to a malicious page. The team has developed a system that could work alongside the security tools browsers already have in place to alert users to the presence of malware on a site they attempt to visit. Amity University's Kanchan Hans says the detection system analyzes the characteristics of a given Web address against those of known spammy links and applies fuzzy logic to estimate how likely the suspicious link is to be a problem. Fuzzy logic allows a probability to be calculated from looser rules over the different criteria, so each link can be assigned a confidence level indicating whether it is safe or spam. Hans says in an actual browser, a red, amber, or green light could let users know whether they should visit a site. He says the system was highly accurate in flagging safe and spam sites during testing.
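
The article does not give the system's rules or membership functions, but the general shape of such a fuzzy classifier can be sketched: score a link's suspicious characteristics, combine them into a degree of "spamminess," and map that onto a red/amber/green indication. The features, weights, and thresholds below are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical fuzzy-style scoring of a redirecting link; the real system's
# criteria and membership functions are not described in the article.
def suspicion_score(url: str, redirect_count: int) -> float:
    host = url.split("/")[2] if url.count("/") >= 2 else url
    score = 0.0
    score += min(redirect_count / 5.0, 1.0) * 0.4                     # many redirects
    score += (1.0 if "@" in url or "//" in url.split("://", 1)[-1] else 0.0) * 0.3
    score += min(host.count("-") / 6.0, 1.0) * 0.15                   # long hyphenated host
    score += (1.0 if any(c.isdigit() for c in host) else 0.0) * 0.15  # digits in host
    return score   # a fuzzy degree of "spamminess" in [0, 1], not a hard label

def traffic_light(score: float) -> str:
    if score < 0.3:
        return "green"   # likely safe
    if score < 0.6:
        return "amber"   # uncertain: warn the user
    return "red"         # likely spam or malicious

url = "http://login-update-secure-1234.example.com/redirect"
print(traffic_light(suspicion_score(url, redirect_count=4)))
```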


Story Time: ONR Researchers Create 'Human User Manual' for Robots
Office of Naval Research (06/15/16) Warren Duffie Jr.

Researchers at the U.S. Office of Naval Research and the Georgia Institute of Technology have created Quixote, an artificial intelligence program that teaches robots to read stories, learn acceptable behavior, and understand successful ways to conduct themselves in diverse social situations. The researchers say Quixote could serve as a "human user manual" by teaching robots values through simple stories. For example, if a robot is tasked with picking up a pharmacy prescription for a human as fast as possible, it could take the medicine and leave, interact politely with the pharmacist, or wait in line. Without value alignment and positive reinforcement, the robot might logically decide that robbery is the fastest and least expensive option. Quixote aims to provide value alignment to reward the robot for waiting in line and paying. In developing Quixote, the researchers crowdsourced stories from the Internet, each of which highlighted daily social interactions and socially appropriate behaviors. They fed the data into Quixote to create a virtual agent that earns points and positive reinforcement for emulating the actions of protagonists in the stories. The researchers ran the agent through 500,000 simulations, and it exhibited proper social interactions more than 90 percent of the time.
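
Quixote's mechanism, as described, is reinforcement learning in which story-derived rewards favor protagonist-like behavior; a toy version of the pharmacy example is sketched below, with actions and reward values invented purely for illustration rather than taken from the actual system.

```python
import random

# Toy illustration of story-shaped reward. With only the speed/cost reward the
# agent prefers to steal the medicine; adding a bonus for socially acceptable
# steps (standing in for Quixote's story-derived reinforcement) flips the
# learned choice to waiting in line and paying. All values are invented.
ACTIONS = ["steal_medicine", "wait_in_line_and_pay"]
BASE_REWARD = {"steal_medicine": 5.0, "wait_in_line_and_pay": 3.0}      # speed/cost only
SOCIAL_BONUS = {"steal_medicine": -10.0, "wait_in_line_and_pay": 4.0}   # from stories

def train(use_social_bonus: bool, episodes: int = 5000,
          alpha: float = 0.1, epsilon: float = 0.1) -> str:
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
        r = BASE_REWARD[a] + (SOCIAL_BONUS[a] if use_social_bonus else 0.0)
        q[a] += alpha * (r - q[a])        # single-step bandit-style value update
    return max(q, key=q.get)

print("without story reward:", train(False))   # -> steal_medicine
print("with story reward:   ", train(True))    # -> wait_in_line_and_pay
```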


Research Proves the Improbable Can Be Made Possible
Texas A&M Today (06/14/16) Rachel Rose

Texas A&M University professor Daniel Jimenez has revolutionized the way research is conducted on microprocessors. He has developed many algorithms that enable a processor to predict whether a branch instruction will change the flow of control in a program, most of them based on neural learning. Jimenez notes that when he first started working on the idea of using neural learning, he found the timing constraints in a microprocessor would be a major obstacle. He says he spent several years "turning 'totally impractical' and 'almost impossible' into reality by inventing various techniques to solve the timing problem as well as improve accuracy." His branch predictors are among the most accurate in the industry and are used in a range of computing platforms. The U.S. National Science Foundation has awarded Jimenez a CAREER grant for his research in this area. His team in the Texas Architecture and Compiler Optimization lab also has developed several dead-block prediction algorithms and improved cache-replacement policies. Dead-block predictors forecast whether a block of data will be used again in the near future, which lets the cache controller decide whether to keep a block or evict it in favor of one that is more likely to be used soon, improving the cache's effective capacity and efficiency.
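
Jimenez's best-known neural predictors are perceptron-based: the predictor takes a dot product of learned signed weights with the recent branch-outcome history and predicts "taken" when the result is non-negative. The sketch below is a simplified, single-perceptron version for illustration only; real hardware designs index a table of perceptrons by branch address and rely on the timing techniques the article alludes to.

```python
# Simplified perceptron branch predictor (illustrative; hardware versions keep
# a table of perceptrons indexed by branch address and pipeline the arithmetic).
HISTORY_LEN = 8
THRESHOLD = 16                      # train on mispredictions or weak outputs

weights = [0] * (HISTORY_LEN + 1)   # weights[0] is the bias weight
history = [1] * HISTORY_LEN         # +1 = taken, -1 = not taken

def output() -> int:
    return weights[0] + sum(w * h for w, h in zip(weights[1:], history))

def predict() -> bool:
    return output() >= 0

def update(taken: bool) -> None:
    global history
    t = 1 if taken else -1
    y = output()
    if (y >= 0) != taken or abs(y) <= THRESHOLD:
        weights[0] += t
        for i, h in enumerate(history):
            weights[i + 1] += t * h
    history = history[1:] + [t]     # shift the global history register

# Example: an alternating taken/not-taken branch is learned within a few updates.
correct = 0
for taken in [True, False] * 50:
    correct += predict() == taken
    update(taken)
print(correct, "of 100 predicted correctly")
```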


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe