ACM TechNews
Association for Computing Machinery
Welcome to the September 26, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


Researchers Restore First-Ever Computer Music Recording Generated on Alan Turing's Computer
Agence France-Presse (09/26/16)

Researchers from the University of Canterbury in New Zealand have restored the first recording of computer-generated music, created in 1951 on a computer built by Alan Turing. The researchers say the recording, which laid the foundation for the development of synthesizers and modern electronica, shows that Turing was a musical innovator. "Alan Turing's pioneering work in the late 1940s on transforming the computer into a musical instrument has been largely overlooked," they say. When the researchers analyzed the 12-inch acetate disc containing the music, they found the audio was distorted. "The frequencies in the recording were not accurate," says Canterbury professor Jack Copeland. "The recording gave at best only a rough impression of how the computer sounded." The researchers fixed the recording by adjusting the speed of the audio, compensating for a "wobble" in the recording, and filtering out extraneous noise. The recording, which captures three melodies, was made 65 years ago by a BBC outside-broadcast unit at the Computing Machine Laboratory in Manchester, U.K. Although Turing programmed the first musical notes into a computer, it was another computer scientist, Christopher Strachey, who was the first to string them together into tunes.
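The speed correction described above amounts to resampling the audio by the estimated playback-speed error: a disc played back too fast raises every frequency by the same factor, and resampling undoes it. Below is a minimal sketch of that idea; the sample rate, test tone, and 5% speed factor are illustrative assumptions, not figures from the actual restoration.

```python
import numpy as np

sr = 8000                      # sample rate (Hz), illustrative
t = np.arange(sr) / sr         # one second of audio
true_f = 440.0                 # the tone the computer actually played
speed_error = 1.05             # playback ran 5% fast

# The "recording": every frequency shifted up by the speed error
recorded = np.sin(2 * np.pi * true_f * speed_error * t)

def dominant_freq(x, sr):
    """Frequency of the strongest FFT bin, in Hz."""
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), 1 / sr)[np.argmax(spectrum)]

# Correct the speed: read the recording at 1/1.05 of its original rate
n = len(recorded)
corrected = np.interp(np.arange(n) / speed_error, np.arange(n), recorded)

print(dominant_freq(recorded, sr))   # 462 Hz -- pitch too high
print(dominant_freq(corrected, sr))  # 440 Hz -- pitch restored
```

The real restoration also had to estimate the speed error and remove a time-varying "wobble," which is considerably harder than this fixed-factor toy.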


Next Generation of Statistical Tools to Be Developed for the Big Data Age
Lancaster University (09/21/16)

Researchers at Lancaster University's Data Science Institute and the University of Cambridge's Statistical Laboratory in the U.K. are leading StatScale, a program developing a new generation of statistical tools for extracting insights from big data. "The ubiquity of sensors in everyday systems and devices...means there is enormous potential for societal and economic benefit if information can be extracted effectively," says Lancaster professor Idris Eckley. "The volume, scale, and structure of this contemporary data poses fundamentally new and exciting statistical challenges that cannot be tackled with traditional methods. Our aim is to develop a paradigm-shift in statistics, providing a new statistical toolbox to tackle, and capitalize on, these huge data streams." Cambridge professor Richard Samworth says the StatScale project will devise the underlying theoretical and methodological bases for next-generation scalable statistical algorithms. Engineering and Physical Sciences Research Council CEO Tom Rodden says the tools stemming from the project are needed to reliably interpret big data to yield economic and societal benefits. The techniques and models that emerge from StatScale will be piloted by industrial partners such as Shell U.K. and the Office for National Statistics so they can be rapidly tested and polished in real-world scenarios.


Germany to Create World's First Highway Code for Driverless Cars
New Scientist (09/21/16) Sally Adee

The first legal framework for autonomous vehicles is outlined in a bill recently proposed by German transport minister Alexander Dobrindt, governing how such cars perform in potentially deadly crashes. The proposal specifies that a driverless car always opts for property damage over personal injury, that it never discriminates between humans based on categories such as age or race, and that if a human driver removes their hands from the steering wheel, the car's manufacturer is liable for a crash. Dobrindt says the bill will level the legal playing field for human motorists and autonomous cars. "The change to the road traffic law will permit fully automatic driving," he says. Dobrindt and others also support a rule requiring a driver to be sufficiently alert to assume vehicle control within 10 seconds, but some critics think that amount of time is insufficient. For example, Natasha Merat at the U.K.'s University of Leeds estimates people can require up to 40 seconds to refocus, depending on what they were doing at the time. Merat believes some automakers will wait until vehicles can be fully automated, without any human input, due to the current lack of clarity.


New Language Expands on Google's Go
InfoWorld (09/23/16) Serdar Yegulalp

Polish developer Marcin Wrochniak has introduced Have, a computer language that transpiles to and expands on Google's Go. Wrochniak developed Have as a hobby project, with the goal of the language becoming a "companion" to Go that addresses some of its common "landmines." One of the most obvious differences between the two is formatting; Go uses curly braces similar to C/C++, while Have uses block indents like Python. Other differences address idiosyncrasies in Go. For example, variable declarations, structs, and interfaces have all been modified in Have to work more consistently with one another and to avoid internal inconsistencies that are a common source of bugs. Wrochniak also plans to add generics, which will enable programmers to create constructs that use type parameters and make it possible to extend Have in ways not readily possible in Go. Have also features "specializations," which let generics use different code based on the type in question. Although many of the features Wrochniak wants to provide have not been implemented yet, Go's role as a platform for language innovation is notable.


Vint Cerf's Dream Do-Over: 2 Ways He'd Make the Internet Different
IDG News Service (09/23/16) Katherine Noyes

Google chief Internet evangelist and former ACM president Vint Cerf, considered a father of the Internet, said last Thursday at the Heidelberg Laureate Forum in Germany that he would change a few things about the Internet's creation if he could do it again. "If I could have justified it, putting in a 128-bit address space would have been nice so we wouldn't have to go through this painful, 20-year process of going from IPv4 to IPv6," Cerf said. He also said he would like to have added public key cryptography. However, neither idea was feasible at the time Cerf was helping to create the Internet. A 128-bit address space would not have seemed realistic back then, he said, because of the effort's experimental mindset at the time. He noted there was a debate about the possibility of variable-length addresses, but supporters of the idea were ultimately overruled because of the extra processing power such addresses would have required. "Because computers were so expensive back then, we rejected the idea," Cerf said. He also noted that public key cryptography had only recently emerged when Internet protocols were being standardized in the late 1970s. "I didn't want to go back and retrofit everything, so we didn't include it," Cerf said.
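The scale gap Cerf alludes to is easy to check directly: IPv4's 32-bit addresses against the 128-bit space IPv6 ultimately adopted.

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)                    # 4,294,967,296 -- exhausted in practice
print(f"{ipv6_addresses:.1e}")           # roughly 3.4e+38
print(ipv6_addresses // ipv4_addresses)  # 2**96 times as many addresses
```

In other words, every single IPv4 address corresponds to a block of 2**96 IPv6 addresses, which is why a second transition is not expected.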


New Computer Coding Program Boasts No Courses or Professors
U.S. News & World Report (09/22/16) Lauren Camera

A new computer programming educational program called 42 USA, funded by French billionaire Xavier Niel, opened in Silicon Valley last week and enrolled its first 150 students. The program, based at a brand-new 200,000 sq.-ft. facility, offers a free three- to five-year coding curriculum for anyone between the ages of 18 and 30, regardless of their programming experience. However, applicants must pass two rigorous online logic exams to be invited to a month-long, 24/7 orientation during which they must solve a series of increasingly difficult problems. Niel launched the first 42, named after the mysterious number from the book "The Hitchhiker's Guide to the Galaxy," three years ago in France. 42 USA's program does not offer any formal courses or retain any professors to instruct students. Instead, enrollees work in groups to complete projects graded by their peers, earning points as they go. Some students are drawn to 42 USA to build real-world experience that could serve them well in landing their first job. One student, Truman State University graduate Nate Engle, says his physics degree proved to be an inadequate job qualifier, and 42 USA's program offers him a less-burdensome alternative to learning coding by himself.


Stealing an AI Algorithm and Its Underlying Data Is a 'High-School Level Exercise'
NextGov.com (09/22/16) Dave Gershgorn

Cornell Tech researchers have demonstrated the ability to remotely reverse-engineer machine-learning algorithms, essentially stealing artificial intelligence (AI) products and using them for free, by accessing an application programming interface (API). In addition, after the algorithm has been copied, it can be coerced into producing examples of the potentially proprietary data on which it was trained. Google, Microsoft, and Amazon permit developers either to upload their algorithms to the cloud or to use the cloud firm's proprietary AI algorithms, both of which are accessed via APIs. Uploading an algorithm is sensible because the storage and computation are handled on the cloud company's servers, while making proprietary algorithms available this way enables companies to charge for their use without exposing the code. The Cornell Tech team beat this system by making standard requests to the AI algorithm thousands of times through the API and piecing together its function. "In hindsight, it was just blatantly obvious," says Cornell Tech professor Thomas Ristenpart. "It's kind of a high-school level exercise." To test their ability to recreate a stolen algorithm's training data, the researchers ran the attack on a public set of faces and were able to reconstruct all of them.
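The query-based extraction idea can be illustrated with a deliberately simplified toy, not the Cornell Tech attack itself: if a hypothetical prediction API returns a raw confidence score from a linear model, an attacker can recover the model's parameters exactly with a handful of well-chosen queries.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)   # the "proprietary" model weights
b_true = 0.7                  # and bias, hidden behind the API

def api(x):
    """Stand-in for a cloud prediction API returning a confidence score."""
    return float(w_true @ x + b_true)

# Attack: query at the origin to learn the bias, then at each basis
# vector to learn one weight at a time -- d + 1 queries in total.
b_est = api(np.zeros(d))
w_est = np.array([api(np.eye(d)[i]) - b_est for i in range(d)])
```

Real services return rounded probabilities over nonlinear models, so the published attacks instead solve systems of equations or fit surrogate models to many thousands of query responses, but the principle is the same: every answer the API gives away is a constraint on the hidden parameters.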


The Future of Computing Research: Industry-Academic Collaborations
CCC Blog (09/21/16) Helen Wright

A roundtable discussion of industry and academic experts organized by the Computing Community Consortium (CCC) yielded a report on the industry-academic collaboration landscape and the actions that could improve it. Suggested strategies include setting up a way to measure and benchmark industry-academic interactions and establishing a repository of best practices for such interactions. A second recommendation is to create mechanisms that support career paths mixing elements of a traditional academic career in a university research and education environment with tenure at a new or established company. A third action would make infrastructure--including advanced computing and devices, large datasets, and novel facilities--more broadly accessible to the research community for the benefit of industry, academia, and education. A fourth suggestion is to convene a long-term forum or entity around industry-academic engagement. The report has led to a CCC-sponsored program to catalyze and nurture alliances between industry and academic research via mechanisms for early career researchers in academia and industry representatives to interact and investigate collaboration. The CCC has deployed the program via the four U.S. National Science Foundation-sponsored Big Data Regional Innovation Hubs.


Cache Management Improved Once Again
MIT News (09/21/16) Larry Hardesty

Researchers from the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory have updated their breakthrough approach to managing memory on computer chips. A year ago, the team unveiled the Tardis cache-coherence scheme, but it assumed sequential consistency, a strict ordering standard that most modern chips do not enforce; for performance, they instead implement relaxed consistency models that let instructions on different cores take effect out of order. The researchers have now given each core two counters, one for read operations and one for write operations, enabling Tardis to accommodate these more relaxed consistency standards. The team says Tardis will use circuit space much more efficiently as chips continue to comprise more and more cores. In chips with hundreds of cores, the scheme could free up 15 percent to 25 percent of on-chip memory, enabling much more efficient computation. "The new work is important because it's directly related to the most popular relaxed-consistency model that's in current Intel chips," says Larry Rudolph, a senior researcher at the Two Sigma hedge fund. "What's really exciting is that Tardis potentially is a model that will span consistency between processors, storage, and distributed file systems."


Reconfigurable Chaos-Based Microchips Offer Possible Solution to Moore's Law
NCSU News (09/20/16) Tracey Peake

Nonlinear, multi-functional integrated circuits could lead to novel computer architectures that can do more with fewer transistors, according to researchers at North Carolina State University (NCSU). As the number of transistors on integrated circuits increases to keep up with processing demands, the semiconductor industry is seeking new ways to create computer chips without continually shrinking the size of individual transistors. The NCSU researchers used chaos theory to exploit a circuit's nonlinearity, enabling the same transistors to be programmed to perform different tasks. "In current processors you don't utilize all the circuitry on the processor all the time, which is wasteful," says NCSU researcher Behnam Kia. "Our design allows the circuit to be rapidly morphed and reconfigured to perform a desired digital function in each clock cycle." Kia and NCSU professor William Ditto developed the design and fabrication of the integrated circuit chip, which is compatible with existing technology and utilizes the same processes and computer-aided design tools as existing computer chips. Ditto says the design is approaching commercial chips in size, power consumption, and ease of programming, and could be commercially relevant within a few months.
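The reconfigurability idea can be sketched with a toy chaos-based gate, a simplified illustration of the general chaos-computing concept rather than the NCSU circuit: a single nonlinear element computes different Boolean functions depending only on a control parameter, here a decoding threshold.

```python
def chaotic_gate(a, b, threshold):
    """One nonlinear element; the threshold selects which gate it is."""
    x = 0.25 * (a + b)         # encode the two binary inputs
    x = 4.0 * x * (1.0 - x)    # one iteration of the chaotic logistic map
    return int(x > threshold)  # decode the output against the threshold

# The same element acts as OR or as AND depending only on the threshold.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, chaotic_gate(a, b, 0.5), chaotic_gate(a, b, 0.9))
```

Changing the threshold between clock cycles "morphs" the element from one logic function into another without touching the underlying hardware, which is the spirit of the reconfigurable design described above.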


A New 3D Viewer for Improved Digital Geoscience Mapping
Uni Research (09/20/16) Andreas R. Graven

Researchers from Uni Research in Norway and the University of Aberdeen in the U.K. have collaborated on LIME, software designed for virtual model interpretation and visualization. A high-performance three-dimensional (3D) viewer, LIME enables geoscientists to explore their 3D datasets and perform measurements, analysis, and advanced visualization on different data types. By combining the software with models collected using digital mapping techniques, geologists can study exposed outcrops and rock formations that are otherwise very difficult to access. One of the key features of the software is the ability to integrate images and project them onto the 3D models. "In the end, we can make a very nice visual representation to show the analysis and the project datasets, which is very useful for geoscientists who want to present their results, for example to their collaborating partners and sponsors, to the public, or at conferences," says Uni Research's Simon Buckley. He notes researchers will be able to download LIME and use it on their own laptops.


Hacker-Proof Code Confirmed
Quanta Magazine (09/20/16) Kevin Hartnett

Researchers have developed hack-proof software code using formal verification, an approach in which each statement follows logically from the one before it, so a complete program can be checked with the same certainty with which mathematicians prove theorems. "You're writing down a mathematical formula that describes the program's behavior and using some sort of proof checker that's going to check the correctness of that statement," says Microsoft Research's Bryan Parno. He says the challenge of creating formally verified code is mathematically rendering a specification or goal in a machine-readable manner. Programming languages and proof-assistant programs such as Coq and Isabelle have done much to mitigate the burden of formal verification, while the push for its adoption has gathered traction with the advent of the Internet and its security weaknesses, says Princeton University professor Andrew Appel. Additional developments in formal verification's favor include new logical systems offering a framework for computers to reason about code, and enhancements to operational semantics. Examples of the technique's practical application include the U.S. Defense Advanced Research Projects Agency's High-Assurance Cyber Military Systems project, a proof-of-concept for rendering military equipment such as helicopters unhackable.
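As a small taste of what proof assistants check, here is a machine-checkable statement written in Lean 4 syntax (a generic illustration, not connected to the projects above): a proof that reversing a list never changes its length. The proof checker either accepts the argument in full or rejects it; there is no "mostly correct."

```lean
-- A toy machine-checked theorem: reversing a list preserves its length.
-- The `simp` tactic discharges the goal using the library's simplification
-- lemmas (including the fact relating `reverse` and `length`).
theorem reverse_preserves_length (l : List Nat) :
    l.reverse.length = l.length := by
  simp
```

Verifying real systems code works the same way in principle, but the specifications (memory safety, protocol correctness) and the proofs are vastly larger, which is why tool support from Coq, Isabelle, and their relatives matters so much.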


Abstract News © Copyright 2016 INFORMATION, INC.


To submit feedback about ACM TechNews, contact: technews@hq.acm.org
