Association for Computing Machinery
Welcome to the December 14, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.


Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors
The New York Times (12/11/15) John Markoff

A coalition of prestigious Silicon Valley investors and technology companies announced on Friday their intention to establish an artificial intelligence (AI) research center to develop a "digital intelligence" for the betterment of mankind. They plan to invest $1 billion in the nonprofit OpenAI facility, with the long-term goal of creating an "artificial general intelligence" that can perform any intellectual task a human can, according to founding investor Elon Musk. He says one of the motivations behind OpenAI is his concern about AI being used to create machines that might turn against humanity. "We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity," Musk says. OpenAI's founders say the project's development will be funded on a yearly basis. The investor group says it is committed to guaranteeing that advanced artificial intelligence tools remain publicly accessible. "We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible," they note. Google machine-learning expert Ilya Sutskever will serve as OpenAI's research director, and the group initially will consist of seven scientists.
View Full Article - May Require Free Registration

Online Degree Hits Learning Curve
The Wall Street Journal (12/13/15) Melissa Korn

The Georgia Institute of Technology's (Georgia Tech) launch of an inexpensive online computer science master's degree program two years ago has had mixed results, with 2,789 students enrolled this semester and more than 1,300 applying for each new term. However, among the pitfalls the program is experiencing is slower-than-anticipated progress by students, according to Georgia Tech College of Computing associate dean Charles Isbell Jr. Students enroll in an average of 1.4 courses each term, and the less-than-$7,000 price of the degree may be attracting students who want only to dabble in a few classes rather than earn credit, which drags retention rates down. Nearly 80 percent of the program's students are from the U.S., and many of them already are employed. Meanwhile, the majority of students attracted to the campus-based program are foreign-born. Nevertheless, Isbell is optimistic about the program's future. "It wouldn't surprise me if three years from now we're talking about 10,000 students instead of 3,000 students," he says. "This is sustainable and this is scalable."
View Full Article - May Require Paid Subscription

Facebook Joins Stampede of Tech Giants Giving Away Artificial Intelligence Technology
Technology Review (12/10/15) Tom Simonite

Facebook is open-sourcing the designs of Big Sur, a new computer server the company says can provide more power for artificial intelligence (AI) software. The Big Sur servers are twice as fast as Facebook's previous servers, and they will help the company make new discoveries in machine learning and AI, says Facebook AI Research's Serkan Piantino. The decision to open source the designs is the latest in a string of similar announcements from major technology firms that are giving away AI technology. The trend is seen as a way to accelerate progress in the broader field while also helping technology companies improve their reputations and make important hires. In November, for example, Google opened up the TensorFlow software that powers the company's speech recognition and image search. Facebook developed Big Sur with Nvidia, and the hardware can be used to run Google's TensorFlow. Facebook decided to open source Big Sur because the social networking company is well placed to capitalize on any new ideas the move unlocks, according to Facebook AI Research's Yann LeCun. Open source projects have played a major role in establishing large-scale databases and data-analysis techniques.

States Look to Expand Computer Science Classes
U.S. News & World Report (12/10/15) Lauren Camera

Only 10 percent of U.S. schools currently offer computer science classes, according to the Information Technology and Innovation Foundation, and 90 percent of schools do not offer any type of computer programming coursework. To counter these trends, many states are trying to encourage school districts to offer computer science courses by amending graduation requirements to either allow or mandate that such courses count toward math or science requirements. Fourteen states currently permit students to fulfill a math, science, or foreign language high school credit by completing computer science classes, and Louisiana, Massachusetts, Texas, and Virginia award a special diploma to graduates who have earned certain computer science credits. Although requirements vary widely among the 50 states, any momentum toward increased computer science education should help close the computer science skills gap, according to the Education Commission of the States' Jennifer Zinth. However, a lack of qualified computer science teachers stands in the way of that goal. Microsoft, among others, is working to help certify computer science teachers via training programs and with the help of computer science professionals.

Computing With Time Travel
National University of Singapore (12/09/15)

About 10 years ago, Google's Dave Bacon showed a time-traveling quantum computer could quickly solve a class of problems, known as NP-complete, that are notoriously difficult. However, Bacon's quantum computer had to travel around "closed time-like curves," paths through the fabric of spacetime that loop back on themselves. General relativity allows such paths to exist through contortions in spacetime known as wormholes, but physicists argue something must prevent them from arising because they would threaten causality. However, National University of Singapore (NUS) researchers have shown a quantum computer can solve otherwise-intractable problems even if it is traveling along "open time-like curves," which do not create causality problems. The researchers found these curves do not allow direct interaction with anything in the object's own past, meaning the time-traveling particles, and the data they contain, never interact with themselves. Nevertheless, the strange quantum properties that permit "impossible" computations are left intact, according to the researchers. Quantum particles sent on a time loop could gain super computational power even though they never interact with anything in the past, because some information is stored in the entangling correlations, and that is what is being harnessed, according to NUS researcher Jayne Thompson.

Stanford-Led Skyscraper-Style Chip Design Boosts Electronic Performance by Factor of a Thousand
Stanford Report (12/09/15) Ramin Skibba

Stanford University engineers are leading a multi-institution effort to develop a revolutionary high-rise architecture for computing. The processors and memory chips in modern computer systems are laid out like single-story structures in a suburb, a layout that wastes time and energy. The research team is pursuing a more city-like design, which involves building layers of processors and memory directly atop one another, connected by millions of electronic "elevators" that can move more data over shorter distances than traditional wires while also using less energy. The key will be using non-silicon materials that can be fabricated at much lower temperatures than silicon, so processors can be built on top of memory without the new layer damaging the one below. The team calls the approach Nano-Engineered Computing Systems Technology (N3XT) and has demonstrated a working prototype of a high-rise chip. N3XT high-rise chips are based on carbon nanotube (CNT) transistors, which are faster and more energy-efficient than silicon transistors. Moreover, in the N3XT architecture, CNT layers can be manufactured and placed above and below other layers of memory. N3XT systems will "outperform conventional approaches by a factor of a thousand," says Stanford professor H.-S. Philip Wong.

Robot Revolution Raises Urgent Societal Issues Not Yet Addressed by Policy
University of Sheffield (12/10/15) Clare Parkin

A group of more than 20 of the world's leading experts on emerging technology has formed the Foundation for Responsible Robotics (FRR) to promote the responsible use of artificial intelligence (AI) and robotics technologies and to warn about their potential dangers. "Despite the disruptive impact of the increasing automation in our work places, our streets, and our homes, only lip service is being paid to the long list of potential societal hazards," warns Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield, co-founder of the FRR, and chair of its executive board. Sharkey says among the most immediate risks automation poses is an increase in joblessness. Recent reports by the Bank of England and the Bank of America have warned new automation technologies could eliminate millions of jobs. "We are rushing headlong into the robotics revolution without consideration for the many unforeseen problems lying around the corner," Sharkey says. "It is time now to step back and think hard about the future of the technology before it sneaks up and bites us when we are least expecting it." The goal of the FRR is to promote robotics and AI research alongside governmental policies that will ensure these technologies are used responsibly.

Stanford Team Develops Software to Predict and Prevent Drone Collisions
Stanford Report (12/10/15) Ian Chipman

The Stanford Intelligent Systems Laboratory (SISL) is developing software that could help make it safe to fly unmanned drones in congested areas. The software would alert multiple drones when a collision is possible and calculate the maneuvers necessary to avoid accidents. The U.S. National Aeronautics and Space Administration (NASA) Ames Research Center is leading an effort to build an unmanned aerial traffic management system, and Stanford's software would provide this largely cloud-based and automated system with automated conflict-avoidance capabilities. A recent paper from Stanford researchers details a conflict-avoidance algorithm designed to minimize the threat of collisions among low-altitude unmanned aircraft. The team's cloud-computing architecture decomposes multi-aircraft conflicts into pairwise problems and quickly chooses the best action for each pair of drones from a table predicting each drone's flight path. Simulations showed the pairwise solution offers significant safety improvements, faster decision times, and decreased alert rates. The team plans to deliver an updated version of the software for a project NASA plans to complete by 2019.
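The pairwise decomposition the summary describes can be sketched in a few lines. The following is a hypothetical illustration, not the SISL code: every name, the vertical-rate actions, and the separation-based utility are invented for the example. Each drone scores every candidate maneuver against each intruder with a lookup-style pairwise utility (here, predicted vertical separation), then picks the action whose worst pairwise outcome is best (max-min fusion).

```python
# Hypothetical sketch of pairwise conflict decomposition for drone
# collision avoidance (illustrative only; not the actual SISL system).
# Each candidate action maps to a vertical rate in meters per second.
ACTIONS = {"maintain": 0.0, "climb": 1.0, "descend": -1.0}

def pair_utility(own_alt, intruder_alt, own_rate, intruder_rate=0.0, horizon=10.0):
    """Utility of one drone-pair encounter: predicted vertical
    separation (meters) after `horizon` seconds, capped at 50 m."""
    sep = abs((own_alt + own_rate * horizon) - (intruder_alt + intruder_rate * horizon))
    return min(sep, 50.0)

def best_action(own_alt, intruder_alts):
    """Fuse the pairwise subproblems: choose the action whose
    worst-case pairwise utility is highest."""
    def worst_case(action):
        rate = ACTIONS[action]
        return min(pair_utility(own_alt, alt, rate) for alt in intruder_alts)
    return max(ACTIONS, key=worst_case)
```

For example, a drone flanked by intruders 10 m above and 10 m below holds altitude, because climbing or descending would close one of the gaps; the real system replaces this toy utility with a precomputed table over predicted flight paths.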

New Lie-Detecting Software From U-M Uses Real Court Case Data
The University Record (12/10/2015) Nicole Casal Moore

Researchers from the University of Michigan (U-M) are using machine-learning techniques to develop new lie-detecting software. The team is training the software on video of media coverage of actual court trials. The prototype considers the words and gestures of the speaker, and does not need to touch the subject in order to work. The group reports the software was up to 75 percent accurate in identifying who was being deceptive, as defined by trial outcomes, compared with slightly more than 50 percent for humans. The software associated more hand movement, trying to sound more certain, looking questioners in the eye, and other behaviors with lying individuals. "There are clues that humans give naturally when they are being deceptive, but we're not paying close enough attention to pick them up," says U-M professor Rada Mihalcea. She says the software could be a helpful tool for security agents, juries, and mental health professionals. The initiative is part of a larger project to integrate "physiological parameters such as heart rate, respiration rate, and body temperature fluctuations, all gathered with non-invasive thermal imaging," says University of Michigan-Flint professor Mihai Burzo.
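As a toy illustration of this kind of behavioral classifier, a nearest-centroid model can be trained on per-speaker feature vectors. The feature names and numbers below are invented for the example and are not drawn from the U-M study, which uses richer linguistic and gesture features.

```python
# Toy nearest-centroid deception classifier over behavioral features.
# Feature vector (hypothetical): [hand_movement_rate, eye_contact_ratio].
def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(truthful, deceptive):
    """Summarize each class by the centroid of its training examples."""
    return centroid(truthful), centroid(deceptive)

def classify(x, model):
    """Label a new speaker by the nearer class centroid."""
    truth_c, deceit_c = model
    return "deceptive" if sq_dist(x, deceit_c) < sq_dist(x, truth_c) else "truthful"
```

Consistent with the behaviors the article reports (more hand movement and more direct eye contact among lying speakers), a deceptive example here would sit nearer the high-movement, high-eye-contact centroid.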

Inside The Machine: Hewlett Packard Labs Mission to Remake Computing
TechRepublic (12/08/15) Nick Heath

Hewlett-Packard (HP) Labs director Martin Fink calls the Machine the company's most important research project. HP Labs hopes to create a computer that can handle tasks vastly more complex than is possible today. Because the design of modern machines limits their efficiency, the researchers are developing a new architecture for computing, one that changes how machines store data, according to Fink. The Machine will enable processors to share access to a large pool of "universal memory," which would be non-volatile, retaining data even when power is lost, while reading and writing data far faster than hard disks or solid-state storage. "The goal here is with this architecture we can ingest, store, and manipulate truly massive datasets while simultaneously achieving multiple orders of magnitude less energy per bit," Fink says. Security will be one of the first uses of the Machine, and HP is considering shrinking the architecture down to enable its use for data-intensive tasks such as voice recognition. A prototype is scheduled to launch next year.

Realistic Facial Reconstructions Enhanced by Combining Three Computer Vision Methods
EurekAlert (12/08/15) Jennifer Liu

Researchers at Carnegie Mellon University and Disney Research have found that three computer vision techniques traditionally used to reconstruct three-dimensional (3D) scenes produce better results in capturing facial details when they are performed simultaneously. The methods, photometric stereo (PS), multi-view stereo (MVS), and optical flow (OF), are well-established techniques for reconstructing 3D images, and each has its own strengths and weaknesses that often complement the others. PS is good at capturing the fine detail geometry of faces or other textureless objects by photographing the object under different lighting conditions. The technique is often used to enhance the detail of MVS, but requires OF to compensate for the 3D motion of the object over time. The researchers combined PS, MVS, and OF into a single technique called photogeometric scene flow (PGSF), and were able to create synergies that improved the quality and detail of the resulting 3D reconstructions. "PGSF could prove extremely valuable because it can capture dynamically moving objects in high detail and accuracy," says Disney researcher Paulo Gotardo. The researchers found facial details such as skin pores, eyes, brows, nostrils, and lips obtained via PGSF were superior to those obtained using other techniques. "The PGSF technique also can be applied to more complex acquisition setups with different numbers of cameras and light sources," says Disney researcher Iain Matthews.
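Of the three ingredients, photometric stereo is the simplest to illustrate. Under a Lambertian reflectance model, a pixel's intensity under known light direction l_k is I_k = albedo * (l_k . n), so three lights give a 3x3 linear system for g = albedo * n. The sketch below is a minimal single-pixel illustration of that classic formulation, not the PGSF implementation; all function names are invented, and the system is solved with Cramer's rule for self-containment.

```python
# Minimal single-pixel Lambertian photometric stereo (illustrative sketch).
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(L, I):
    """Solve the 3x3 linear system L g = I via Cramer's rule."""
    d = det3(L)
    g = []
    for col in range(3):
        m = [row[:] for row in L]  # copy L, then swap in the right-hand side
        for r in range(3):
            m[r][col] = I[r]
        g.append(det3(m) / d)
    return g

def photometric_stereo(lights, intensities):
    """Recover albedo and unit surface normal from three intensity
    measurements under three known, non-coplanar light directions."""
    g = solve3(lights, intensities)        # g = albedo * n
    albedo = sum(c * c for c in g) ** 0.5  # albedo is the magnitude of g
    n = [c / albedo for c in g]            # unit normal is the direction of g
    return albedo, n
```

Real systems solve this as a least-squares problem over many lights and every pixel; in PGSF, MVS supplies the coarse geometry and OF the frame-to-frame motion that PS alone cannot recover.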

Columbia Engineers Build Biologically Powered Chip
Columbia University (12/07/15) Holly Evarts

Columbia University researchers say they have harnessed the molecular machinery of living systems to power an integrated circuit from adenosine triphosphate (ATP), the energy currency of life. The team achieved the breakthrough by packaging a conventional solid-state complementary metal-oxide semiconductor integrated circuit with an ATP-harvesting "biocell." The researchers report the hybrid system pumped ions across the artificial lipid bilayer membrane. The team is exploring how to isolate a desired function of living systems and interface it with electronics. The researchers note a system that combines the power of solid-state electronics with the capabilities of biological components holds great promise. "With appropriate scaling, this technology could provide a power source for implanted systems in ATP-rich environments such as inside living cells," says Columbia Ph.D. student Jared Roseman, who led the team. For example, dogs would not be needed to sniff for bombs because the molecules that do the sensing could be taken from them, notes Columbia professor Ken Shepard. "We don't need the whole cell," he says. "We just grab the component of the cell that's doing what we want. For this project, we isolated the ATPases because they were the proteins that allowed us to extract energy from ATP."

Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe