Welcome to the June 19, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.



Top500: U.S. Knocked Out of 'Top Three Most Powerful' List
Liam Tung
June 19, 2017

The U.S. is not among the top three in the Top500's latest ranking of the world's fastest supercomputers, the first time in 21 years it has fallen out of the top three. China's Sunway TaihuLight and Tianhe-2 systems rank first and second, respectively, while Switzerland's Piz Daint system ranks third. TaihuLight is the fastest computer in the world as measured by floating-point operations per second (flops), achieving 93 quadrillion flops (93 petaflops) on Linpack performance tests. The U.S.'s fall in the rankings follows a December report warning that U.S. leadership in high-performance computing (HPC) is under immediate threat unless the country commits to a decade-long "surge" in investment to compete with China's accelerating HPC development. Still, U.S. supercomputers account for five of the world's top 10 systems, and 169 systems on the Top500 list overall.

Full Article
Faster Performance Evaluation of 'Super Graphs'
Daegu Gyeongbuk Institute of Science and Technology
June 16, 2017

Researchers at the Daegu Gyeongbuk Institute of Science and Technology (DGIST) in Korea have developed a computer model that produces synthetic data for simulating real-world applications using giant graphs much faster than currently available generators, while also being more resource efficient. The new graph generation model, developed by DGIST's Himchan Park and Min-Soo Kim, reuses compactly maintained data held in extremely fast cache memory during graph generation. The researchers say their TrillionG model creates more realistic synthetic data than earlier models, and can generate larger graphs than earlier generators, as well as similar-sized trillion-edge graphs, in less time using fewer computing resources. "We have demonstrated that TrillionG outperforms the state-of-the-art graph generators by up to orders of magnitude," the researchers say. Park and Kim also believe TrillionG could generate synthetic graphs the size of the human brain connectome using 240 standard personal computers.
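Scalable synthetic-graph generators of this kind typically build on recursive-matrix schemes such as the classic R-MAT model, which picks each edge by repeatedly descending into one quadrant of the adjacency matrix. The minimal Python sketch below illustrates that recursive idea only; the function name and probability values are illustrative defaults, not taken from the TrillionG paper:

```python
import random

def rmat_edge(scale, a=0.57, b=0.19, c=0.19, d=0.05):
    """Pick one edge in a 2^scale x 2^scale adjacency matrix by
    recursively descending into one of four quadrants, chosen with
    probabilities a, b, c, d (which sum to 1)."""
    src = dst = 0
    for level in range(scale):
        half = 1 << (scale - level - 1)  # width of the current quadrant
        r = random.random()
        if r < a:                # top-left: keep both indices
            pass
        elif r < a + b:          # top-right: move right in columns
            dst += half
        elif r < a + b + c:      # bottom-left: move down in rows
            src += half
        else:                    # bottom-right: move in both
            src += half
            dst += half
    return src, dst

# Generate edges of a graph with 2^10 = 1,024 vertices
edges = [rmat_edge(scale=10) for _ in range(100_000)]
```

Because each edge is generated independently, schemes like this parallelize trivially across machines, which is what makes trillion-edge scales feasible.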

Full Article

Facebook Teaches Bots How to Negotiate. They Learn to Lie Instead
Liat Clark
June 15, 2017

The Facebook Artificial Intelligence Research group, in collaboration with the Georgia Institute of Technology, has released code the researchers claim will enable bots to negotiate. "We show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states," the researchers say. The team fed the bots a dataset of natural-language negotiations in which two people decided how to divide a shared set of items that each person valued differently. The bots were initially trained to respond based on the "likelihood" of the direction a human dialogue would take. However, the bots also can be trained to "maximize reward," and this objective led them to lie by "feigning interest in a valueless issue so that it can later 'compromise' by conceding it," according to the researchers. Still, the team says the bots' conversational ability improved, which was the point of the experiment.

Full Article
Cars Could Soon Negotiate Smart Intersections Without Ever Having to Stop
A*STAR Research
June 14, 2017

Researchers at the A*STAR Institute of High Performance Computing in Singapore have developed a model of a system in which smart cars do not have to stop at red lights at intersections. According to the model, each car crosses the intersection in its own virtual bubble of safe space, adjusting speed using adaptive cruise control to produce smooth traffic flow in each direction. The new system also requires traffic lights to be equipped with a communications beacon that gathers and transmits data about the distance and approach speed of vehicles nearing the intersection. Each car feeds that data into an algorithm that plots a safe course through the intersection without having to stop. The algorithm is based on adaptive repulsive force, meaning the closer two cars' trajectories would bring them at an intersection, the stronger their mutual repulsion and the greater the speed adjustment each needs to make to pass the other safely.
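A minimal sketch of the repulsive-force idea: predict how close two straight-line trajectories come, and make the speed adjustment grow as that predicted miss distance shrinks. All names, constants, and the specific force law below are illustrative assumptions, not the published A*STAR algorithm:

```python
import math

def speed_adjustment(p1, v1, p2, v2, k=1.0, eps=0.5):
    """Toy 'repulsive force' between two cars: the closer their current
    trajectories would bring them, the larger the returned adjustment.
    p1, p2 are (x, y) positions; v1, v2 are (vx, vy) velocities.
    k (force gain) and eps (softening term) are made-up constants."""
    # Relative position and relative velocity
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]
    vv = vx * vx + vy * vy
    # Time of closest approach along the current trajectories (clamped to now)
    t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    # Minimum separation distance at that time
    dmin = math.hypot(rx + vx * t, ry + vy * t)
    # Repulsion grows as the predicted miss distance shrinks
    return k / (dmin + eps)
```

Two cars heading straight at each other thus receive a much larger adjustment than two cars on well-separated parallel paths.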

Full Article

New Prospects for Universal Memory
MIPT News (Russia)
June 16, 2017

Researchers at MIPT's Center of Shared Research Facilities in Russia have found a way to control the oxygen concentration in tantalum oxide films produced by atomic layer deposition, a breakthrough they say could be the basis for creating new forms of nonvolatile memory. In an attempt to find an alternative to resistive switching memory (ReRAM), which is not applicable to functional three-dimensional architectures, the MIPT researchers turned to atomic layer deposition (ALD), a chemical process by which thin films can be produced on the surface of a material. "The hardest part in depositing oxygen-deficient films was finding the right reactants that would make it possible to both eliminate the ligands contained in the metallic precursor and control oxygen content in the resulting coating," says MIPT researcher Andrey Markeev. The researchers achieved this by using a tantalum precursor, which by itself contains oxygen, and a reactant in the form of plasma-activated hydrogen.

Full Article

Summer Workshop Pushes Minority Students to Pursue Computer Science Degrees
Cornell Daily Sun (NY)
Anne Snabes
June 16, 2017

Cornell University last week held the Software Defined Network Interface Workshop, a week-long event backed by the U.S. National Science Foundation and Google that was designed to boost the number of underrepresented minority postdoctoral students in computer science. Such minorities make up fewer than 3 percent of the postdoctoral student population studying computer science, and workshop organizer and Cornell professor Hakim Weatherspoon attributes these low numbers to a "pipeline" problem. "There's very few underrepresented minority faculty in computer science," Weatherspoon says. "I'm the only one here at Cornell and across the nation, there's very, very few." Weatherspoon also notes many minority postdoctoral computer science graduates opt for employment in the computer science industry instead of becoming faculty. Participants in the workshop, which attracted undergraduate tech students from universities across the U.S., praise it for bringing together people with various backgrounds and skills, with an emphasis on collaboration.

Full Article
How to Build Software for a Computer 50 Times Faster Than Anything in the World
Argonne National Laboratory
Joan Koka
June 15, 2017

Exascale computers need software that supports and connects hardware and applications, which Argonne National Laboratory's Rajeev Thakur says must be "robust and flexible enough to handle a broad spectrum of applications, and be well integrated with hardware and application software so that applications can run and operate seamlessly." Argonne scientists aim to produce exascale-ready software by surmounting problems in memory, power, and computational resources. The Argo project is exploring techniques for regulating memory, power, and processing cores. Argonne's Pete Beckman says exascale systems entail added memory layers requiring complementary management software, while in terms of power, he says "the goal...is to achieve a level of control that maximizes the user's abilities while maintaining efficiency and minimizing cost." For processing core management, Beckman says he and colleagues are considering containerization "to give users the ability to operate and manage how they're using those cores more carefully and directly."

Full Article
Data-Mining 100 Million Instagram Photos Reveals Global Clothing Patterns
Technology Review
June 15, 2017

Researchers at Cornell University applied data-mining technology to determine the global variance of clothing styles from 100 million Instagram photos taken in 44 cities. The team used a standard face-recognition program to filter the dataset down to 15 million images of people showing the upper half of their body, each tagged with its location and date. The team trained a machine-learning algorithm to recognize various types of clothing and accessories, and then had it sift through the photo dataset while another algorithm searched for clusters of images with similar visual themes and tracked how these varied across time and between locations. The clustering algorithm identified about 400 distinct visual themes, whose variation by time and place could be analyzed. "The combination of big data, machine learning, computer vision, and automated analysis algorithms would make for a very powerful analysis tool more broadly in visual discovery of fashion and many other areas," the researchers note.
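The clustering stage can be sketched with a minimal k-means loop: given one feature vector per image (here random stand-ins for embeddings from a clothing classifier), group the images into visual themes. This is a generic illustration of clustering on image features, not the Cornell team's actual pipeline or parameters:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: cluster feature vectors X into k visual themes."""
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen images as the initial theme centers
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each image to its nearest theme center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned images
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 16))   # stand-in for per-image clothing embeddings
labels, centers = kmeans(X, k=8)
```

With real data, each theme's frequency would then be tallied per city and per month to reveal the geographic and seasonal patterns the study reports.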

Full Article
Wireless Charging of Moving Electric Vehicles Overcomes Major Hurdle in New Stanford Research
Stanford News
Mark Golden; Mark Shwartz
June 14, 2017

Researchers at Stanford University have demonstrated a technology based on magnetic resonance coupling to surmount a major obstacle in the development of wireless electric power transmission to moving objects. The team was able to wirelessly power a moving 1-milliwatt light-emitting diode (LED) bulb by removing the radio-frequency source in the transmitter and replacing it with an off-the-shelf voltage amplifier and feedback resistor. The system automatically finds the right frequency for different distances without human intervention. "Adding the amplifier and resistor allows power to be very efficiently transferred across most of the three-foot range and despite the changing orientation of the receiving coil," says Stanford's Sid Assawaworrarit. "This eliminates the need for automatic and continuous tuning of any aspect of the circuits." Stanford professor Shanhui Fan says the research could lead to ways of recharging electric vehicles while they are moving, as well as electronic devices such as smartphones and robots.

Full Article
New Face-Aging Technique Could Boost Search for Missing People
University of Bradford
June 13, 2017

Researchers at the University of Bradford in the U.K. have developed a method of aging facial images that could enhance the search for people who have been missing for many years. The team says the method identifies key features, such as the shape of the cheek, mouth, and forehead, of a face at a certain age. The information is then fed into an algorithm that synthesizes new features for the face to produce photographic quality images of the face at different ages. In addition, the researchers say the technique teaches the system how humans age by feeding the algorithm facial feature data from a large database of individuals at various ages. The technique uses a method of predictive modeling and applies it to age progression. The researchers tested the method by taking an individual's picture and running the algorithm backwards to de-age that person to a more youthful appearance.

Full Article

Robot Uses Deep Learning and Big Data to Write and Play Its Own Music
Georgia Tech News Center
Jason Maderer
June 13, 2017

Researchers at the Georgia Institute of Technology (Georgia Tech) have developed Shimon, a marimba-playing robot with four arms and eight sticks that can write and play its own compositions in a lab. The researchers fed the robot nearly 5,000 complete songs and more than 2 million motifs, riffs, and pieces of music. The researchers gave the robot the first four measures to use as a starting point, which was the only human involvement in either the composition or the performance of the music. "Once Shimon learns the four measures we provide, it creates its own sequence of concepts and composes its own piece," says Georgia Tech Ph.D. student Mason Bretan. He notes this is the first time a robot has used deep learning to create music. "Shimon is now coming up with higher-level musical semantics," Bretan says. "Rather than thinking note by note, it has a larger idea of what it wants to play as a whole."

Full Article
ACM Publications

Association for Computing Machinery

2 Penn Plaza, Suite 701
New York, NY 10121-0701

ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

To submit feedback about ACM TechNews, contact: [email protected]