Welcome to the December 21, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
HEADLINES AT A GLANCE
The World's First Demonstration of Spintronics-Based Artificial Intelligence
Tohoku University (12/19/16) Shunsuke Fukami
Researchers at Tohoku University in Japan have demonstrated the basic operation of spintronics-based artificial intelligence (AI). Conventional AI systems lack the compactness and low power consumption of the human brain. Researchers have attempted to overcome this challenge by implementing a single solid-state device that acts as a synapse. The Tohoku researchers developed an artificial neural network in which their recently developed spintronic devices, composed of micro-scale magnetic material, are utilized. The researchers note a spintronic device, unlike conventional magnetic devices, can memorize arbitrary values between 0 and 1 in an analog manner, and thus perform a learning function. They used the developed network to examine an associative memory operation, which is not readily executed by existing computers. The researchers confirmed the spintronic devices have a learning ability with which the artificial neural network can successfully recall memorized patterns from noisy input versions, much as the human brain does. The proof-of-concept demonstration should open up new AI opportunities, including compact systems with fast processing and ultralow power consumption. The researchers say with these features, the AI could be applied to a wide range of societal uses, including image/voice recognition, wearable terminals, sensor networks, and nursing-care robots.
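The associative-memory operation described above can be illustrated with a classic Hopfield-style network, in which analog synaptic weights recall a stored pattern from a noisy version of it. This is a generic software sketch of the concept, not the Tohoku group's spintronic hardware.

```python
# Hopfield-style associative memory: analog weights store a pattern,
# and iterative updates recover it from a corrupted input.

def train(patterns):
    """Hebbian learning: each weight is an analog value built from
    correlations between pattern bits (+1/-1)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=10):
    """Repeatedly pull each unit toward the nearest stored pattern."""
    for _ in range(steps):
        for i in range(len(state)):
            s = sum(w[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if s >= 0 else -1
    return state

stored = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = stored[:]
noisy[0] = -noisy[0]               # flip one bit to simulate noise
print(recall(w, noisy) == stored)  # → True: the pattern is restored
```

The learning step is where an analog device helps: each weight takes a graded value rather than just 0 or 1, which a conventional binary magnetic device cannot hold.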
The Computer That Can Tell If You're Pure or Flirty Just by Your Looks!
Daily Mail (United Kingdom) (12/20/16) Tracy You
Researchers at Shanghai Jiaotong University in China have developed an artificial intelligence (AI) program that they say can predict a woman's personality based only on her appearance. The AI analyzes a woman's facial features, and then places her into one of two personality categories--either "positive," which is described as "pure," "endearing," and "elegant;" or "negative," which is described as "pretentious," "pompous," or "coquettish." The two categories the researchers used to define women's personalities reflected "the aesthetic preference and value judgments that prevail among young males in contemporary China," according to Shanghai Jiaotong professor Wu Xiaolin. As part of the study, the researchers collected images of 3,954 women from the Internet. Each picture was given a personality trait, such as "sweet," "endearing," "pretentious," or "coquettish." After dividing the pictures into groups, the researchers fed them into a convolutional neural network. Eighty percent of the pictures were used as the data to train the neural network, 10 percent were used to verify the program, and the remaining 10 percent were used to test the new AI system.
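The 80/10/10 split described above is a standard way to partition labeled data before training a neural network. The sketch below is generic and uses invented placeholder labels, not the study's images.

```python
# Shuffle labeled examples, then carve off training, validation,
# and test sets in an 80/10/10 split, as described in the article.
import random

examples = [(f"img_{i}.jpg", random.choice(["positive", "negative"]))
            for i in range(3954)]
random.shuffle(examples)

n = len(examples)
train = examples[: int(0.8 * n)]               # 80% to train the network
val   = examples[int(0.8 * n): int(0.9 * n)]   # 10% to verify/tune
test  = examples[int(0.9 * n):]                # 10% held out for final testing

print(len(train), len(val), len(test))  # → 3163 395 396
```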
Researchers Propose Using Software-Defined Networking to Unify Cloud and Edge
The Stack (UK) (12/19/16) Nicky Cappella
Researchers from Germany, Canada, and the U.S. proposed a method to use cloud and fog, or edge, computing structures to complement one another. The method involves using software-defined networking (SDN) to manage the interaction between cloud and edge resources, enabling a network to remain dynamic, agile, and efficient while providing a better experience for the end user. However, combining resources in the cloud and at the edge requires a local coordinator to divert tasks to the appropriate resources in real time, and in a dynamic and unpredictable environment. Therefore, the researchers created an SDN-enabled architecture to meet those requirements and create a usable network that uses cloud and edge computing capabilities at the same time. The SDN provides real-time knowledge of available resources that is both flexible and reliable, with a centralized controller that allows for optimal decision-making for each unit within the system and a dedicated control channel that permits a range of translations, giving the system acute control. The researchers conducted two case studies, both of which concluded that using SDN-enabled architecture enabled a network to use resources of cloud computing and edge computing interchangeably.
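The coordination problem can be pictured as a controller with a global view of resources deciding, per task, whether the edge or the cloud should run it. The rule and thresholds below are illustrative assumptions, not the paper's actual architecture.

```python
# Toy SDN-style dispatcher: latency-sensitive tasks go to the edge
# when it has capacity; everything else goes to the cloud.

def dispatch(task, edge_free_cpu, cloud_latency_ms):
    if task["deadline_ms"] < cloud_latency_ms and edge_free_cpu >= task["cpu"]:
        return "edge"
    return "cloud"

# A tight deadline forces the task to nearby edge resources:
print(dispatch({"deadline_ms": 20, "cpu": 1},
               edge_free_cpu=2, cloud_latency_ms=80))   # → edge
# A relaxed deadline lets the cheaper, larger cloud handle it:
print(dispatch({"deadline_ms": 500, "cpu": 1},
               edge_free_cpu=2, cloud_latency_ms=80))   # → cloud
```

The point of the SDN layer in the paper is that this decision logic sits in a centralized controller with real-time knowledge of both resource pools, rather than being hard-coded at each node.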
Obama White House's Final Tech Recommendation: Invest in AI
Computerworld (12/21/16) Patrick Thibodeau
In what is likely to be the Obama administration's final report on technology policy, the White House recommends investing in artificial intelligence (AI) research and development to shore up the U.S. economy and help the country "stay on the cutting edge of innovation." "Anything we can do to have more AI will contribute to more productivity growth and will help make possible more wage and income growth," says Jason Furman, chairman of the White House Council of Economic Advisers. The report also notes potential job displacement by AI could be countered via investments in education, and by establishing a safety net for those affected. One theme of the report is "technology is not destiny," meaning a solid policy can ameliorate the impact of change driven by AI. Although the report notes it is difficult to predict the type and rate of AI-driven change, it says, "AI-driven automation has unique features that may allow it to replace substantial amounts of routine cognitive tasks in which humans previously maintained a stark comparative advantage." However, Ed Felten, deputy chief technology officer at the White House Office of Science and Technology Policy, doubts AI will achieve general-purpose, human-like intelligence in the next two decades.
Slowing the Spread of Viral Misinformation: Can Crowdsourcing Help?
The Huffington Post (12/20/16) Kate Starbird; Emma Spiro
Social media platforms play a pivotal role in the modern information-sharing environment's facility with virally spreading misinformation, which leads to speculation as to what actions such platforms can and should take to address the problem, write University of Washington (UW) professors Kate Starbird and Emma Spiro. They led a comprehensive study of online rumoring during crisis events to better understand how rumors spread and how to devise techniques for automatically detecting rumors in Twitter. Starbird and Spiro believe their work with crowdsourcing dovetails with the challenge of identifying and curtailing online misinformation. They note their research into "self-correcting" crowdsourcing of rumor posts demonstrated limitations, but suggest expressed uncertainty in message content could help in automatic early detection. Starbird and Spiro also note the potential of explicit recommendation systems and formal crowdsourcing initiatives, but they say "the most successful solutions are likely to be hybrid ones that integrate automated, [machine-learning] algorithms based on a variety of features with real-time feedback from people to catch errors and refine the models." However, Starbird and Spiro acknowledge making these methods publicly transparent creates the likelihood of malefactors "gaming" the techniques to avoid identification. They suggest redesigning social media platforms to help people better rate information credibility on their own.
Scheduling Truck Platoons to Save Fuel
MIT News (12/20/16) Jennifer Chu
Massachusetts Institute of Technology (MIT) researchers have developed a mathematical model to study the effects of two different scheduling policies on fuel consumption and travel delays in order to determine the optimal deployment of trucks. Under a timetable policy, vehicles assemble and depart as a platoon at set times, either at regular or staggered intervals. For feedback scheduling, vehicles leave as a platoon when a set number or varying numbers of trucks are assembled. The MIT team found timetables set to deploy platoons at regular intervals were more efficient and cost-effective than those deployed at staggered times. Moreover, feedback scenarios that waited for the same number of trucks to assemble before deploying were better than scenarios that varied the number of trucks in a platoon. Overall, feedback policies saved about 5 percent more fuel than timetable scenarios. The mathematical model depends on trucks following each other at very close range, which may be difficult for drivers to maintain over long distances. The researchers say truck platoons eventually may require autonomous driving systems to activate during long stretches to keep the platoon close enough to save fuel.
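The two policy families the model compares can be simulated in miniature: trucks arrive at a staging area at random times; a timetable policy dispatches whatever has gathered every fixed interval, while a feedback policy waits until a set number of trucks have assembled. The arrival process and parameters below are invented for illustration, not taken from the MIT model.

```python
# Compare average waiting time under a timetable policy (depart every
# `interval` minutes) vs. a feedback policy (depart once `n` trucks wait).
import random

random.seed(0)
arrivals = sorted(random.uniform(0, 600) for _ in range(60))  # minutes

def timetable_waits(arrivals, interval):
    # Each truck waits until the next scheduled departure time.
    return [(t // interval + 1) * interval - t for t in arrivals]

def feedback_waits(arrivals, n):
    waits = []
    for i in range(0, len(arrivals) - len(arrivals) % n, n):
        group = arrivals[i:i + n]
        depart = group[-1]            # platoon leaves when the n-th truck arrives
        waits += [depart - t for t in group]
    return waits

tt = timetable_waits(arrivals, interval=30)
fb = feedback_waits(arrivals, n=3)
print(sum(tt) / len(tt), sum(fb) / len(fb))  # average wait per policy
```

The fuel side of the trade-off (drafting savings per platooned truck versus delay cost) would be layered on top of these waiting times; the article reports that in the full model, feedback policies came out about 5 percent ahead on fuel.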
NSF DCL: The Outcome of the Division of Advanced Cyberinfrastructure Positioning Review
CCC Blog (12/19/16) Helen Wright
In a letter pertaining to the outcome of the Division of Advanced Cyberinfrastructure (ACI) positioning review, the U.S. National Science Foundation's (NSF) Directorate for Computer and Information Science and Engineering (CISE) positively views the process by which the science, engineering, and research divisions arrived at their conclusions. "Working together, you have developed forward-looking approaches to strategy and implementation that have enabled cyberinfrastructure to advance the frontiers of discovery," the letter says. The review found advanced cyberinfrastructure to be of growing value to science and engineering, the NSF, and the U.S. The Directorate agrees with NSF Director France Cordova's assessment that ACI and the Division of Polar Programs' management and budgets are well supervised and align with their respective directorates. In terms of their leadership, Cordova stressed the "importance of visibility of these units among peer entities, direct interactions with leaders in the field, and access to the NSF Director and senior leadership." The ACI realignment review stipulated ACI will remain a part of CISE, and be rebranded as the Office of Advanced Cyberinfrastructure. The name change will reflect ACI's cross-foundational role to serve all of NSF and the science and engineering research community.
Rail Crossing Warnings Are Sought for Mapping Apps
The New York Times (12/19/16) Daisuke Wakabayashi
The U.S. National Transportation Safety Board (NTSB) wants technology and delivery companies to add the precise locations of more than 200,000 grade rail crossings on digital maps and provide alerts when drivers encounter them, following a lengthy investigation into a fatal collision at one such crossing caused by a mapping app error. Berg Insight estimates about 1 billion people worldwide use a mapping app/service each week. The accuracy of mapping data is growing in importance as driverless cars are deployed on roadways, and it is the responsibility of navigation apps to direct autos onto the safest routes and warn passengers of potential hazards. For the last 18 months, the Federal Railroad Administration has been pushing technology companies to add alerts for grade crossings, and it has contacted 11 firms to incorporate its location data of grade crossings. The NTSB says Apple and three other companies have agreed to do this, but their timeline is uncertain. In September, railroad agency administrator Sarah E. Feinberg criticized technology companies for procrastinating on adding data to mapping apps that "will save many lives." The need for NTSB recommendations reflects some drivers' heavy reliance on navigation apps, opting to follow directions even when they seem contradictory.
How Long Before AI Systems Are Hacked in Creative New Ways?
Technology Review (12/15/16) Will Knight
The rapid adoption of artificial intelligence (AI) systems is expected to lead to malicious efforts to disrupt them. OpenAI research scientist Ian Goodfellow, speaking at a recent AI conference in Spain, noted "almost anything bad you can think of doing to a machine-learning model can be done right now." During the last several years, scientists have shown ways machine-learning programs could be rigged by taking advantage of their pattern-recognition capabilities. Their vulnerability partly stems from their lack of real intelligence, and although Goodfellow and others are developing countermeasures, safeguarding against every possible attack is a difficult proposition. Pennsylvania State University professor Patrick McDaniel notes fooling machine-learning systems has been standard hacker procedure for years. For example, spammers feed learning programs bogus emails so spam messages will be admitted later, and McDaniel thinks more refined assaults could come soon, initially targeting online classification systems. Moreover, a recent study demonstrated that certain deceptions are reusable against various machine-learning systems, or even against a large "black box" system that hackers have not previously encountered. With new machine-learning tools being rapidly developed and often issued online for free, the danger also exists that unvetted bugs within the programs can be exploited.
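The kind of attack described above can be shown on a toy model: given a linear classifier, nudge each input feature slightly in the direction that most increases the wrong class's score, the idea behind "fast gradient sign" attacks. The model and numbers here are invented for illustration.

```python
# Adversarial perturbation of a toy linear classifier: a small,
# targeted change to the input flips the model's decision.

w = [2.0, -1.0, 0.5]          # weights of a toy linear classifier
x = [1.0, 1.0, 1.0]           # input the model classifies as positive

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

eps = 0.5
# Push each feature by eps against the sign of its weight:
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(w, x))      # → 1.5   (classified positive)
print(score(w, x_adv))  # → -0.25 (slightly perturbed input, now negative)
```

Real attacks do the same thing against deep networks, using gradients instead of raw weights; the cited study's finding is that one such perturbation often transfers across different models, including "black box" systems the attacker has never probed.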
Google Releases Test Set to Check Cryptographic Library Security
eWeek (12/19/16) Jaikumar Vijayan
Google has released Project Wycheproof, a set of tests developers can use to check open source cryptographic libraries for known security vulnerabilities. Project Wycheproof is designed to help developers catch subtle mistakes in those libraries that could have significant consequences if left unaddressed. The set comprises a collection of 80 unit tests for different types of attacks, which enable developers to test whether certain cryptographic libraries, authenticated encryption, and elliptic curve cryptography are vulnerable. Google researchers developed each unit test by surveying available literature and implementing most of the known attacks against the algorithms. The tests resulted in the discovery of 40 security flaws in cryptographic algorithms such as the Digital Signature Algorithm (DSA) and Elliptic Curve Diffie-Hellman (ECDH). One serious flaw discovered by Google could be used to recover the private keys associated with ECDH and DSA implementations. Unit tests currently are available for several popular open source algorithms, including DSA, ECDH, RSA, AES, and Diffie-Hellman. However, "passing the tests does not imply that the library is secure, it just means that it is not vulnerable to the attacks that Project Wycheproof tests for," note Google researchers Daniel Bleichenbacher and Thai Duong.
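The test-vector style such suites use can be sketched generically: each vector pairs an input with an expected verdict, including deliberately malformed cases. Below, stdlib HMAC stands in for the primitive under test; the actual Project Wycheproof tests target signature and key-agreement algorithms and use their own harness.

```python
# Wycheproof-style test vectors: valid and deliberately corrupted
# inputs, each with the verdict a correct implementation must return.
import hmac
import hashlib

KEY = b"test-key"
MSG = b"hello"
GOOD_TAG = hmac.new(KEY, MSG, hashlib.sha256).digest()
BAD_TAG = bytes([GOOD_TAG[0] ^ 1]) + GOOD_TAG[1:]   # one bit flipped

vectors = [
    {"msg": MSG, "tag": GOOD_TAG,      "valid": True},
    {"msg": MSG, "tag": BAD_TAG,       "valid": False},  # corrupted tag
    {"msg": MSG, "tag": GOOD_TAG[:16], "valid": False},  # truncated tag
]

def verify(key, msg, tag):
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

for v in vectors:
    assert verify(KEY, v["msg"], v["tag"]) == v["valid"], v
print("all vectors pass")
```

As the Google researchers caution, a library that passes every vector is only known to resist the attacks encoded in the vectors, nothing more.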
Report Urges States to Take Action on Computer Science Education
THE Journal (12/19/16) Dian Schaffhauser
The Southern Regional Education Board (SREB) published a report recommending actions states and schools can take to help more young people learn computer science (CS). The report says states should develop CS standards for K-12, and recommends bringing educators and other experts together to develop them. In addition, states should lay the foundation for learning CS by integrating lessons on literacy skills and math that will help students master grade-appropriate CS standards. SREB also recommends states create "clear pathways" to computing careers by building blocks of courses for training in high-demand jobs in cybersecurity, informatics, and related fields. In order to accomplish these goals, states need to recruit and train exceptional CS teachers by offering teaching endorsements to new educators who complete a multi-week summer class to learn their curriculum. Finally, SREB says states need to educate communities about opportunities in CS. "Like reading, writing, and math, knowledge of computer science can no longer be considered optional in our innovation-driven economy, where data and computer technology are central to our lives," says SREB president Dave Spence.
No More Burning Batteries? Stanford Scientists Turn to AI to Create Safer Lithium-Ion Batteries
Stanford News (12/15/16) Mark Shwartz
Researchers at Stanford University have described 21 new solid electrolytes with the potential to replace the unstable liquids utilized in the lithium-ion batteries of many electronic devices, using methods adapted from artificial intelligence (AI) and machine learning. "Liquid electrolytes are cheap and conduct ions really well, but they can catch fire if the battery overheats or is short-circuited by puncturing," notes lead study author Austin Sendek. He says solid electrolytes have a much lower probability of exploding or vaporizing, while their greater rigidity would add structural integrity to the battery. The researchers tapped AI and machine learning to construct predictive models from experimental data by training an algorithm to identify good and bad compounds. "We developed a computational model that learns from the limited data we already have, and then allows us to screen potential candidates from a massive database of materials about a million times faster than current screening methods," Sendek says. Criteria the model used for screening materials included stability, cost, abundance, and ability to conduct lithium ions and re-route electrons via the battery's circuit. Sendek says the algorithm yielded the 21 most promising compounds in a matter of minutes.
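The screening idea can be shown in miniature: learn from a handful of labeled examples, then score a large candidate pool far faster than simulating each material. The feature vectors below are made up, and a simple nearest-centroid rule stands in for the Stanford group's actual trained model.

```python
# Learn "good" vs. "bad" regions from a few labeled electrolytes,
# then cheaply classify a large pool of candidate materials.

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

good = [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]]   # toy [conductivity, cost]
bad  = [[0.2, 0.9], [0.3, 0.8], [0.25, 0.7]]
c_good, c_bad = centroid(good), centroid(bad)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def promising(candidate):
    # Closer to the "good" centroid than the "bad" one.
    return dist2(candidate, c_good) < dist2(candidate, c_bad)

pool = [[0.7, 0.3], [0.1, 0.95], [0.9, 0.05]]
print([promising(c) for c in pool])  # → [True, False, True]
```

The payoff is the same as in the article: once the model is trained on the limited experimental data available, evaluating a new candidate is a few arithmetic operations, so a database of thousands of materials can be ranked in minutes.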
The Great AI Awakening
The New York Times Magazine (12/14/16) Gideon Lewis-Kraus
Google has advanced the functionality of its Google Translate service with artificial intelligence (AI), reflecting its commitment to deliver products enabled by machine learning. The innovation illustrates AI's transformation, engineered by the Google Brain division, to overcome the limitations of symbolic AI; these include time-consuming training and programs' strict adherence to clear rules and definitions. Google Brain's breakthroughs in neural-network-based deep learning demonstrated a computer can observe raw, unlabeled data and from it extract complex human concepts. Google Translate's evolution is based on research into neural-network frameworks that accommodate not only static, but also dynamic structures, such as language. Fundamental linguistic prediction could potentially set a standard for other intelligent tasks that have the outward appearance of thinking. Google Brain scientists tapped the concept of "word embeddings" to give plausibility to the theory of neural language translation, and then proceeded to scale it up to production level over two years. Google CEO Sundar Pichai distinguishes current AI applications from the ultimate goal of artificial general intelligence, or general-purpose, context-aware AI that can follow implicit instead of explicit instructions. Pichai believes a general computational facility with human language can be achieved by adhering to this model, and from that can be laid the groundwork for more impressive AI capabilities.
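The "word embeddings" concept mentioned above can be shown in miniature: words become vectors, and geometric closeness stands in for similarity of meaning. The vectors below are hand-made toys, not learned embeddings from any real model.

```python
# Word embeddings in miniature: related words sit closer together
# in vector space, measured here by cosine similarity.
import math

emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# "king" is nearer to "queen" than to "apple" in this toy space:
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["apple"]))  # → True
```

In neural translation, such vectors let the network treat words and phrases as points whose relationships carry across languages, rather than as opaque symbols with hand-written rules.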
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]