Association for Computing Machinery
Welcome to the October 24, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


AI Predicts Outcomes of Human Rights Trials
University College London (10/24/16) Bex Caygill

Researchers led by the U.K.'s University College London (UCL) developed an artificial intelligence (AI) method that successfully predicted the judicial outcomes of the European Court of Human Rights with 79-percent accuracy. The AI automatically analyzes case text via a machine-learning algorithm, and UCL's Nikolaos Aletras thinks the technique would be very helpful for finding patterns in cases that lead to certain decisions. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights," he says. As they devised the method, the researchers found court judgments strongly correlate with non-legal facts instead of direct legal arguments, suggesting the judges are "realists" and not "formalists." The team identified English-language datasets for 584 cases pertaining to Articles 3, 6, and 8 of the Convention, and used the AI algorithm to uncover patterns in the text. They chose an equal number of violation and non-violation cases to prevent bias and mislearning. The language used, and the topics and circumstances cited in the case text, were the most reliable prediction factors. The highest accuracy was achieved by combining the information extracted from the abstract "topics" the cases cover with the "circumstances" described in the case text, across the data for all three articles.
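
As a rough illustration, here is a minimal sketch of this kind of text-classification approach, assuming n-gram features and a linear classifier; the summary does not name the exact model, and the case excerpts and labels below are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical "circumstances" excerpts; the real study used 584 full cases.
case_texts = [
    "The applicant alleged degrading treatment during pre-trial detention.",
    "The applicant complained about the excessive length of civil proceedings.",
    "The applicant was held in an overcrowded cell without medical care.",
    "The domestic courts examined the claim within a reasonable time.",
]
labels = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation (balanced classes)

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and word-pair features
    LinearSVC(),                          # linear classifier over those features
)

# Cross-validated accuracy, analogous in spirit to the reported 79 percent
scores = cross_val_score(pipeline, case_texts, labels, cv=2)
print("mean accuracy:", scores.mean())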


Chinese Researchers Develop Algorithms for Smart Energy Grid
Phys.org (10/20/16)

Scientists at Northeastern University in China have proposed a way to distribute energy much as the Internet distributes data. Huaguang Zhang, director of Northeastern's Electrical Automation Institute, and colleagues have developed algorithms to optimize power exchange between the main electrical grid and multiple microgrids. Decentralized generators within a system would first agree on one of their number to act as a leader representing their collective state. Using consensus-based algorithms, the leader would communicate with the main grid and collect each generator's power costs to set the price of electricity within the network. A second algorithm would then let each generator precisely calculate its own output by weighing its local needs against global supply and demand, using information gathered from the networked microgrids. The generators would use this information to request more energy, or to sell surplus energy to the main grid for delivery to a different generator. The researchers simulated their proposed management method and report the algorithms proved effective. Zhang says the method "is more cost-effective, reliable, and robust compared to the centralized approaches."
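
To illustrate the consensus step, here is a minimal sketch of average consensus among networked generators, in which each agent repeatedly averages its price estimate with its neighbors'; the Northeastern team's actual algorithms and cost models are not specified in the summary, and all figures below are hypothetical.

import numpy as np

# Hypothetical local electricity-price estimates (e.g., marginal costs)
prices = np.array([4.2, 5.0, 3.8, 4.6])

# Undirected communication links between the four generators (adjacency matrix)
adjacency = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)

# Metropolis-style weights keep the averaging iteration stable
degree = adjacency.sum(axis=1)
weights = np.zeros_like(adjacency)
for i in range(4):
    for j in range(4):
        if adjacency[i, j]:
            weights[i, j] = 1.0 / (1.0 + max(degree[i], degree[j]))
    weights[i, i] = 1.0 - weights[i].sum()

for _ in range(50):          # iterate until the estimates agree
    prices = weights @ prices

print("agreed network price:", prices)  # every entry converges to about 4.4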


Artificial Intelligence: Computer Says YES (but Is It Right?)
University of Cambridge (10/20/16) Louise Walsh

The trustworthiness of self-learning computers is an increasingly vital issue as such systems become more common in "high-stakes" applications, according to professor Zoubin Ghahramani at the U.K.'s University of Cambridge. Ghahramani notes machine-learning systems can achieve near-human-level performance at many cognitive tasks even when dealing with incomplete datasets or new situations, but there is still little knowledge of how those systems function internally. "If the processes by which decisions were being made were more transparent, then trust would be less of an issue," Ghahramani says. His team constructs the underlying algorithms of such machines, and he says decision-making computers must be able to indicate what stage of the decision-making process they have reached, as well as when they are uncertain. One strategy involves building an internal self-assessment or calibration stage so the machine can test its own certainty and report back. Working with Ghahramani is Cambridge researcher Adrian Weller, whose research seeks to add transparency to decision pathways partly via visualization and by analyzing artificial intelligence (AI) systems' performance in real-world conditions that extend outside of their training environments. These trust and transparency issues are the basis of one project at the newly launched Leverhulme Centre for the Future of Intelligence, which is investigating AI's ramifications for civilization.
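
As one concrete reading of the self-assessment idea, here is a minimal calibration check that compares a model's stated confidence with how often it is actually correct; the confidences and outcomes below are hypothetical.

import numpy as np

confidences = np.array([0.95, 0.9, 0.8, 0.6, 0.55, 0.99, 0.7, 0.85])
correct     = np.array([1,    1,   1,   0,   1,    1,    0,   1])

bins = np.linspace(0.5, 1.0, 6)  # confidence bins from 0.5 to 1.0
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (confidences >= lo) & (confidences < hi)
    if mask.any():
        print(f"confidence {lo:.1f}-{hi:.1f}: "
              f"stated {confidences[mask].mean():.2f}, "
              f"observed accuracy {correct[mask].mean():.2f}")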


Wits Researchers Find Techniques to Improve Carbon Superlattices for Quantum Electronic Devices
University of the Witwatersrand (South Africa) (10/19/16)

The quantum properties of carbon-based superlattices could lead to a fundamental shift in the design and development of electronics, according to researchers at the University of the Witwatersrand (Wits) Nanoscale Transport Physics Laboratory. Superlattices are composed of alternating layers thin enough to be governed by quantum mechanics. The researchers created a theoretical framework that can calculate the electronic transport properties in disordered carbon superlattices, which can be used to design quantum devices for specific applications. Superlattices currently are used as high-frequency oscillators and amplifiers and are beginning to be utilized in optoelectronics as detectors and emitters in the terahertz range. The lack of terahertz emitters and detectors has created a gap in that region of the electromagnetic spectrum, which superlattice electronics are able to fill. Unlike conventional semiconductors, the properties of superlattices enable devices to operate in a much wider range of frequencies than their conventional counterparts. Carbon devices also are extremely strong, are operable at high voltages, and can be developed in laboratories without sophisticated nanofabrication equipment. The Wits researchers say the model could find application in biology, space technology, and science infrastructure.


Designing the Future Internet
Rutgers Today (10/20/16) Todd B. Bates

Connecting many smart objects to the Internet will produce an enormous boost in online traffic, which Rutgers University professor Dipankar Raychaudhuri aims to make manageable through a network redesign. The U.S. National Science Foundation (NSF) in 2010 launched a Future Internet Architecture initiative, and Raychaudhuri and colleagues proposed a "MobilityFirst" project, which won funding from the agency. Raychaudhuri says the project concentrates on migrating from the current Internet protocol to name-based routing, in which names represent people, mobile phones, Internet devices, small sensors, or any other Internet-connected objects. The advantages of the MobilityFirst approach include more flexible services, improved security, support for mobility across many technologies, efficiency, and the capacity to handle large volumes of traffic and data. NSF expects about 50 billion smart objects to be connected by 2020, and 1 trillion sensors soon thereafter. Raychaudhuri says this system requires fast, low-delay networks to guarantee timely receipt of data, and three MobilityFirst trials are planned or underway. One trial involves a satellite service company that will employ the system to deliver content closer to users, the second will extend an Internet service provider's circuits to deliver mobile service, and the third will focus on targeted emergency messaging in disaster recovery.
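
As a rough sketch of the name-based routing idea, packets are addressed to a persistent name and a resolution service maps that name to its current network address at send time; the names, addresses, and lookup table below are hypothetical and do not reflect the actual MobilityFirst protocol.

# Hypothetical name-resolution table mapping persistent names to current addresses
name_resolution = {"sensor-17": "10.0.4.22", "alice-phone": "192.168.1.7"}

def send(name, payload):
    address = name_resolution.get(name)  # late binding: look up the address at send time
    if address is None:
        raise LookupError(f"no current address registered for {name}")
    print(f"delivering {payload!r} to {name} at {address}")

send("alice-phone", "hello")
name_resolution["alice-phone"] = "172.16.0.9"  # the device moves to a new network
send("alice-phone", "hello again")             # same name, new address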


Bath Researcher Shows Machines Can Be Prejudiced Too
TechSPARK (10/21/16) Nick Flaherty

Researchers from Princeton University and the University of Bath in the U.K. have demonstrated that artificial intelligence (AI) can exhibit the same prejudice and bias as humans. The researchers say the bias inherent in the language that is used to train the machines is carried over into the results, from morally neutral bias toward insects or flowers to more problematic issues concerning race and gender. "In other words, if AI learns enough about the properties of language to be able to understand and produce it, it also acquires cultural associations that can be offensive, objectionable, or harmful," the researchers note. AI experts recommend the technology should always be applied transparently, and without prejudice, while both the code of the algorithm and the process for applying it must be open to the public. In addition, they say transparency should enable courts, companies, citizen watchdogs, and others to understand, monitor, and suggest improvements to algorithms. Another suggestion for avoiding prejudice in AI is to promote diversity among AI developers, which could address insensitive or under-informed training of machine-learning algorithms. The researchers suggest engineers and domain experts who are knowledgeable about historical inequalities should be encouraged to collaborate on developing AI technologies.
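
As an illustration of how such associations can be measured, here is a minimal sketch that compares cosine similarities between learned word embeddings; the tiny vectors below are hypothetical stand-ins for embeddings trained on real text, which the researchers analyzed at far larger scale.

import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical three-dimensional "embeddings"
emb = {
    "flower":     np.array([0.9, 0.1, 0.2]),
    "insect":     np.array([0.1, 0.9, 0.2]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

# How much closer a word sits to "pleasant" than to "unpleasant"
def association(word):
    return cosine(emb[word], emb["pleasant"]) - cosine(emb[word], emb["unpleasant"])

print("flower:", association("flower"))  # positive: leans toward "pleasant"
print("insect:", association("insect"))  # negative: leans toward "unpleasant"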


Microsoft Speech Recognition Technology Now Understands a Conversation as Well as a Person
Network World (10/18/16) Michael Cooney

Microsoft researchers say they have developed a speech recognition system that can understand human conversation as well as a person does. The Microsoft Artificial Intelligence and Research group says the technology makes fewer errors than a professional human transcriptionist. The researchers say the technology marks the first time human parity has been reached for conversational speech. The milestone comes after decades of research in speech recognition and has broad implications for consumer and business products. Consumer entertainment devices such as the Xbox, accessibility tools such as instant speech-to-text transcription, and personal digital assistants such as Cortana all stand to be significantly improved by better speech recognition. Microsoft's Geoffrey Zweig says the researchers continue to develop ways to make sure speech recognition works in places where there is a lot of background noise, can assign names to individual speakers when multiple people are talking, and can accommodate a variety of voices, regardless of age, accent, or ability. Over the longer term, the researchers want the technology to understand the meaning of spoken words, not just transcribe them. "That would give the technology the ability to answer questions or take action based on what they are told," according to Microsoft.


Thoughts From White House Frontiers Conference and the National AI R&D Strategic Plan
CCC Blog (10/18/16) Greg Hager; Beth Mynatt

The recent White House Frontiers Conference in Pittsburgh, PA, coincided with the release of a report on the future of artificial intelligence (AI) technology and research. Drafted by the U.S. National Science and Technology Council's Networking and Information Technology Research and Development AI task force, the National Artificial Intelligence Research and Development Strategic Plan lays out a roadmap for federally funded AI research and development (R&D). The report prioritizes long-term investments in the next generation of AI research, which will be needed to develop effective methods for human-AI collaboration. The ethical, legal, and societal implications also must be understood to ensure AI systems can behave according to formal and informal human norms. Before AI can be used widely, developers must demonstrate the systems operate safely, securely, and reliably. To improve AI performance, researchers should develop high-quality shared datasets and environments for AI training and testing, while additional research is needed to develop standards and benchmarks with which to evaluate AI technologies. As AI advances require a strong community of AI researchers, a better understanding of research and development workforce needs will be crucial. The report recommends the development of an AI R&D implementation framework to support investments and sustain a healthy AI workforce.


New 3-D Wiring Technique Brings Scalable Quantum Computers Closer to Reality
Waterloo News (10/18/16) Pamela Smyth

A new extensible wiring technique is capable of controlling superconducting quantum bits, according to a paper by researchers from the Institute for Quantum Computing (IQC) at the University of Waterloo in Canada, INGUN Prüfmittelbau GmbH in Germany, INGUN USA, and Google. The researchers say the technique represents a significant step toward the realization of a scalable quantum computer. The quantum socket method uses three-dimensional wires based on spring-loaded pins to address individual quantum bits (qubits), says IQC researcher Jeremy Bejanin. He says the technique connects classical electronics with quantum circuits, and is extendable far beyond current limits, from one to possibly a few thousand qubits. One promising implementation of a scalable quantum computing architecture uses a superconducting qubit cooled to temperatures close to -273 degrees Celsius inside a cryostat, or dilution refrigerator. To control and measure superconducting qubits, the researchers used microwave pulses. "We have been able to use [the quantum socket] to control superconducting devices, which is one of the many critical steps necessary for the development of extensible quantum computing technologies," says Matteo Mariantoni, a faculty member at IQC.


What IARPA Knows About Your Canceled Dinner Reservation
NextGov.com (10/18/16) Mohana Ravindranath

The U.S. Intelligence Advanced Research Projects Activity (IARPA) is running several simultaneous research projects examining how crowdsourced data could be used to predict specific events, such as disease outbreaks or riots. Some of the research indicates unexpected connections between seemingly unrelated data and the probability of an event occurring. For example, researchers found that canceled dinner reservations, in the aggregate, are good predictors of disease outbreaks. A research team from Virginia Polytechnic Institute and State University working on the Open Source Indicators program developed a system that incorporated tens of thousands of data streams to create a forecast for whether a particular event was likely to occur, says IARPA director Jason Matheny. By the end of a forecasting round analyzing data from Latin America, China, and parts of Africa, the researchers generated a forecast a week in advance with about 70-percent to 85-percent accuracy. Meanwhile, IARPA's Cyber Attack Automated Unconventional Sensor Environment project is designed to predict when cyberattacks are being planned, pulling data from hacker forums, Web search queries, and changes in the prices of malware on the black market. The researchers note forecasting economic instability has been one of the toughest challenges.
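
As a rough sketch of how many weak signals can be fused into a single event forecast, here is a toy logistic-regression example; the features, figures, and labels below are hypothetical and are not the actual Open Source Indicators feature set.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds weekly signals such as [reservation cancellations, symptom
# searches, pharmacy sales]; the label says whether an outbreak was later confirmed.
X = np.array([[120,  40, 300],
              [400, 210, 900],
              [ 90,  30, 250],
              [380, 190, 870],
              [110,  50, 310],
              [420, 230, 950]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)
next_week = np.array([[390, 200, 880]])
print("outbreak probability:", model.predict_proba(next_week)[0, 1])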


Debates: Linguistic Trick Boosts Poll Numbers
University of Michigan News (10/18/16) Nicole Casal Moore

A University of Michigan (U-M) study of U.S. presidential debates between 1976 and 2012 found mimicking subtle aspects of an opponent's language better engages a third-party audience and leads to a bump in the polls. Linguistic style matching refers to the matching of function words, such as "also," "but," and "somewhat," and other supporting parts of speech. "These function words are inherently social, and they require social knowledge to understand and use," says U-M professor Daniel Romero. "We think that matching an opponent's linguistic style shows greater perspective taking and also makes one's arguments easier to understand for third-party viewers." Researchers examined the transcripts of 26 debates over 36 years of presidential election seasons. Each candidate was rated on the degree to which they matched their opponent in eight different style markers. Researchers found frequent style matchers enjoyed a median one-point bump in Gallup polls following the debate. No candidate excelled at linguistic matching consistently, and poll data did not always correlate with election outcomes. In 1976, Gerald Ford received a style matching score of 0.02 in the first debate, and his poll numbers spiked 6.5 percent. Conversely, Jimmy Carter's score was -0.53, and his poll numbers dropped by 2 percent. However, Carter won the presidency.
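
As an illustration, here is a minimal sketch of a style-matching score between two speakers, using the common formulation 1 - |a - b| / (a + b) per function-word category; the study's exact eight markers and scoring may differ, and the word-use rates below are hypothetical.

# Percentage of each candidate's words falling in each category (hypothetical)
candidate_a = {"conjunctions": 6.1, "quantifiers": 2.0, "prepositions": 13.5, "pronouns": 9.8}
candidate_b = {"conjunctions": 5.7, "quantifiers": 2.4, "prepositions": 12.9, "pronouns": 11.2}

def style_matching(a, b):
    scores = []
    for category in a:
        diff = abs(a[category] - b[category])
        total = a[category] + b[category] + 1e-9  # avoid division by zero
        scores.append(1.0 - diff / total)
    return sum(scores) / len(scores)  # 1.0 = identical style, 0.0 = no overlap

print("style-matching score:", round(style_matching(candidate_a, candidate_b), 3))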


Here's How Young People Decide When They're Drunk 'Enough,' According to Math
Ohio State University (10/17/16) Pam Frost Gorder

Mathematical models developed by computer engineers at Ohio State University (OSU) have enabled colleagues to explain the factors that drive alcohol consumption among young people. The preliminary findings indicate college students drink until they attain a certain level of drunkenness, and then adjust the pace of their drinking--sipping versus gulping, for example--at different times throughout the night to maintain that level. The study provides a proof of concept for new research that will make use of very large and complex datasets. Participants will wear transdermal blood alcohol monitors when they go out on the weekends and will use personal fitness monitors, which will track data such as their sleep and exercise habits. Researchers will be able to track as many as 5,000 different variables per person during a two-week period. The goal is to develop a smartphone app that will alert users when they have had enough to drink. "The way the students made decisions about drinking actually resembled the single most common feedback controller that's used in engineering," says OSU professor Kevin M. Passino. "It's called a proportional-derivative controller, and it measures how far a system has moved from a particular set point and adjusts accordingly."
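
As an illustration of the proportional-derivative idea the researchers cite, here is a minimal sketch in which the drinking pace is adjusted in proportion to the distance from a target intoxication level and to how fast that level is changing; all values, including the gains, are hypothetical.

def pd_adjustment(target_bac, current_bac, previous_bac, dt, kp=1.0, kd=0.5):
    error = target_bac - current_bac                 # distance from the set point
    derivative = (current_bac - previous_bac) / dt   # how fast BAC is rising
    return kp * error - kd * derivative              # positive -> drink faster

# Example: aiming for a BAC of 0.08 while currently at 0.05 and rising
print(pd_adjustment(target_bac=0.08, current_bac=0.05, previous_bac=0.03, dt=0.5))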


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
