Association for Computing Machinery
Welcome to the January 11, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Chinese Humanoid Robot Turns on the Charm in Shanghai
Agence France-Presse (01/09/17)

Researchers at the University of Science and Technology of China have developed Jia Jia, China's first human-like robot. Lead researcher Chen Xiaoping predicts that within a decade, artificially intelligent (AI) robots will begin performing a range of menial tasks in China. Jia Jia can accurately answer questions about the day's weather, hold basic conversations, and recognize the gender of those questioning it. Significant advances are being made in AI, and such products were a focal point at the recent Consumer Electronics Show in Las Vegas, which featured a range of products that can respond to voice commands to play music at home and follow other remote-controlled orders. Moreover, many of these products can improvise by accessing and "learning" from the Internet cloud. Such technology is not only a novelty but a necessity in China, where many young Chinese are eschewing menial jobs while the aging population requires more assistance in hospitals and nursing homes. Chen dismisses fears of future robots becoming too smart and taking over the world, noting, "As long as this is done in a step-by-step and controlled manner, I don't think there will be a big impact on society."


Taking Graphics Cards Beyond Gaming
KAUST Discovery (01/10/2017)

A new mathematical solver enables the graphics cards found in gaming computers to solve computationally intensive mathematical problems. Researchers at the King Abdullah University of Science and Technology's (KAUST) Extreme Computing Research Center in Saudi Arabia modified a graphics-processing unit (GPU) to include a more efficient solver, whose acceleration could considerably reduce the execution time and energy required to solve a problem. GPUs are more energy efficient than standard high-performance processors because they eliminate much of the hardware standard processors use to execute general-purpose code. However, GPUs' supporting software is immature, so the researchers aimed to build a more efficient system that optimizes the trade-off between the number of processors and the memory available to temporarily store the data. Their solver scheme operates directly on the data without making an extra copy, meaning a system twice as large can be stored in the same amount of memory. The KAUST researchers also redesigned the way systems of simultaneous equations are solved, implementing a triangular matrix-matrix multiplication method that achieves up to an eightfold acceleration over existing systems. The advanced solver will be integrated into the next software library for NVIDIA GPUs.
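
The in-place idea described above can be sketched with a toy triangular solver in Python: the right-hand side is overwritten with the solution instead of a second array being allocated, which is the memory-saving principle, not the KAUST team's actual GPU code (the function name and sizes here are invented for illustration).

```python
import numpy as np

def solve_lower_inplace(L, b):
    """Forward substitution that overwrites b with the solution of L x = b.

    Working directly on b instead of allocating a copy means a system
    twice as large fits in the same amount of memory.
    """
    n = L.shape[0]
    for i in range(n):
        # b[:i] already holds the solved entries x[0..i-1].
        b[i] = (b[i] - L[i, :i] @ b[:i]) / L[i, i]
    return b

# Usage: a small lower-triangular system; b holds the solution afterward.
L = np.array([[2.0, 0.0],
              [1.0, 3.0]])
b = np.array([4.0, 9.0])
x = solve_lower_inplace(L, b)
```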


A Sensor-Fitted Suit to Analyze Stroke Patients' Movements
CORDIS News (01/09/17)

A multi-sensor-equipped suit could help physicians monitor stroke patients' daily activities and motor function after the patients are released from the hospital. The INTERACTION project is a joint European study that has tested an unobtrusive and modular system of 41 sensors that can be worn under a stroke patient's clothes, collecting data on muscle strength, stretch, and force. "There has long been a great need for systems like this, but the technology simply was not ready," says Bart Klaassen, a doctoral student at the University of Twente in the Netherlands. "That is now changing rapidly, thanks to rapid developments in the fields of battery technology, wearables, smart e-textiles, and big data analysis." The INTERACTION suit was tested on patients over a period of three months, during which data was transmitted, stored, and processed. The system's portable transmitter relayed all information gathered to data-processing servers at the University of Twente. Health insurance companies and healthcare professionals were involved in the project from the beginning, indicating the commercial potential of the technology.


MIT Media Lab to Participate in $27 Million Initiative on AI Ethics and Governance
MIT News (01/10/17)

A new $27-million initiative aims to apply humanities, social sciences, and other disciplines to the development of socially responsible artificial intelligence (AI). The Massachusetts Institute of Technology (MIT) Media Lab and Harvard University's Berkman Klein Center for Internet and Society will serve as founding institutions for the Ethics and Governance of Artificial Intelligence Fund, which will support global AI research and education conducted with a multidisciplinary approach. The fund will collaborate with existing efforts, oversee an AI fellowship program, and convene an advisory group of experts in the field. Research will include questions pertaining to society's ethical use of AI, using machine learning to extrapolate ethical and legal implications from data, and using data-based techniques to understand the potential impact of AI on the labor market. The Media Lab currently is exploring the moral ambiguities associated with autonomous vehicles and the ethics of human-robot interaction. MIT Media Lab director Joi Ito says a key question will be how to ensure AIs are not trained to perpetuate human biases. "How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only 'smart,' but also socially responsible?" Ito asks.


New Camera Can See Around Corners
Technology Review (01/06/17)

Researchers at Xi'an Jiaotong University in China have built a single-pixel camera that can capture images of objects even when they are not in direct view. A single-pixel camera produces images by illuminating a scene with a sequence of random light patterns and using a lone light-sensitive pixel to record the total intensity under each pattern; correlating the recorded intensities with the known patterns recreates the image, and recording the intensity thousands of times makes a high-resolution image possible. To test their camera, the researchers illuminated an object with light from a projector that produces a random pattern of illuminated squares. The object was placed next to a white wall that scattered light toward the single pixel, which did not have a clear view of the object; the pixel then recorded the light intensity scattered from the wall. The process was repeated about 50,000 times, and a data-mining algorithm sorted through the data to create the image. The resulting image clearly matched the target object, and the team says further improvements to image resolution can be made by optimizing the algorithm and reducing the size of the illuminated squares in the random pattern of projected light.
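
The measure-and-correlate loop can be simulated in a few lines of Python. This is a toy simulation of the general single-pixel scheme, not the Xi'an Jiaotong team's code; the tiny 8x8 object is invented, while the roughly 50,000 repetitions match the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 8x8 "object": bright (1.0) in a cross shape, dark elsewhere.
obj = np.zeros((8, 8))
obj[3:5, :] = 1.0
obj[:, 3:5] = 1.0

# Random binary illumination patterns, one per measurement.
n_patterns = 50_000
patterns = rng.integers(0, 2, size=(n_patterns, 8, 8)).astype(float)

# The single pixel records only one number per pattern: the total light
# scattered back from the object under that illumination.
intensities = (patterns * obj).sum(axis=(1, 2))

# Correlating the intensity sequence with the known patterns recovers
# the image (pixel by pixel, this estimates cov(intensity, pattern)).
recon = ((intensities - intensities.mean())[:, None, None] * patterns).mean(axis=0)
```

Bright object pixels correlate positively with the measured intensity, so they stand out in `recon`; more repetitions (or an optimized algorithm, as the team notes) sharpen the result.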


What Did Big Data Find When It Analyzed 150 Years of British History?
University of Bristol News (01/09/17)

Artificial intelligence (AI) researchers at the University of Bristol in the U.K. analyzed 150 years of British history via regional newspapers, uncovering significant patterns from more than 35 million articles. "The key aim of the study was to demonstrate an approach to understanding continuity and change in history, based on the distant reading of a vast body of news, which complements what is traditionally done by historians," says Bristol professor Nello Cristianini. The study, part of the university's ThinkBIG project, compiled material from the British Library's newspaper archives, representing 14 percent of all British regional outlets from 1800 to 1950. Content analysis identified specific key events, such as wars, epidemics, coronations, and large gatherings, with high accuracy. More sophisticated AI methods were used to spot references to named entities, such as individuals, companies, and locations. "We have demonstrated that computational approaches can establish meaningful relationships between a given signal in large-scale textual corpora and verifiable historical moments," says Bristol's Tom Lansdall-Welfare. "However, what cannot be automated is the understanding of the implications of these findings for people, and that will always be the realm of the humanities and social sciences, and never that of machines."
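
At its simplest, distant reading of this kind boils down to counting: tracking how often a term appears per year and flagging the spikes. A minimal Python sketch of the idea, using a handful of invented toy articles rather than the Bristol pipeline or the British Library archive:

```python
from collections import Counter

# Toy stand-in for a newspaper archive: (year, article text) pairs.
articles = [
    (1847, "the cholera epidemic spreads through the city"),
    (1848, "trade and markets recover"),
    (1854, "another cholera epidemic claims many lives"),
    (1854, "epidemic fears close the schools"),
    (1902, "the coronation of the king draws great crowds"),
]

def mentions_per_year(articles, keyword):
    """Count how many articles in each year mention the keyword."""
    counts = Counter()
    for year, text in articles:
        if keyword in text.lower():
            counts[year] += 1
    return dict(counts)

# Years where the count spikes are candidate historical events.
print(mentions_per_year(articles, "epidemic"))  # -> {1847: 1, 1854: 2}
```

Scaled up to 35 million articles, time series like this one are the "signal" Lansdall-Welfare describes matching against verifiable historical moments.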


OSTP Exit Memo
CCC Blog (01/10/17) Helen Wright

The White House Office of Science and Technology Policy (OSTP) last week published an Exit Memo highlighting the Obama administration's impact in "reinvigorating the American scientific and technological enterprise." OSTP director John Holdren and U.S. Chief Technology Officer Megan Smith offer near-term guidance for expanding participation in science, technology, and innovation to continue fueling prosperity. They say investment in fundamental research is essential to drive new technology and innovation. Among the science and technology frontiers Holdren and Smith cite as enabling U.S. innovation and addressing major societal needs are neuroscience and neurotechnology. Related to this is the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative; the Computing Community Consortium (CCC) hosted a BRAIN workshop in 2014 to bring together researchers to explore the interfaces between brain and computer science. Other areas of focus include smart communities and the Internet of Things, via the White House Smart Cities Initiative. The memo also highlights strategic computing and the National Strategic Computing Initiative, which was launched to ensure sustained U.S. leadership in high-performance computing. The National Robotics Initiative is cited for developing robotics and intelligent systems for boosting human capabilities and intelligence, and the White House issued a National Artificial Intelligence Research and Development Strategic Plan to push AI, machine learning, and big data forward.


How on Earth Does Geotagging Work?
University of Alberta (01/09/17) Katie Willis

Researchers at the University of Alberta (UAlberta) in Canada are using automated geotagging models to add a location to online data and documents. Geotagging helps Internet users better understand people, places, and things referenced in online documents. "With the proliferation of online content and the need for sharing it across the globe, it is important to correctly match names to the places they refer to," says UAlberta professor Davood Rafiei. The researchers used a two-part model to develop a technique to automate geotagging for news articles and other online documents and data. Rafiei says the model integrates two competing hypotheses: inheritance and near-location. The inheritance hypothesis assigns named entities the same geographical location as the document in which they are mentioned. Meanwhile, the near-location hypothesis links the named entities to geographical locations mentioned in nearby text. "Our data shows that the inheritance hypothesis holds in 72 percent of the cases, the near-location hypothesis holds in 67 percent of the cases, and at least one holds in close to 99 percent of the cases," Rafiei says. He also notes the new model is highly accurate and automated, significantly cutting the cost of geotagging.
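
One crude way to picture how the two hypotheses back each other up is a simple precedence rule: prefer a place name mentioned near the entity, and otherwise fall back to the document's own location. This is a hypothetical sketch, not Rafiei's actual model, which integrates the two hypotheses rather than merely ordering them; all names below are invented.

```python
def geotag(entity, doc_location, nearby_locations):
    """Assign a location to a named entity.

    near-location hypothesis: use a place name mentioned close to the
    entity in the text, if there is one.
    inheritance hypothesis: otherwise inherit the location of the
    document the entity appears in.
    """
    if nearby_locations:
        return nearby_locations[0]  # closest place name in the text
    return doc_location

# Usage: an entity with no nearby place name inherits the document location.
print(geotag("Town Hall", "Edmonton", []))           # -> Edmonton
print(geotag("Town Hall", "Edmonton", ["Calgary"]))  # -> Calgary
```

Since, per Rafiei, at least one hypothesis holds in close to 99 percent of cases, even a simple combination like this covers far more entities than either rule alone.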


Bias in Criminal Risk Scores Is Mathematically Inevitable
The Louisiana Weekly (01/09/17) Julia Angwin; Jeff Larson

Algorithms that courts and parole boards employ to predict future criminal behavior are mathematically bound to produce racial disparities, according to new research. A ProPublica team compiled COMPAS algorithm scores for more than 10,000 people arrested in Broward County, FL, and checked to see how many were charged with further crimes within two years. The investigators found black defendants were twice as likely to be wrongly labeled as higher risk than white defendants. Meanwhile, white defendants labeled low risk were far more likely to end up accused of new offenses than black defendants with comparably low COMPAS risk scores. Four research groups independently researched the possibility of crafting a formula that is equally predictive for all races without disparities in who suffers the harm of incorrect predictions. However, they found it impossible for a risk score to meet both fairness criteria simultaneously. The researchers determined an algorithm written to achieve predictive parity unavoidably leads to disparities in what sorts of people are incorrectly classified as high risk when two groups have differing arrest rates. "If you have two populations that have unequal base rates, then you can't satisfy both definitions of fairness at the same time," notes Cornell University professor Jon Kleinberg.
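
The impossibility follows from a simple identity shown in the fairness literature: if two groups get the same predictive parity (precision) and the same true-positive rate, each group's false-positive rate scales with its base rate, so differing base rates force differing error rates. A small Python check under assumed, illustrative numbers (the rates below are invented, not COMPAS figures):

```python
def false_positive_rate(base_rate, ppv, tpr):
    """FPR implied by a group's base rate, precision (ppv), and recall (tpr).

    Derived from ppv = TP / (TP + FP) with TP = base * tpr * N and
    FP = (1 - base) * fpr * N:
        fpr = base / (1 - base) * tpr * (1 - ppv) / ppv
    """
    return base_rate / (1.0 - base_rate) * tpr * (1.0 - ppv) / ppv

# Same score behavior for both groups (ppv = 0.6, tpr = 0.7),
# but different arrest base rates:
fpr_high = false_positive_rate(0.5, 0.6, 0.7)  # higher-base-rate group
fpr_low = false_positive_rate(0.3, 0.6, 0.7)   # lower-base-rate group
```

With equal precision and recall, the higher-base-rate group necessarily ends up with a higher false-positive rate, which is exactly the kind of disparity ProPublica observed and Kleinberg's quote describes.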


The Humans Working Behind the AI Curtain
Harvard Business Review (01/17) Mary L. Gray; Siddharth Suri

There is a human factor at work in tasks promoted as artificial intelligence (AI)-driven, in the form of people paid to respond to queries and requests sent to them via application programming interfaces of crowdwork systems, write Microsoft Research scientists Mary L. Gray and Siddharth Suri. "The creation of human tasks in the wake of technological advancement has been a part of automation's history since the invention of the machine lathe," they note. "We call this ever-moving frontier of AI's development the paradox of automation's last mile: as AI makes progress, it also results in the rapid creation and destruction of temporary labor markets for new types of humans-in-the-loop tasks." Gray and Suri predict the enhancement of human services by AI will augment daily productivity, but present new social challenges. "The AI of today can't function without humans in the loop, whether it's delivering the news or a complicated pizza order," the researchers note. Technology and media companies therefore employ people to perform content moderation and curation, while many jobs are outsourced overseas and paid a low, flat rate. "This workforce deserves training, support, and compensation for being at-the-ready and willing to do an important job that many might find tedious or too demanding," according to Gray and Suri.


Innovators Wanted: Machine Learning, IoT Jobs on the Rise
InfoWorld (01/09/17) Serdar Yegulalp

The job market for machine learning and artificial intelligence-related (ML/AI) positions is heating up significantly. From the beginning of 2014 to the start of 2016, job postings for ML/AI positions rose steadily from about 60 job postings per million to more than 100, according to trend data provided by job search site Indeed. In 2016, the number of such postings jumped as much as they had over the previous two years--up to 150 postings per million. The number of postings for such jobs currently outstrips the number of searches for such jobs--100 per million searches versus 150 per million postings. The leading companies hiring for those positions are Amazon, Apple, Google, Microsoft, Facebook, and NVIDIA. Even back in 2014, AI was solidly in the lead compared to other job postings involving emerging technologies such as three-dimensional printing, blockchain technology, the Internet of Things (IoT), virtual/augmented reality, and wearable technology. The other technologies on the list, with the exception of IoT, have remained consistently at about 10 postings per million during that time.


Jill Watson, Round Three
Georgia Tech News Center (01/09/17) Jason Maderer

The Georgia Institute of Technology (Georgia Tech) is beginning its third semester using a virtual teaching assistant (TA) system, called Jill Watson, in an online course about artificial intelligence (AI). Jill, which is implemented on IBM's Watson platform, was first used last spring to successfully answer particular types of frequently asked questions without the help of humans. Georgia Tech professor Ashok Goel told the students at the beginning of the semester that some of their TAs might be computers. "Then I watched the chat rooms for months as they tried to differentiate between human and artificial intelligence," Goel says. At the end of the semester, the students were polled about which TAs were human and which were AI. Slightly more than 50 percent of the students correctly guessed that one of the AIs, known as Stacey, was a computer. However, just 16 percent of the students figured out that another AI, called Ian, was not human. In addition, more than 10 percent of the students mistakenly thought two of the human TAs were actually computers. "When we started, I had no idea that this would blossom into a project with so many dimensions," Goel says.


Searching Deep and Dark: Building a Google for the Less Visible Parts of the Web
The Conversation (01/08/17) Christian Mattmann

Apache Tika could help teach computers to recognize, index, and search all the different types of material available online, writes Christian Mattmann, director of the University of Southern California's Information Retrieval and Data Science Group and principal data scientist at the U.S. National Aeronautics and Space Administration. He says the tool enables users to understand any file and the information contained within it. Mattmann notes improvements to Tika during the U.S. Defense Advanced Research Projects Agency's (DARPA) Memex project, launched in 2014, made it even better at handling multimedia and other content found on the deep and dark Web. Memex sought to create a search index that would help law enforcement identify human trafficking operations online--in particular by mining the deep and dark Web. Tika, which Mattmann co-developed, can now process and identify images with common human trafficking themes, and additional software can help it find automatic weapons and identify a weapon's serial number. However, Mattmann says more work is needed to achieve Memex's goals. The tool is part of an open source software library available on DARPA's Open Catalog.


Abstract News © Copyright 2017 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]