Association for Computing Machinery
Welcome to the July 1, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Please note: In observance of the Independence Day holiday, TechNews will not be published on Friday, July 3. Publication will resume Monday, July 6.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


Will AI Drive the Human Race Off a Cliff?
Computerworld (06/30/15) Sharon Gaudin

A recent panel discussion in Washington, D.C., sponsored by the Information Technology and Innovation Foundation (ITIF) focused on the need to develop policies to prevent artificial intelligence (AI) and machine learning from evolving to the point of exceeding human intelligence and threatening humanity's existence. "The arguments are fairly persuasive that there's a threat to building machines that are more capable than us," noted University of California, Berkeley professor Stuart Russell. "If it's a threat to the human race, it's because we make it that way. Right now, there isn't enough work on making sure it's not a threat to the human race." Meanwhile, Carnegie Mellon University professor Manuela Veloso stressed AI's potential benefits. "We'll have machines that will help people in their daily lives," she predicted. "We need research on safety and coexistence. Machines shouldn't be outside the scope of humankind, but inside the scope of humankind." Veloso said ensuring AI remains beneficial rather than malignant depends on people improving themselves and using technology for good. ITIF president Robert D. Atkinson argued most people's projections about the arrival of autonomous, self-aware systems are overly optimistic, noting intentionality will be a missing element for a long time.


Helping Students Stick With MOOCs
MIT News (07/01/15) Larry Hardesty

Massive open online courses (MOOCs) have always faced very high rates of dropout, or "stopout" in MOOC parlance. A major challenge is identifying which students are likely to stop out, so they can get the extra support they need to stay in and complete the course. At last week's International Conference on Artificial Intelligence in Education, researchers from the Massachusetts Institute of Technology (MIT) showed that a stopout-prediction model can be trained on data from just one course offering and then predict with a good deal of accuracy which students in the next offering will stop out. Kalyan Veeramachaneni, a research scientist at MIT's Computer Science and Artificial Intelligence Laboratory, says the model uses a machine-learning technique called transfer learning. The researchers first developed a set of variables on which to collect data, such as the average time a student spent on each correct homework problem. They then normalized the raw data against class averages and fed them through the algorithm to find correlations between the variables and stopout. The researchers say the method has proved fairly accurate, and it has been improved by the addition of other techniques, such as importance sampling.
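
As a rough illustration of this transfer setup, the sketch below trains a simple classifier on class-normalized features from one (synthetic) course offering and applies it to the next. The feature values, model choice, and risk threshold are assumptions for illustration only, not the MIT team's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-student features from a prior offering (e.g., average
# time per correct homework problem); real features come from course logs.
rng = np.random.default_rng(0)
X_prev = rng.normal(size=(500, 3))
y_prev = (X_prev[:, 0] + rng.normal(size=500) > 0.5).astype(int)  # 1 = stopout

def normalize(X):
    # Normalize raw features against class averages, as the article
    # describes, so a model trained on one offering transfers to the next.
    return (X - X.mean(axis=0)) / X.std(axis=0)

model = LogisticRegression().fit(normalize(X_prev), y_prev)

# Score students in the next offering and flag likely stopouts for support.
X_next = rng.normal(size=(200, 3))
stopout_risk = model.predict_proba(normalize(X_next))[:, 1]
print("students flagged for extra support:", int((stopout_risk > 0.7).sum()))
```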


How Computer Science Education Has Changed
IT World (06/29/15) Josh Fruhlinger

The rapid advancement of technology has driven changes in many aspects of computer science education over the past decades, while other aspects have remained consistent. One major change is the expectation that every student has their own computer, thanks to the evolution in form factors and increased affordability. Another significant development has been a de-emphasis on computing hardware as a basic field of study and a greater concentration on programming, with one former student noting, "none of the intro classes teach assembly language. At the junior level there's Computational Structures, which goes into the theory and math behind binary logic and arithmetic, graphing, shortest-route, and so on." The practical applications of computer science also have shifted over the years; in the 1980s, the field's chief focus was data processing and its applicability to business. "[Students] weren't taught with the idea that you'd eventually want to try to put these pieces together into a bigger whole that would actually do something relevant or useful," says civil engineer and instructor Nick Carlson. However, one aspect that has not changed very much is that today's students, like the students at the dawn of computer science education, do not all necessarily want to become computer scientists, but instead seek skills to make their careers less complicated.


Paired With AI and VR, Google Earth Will Change the Planet
Wired News (06/29/15) Cade Metz

On the eve of Google Earth's 10th anniversary, Google researchers are applying artificial intelligence and virtual reality (VR) technology to transform the service into a much more powerful resource for research. Under the leadership of Sean Askay, Google Earth is evolving, with neural networks currently being used to sift through Google's massive cache of satellite imagery for insights. Meanwhile, VR is being explored to imbue Google Earth with new levels of fidelity and realism. Askay and Google are constructing three-dimensional models of actual locations that can be visited from desktop PCs, while future iterations will be designed to interact with Oculus-like headsets to make the experience even more immersive. Engineers have designed cameras for capturing 360-degree stereoscopic images, and Google has a service that stitches the images into a digital environment for viewing with a special headset. Another initiative to evolve Google Earth further involves the Google Earth Engine, a tool that outside developers and companies can employ to tap Google's data center network to run calculations on environmental data. Askay and his supervisor, Rebecca Moore, intend to make Earth Engine available to a wider audience. The World Resources Institute's Aaron Steele notes his organization is using neural networking to build a system for expanding, accelerating, and enhancing related projects.


Collaboratively Exploring Virtual Worlds
National Science Foundation (06/29/15) Aaron Dubrow

The U.S. Pentagon's Office of Force Readiness and Training is collaborating with Lockheed Martin and the U.S. National Science Foundation (NSF) to create the Virtual World Framework, a new platform for distributed, immersive training applications. The platform is designed to make it easier, faster, and less expensive to develop educational and training games and simulations. It enables multiple players to access the same game or application at the same time through a Web browser, and it makes use of the Global Environment for Network Innovations (GENI), an ultra-high-speed experimental networking testbed supported by NSF. GENI is ideal for the sort of shared, collaborative applications the Virtual World Framework is designed to facilitate, offering network latency low enough to make multiplayer sessions nearly seamless. The researchers presented the first game built using the platform, The Mars Game, at the Beyond Today's Internet Summit in March. The game uses the scenario of a crash-landed Mars rover to teach students lessons about math and computer programming. An initial test with 9th- and 10th-grade students in Colorado found the game improved both learning outcomes and engagement, with the gains shared across genders.


Computer Comedian Suggests Pics to Make Your Online Chat Funnier
New Scientist (06/29/15) Douglas Heaven

New software can bring a sense of humor to online chat. Called CAHOOTS, the system suggests humorous pictures to use when chatting online. Carnegie Mellon University Ph.D. student Miaomiao Wen and colleagues from Microsoft Research designed the system to use text typed into a chat window to search for images, constantly providing an updated selection from which to choose. For example, in response to the typed question "Why u late?," the system suggests images found by searching for "funny why" and "funny late," an image emblazoned with the words "I don't know!," and an image meme generated on the fly. The team tested CAHOOTS on more than 700 recruits from Amazon's Mechanical Turk crowdsourcing platform, and found most thought using the system was more fun than plain chat and that it helped them express their sense of humor. Although computers are not known for their sense of humor, they "can definitely help people be funny," Wen says. She notes some participants even said the images helped lead to new topics of discussion. The team now is adapting the system to work with email.
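
A minimal sketch of the query-generation step as the article describes it (prefixing chat keywords with "funny"); the stopword list and tokenization here are assumptions, and the real CAHOOTS system adds meme generation and image ranking on top:

```python
import re

# Assumed stopword list; the real system's filtering is more sophisticated.
STOPWORDS = {"u", "you", "the", "a", "an", "is", "are", "to", "my"}

def image_queries(message):
    """Turn a chat message into 'funny <word>' image-search queries."""
    words = re.findall(r"[a-z]+", message.lower())
    return [f"funny {w}" for w in words if w not in STOPWORDS]

print(image_queries("Why u late?"))  # -> ['funny why', 'funny late']
```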


New Role for Twitter: Early Warning System for Bad Drug Interactions
University of Vermont (06/29/15) Joshua E. Brown

University of Vermont researchers have developed software for discovering potentially dangerous drug interactions and unknown side effects before they show up in medical databases. The researchers say an algorithm called HashPairMiner can efficiently search millions of tweets for the names of drugs and medicines and build a map of how they are connected based on the hashtags that link them. "Our new algorithm is a great way to make discoveries that can be followed up and tested by experts like clinical researchers and pharmacists," says University of Vermont computer scientist Ahmed Abdeen Hamed, who led the program's development. He says the program's approach also could be used to generate public alerts before a clinical investigation has started or before healthcare providers have received updates. The researchers also hope to address the long-standing problem that published medical studies too often are not linked to newer scientific findings. They used the algorithm to create a website that enables an investigator to explore the connections between search terms, existing scientific studies, and Twitter hashtags. Their approach involves building a K-H network, a dense map of links between keywords and hashtags, and then removing much of the "noise and trash," according to Hamed.
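
The K-H construction can be pictured as building a weighted graph between drug keywords and hashtags and then pruning weak links. The toy sketch below (invented tweets, arbitrary pruning threshold) only illustrates the shape of the idea, not HashPairMiner itself:

```python
import re
from collections import Counter
from itertools import product

# Invented example tweets; the real system scans millions of posts.
tweets = [
    "taking ibuprofen with my #warfarin, feeling dizzy #bleeding",
    "warfarin dose adjusted again #bleeding",
    "headache gone thanks to ibuprofen #relief",
]
drug_names = {"ibuprofen", "warfarin"}  # keywords of interest

edges = Counter()
for t in tweets:
    tags = set(re.findall(r"#(\w+)", t.lower()))
    keywords = set(re.findall(r"\b\w+\b", t.lower())) & drug_names
    for k, h in product(keywords, tags):
        if k != h:
            edges[(k, h)] += 1  # keyword k co-occurs with hashtag h

# Prune rare co-occurrences -- the "noise and trash" step -- keeping the
# densely linked keyword-hashtag (K-H) core.
kh_network = {e: w for e, w in edges.items() if w >= 2}
print(kh_network)  # {('warfarin', 'bleeding'): 2}
```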


Seeking Big Data Utopia--Next-Gen User Personalization With Tighter Privacy
Trinity College Dublin (06/29/15) Thomas Deane

Computer scientists and legal experts from Trinity College Dublin and SFI's ADAPT center are working to give computer users better options for personalizing and controlling their personal data. The researchers are developing Privacy Paradigm, an online privacy system that would tailor each piece of content and every interaction to a user's individual needs. Privacy Paradigm, which uses algorithms to model user needs, could be added to websites and applications to show users how private their personal information would be. The researchers plan to build personalized visualizations that not only show users what a system has modeled about them, but also give them easy-to-use controls to modify and adjust those models. The researchers say this form of explorative personalization enables users to reflect on what they are doing and to identify overlooked opportunities. The idea is similar to the Creative Commons licensing system, and the goal "is to make the way online services use our personal--and often privacy-sensitive--information as transparent and easy to understand and manipulate as possible for ordinary users," says Trinity professor Owen Conlan. He says tighter and more transparent privacy controls would benefit users as they take advantage of personalization.


'Clean Room' Research Could Revolutionize Computers
Orangeburg Times and Democrat (SC) (06/28/15) Paul Alongi

Clemson University researchers are making devices called "optics" by etching microscopic patterns onto silicon and glass wafers, and then shining an ordinary light beam through the optics to make the light do extraordinary things. The researchers first take a wafer into a clean room, where it is spun, covered with liquid, exposed to ultraviolet light, stenciled, washed in developer, and etched with ions. The finished product is a disc covered in squares that reflect rainbow colors like a CD, though it appears smooth to the naked eye. The researchers say using elements of light-based technology in microchips could lower costs by reducing energy consumption. The devices can be used in the field of silicon photonics, which combines light-based innovations with traditional microchips, according to Clemson researcher Eric Johnson. He says these advances could significantly impact data centers. Johnson's research team also is attempting to use blue-laser diodes to improve how the U.S. Navy communicates underwater. "One of the problems now is that the blue-green laser sources are large, bulky, not efficient," Johnson says. "But by leveraging the blue-laser diodes, they can make a much more efficient system, which in the end reduces cost and complexity."


Opening a New Route to Photonics
Lawrence Berkeley National Laboratory (06/26/15) Lynn Yarris

Researchers at Lawrence Berkeley National Laboratory and the University of California, Berkeley conducted a study in which a mathematical concept called adiabatic elimination was applied to optical nanowaveguides, the photonic analogues of electronic circuits. The researchers used a combination of coupled systems and adiabatic elimination to resolve an inherent crosstalk problem for nanowaveguides that are too densely packed. They applied the adiabatic elimination concept, which has a proven track record in atomic physics and other research fields, to a coupled system of optical nanowaveguides by inserting a third waveguide in the middle of the coupled pair. Although only about 200 nanometers separate each of the three waveguides, a distance that would normally generate too much crosstalk to allow any control over the coupled system, the middle waveguide operates in a "dark mode," meaning it does not appear to participate in the exchange of light between the two outer waveguides. "By judiciously selecting the relative geometries of the outer and intermediate waveguides, we achieve adiabatic elimination, which in turn enables us to control the movement of light through densely packed nanowaveguides," says Tel Aviv University researcher Haim Suchowski.
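
In standard coupled-mode notation (ours, not necessarily the paper's), the idea can be sketched for amplitudes a1 and a3 in the outer guides and a_m in the detuned middle guide:

```latex
% Coupled-mode equations for two outer guides bridged by a middle guide
% detuned by \Delta\beta, with nearest-neighbor coupling \kappa:
\frac{da_1}{dz} = i\kappa\, a_m, \qquad
\frac{da_m}{dz} = i\Delta\beta\, a_m + i\kappa\,(a_1 + a_3), \qquad
\frac{da_3}{dz} = i\kappa\, a_m .
% Adiabatic elimination: for |\Delta\beta| \gg \kappa the middle guide
% follows the outer ones instantaneously, so set da_m/dz \approx 0:
a_m \approx -\frac{\kappa}{\Delta\beta}\,(a_1 + a_3)
\quad\Longrightarrow\quad
\frac{da_1}{dz} \approx -\,i\,\frac{\kappa^2}{\Delta\beta}\,(a_1 + a_3).
```

The outer guides thus exchange light with an effective strength on the order of kappa^2/Delta-beta, while the middle guide stays essentially unpopulated, consistent with the "dark mode" described above.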


Georgia Tech Researchers Train Computer to Create Games by Watching YouTube
Georgia Tech News Center (06/24/15) Joshua Preston

Georgia Institute of Technology (Georgia Tech) researchers have developed a computing system that views video game play on streaming services, analyzes the footage, and then creates new sections of a game. The researchers tested the system with the original Super Mario Bros. game. The system focuses on the gaming terrain and the positioning between on-screen elements, and determines the underlying relationship or level-design rule. "An initial evaluation of our approach indicates an ability to produce level sections that are both playable and close to the original without hand coding any design criteria," says Georgia Tech researcher Matthew Guzdial. The system relies on studying players in action to see where they spend most of their time in the game. The algorithms identify high-interaction areas, and the automatic level designer targets these areas to gain design information, enabling the system to build a new level section, element by element. "Our system creates a model or template, and it's able to produce level sections that have never been seen before, do not appear random, and can be traversed by the player," says Georgia Tech professor Mark Riedl. The system created 151 distinct level sections from 17 samples of the original game, controlling for overall playability and style variables.
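
A toy flavor of "learn design rules from samples, then generate unseen sections": the sketch below fits a Markov chain over level-column types and samples new sections from it. The column encoding and samples are invented; the Georgia Tech system learns far richer geometric relationships from video.

```python
import random
from collections import Counter, defaultdict

# Toy level sections as strings of column types ('-' flat ground, 'P' pipe,
# 'G' gap, 'B' block row). The real system derives structure from frames.
samples = ["---P--G--B---", "--B--P---G---", "----G--P--B--"]

transitions = defaultdict(Counter)
for section in samples:
    for a, b in zip(section, section[1:]):
        transitions[a][b] += 1  # how often column type b follows a

def generate(length=12, start="-"):
    """Sample a new level section column by column from learned frequencies."""
    cols = [start]
    for _ in range(length - 1):
        nxt = transitions[cols[-1]]
        cols.append(random.choices(list(nxt), weights=nxt.values())[0])
    return "".join(cols)

random.seed(1)
print(generate())  # a plausible, never-before-seen section
```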


How Computers Are Learning to Make Human Software Work More Efficiently
The Conversation (06/25/15) John R. Woodward; Justyna Petke; William Langdon

Genetic improvement is an approach to computer program optimization in which an automated "programmer" is written to manipulate the source code of a piece of software via trial and error, with the goal of boosting the software's operational efficiency. Each manipulation is assessed against some quality measure to see if the new version of the code is an improvement. Among the potential benefits this approach can yield are faster programs, bug removal, easier conversion of old software to new hardware, and enhancement of non-functional properties. Genetic improvement diverges from the discipline of genetic programming, which attempts to build programs from scratch, by instead making small numbers of minuscule modifications to existing code. The field's potential was demonstrated in the past year by several research initiatives overseen by University College London. One project involved a program capable of taking a piece of software with more than 50,000 lines of code and making it run 70 times faster. A second project conducted the first automated wholesale transplant of one piece of software into a larger one by taking a linguistic translator called Babel and inserting it within an instant-messaging system called Pidgin.
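
A toy sketch of the mutate-and-evaluate loop under stated assumptions: statement deletion is the only mutation, a correctness check stands in for a real test suite, and code length is a crude proxy for runtime. Real genetic-improvement systems use richer edits and actual measurements.

```python
import random

# Toy "program": a list of statements, one of which is wasteful.
program = [
    "total = 0",
    "for i in range(n): total += i",
    "unused = [x * x for x in range(10000)]",  # dead code slowing us down
    "result = total",
]

def fitness(prog, n=1000):
    """Quality measure: the program must still compute the right answer;
    among correct variants, shorter (our cost proxy) is better."""
    env = {"n": n}
    try:
        exec("\n".join(prog), env)
        if env["result"] != sum(range(n)):
            return float("inf")          # behavior broken: reject
        return len("".join(prog))        # crude stand-in for runtime
    except Exception:
        return float("inf")              # crashed: reject

random.seed(0)
best = program
for _ in range(50):
    mutant = best[:]
    mutant.pop(random.randrange(len(mutant)))  # tiny edit: drop a statement
    if fitness(mutant) < fitness(best):
        best = mutant                    # keep only improvements
print(best)  # the dead-code line is gone; behavior is preserved
```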


Israeli Computer Expert Works to Simplify Cyber Security
The Algemeiner (06/29/15) Maayan Jaffe

Israeli-born computer scientist and electrical engineer Dan Boneh was presented with the 2014 ACM-Infosys Foundation Award in Computing Sciences on June 20 in San Francisco. The award, which came with a $175,000 prize, recognized Boneh's contributions to making cryptography easier to use. "Boneh has produced new directions and given the field a fresh start," says ACM president Alexander L. Wolf. Among the fruits of Boneh's work are algorithms that helped to establish the field of pairings-based cryptography, in particular identity-based encryption, in which an identity credential, such as an email address, can act as an encryption key. David Kravitz, a research staff member at IBM, says this type of encryption is already having a major impact on e-commerce and related transactions. Boneh's work also has produced new mechanisms for enhancing Web and mobile device security. Some of his more recent work focuses on securing the numerous sensors in the average smartphone so they will not readily reveal sensitive data. Boneh also has worked to advance cryptographic watermarking, runs a popular online course on cryptography, and co-founded Voltage Security, which has licensed its technology to more than 1,000 corporations worldwide.
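
The flavor of identity-based encryption can be seen in Boneh and Franklin's classic construction, where a bilinear pairing e: G1 x G1 -> G2 lets an arbitrary string such as an email address serve as a public key (notation ours):

```latex
% Setup: master secret s; public parameters P and P_{pub} = sP.
% Key extraction for identity ID (e.g., an email address):
Q_{ID} = H_1(\mathrm{ID}), \qquad d_{ID} = s\,Q_{ID}.
% Encryption of message M with fresh randomness r:
C = (U, V) = \bigl(rP,\; M \oplus H_2\!\bigl(e(Q_{ID}, P_{pub})^{r}\bigr)\bigr).
% Decryption, using bilinearity e(d_{ID}, U) = e(sQ_{ID}, rP)
%                                           = e(Q_{ID}, sP)^{r}
%                                           = e(Q_{ID}, P_{pub})^{r}:
M = V \oplus H_2\!\bigl(e(d_{ID}, U)\bigr).
```

Because Q_ID is derived by hashing the identity itself, a sender needs no per-recipient certificate, which is what makes the approach easier to use in practice.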


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe