Association for Computing Machinery
Welcome to the January 6, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


Consumer Electronics Show Will Highlight New Ways to Collect Biometric Data
The Washington Post (01/06/14) Cecilia Kang; Hayley Tsukayama

Products showcased at the annual International Consumer Electronics Show (CES) reflect the spread of biometric tools into personal devices, supporting more customized user experiences while also testing the boundaries of privacy. The move of biometrics into security is a major trend, with companies exhibiting technology at CES that employs fingerprint, palm-print, and iris scanners, as well as voice-recognition software, to replace passwords and add another protective layer against hackers. Eye-scanning technology also is gaining ground with its ability to read consumer behavior. The biometric boom is fueled by the increased availability of inexpensive sensors and by computing advances that allow "the digitization of everyday objects," says the Consumer Electronics Association's Shawn DuBravac. "Anything that we want to digitize we now can." Meanwhile, the Electronic Privacy Information Center's Jeramie D. Scott says the storage of and access to biometric data are pressing issues, since such data is personally identifiable information that companies share to construct more refined consumer profiles. Scott says biometric data is a highly valued commodity, noting as one example that data about a person's gait can sometimes be sufficient to identify that individual. Most companies promise that biometric data will be stored on individual devices rather than transmitted over the Internet.


Thanks to the NSA, Quantum Computing May Some Day Be in the Cloud
Computerworld (01/04/14) Patrick Thibodeau

The U.S. National Security Agency is spending about $80 million on basic quantum computing research, and that funding may eventually support the commercialization of quantum computing and even make it accessible through the cloud. IDC analyst Earl Joseph notes the quest for quantum computing currently is an academic competition among nations. "The goal is to fund basic research and make new discoveries that may be useful for our safety and national defense," he says. Meanwhile, Intersect360 Research's Christopher Willard sees quantum computing as one manifestation of a broad market shift toward more innovative computing architectures, driven by the recognition that commercial, off-the-shelf technologies are becoming less and less capable of being integrated into high-performance systems. In May, Google, the U.S. National Aeronautics and Space Administration, and the Universities Space Research Association began jointly working on quantum computing research using a quantum computer developed by D-Wave Systems. D-Wave CEO Vern Brownell says such a system has wide applications to big data problems, analytics, and machine learning, and he envisions quantum computing ultimately fulfilling the role of co-processor rather than serving as a direct substitute for classical computing systems. Brownell foresees cloud-based quantum computing resources that are accessible to any developer wishing to tackle a particularly formidable challenge.
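The co-processor model Brownell describes can be pictured as classical code formulating an optimization problem and handing it off to specialized hardware. Below is a minimal sketch of that workflow, assuming the problem is expressed as a QUBO (quadratic unconstrained binary optimization), the form D-Wave-style annealers accept; a brute-force loop stands in for the quantum hardware, and nothing here uses D-Wave's actual API.

```python
from itertools import product

# Hypothetical QUBO: reward setting each bit, penalize setting both at once.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}

def solve_qubo(Q, n):
    """Brute-force stand-in for a quantum annealer: try all bit strings."""
    best = None
    for bits in product((0, 1), repeat=n):
        energy = sum(w * bits[i] * bits[j] for (i, j), w in Q.items())
        if best is None or energy < best[0]:
            best = (energy, bits)
    return best

# The classical program formulates Q, "offloads" it, and uses the answer.
print(solve_qubo(Q, 2))  # -> (-1.0, (0, 1)): lowest-energy assignment
```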


Study: Self-Driving Car Sales Will Explode
USA Today (01/06/14) Chris Woodyard

Worldwide sales of self-driving cars (SDCs) are projected to grow from 230,000 in 2025 to 11.8 million by 2035, with a cumulative 54 million SDCs expected to be on roads worldwide by 2035, according to a study by IHS Automotive. IHS also predicts that almost all vehicles, both commercial and private, will be self-driving by 2050, and that road safety will increase in proportion to the number of SDCs deployed. "As the market share of SDCs on the highway grows, overall accident rates will decline steadily," says IHS analyst Egil Juliussen. "Traffic congestion and air pollution per car should also decline, because SDCs can be programmed to be more efficient in their driving patterns." IHS also predicts that about 30 percent of SDCs sold during that period will be in North America. The first wave of SDCs will be equipped with systems that assume control of the vehicle only in relatively safe driving conditions, while more refined systems for driving in increasingly complex conditions will hit the market in the 2020s. The major obstacles to SDC development will be cybersecurity and software reliability, and government also will play a prominent role by setting the rules dictating SDC deployment.
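As a quick sanity check on those figures, growing from 230,000 annual sales in 2025 to 11.8 million in 2035 implies a compound annual growth rate of roughly 48 percent. The short computation below uses only the numbers cited above.

```python
# Implied compound annual growth rate (CAGR) of SDC sales from the IHS figures.
start, end, years = 230_000, 11_800_000, 10
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 48% per year
```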


Unemployed in Europe Stymied by Lack of Technology Skills
The New York Times (01/03/14) Liz Alderman

Many information technology-based job opportunities are emerging across Europe, but a large portion of the continent's unemployed workers and young people entering the workforce lack the necessary skills. "In all countries, there is an expectation that many of the new jobs created will be in the knowledge-intensive economy," says the Organization for Economic Cooperation and Development's Glenda Quintini. "But we are seeing a worrisome skills mismatch that means a large number of unemployed people are not well prepared for the pool of jobs opening up." Overall, there are approximately 2 million job vacancies in the European Union across various sectors, from hotel work to computer programming, according to Eurostat. In addition, a recent study by the European Commission (EC) predicts that by 2015 about 900,000 information and communications technology vacancies may go unfilled in the European Union. Microsoft, PayPal, and Fujitsu have started expanding in Ireland, for example, but have had to look outside the country to recruit enough qualified workers. The EC study notes the lack of qualified workers "is of major concern to European competitiveness" and to the economy as a whole. In response, European governments and companies are boosting their efforts to retrain the unemployed and are trying to attract university students to mathematics, engineering, and science degrees.


ButtonMasher: AI Takes on Humans to Create Video Game
New Scientist (01/02/14) Douglas Heaven

Goldsmiths, University of London researchers have developed Angelina, an artificial intelligence-based game designer that recently submitted its first entry to the game-making event Ludum Dare. "I can safely say that the game created by Angelina has better game play and graphics than several other entries," says Imperial College London researcher Alan Zucconi. Angelina was developed as part of work on computational creativity, which examines whether software can be made to do things that would be considered creative if done by a human. In Angelina's game, the player must collect one type of object and avoid another. To design it, Angelina first identifies a key noun in the jam's theme phrase and uses it to search an online database for associated words and images; it then expands the interpretation by looking up the word in a database of metaphors. Angelina relies on a form of procedural generation, in which content is created by an algorithm rather than by hand. "Eventually, Angelina will enter a game jam with an idea that surprises people," says Goldsmiths researcher Mike Cook. "It won't be because I gave it better templates, it'll be because I gave it more freedom."
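To make "procedural generation" concrete, here is a toy sketch in the spirit of Angelina's pipeline, not its actual code: a hypothetical word-association table stands in for the online databases Angelina queries, and the collect/avoid level layout is produced by an algorithm rather than by hand.

```python
import random

# Hypothetical stand-in for an online word-association database.
ASSOCIATIONS = {
    "ocean": {"collect": "pearls", "avoid": "jellyfish"},
    "forest": {"collect": "mushrooms", "avoid": "wolves"},
}

def generate_level(theme, width=10, height=6, seed=None):
    """Pick collect/avoid objects from the theme, then place them randomly."""
    rng = random.Random(seed)
    words = ASSOCIATIONS.get(theme, {"collect": "coins", "avoid": "spikes"})
    grid = [["." for _ in range(width)] for _ in range(height)]
    for symbol, count in (("C", 5), ("A", 3)):  # C = collectible, A = hazard
        placed = 0
        while placed < count:
            r, c = rng.randrange(height), rng.randrange(width)
            if grid[r][c] == ".":
                grid[r][c] = symbol
                placed += 1
    print(f"Theme '{theme}': collect {words['collect']}, avoid {words['avoid']}")
    print("\n".join(" ".join(row) for row in grid))

generate_level("ocean", seed=42)
```

Re-running with a different seed or theme yields a different playable layout, which is the essence of the technique: the designer writes the generator once, and the algorithm produces the content.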


Microsoft Advances C# With the M# Language
eWeek (01/02/14) Darryl K. Taft

Microsoft researchers have developed a high-level systems programming language known as M# that is an extension of C#. M# aims to give developers type safety and productivity as well as performance. "With respect to M#, I think we should keep in mind that there has been an age-old struggle in computer science to deliver highly productive development models that are also efficient and high-performance," says IDC analyst Al Hilwa. "As automation becomes more sophisticated and we head towards driverless cars, for example, it stands to reason that automated code generation should improve." However, before Microsoft can open source the M# language, the team needs to resolve a few aspects of the language and move to the company's Roslyn compiler-as-a-service codebase so the language's relationship to C# is more elegant, says Microsoft developer Joe Duffy. "That a new operating system needs to be accompanied by a new programming model through a new language or a framework is common in the industry, but it is also risky," Hilwa notes. "OS and programming language designers should keep in mind that coming up with new metaphors for constructing applications may find a skills gap in the industry which can detract from adoption."


It's Time to Take Mesh Networks Seriously (and Not Just for the Reasons You Think)
Wired News (01/02/14) Primavera De Filippi

Emergency "mesh" networks that wirelessly connect computers and devices directly to each other have existed for some time and offer numerous benefits, but are not yet widely used, writes Primavera De Filippi, a research fellow at the Berkman Center for Internet & Society at Harvard Law School. Ad hoc network infrastructures automatically reconfigure themselves according to the availability and proximity of bandwidth, storage, and other factors, making them resilient in the face of interference such as disasters. Packets can use multiple routes to navigate the network due to dynamic connections between nodes. Mesh networks can only be taken down when every node is shut down, unlike more centralized network architectures. Mesh networks have been deployed for their resilience during political upheavals and natural disasters, but also as an inexpensive, basic connectivity infrastructure in poor neighborhoods and underserved areas. Furthermore, privacy concerns are reduced with mesh networks, which do not have a central regulating authority and therefore maintain the confidentiality of online communications. Socially, mesh networks offer an alternative to traditional governance models by enabling people to self-organize into communities that share resources and control the infrastructure of communication. Although mesh networks offer many advantages, they are currently limited by technical obstacles, the perception that they are emergency tools, and political factors due to concerns about the lack of third-party regulation.


OK, Glass, Find a Killer App
Technology Review (01/02/14) Rachel Metz

Although Google Glass will not be available to the public until later this year, a select group of developers has been experimenting with it for months and has created apps that could indicate where the technology is headed. For example, the Moment Camera app takes pictures every few seconds when it detects the presence of faces. "Glass has this sort of built-in awareness that a phone that's in your pocket or sitting face-down on a table doesn't have," says Moment Camera developer Kenny Stoltz. Meanwhile, Georgia Tech professor Thad Starner is developing Captioning on Glass, an app that transcribes the words that someone speaks into a smartphone onto the Glass display of someone with impaired hearing. "By having a head-up display, the wearer can stay 'in the flow' of the conversation, attending the other person's face to get as much information as possible while speeding the natural conversation," Starner says. Some developers also are adapting existing apps for Glass. For example, Quest Visual's Word Lens uses a smartphone's camera and screen to translate signs in real time without the need for an Internet connection; the company's Glass app performs the same task on the new platform.
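As an illustration of the Moment Camera idea, automatically keeping frames in which faces appear, here is a rough sketch using OpenCV on a desktop webcam. It is an assumption about the approach, not the app's actual code, and Glass itself would use Android camera APIs rather than cv2.

```python
import cv2

# Classic Haar-cascade face detector bundled with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cam = cv2.VideoCapture(0)  # default webcam stands in for Glass's camera
saved = 0
for _ in range(100):       # sample a bounded number of frames for the demo
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:     # a face is present: keep this candid moment
        cv2.imwrite(f"moment_{saved:03d}.jpg", frame)
        saved += 1
cam.release()
print(f"kept {saved} candid moments")
```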


New Innovation by NUS Researchers Enhances Information Storage in Electronics
National University of Singapore (12/30/13) Kimberly Wang

National University of Singapore (NUS) researchers say they have developed magnetoresistive random access memory (MRAM) technology that can boost information storage in electronic systems. The researchers developed a new device structure for next-generation MRAM chips, which they say could be applied to enhance the user experience in consumer electronics. "Storage space will increase, and memory will be so enhanced that there is no need to regularly hit the 'save' button as fresh data will stay intact even in the case of a power failure," says NUS researcher Yang Hyunsoo. The researchers say the technology could change the architecture of computers, making them much easier to manufacture. Traditional approaches to MRAM rely on in-plane current-induced magnetization, a method that is challenging to implement because it requires ultra-thin ferromagnetic structures. The NUS researchers solved this problem by incorporating magnetic multilayer structures up to 20 nanometers thick, providing an alternative film structure for electronic data transmission and storage.


Preview of Writing Code for Future
Boston Globe (12/29/13) Jennifer Fenn Lefferts

About 200 schools in Massachusetts participated in the Hour of Code, a nationwide campaign sponsored by Code.org aimed at introducing millions of students to programming. "We think that computer science is emerging as a 21st-century literacy," says Massachusetts Computing Attainment Network (MassCAN) executive director Jim Stanton. "Computer science provides the tools to become creators of technology, and we think that's where there is huge excitement." The Hour of Code was designed to make computer science less intimidating by guiding students through introductory coding tutorials. The event took place during the recent Computer Science Education Week and introduced approximately 15 million students in 170 countries to basic coding. In addition, Computer Science Education Week organizers said more girls participated in computer science in U.S. schools during the event than in the previous 70 years combined. According to Code.org, nine out of 10 schools do not offer programming courses, even though computing jobs are predicted to outnumber the students qualified to fill them by 1 million by 2020. Furthermore, 33 states, including Massachusetts, do not count computer science courses toward math or science requirements for a high school diploma. MassCAN is a coalition of educational nonprofits, business associations, corporations, and educational institutions working to encourage students in computer science through local initiatives.


Man and Machine: Cognitive Computing in the Enterprise
Information Age (United Kingdom) (12/28/13) Ben Rossi

The next generation of cognitive computers aims to serve as cognitive assistants that will supplement human intelligence. Such machines will offer not only data-crunching capabilities, but also the ability to analyze real-world situations, hypothesize, reach conclusions, and advise on outcomes. The main benefit of human-machine collaboration is merging machines' productivity and speed with humans' emotional intelligence and ability to handle the unknown, according to Gartner. To move beyond IT and truly collaborate with people, computers need to improve in areas such as natural language question-and-answer capabilities. "Right now the science of cognitive computing is in the formative stages," says IBM Research's Ton Engbersen. "To become machines that can learn, computers must be able to process sensory as well as transactional input, handle uncertainty, draw inferences from their experience, modify conclusions according to feedback, and interact with people in a natural, human-like way." Computers will need to think and interface in a way that fits with natural human patterns, rather than humans adapting to the functionality of computers. Already, new smart devices are monitoring and controlling the temperature in living spaces, lifestyle choices, and other aspects of daily life, while smart sensors in neonatal wards are monitoring the vital signs of premature babies. Advances also are occurring in stochastic optimization, predictive analytics, and contextual analytics.
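One concrete mechanism for "handling uncertainty" and "modifying conclusions according to feedback" is Bayesian updating. The sketch below is a minimal illustration of that general idea, not IBM's method: a belief about a hypothetical failing machine part is revised as each new sensor reading arrives.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

belief = 0.50  # initial belief that a (hypothetical) machine part is failing
for reading_abnormal in [True, True, False, True]:
    if reading_abnormal:   # abnormal readings are likelier if the part is failing
        belief = update(belief, 0.8, 0.3)
    else:                  # normal readings are likelier if the part is healthy
        belief = update(belief, 0.2, 0.7)
    print(f"belief after reading: {belief:.2f}")
```

Each piece of feedback strengthens or weakens the conclusion rather than flipping it outright, which is the behavior the cognitive-computing vision calls for.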


Artificial Intelligence to Help Disaster Aid Coordination
SciDev.net (12/27/13) Kieran Dodds

A consortium of universities and private firms in the United Kingdom is working on the ORCHID project, which aims to use artificial intelligence to streamline disaster response. The need for such a project was evidenced by the splintered relief efforts in the Philippines in November following Typhoon Haiyan, the group says. The ORCHID project will merge human and artificial intelligence into an efficient, complementary unit known as a Human Agent Collective (HAC). The team is creating computer systems that will coordinate surveillance drones, resource management, and search planning, says David Jones, head of Rescue Global, the disaster response organization that will test the software next year. "Coordination of such a large response [after a disaster] is so challenging without technological assistance that makes data more accessible," Jones says. "Bringing humans and artificial intelligence together is the only way to get the job done better." Computers can parse the large volumes of information generated during an emergency from local status reports, social media, and the various organizations involved in the relief effort. HAC systems can collect and analyze that data to flexibly implement disaster response activities. The team will conduct field trials this year in the Bay of Bengal that should demonstrate the effectiveness of HACs.
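To give a flavor of the kind of parsing involved, here is a toy sketch, entirely hypothetical and not ORCHID's software, that scores incoming free-text status reports by severity keywords so the most urgent surface first; a real HAC would combine far richer signals with human judgment.

```python
# Hypothetical severity weights for keywords found in incident reports.
SEVERITY = {"trapped": 5, "injured": 4, "flooding": 3,
            "no power": 2, "road blocked": 1}

def triage(reports):
    """Sort reports so the highest-severity ones come first."""
    def score(text):
        t = text.lower()
        return sum(w for kw, w in SEVERITY.items() if kw in t)
    return sorted(reports, key=score, reverse=True)

reports = [
    "Road blocked near the market, no power in the area",
    "Family trapped under collapsed roof, two injured",
    "Flooding rising on the east side",
]
for r in triage(reports):
    print(r)
```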


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe