Association for Computing Machinery
Welcome to the August 5, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets (click here) and for iPhones (click here) and iPads (click here).

HEADLINES AT A GLANCE


Despite Job Boom, Fewer Students Study Tech
CBS MoneyWatch (08/01/13) Lynn O'Shaughnessey

The number of computer and IT jobs grew 13 percent from 2003 to 2012, but the number of people with degrees in these fields dropped by 11 percent over the same time period, according to a recent CareerBuilder and Economic Modeling Specialists study. "The slowdown in IT degrees over the last decade may have been influenced, in part, by the dot-com bubble collapse and by more recent trends of tech workers being trained by employers or trained through informal programs outside of a traditional academic setting," says CareerBuilder CEO Matt Ferguson. "The deficit in IT degree completions is concerning when you consider that there is already a considerable gap between the demand for and supply of IT labor in the U.S. today." The drop in technology degrees was especially notable in New York City, San Francisco, Atlanta, Miami, and Los Angeles. Meanwhile, Salt Lake City, Washington, D.C., and Minneapolis-St. Paul experienced the greatest percentage increase in IT and computer degrees from 2003 to 2012. Engineering degrees, which traditionally command among the highest salaries, registered more modest growth over the past 10 years.


How to Build a Digital-Humanities Tool in a Week
Chronicle of Higher Education (08/02/13) Sara Grossman

George Mason University recently hosted 12 scholars who collaborated to develop a digital-humanities Web application for the One Week | One Tool challenge, sponsored by the National Endowment for the Humanities (NEH). In five days, the team developed Serendip-o-matic, a "serendipitous" discovery tool that enables users to enter information such as a bibliography or article to view results on similar materials that they might otherwise have missed. The tool pulls material from the online collections of the Digital Public Library of America, the Europeana digital library, and Flickr Commons, says Emory University's Brian Croxall, Serendip-o-matic's project manager. Croxall says the tool shuffles results, returning different content with each search in an effort to "recreate browsing in the stacks and finding a book perhaps in a special collection or in an archive that you didn’t know existed but is related to what you’re doing." Although humanities scholars are increasingly relying on tools and software systems, few understand how these applications work, and the weeklong seminar provided a chance for humanities scholars to learn the inner workings of an app, says NEH's Brett Bobley.


W3C's Web Storage Tech Goes Live
IDG News Service (08/01/13) Joab Jackson

The World Wide Web Consortium has finalized its specification for Web Storage, a technology that gives Web applications more flexibility in storing data on user machines. Browser makers and Web application developers can now deploy Web Storage without worrying about changes to the application programming interface (API), or about being liable for potential patent infringement. Web Storage is similar to HTTP session cookies, which store user data on the user's machine to support extended sessions with a website, such as those for online purchases. However, Web Storage provides programmers with a richer programmatic interface, makes it easier for a browser to support multiple simultaneous sessions with the same site, and can store megabytes of information on the user's computer. "One of the nice properties of Web Storage is that it is a relatively simple specification from a feature and API perspective," says WebApps Working Group co-chair Arthur Barstow. In addition, Web Storage provides a way to delete data after a certain period of time and restricts access to the data to only the websites that created the storage area.
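In browsers, the specification is exposed as the localStorage and sessionStorage objects, which share a simple string key-value interface (setItem, getItem, removeItem, clear, key, length). The sketch below models that interface in memory so its semantics can be run outside a browser; the MemoryStorage class is our own illustration, not part of the standard.

```typescript
// Minimal in-memory sketch of the W3C Web Storage interface.
// In a browser, the same calls run against the built-in
// localStorage or sessionStorage objects.
class MemoryStorage {
  private data = new Map<string, string>();

  get length(): number {
    return this.data.size;
  }

  setItem(key: string, value: string): void {
    // Web Storage stores strings only; callers serialize objects themselves.
    this.data.set(key, String(value));
  }

  getItem(key: string): string | null {
    // Missing keys yield null, not undefined, per the specification.
    return this.data.has(key) ? this.data.get(key)! : null;
  }

  removeItem(key: string): void {
    this.data.delete(key);
  }

  clear(): void {
    this.data.clear();
  }

  key(index: number): string | null {
    return Array.from(this.data.keys())[index] ?? null;
  }
}

// Usage mirroring an extended browser session, e.g. persisting a cart:
const storage = new MemoryStorage();
storage.setItem("cart", JSON.stringify({ items: ["book"], total: 12.5 }));
const cart = JSON.parse(storage.getItem("cart")!);
console.log(cart.total); // 12.5
```

Note that unlike cookies, none of this data is sent to the server on each request; it stays on the client until the script or the browser removes it.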


Stanford Engineers Receive Award to Improve Supercomputing and Solar Efficiency
Stanford Report (CA) (08/01/13) Bjorn Carey

A Stanford University research team led by Gianluca Iaccarino will receive $3.2 million a year for the next five years under the National Nuclear Security Administration's Predictive Science Academic Alliance Program II (PSAAP II) to house one of its three new Multidisciplinary Simulation Centers. PSAAP II participants will devise new computing paradigms within the context of solving a practical engineering problem, focusing on predicting the efficiency of a relatively untested and poorly understood method of harvesting solar energy. The Stanford researchers have proposed a system in which fine particles suspended within a fluid would absorb sunlight and directly transfer the heat evenly throughout the fluid, which would allow for higher energy absorption and transfer rates, ultimately increasing the efficiency of the overall system. However, a critical aspect of assessing this technique involves predicting uncertainty within the system, and "there is currently no supercomputer in the world that can do this, and no physical model," Iaccarino notes. To solve this problem, the researchers will have to develop programming environments and computational approaches that target an exascale computer. The researchers also will operate a physical experiment of the solar collector to test the predictions and identify other critical sensitivities.


Cells Reprogrammed on the Computer
University of Luxembourg (08/01/13)

Scientists at the University of Luxembourg have developed computer-based instructions for reprogramming cells. The model predicts which differentiated cells can be efficiently changed into completely different cell types. "Our theoretical model first queries databases where vast amounts of information on gene actions and their effects are stored and then identifies the genes that maintain the stability of differentiated cells," says Isaac Crespo at the Luxembourg Center for Systems Biomedicine. "Working from the appropriate records, the model suggests which genes in the starting cells need to be switched on and off again, and when, in order to change them into a different cell type." Crespo says the model has made very accurate predictions in the lab. The team says the model could be of enormous benefit for regenerative medicine, potentially enabling doctors, for example, to reprogram a patient's own healthy skin cells into nerve cells when nerve tissue becomes diseased, and to treat conditions such as Parkinson's disease.


Turning Unused TV Frequencies Into Wireless Broadband
A*STAR Research (07/31/13)

The Singapore White Spaces Pilot Group was formed in April 2012 to support Singapore's efforts to adopt TV white spaces (TVWS) for consumer and business services and applications. The group was founded by A*STAR's Institute for Infocomm Research (I2R), Microsoft Singapore, StarHub, and Neul. "The successful pilots that we have seen in Singapore have set benchmarks in showcasing the potential of TVWS technology in delivering reliable and cost-efficient wireless broadband for multiple commercial applications," says I2R executive director Tan Geok Leng. In one of the projects, I2R is working to develop a TVWS-based infrastructure for utility metering, which could serve as a basis for energy grid modernization. "The benefit of using TVWS technology is tremendous, be it to support 'smart city' infrastructure, to extend connectivity into previously challenging environments, or to enable ubiquitous, reliable wireless connectivity that will enhance our lives," says Neul's Tracy Hopkins. I2R researchers also are developing next-generation TVWS technology that will improve spectrum efficiency, scalability, and quality of service. "We believe TVWS is the first step to better utilize valuable frequency spectrums in order to support the exponential growth of wireless adoption," Tan says.


Berkeley Lab Researchers Discover Universal Law for Light Absorption in 2D Semiconductors
Berkeley Lab News Center (07/31/13) Lynn Yarris

Lawrence Berkeley National Laboratory researchers say they have discovered a quantum unit of photon absorption, called AQ, that should apply to all two-dimensional (2D) semiconductors. They say their discovery offers insight into the optical properties of 2D semiconductors and quantum wells, and could lead to exotic new optoelectronic and photonic technologies. "We discovered that the magnitude of step-wise absorptance in these materials is independent of thickness and band structure details," says University of California, Berkeley professor Ali Javey. He notes the discovery was made possible thanks to a unique process in which thin films of indium arsenide are transferred onto an optically transparent substrate. "We were then able to investigate the optical absorption properties of membranes that ranged in thickness from three to 19 nanometers as a function of band structure and thickness," Javey says. The researchers were able to measure the magnitude of light absorptance in the transition from one electronic band to the next at room temperature. "Our results add to the basic understanding of electron–photon interactions under strong quantum confinement and provide a unique insight toward the use of 2D semiconductors for novel photonic and optoelectronic applications," says Berkeley Lab researcher Eli Yablonovitch.


Watson and the Future of Cognitive Computing
Computerworld Australia (07/30/13) Rohan Pearce

In an interview, IBM Research-Australia director Glenn Wightwick discussed the technologies behind the Watson supercomputer and what differentiates cognitive computing from past approaches to machine learning and natural-language processing. Wightwick says that much of the technology behind cognitive computing has existed for many years, but many applications now being explored involve processing a tremendous volume of data extremely rapidly to enable cognitive systems to engage with people in a natural way. "What is dramatically different about how we are approaching cognitive computing based on the Watson technology is we approached natural language as a stochastic problem," he says. "Watson learns through a combination of training via machine learning, adapting for features of the language that are new to a particular domain, and ingesting all the information it can find on the domain." Cognitive computers such as Watson have the potential to transform a wide range of industries, including healthcare, finance, education, law, government services, and commerce. Wightwick says the next cognitive computing advances for IBM will include expanding recall, learning, judgment, reasoning, and inference. He says cognitive computers will recognize emotions, increase expressiveness in generating speech, and add perception and creativity.


Researchers Aim to Create Virtual Speech Therapist
Associated Press (07/30/13) Kathy Matheson

Temple University is the site of a two-year virtual rehabilitation study. Researchers working to develop a virtual speech therapist are testing a computerized travel agent, and the technology could serve as a key tool in helping people overcome the language disorder known as aphasia. Patients need to continuously practice their skills, but health insurers pay for only a limited amount of therapy. Temple plans to challenge patients to spontaneously generate speech, says Temple's Emily Keshner. "They are actually put in a situation, which we hope is going to be natural, that requires that they come up with the correct words in the correct order," she says. Temple wants to build the avatar's vocabulary so that it can recognize all possible pronunciations of various words and respond with appropriate dialogue, and a bigger test will be programming it to correct users who misspeak. The mouth movements of therapists also are helpful to patients, so the team will need to get these movements right. The researchers also want to examine whether patients respond differently to virtual and human therapists, and they are experimenting with the avatar's gender, ethnicity, and voice texture.


The Invisible Driver
Technical University Munich (Germany) (07/26/13) Karsten Schafer

Technical University of Munich (TUM) researchers have demonstrated that full-size remote-controlled cars can be driven safely on public roads, and predict that the technology will reach the roadway within the next five to 10 years. Engineers at TUM's Institute of Automotive Technology placed six video cameras on an electric car, with a central control panel to activate all functions. Video images are wirelessly transmitted via the long-term evolution (LTE) standard to a remote driver at an operator station that is similar to a driving simulator, with a steering wheel, instrument panel, and pedals. The driver views the camera images, which provide a 360-degree view, on three large monitors, while a force-feedback steering wheel uses actuators to mimic the driving experience and other technology lets the driver hear sounds from inside the car. The data can already be transmitted on today's Universal Mobile Telecommunications System (UMTS) network, but new technologies and capacity gains will further facilitate the transmission requirements of remote driving. The researchers note that mobile networks are expanding, and LTE networks in many cities can provide the bandwidth to carry video images, sound, and control data. In addition, the next video coding standard, H.265, offers significantly more efficient image compression.


Sensor Knows When You're Lying Through Your Teeth
New Scientist (07/25/13) Paul Marks

The mouth could provide doctors with information on a variety of health issues with the aid of a sensor that has been developed by researchers at National Taiwan University. The sensor includes an accelerometer that sends data on mouth motion to a smartphone. The research team has programmed machine-learning software to recognize the various jaw motion patterns and determine how much time the user has spent chewing, drinking, coughing, smoking, or talking. In testing on eight people, the prototype system correctly recognized oral activities 94 percent of the time. Although the device can fit into dentures or a dental brace, the team wants to miniaturize it to fit in a cavity or crown. The researchers also envision a wireless device. "This could have a number of uses in dentistry, for example as a research tool, for monitoring patients who clench or grind their teeth, and for assessing the impact of various dental interventions," says Trevor Johnson at the Faculty of General Dental Practice.


Computing and Networking Capacity Increases at Academic Research Institutions
NCES InfoBrief (07/13) Michael Gibbons

Academic research institutions have experienced a significant increase in cyberinfrastructure resources since 2005, according to a new report from the U.S. National Science Foundation. In fiscal year 2011, 59 percent of academic institutions reported bandwidth of at least 1 Gbps, up from 21 percent in fiscal year 2005, the report says. The percentage with network connections of 10 Gbps or greater rose to 25 percent from 2 percent. Doctorate-granting institutions accounted for 43 percent of institutions with bandwidth of at least 2.5 Gbps in 2011, versus 4 percent of non-doctorate-granting institutions. Forty-seven percent of institutions had dark fiber to an external network in 2011, up from 29 percent in 2005, and the percentage with dark fiber between their own buildings rose to 90 percent from 86 percent. In addition, 192 of the 539 surveyed institutions owned centrally administered high-performance computing (HPC) resources of 1 teraflop or faster in 2011. Although 47 percent of doctorate-granting institutions provided HPC resources for their campuses, less than 9 percent of non-doctorate-granting institutions did so. Ninety-seven percent of HPC-providing institutions employ cluster architectures, and some also use massively parallel processors, symmetric multiprocessors, or other architectures. The median total performance for centrally administered systems was 14 teraflops in 2011, up from 8 teraflops in 2009.


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe