Welcome to the February 13, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
Software Developers Expected to See the Highest IT Job Growth Come 2020
InfoWorld (02/12/13) Ted Samson
There are currently about 4.16 million information technology (IT) professionals working in the United States, and that number is expected to grow by 22 percent through 2020, according to a new CompTIA report. The report provides a picture of how core IT positions break down today and a prediction of how the IT job landscape will look in 2020. The IT occupation with the highest projected growth rate through 2020 is systems software developer, which will grow by 32 percent from 387,050 positions today to 510,906 positions in 2020. The other positions with the highest projected growth rates include database administrators, application software developers, and network and systems administrators, which are expected to increase 31 percent, 28 percent, and 28 percent, respectively, by 2020. Meanwhile, the CompTIA report predicts that positions for computer programmers, hardware engineers, and miscellaneous computer occupations will increase just 12 percent, 9 percent, and 6 percent, respectively, by 2020.
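The report's figures are internally consistent; a one-line check on the systems software developer projection (using only the numbers quoted above) reproduces the 2020 total:

```python
# Consistency check on the systems-software-developer projection quoted
# above: 32 percent growth applied to 387,050 current positions.
base_positions, growth_rate = 387_050, 0.32
projected_2020 = round(base_positions * (1 + growth_rate))
print(projected_2020)  # 510906, matching the report's figure
```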
Can Computers Save Health Care? IU Research Shows Lower Costs, Better Outcomes
IU News Room (02/11/13) Steve Chaplin
Indiana University researchers have found that machine learning can improve both the cost and quality of health care in the United States. The researchers used an artificial intelligence framework combining Markov Decision Processes and Dynamic Decision Networks to show how simulation modeling that understands and predicts the outcomes of treatments can reduce health care costs by more than 50 percent while also improving patient outcomes by almost 50 percent. "The Markov Decision Processes and Dynamic Decision Networks enable the system to deliberate about the future, considering all the different possible sequences of actions and effects in advance, even in cases where we are unsure of the effects," says Indiana University Ph.D. student Casey Bennett. The research addresses the rising costs of health care, the decreasing quality of care, and a lag time of as much as 17 years between research and practice in clinical care. "We're using modern computational approaches to learn from clinical data and develop complex plans through the simulation of numerous, alternative sequential decision paths," Bennett says. The researchers note the technology could be used for personalized treatment by integrating large-scale data passed along to clinicians at the time of decision-making for each patient.
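The kind of deliberation a Markov Decision Process enables can be illustrated with a toy value-iteration sketch. All states, treatments, costs, and transition probabilities below are invented for illustration; they are not taken from the IU study:

```python
# Toy Markov Decision Process: states are hypothetical patient
# conditions, actions are treatments, and value iteration finds the
# policy minimizing expected long-run cost.
states = ["healthy", "sick", "severe"]
actions = ["wait", "treat"]

# transitions[s][a] -> list of (next_state, probability)
transitions = {
    "healthy": {"wait": [("healthy", 0.9), ("sick", 0.1)],
                "treat": [("healthy", 1.0)]},
    "sick":    {"wait": [("sick", 0.6), ("severe", 0.4)],
                "treat": [("healthy", 0.7), ("sick", 0.3)]},
    "severe":  {"wait": [("severe", 1.0)],
                "treat": [("sick", 0.5), ("severe", 0.5)]},
}
action_cost = {"wait": 0.0, "treat": 1.0}   # cost of administering treatment
health_cost = {"healthy": 0.0, "sick": 2.0, "severe": 10.0}
gamma = 0.9                                  # discount on future costs

def q(s, a, V):
    """Expected discounted cost of taking action a in state s."""
    return (action_cost[a] + health_cost[s] +
            gamma * sum(p * V[s2] for s2, p in transitions[s][a]))

V = {s: 0.0 for s in states}
for _ in range(200):                         # value iteration to convergence
    V = {s: min(q(s, a, V) for a in actions) for s in states}

policy = {s: min(actions, key=lambda a: q(s, a, V)) for s in states}
print(policy)  # cheapest expected-cost treatment plan per state
```

Simulating such a model forward under alternative policies is what lets the planner compare "numerous, alternative sequential decision paths" before committing to one.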
A System That Improves the Precision of GPS in Cities by 90 Percent
Carlos III University of Madrid (Spain) (02/13/13)
Carlos III University of Madrid (UC3M) researchers say they have developed a system that improves global positioning system (GPS) capabilities by up to 90 percent. The system, which is based on sensorial fusion, incorporates a conventional GPS signal with those of other sensors to reduce the margin of error in establishing a location. "We have managed to improve the determination of a vehicle’s position in critical cases by between 50 and 90 percent, depending on the degree of the signals’ degradation and the time that is affecting the degradation of the GPS receiver," says UC3M's David Martin. He says the prototype can guarantee the position of a vehicle to within two meters in urban settings. The system, which incorporates three accelerometers and three gyroscopes to measure changes in velocity and maneuvers performed by the vehicle, uses specialized software to analyze the data and find the geographic coordinates. "This software is based on an architecture that uses context information and a powerful algorithm that eliminates the instantaneous deviations caused by the degradation of the signals received by the GPS receiver or the total or partial loss of the satellites," says UC3M's Enrique Marti.
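Sensorial fusion of this kind is commonly implemented with a Kalman-style filter. The following one-dimensional sketch is a generic textbook filter with made-up noise variances, not UC3M's actual algorithm; it only illustrates how inertial predictions can smooth degraded GPS fixes:

```python
# Minimal 1-D Kalman-style sensor fusion: predict position from
# inertial velocity, then correct with each (noisy) GPS fix.
def fuse(gps_fixes, imu_velocities, dt=1.0, gps_var=25.0, model_var=1.0):
    """Fuse noisy GPS position fixes with velocity from inertial sensors."""
    x, p = gps_fixes[0], gps_var              # initial estimate and variance
    estimates = []
    for z, v in zip(gps_fixes, imu_velocities):
        x, p = x + v * dt, p + model_var      # predict from inertial data
        k = p / (p + gps_var)                 # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # correct with the GPS fix
        estimates.append(x)
    return estimates

# A vehicle moving at 1 m/s whose GPS fixes are each off by 5 m:
truth = list(range(6))
fixes = [t + e for t, e in zip(truth, [5, -5, 5, -5, 5, -5])]
smoothed = fuse(fixes, [1.0] * 6)
```

Even in this toy setting the fused track ends up far closer to the true positions than the raw fixes, which is the effect the UC3M system exploits when urban canyons degrade the satellite signal.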
MIT Aids Human, Robot Cooperation With Cross-Training
Computerworld (02/11/13) Sharon Gaudin
Cross-training holds the key to successful cooperation between humans and robots, according to Massachusetts Institute of Technology (MIT) researchers. A longstanding obstacle to human-robot collaboration has been that humans do not perform tasks in the same manner each time. To tackle this issue, MIT researchers developed an algorithm that enables robots to learn from switching roles with humans in a manufacturing setting, which requires the robots to acquire information through demonstration. Cross-training increased collaboration between humans and robots by 71 percent and reduced the time humans spent waiting for robots to complete a task by 41 percent. Furthermore, people felt that robots cooperated more efficiently after cross-training. Human-robot collaboration also has been the focus of other research, including a Harvard University project to develop a smart suit to improve soldier endurance in war zones. Using sensors and equipped with its own energy source, the suit would postpone fatigue to allow soldiers to travel farther and would protect the body from heavy loads. Meanwhile, Toyota Motor last year announced plans for robotic health-care aids to lift patients and help paralyzed people walk. The National Aeronautics and Space Administration says such collaborations also will be necessary for future space missions.
C-DAC Unveils India's Fastest Supercomputer
Times of India (02/09/13) Swati Shinde Gole
India's Center for Development of Advanced Computing (C-DAC) recently announced the country's fastest supercomputer, Param Yuva II, which at 524 teraflops is 10 times faster than its predecessor. If Param Yuva II had debuted in time for the November list of the world's fastest 500 supercomputers, it would have ranked 62nd. Although C-DAC set out only to upgrade the current facility, researchers found that they could upgrade to half a petaflop and achieved this in a record three months. C-DAC intends to upgrade to a full petaflop by this December. Param Yuva II uses a hybrid architecture called Genome 5, developed with Intel, which enables a 35-percent reduction in power consumption. The new supercomputer is expected to aid aircraft design by improving parameters for weather conditions as well as airflow speed and direction. In addition, Param Yuva II will help reduce drug development time from 18-20 years of research to 15 years, and will improve weather prediction so that, for example, floods could be forecast two weeks in advance instead of one week.
Carnegie Mellon Analysis Shows Online Songwriters Seek Collaborators With Complementary Skills, Status
Carnegie Mellon News (PA) (02/11/13) Byron Spice
Carnegie Mellon University (CMU) researchers have found a way of determining the unique balance of qualities that contribute to musical collaboration using a path-based regression technique with implications for further social science research involving big data. The group collected data over a four-year period from an online songwriting initiative called February Album Writing Month (FAWM), which challenges participants to compose 14 new works of music during the month of February. Using a path-based regression program recently developed through a project called Never-Ending Language Learning, the researchers analyzed 39,103 songs, 6,116 participants, song tags, locations, and forums. Taking "random walks" through the FAWM data set, the program analyzed paths between potential collaborators to assess which paths were predictive and eliminate those that were not. "With this technique, the program can randomly sample thousands of paths and automatically identify the ones that seem most noteworthy," says CMU researcher Burr Settles. Participants were more likely to collaborate with those from different genres than those who shared their interests and skills. In addition, participants with equal social status in the community did not collaborate as often as those from disparate backgrounds.
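The random-walk idea can be sketched in a few lines. The typed graph below is a made-up miniature of FAWM-style data (users, genres, forums), not the NELL software itself; the counts of edge-label sequences ("path types") connecting a pair of users are the kind of feature a path-based regression model would weigh:

```python
import random
from collections import Counter

# Hypothetical typed graph: each node maps to (neighbor, edge_label) pairs.
graph = {
    "ann":        [("genre:rock", "likes"), ("forum:fx", "posts")],
    "bob":        [("genre:jazz", "likes"), ("forum:fx", "posts")],
    "genre:rock": [("ann", "liked_by")],
    "genre:jazz": [("bob", "liked_by")],
    "forum:fx":   [("ann", "posted_by"), ("bob", "posted_by")],
}

def path_type_counts(start, target, walks=10_000, length=2, seed=0):
    """Sample random walks from `start` and tally the edge-label
    sequences of walks that end at `target`."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(walks):
        node, labels = start, []
        for _ in range(length):
            node, label = rng.choice(graph[node])
            labels.append(label)
        if node == target:
            counts[tuple(labels)] += 1
    return counts

print(path_type_counts("ann", "bob"))
```

In this miniature, the only path type linking ann to bob runs through their shared forum; with real FAWM data, the regression step then learns which such path types actually predict a collaboration.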
How 'Bullet Time' Will Revolutionize Exascale Computing
Technology Review (02/12/13)
Kobe University researchers have developed a method for compressing exascale simulation output without losing its essential features. The approach draws on "bullet time," a Hollywood filming technique that slows down ordinary events while the camera angle changes as if it were flying around the action at normal speed. The technique involves plotting the trajectory of the camera in advance and then placing many high-speed cameras along the route. The Kobe researchers want to use a similar technique to access exascale computer simulations by surrounding the simulated action with millions of virtual cameras that all record the action as it occurs. The compression occurs as each camera records a two-dimensional image of a three-dimensional scene. Using this technique, the footage from a single camera can be compressed into a file about 10 megabytes in size, so even if there are 1 million cameras recording the action, the total amount of data they produce is only about 10 terabytes, according to the researchers. “Our movie data is an order of magnitude smaller,” the researchers say. “This gap will increase much more in larger-scale simulations.”
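A back-of-envelope calculation shows where the savings come from. The 10 MB per camera and 1 million cameras are the article's figures; the grid size, timestep count, and 8-byte values for the raw dump are hypothetical stand-ins for a large simulation:

```python
# Compare dumping full 3-D field snapshots against the "bullet time"
# movie data (2-D projections from virtual cameras).
grid, steps, bytes_per_value = 8192, 100, 8        # hypothetical raw dump
raw_tb = grid**3 * steps * bytes_per_value / 1e12  # terabytes of 3-D fields
movie_tb = 10e6 * 1_000_000 / 1e12                 # 10 MB x 1M cameras
print(movie_tb, raw_tb)
```

Because each camera stores only a 2-D projection, the movie total stays fixed at about 10 TB while the raw 3-D dump grows with the cube of the grid resolution, which is why the researchers expect the gap to widen at larger scales.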
Algorithm Learns How to Revive Lost Languages
New Scientist (02/11/13) Douglas Heaven
University of British Columbia (UBC) researchers have developed a machine-learning algorithm that uses rules about how the sounds of words can vary to infer the most likely phonetic changes behind a language's divergence. The researchers applied the technique to thousands of word pairings used across 637 Austronesian languages, including Fijian, Hawaiian, and Tongan. The system was able to suggest how ancient languages might have sounded and also identify which sounds were most likely to change. The researchers compared the results with work done by human specialists, and found that more than 85 percent of the suggestions were within a single character of the actual words. The technique could help improve machine translations of phonetically similar languages, or help preserve endangered languages, says UBC professor Alexandre Bouchard-Côté.
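If "within a single character" is read as a Levenshtein edit distance of at most 1 (an interpretation; the article does not define the metric), the comparison can be computed directly. The word forms in the example are invented illustrations, not real Austronesian reconstructions:

```python
# Levenshtein (edit) distance via the standard Wagner-Fischer dynamic
# program: the minimum number of insertions, deletions, and
# substitutions turning string a into string b.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

# A reconstructed form counts as a near-match if it is within one edit:
print(levenshtein("tapu", "kapu") <= 1)  # True: one substitution
```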
Cell Circuits Remember Their History
MIT News (02/10/13) Anne Trafton
Massachusetts Institute of Technology (MIT) researchers have developed genetic circuits in bacterial cells that perform logic functions and remember the results, which are coded in the cell's DNA and passed on for several generations. The researchers say the circuits could be used as long-term environmental sensors, efficient controls for biomanufacturing, or to program stem cells to differentiate into other cell types. "We think complex computation will involve combining both logic and memory, and that’s why we built this particular framework to do so," says MIT professor Timothy Lu. The circuits are based on memory circuits that Lu and his colleagues designed in 2009. The new circuits' memory function is built into the logic gate and the inputs stably alter regions of DNA that control green fluorescent protein production. Using this design technique, the researchers can create all two-input logic gates and implement sequential-logic systems. The researchers note the circuits also could be used to create a digital-to-analog converter, which takes digital inputs and converts them to analog output.
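The combination of logic and memory can be modeled abstractly in software. This sketch is an analogy to the behavior described above, not the genetic implementation: once the gate's condition is met, the result is latched, the way a recombinase permanently rewriting a stretch of DNA would preserve the outcome across generations:

```python
# Abstract model of a two-input logic gate whose output, once set,
# is latched (analogous to a permanent edit to the cell's DNA).
class LatchingAnd:
    def __init__(self):
        self.state = False          # the "DNA-encoded" memory bit

    def step(self, a, b):
        if a and b:
            self.state = True       # irreversible, like a recombination event
        return self.state           # e.g. fluorescent-protein expression

gate = LatchingAnd()
print(gate.step(True, False))   # False: condition never met
print(gate.step(True, True))    # True: condition met, result latched
print(gate.step(False, False))  # True: memory persists after inputs vanish
```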
University Data Sharing Project Takes Big Step Forward
InformationWeek (02/08/13) Ellis Booker
The Predictive Analytics Reporting (PAR) Framework recently released the common data definitions for all of the variables in its database, which now includes more than 1.7 million anonymized and institutionally de-identified student records and 8.1 million course-level records. The PAR Framework originally was developed to identify factors that influence student retention and progression, and to guide decision-making designed to improve postsecondary student completion in the United States. "Interest in analytics across the board, and learning analytics in particular, have taken higher education by storm," says WICHE Cooperative for Educational Technologies (WCET) executive director Ellen Wagner. The PAR Framework's records also include whether a particular student passed a course, completed a major, or dropped out. The data definitions were issued under a Creative Commons license "to encourage distribution of the definitions into the higher education research community," according to WCET. The PAR Framework published the data definitions using the Data Cookbook, IData's collaborative data dictionary and data management tool for higher education. Wagner says another goal for 2013 is the release of an intervention taxonomy, which will help benchmark the efficacy of different types of educational interventions for at-risk students.
Initiative Hopes to Double Maine’s Computer Science Graduates by 2017
Bangor Daily News (ME) (02/07/13) Whit Richardson
The University of Maine intends to double its computer science and information technology graduates over the next four years through the new Project>Login initiative. Project>Login is a collaboration between Maine's business community, the nonprofit Educate Maine, and the University of Maine System to address a critical labor shortage faced by the state's companies. At the current graduation rate, Maine could fill only 39 percent of the 977 computer science and information technology (IT) jobs projected for 2018, according to Southern Maine Community College estimates. The University of Maine graduated 71 students with computer science degrees in 2012, so Project>Login must help the campuses graduate 142 students by 2017 to meet the initiative's goal. Increasing the 33-percent retention rate for computer science and IT programs will be critical, and the initiative hopes to boost the rate to a minimum of 43 percent by next year. The University of Maine in March will survey all computer science and IT students about obstacles that could cause them not to graduate or to switch majors; the feedback will help the university increase mentoring services to retain computer science and IT majors. In addition, the Project>Login Web site offers information about computer science and IT events and opportunities.
Solving Big-Data Bottleneck
Harvard Medicine (02/07/13) David Cameron
Commercial crowdsourcing platforms can solve massive computational problems more quickly and inexpensively than traditional approaches and benefit from the input of contributors from different economic sectors, according to a study from Harvard Medical School, Harvard Business School, and London Business School. The team used the TopCoder crowdsourcing platform to develop a program that analyzes huge volumes of data from genes and gene mutations that form antibodies and T cell receptors. Predicting the immune system's genetic combinations has long been an imposing challenge, but the crowdsourcing efforts resulted in a program with greater accuracy and speed than existing algorithms. To use TopCoder, the researchers had to reframe the problem so that contributors who were not computational biologists could understand it. Within two weeks, the group had 16 solutions more accurate and up to 1,000 times faster than the National Institutes of Health's standard BLAST algorithm. The top five performing submissions will be available under an open source license. “This is a proof-of-concept demonstration that we can bring people together not only from different schools and different disciplines, but from entirely different economic sectors, to solve problems that are bigger than one person, department, or institution,” says Harvard's Eva Guinan.
Virtual Vehicle Vibrations
Iowa Now (02/06/13) Gary Galluzzo
University of Iowa researchers have developed software that enables engineers to accurately predict the role posture plays in transferring the stress of vehicle motion to bones and muscles in the head and neck. "The goal of this project is to introduce a computerized human model that can be used to predict human motion in response to whole-body vibration when the human takes different head-neck postures," says Iowa professor Salam Rahmatalla. The current model can be used to drive more sophisticated computer-human models that can predict muscle forces and internal strain and stress between tissues and vertebrae, according to the researchers. The system also could reduce the need for human subjects to test-drive vehicles. "One major benefit of the current computer-human model is the possibility of using it instead of humans in the design/modification loop of equipment in whole-body vibration," Rahmatalla says. He notes a wide variety of industry, university, and other research venues will be able to find applications for the technology.
Abstract News © Copyright 2013 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.