Welcome to the May 20, 2011 edition of ACM TechNews, providing timely information for IT professionals three times a week.
HEADLINES AT A GLANCE
Study Sees Way to Win Spam Fight
New York Times (05/19/11) John Markoff
University of California, San Diego (UCSD) researchers recently completed a study aimed at finding a choke point that could reduce the flow of spam email. For three months the researchers collected as much spam as possible, and then bought items from the Web sites advertised in the messages. The researchers, led by UCSD professor Stefan Savage, found that 95 percent of credit card transactions for spam-advertised drugs and herbal remedies were handled by three main financial companies: one in Azerbaijan, one in Denmark, and one in the West Indies. Savage says the study indicates that "you'd cut off the money that supports the entire spam enterprise" if those companies stopped authorizing online credit card payments to the Web sites. The researchers found that spam systems rely on just a few banks and even fewer credit card processors, a glaring weakness that regulators and law enforcement agencies can exploit. "The defenders can, in principle, identify which banks the scammers are using far faster than they can get new banks, and for basically zero cost," Savage says.
Massey Scientist's Software Finds 'Orphan' Planets
Massey University (05/19/11)
Massey University's Ian Bond has developed software that helped lead to the discovery of free-floating orphan planets, a major breakthrough in astronomy and in understanding the dynamics of solar systems. Bond is part of Microlensing Observations in Astrophysics (MOA), an international astronomy study involving researchers from Massey, Auckland, Canterbury, and Victoria universities, in addition to researchers based in Japan and the United States. The study found 10 Jupiter-sized free-floating gas giant planets that are not connected to a solar system. The drifting planets were probably ejected from solar systems because of close gravitational encounters with other planets or stars, according to the researchers. Bond's software can process 50 gigabytes of images in one night, and is programmed to analyze and chart gravitational microlensing events, the distortion and observable magnification of a star's brightness when another planetary body passes in front of it, a phenomenon first discussed by Albert Einstein in 1915. The software has detected about 3,000 microlensing events since 2006 and analyzed 500 of them.
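The detection idea described above, flagging observations where a star's brightness is magnified well above its baseline, can be sketched in a few lines. This is a toy illustration only: MOA's actual pipeline uses difference imaging over tens of gigabytes of frames per night, and the function name, data layout, and sample values below are invented for the example. The threshold of 1.34 is the classic magnification when the source passes within one Einstein radius of the lens.

```python
def detect_events(light_curve, threshold=1.34):
    """Flag observation times whose magnification exceeds `threshold`.

    `light_curve` is a list of (time, flux) pairs. The baseline flux is
    taken as the median of all fluxes, and the magnification at each
    observation is flux / baseline.
    """
    fluxes = sorted(f for _, f in light_curve)
    baseline = fluxes[len(fluxes) // 2]  # median flux of the star
    return [t for t, f in light_curve if f / baseline > threshold]

# A star brightening and fading as a lensing body passes in front of it.
curve = [(0, 10.0), (1, 10.1), (2, 14.2), (3, 19.8),
         (4, 13.9), (5, 10.0), (6, 9.9)]
print(detect_events(curve))  # -> [2, 3, 4]
```

A real search would also fit the characteristic symmetric microlensing light-curve shape rather than applying a single cut, to reject variable stars.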
Senators: New Smartphone Tracking Law Needed
IDG News Service (05/19/11) Grant Gross
U.S. Sens. John Rockefeller (D-W.Va.) and John Kerry (D-Mass.) have each introduced privacy legislation designed to protect smartphone users from having their locations tracked by operating systems and applications. "We need to establish what we as a society, in a country that has always valued privacy, believe is the basic, proper treatment of information," Kerry says. Rockefeller's bill would require online companies to honor consumer requests to opt out of tracking. Kerry's bill would require Web-based businesses to give clear notice about their data collection practices and allow consumers to opt out. Meanwhile, the U.S. Federal Trade Commission (FTC) has several open investigations targeting mobile-phone privacy practices, says David Vladeck, director of the FTC's Bureau of Consumer Protection. Although some senators questioned whether new regulations are necessary and Apple and Google executives defended their privacy practices, Common Sense Media questioned whether tech companies are doing enough to protect privacy. Common Sense Media CEO Amy Guggenheim Shenkan called on Congress to mandate that Internet and mobile-phone companies obtain permission before tracking consumers.
'Mind Reading' Brain Scans Reveal Secrets of Human Vision
Stanford University (05/18/11) Dan Stober
Stanford University, Ohio State University, and University of Illinois at Urbana-Champaign researchers are developing software that interprets brain signals to determine what individuals are thinking about. Normally, when researchers conduct mind-reading research, they show volunteers a series of photos and then guess which photo the individual is looking at based on the brain signals. The new study, led by Stanford's Fei-Fei Li, uses images that have almost all of the detail removed, leaving just outlines of the different images. The researchers found that they were still able to read the minds of the participants, with as much accuracy as when detailed pictures were used. "By noting what is driving the brain, you will be learning the way the brain works, why certain cues are more important than other cues," Li says. The results show that outlines are very important in influencing how the human eye and mind interpret what is seen. "The representations in our brain for categorizing these scenes seem to be a bit more abstract than some may have thought--we don't need features such as texture and color to tell a beach from a street scene," says Ohio State researcher Dirk Bernhardt-Walther.
Java Use Increases Among Developers Worldwide: Survey
eWeek (05/18/11) Darryl K. Taft
Java use has increased over the past year, according to a new study from Evans Data. Already the world's most popular and widely used programming language, Java saw growth in all regions. The number of developers in the Asia-Pacific region using Java at least some of the time rose from 61 percent to 68 percent; a similar increase of 7.6 percent occurred in North America, and a smaller increase was noted in the Europe, Middle East, and Africa region. The resurgence in Java use "correlates directly with the increasing importance of the smartphone as a development target," says Evans Data CEO Janel Garvin, who notes that Java is the most widely used language for smartphone development. "Asia-Pacific developers are slightly out in front because smartphone development has been more deeply ingrained in that region," Garvin says. The total number of users has risen worldwide, but the number of developers who use Java more than 50 percent of the time held steady.
Seven Technologies to Disrupt the Next Decade
New Scientist (05/18/11) Helen Knight; MacGregor Campbell; Paul Marks; et al.
Among the most significant technologies expected to appear in the next decade is augmented reality effected by eyewear and cameras that can add an informational overlay to the wearer's point of view, radically transforming the cityscape and disrupting shopping, advertising, and possibly social interaction. Another technology projected to emerge are telepresence robots, which will enable people to remotely interact across great distances, significantly impacting social rules, the workplace, health care, and travel. Meanwhile, brain-machine interfaces may progress beyond mere thought control of computers and other machines, perhaps enhancing memory and other mental functions and challenging long-cherished concepts about identity, culpability, and the acceptable thresholds of human enhancement. The potential ramifications of software that can automatically evolve technology, such as genetic algorithms, include a disruption of research and development and inventing, and experts say evolved inventions are already having a substantial cumulative impact in the acceleration of innovation. Forecasting will likely take a big leap forward with the advent of text-mining applications that tap the vast corpus of data available online to identify trends and make projections, with significant implications for marketing and government policy.
'Humanization' of Computers Becoming Less Hollywood, More Practical
Toronto Globe & Mail (Canada) (05/17/11) Carly Weeks
Artificial intelligence (AI) research has experienced a major boost as scientists continue to push the boundaries of the field. Computer scientists say that breakthroughs in AI could lead to new developments, such as robots that can replace human soldiers, search engines that can understand human language, and advertising campaigns designed to meet a specific consumer's preferences. "Basically, anything people can do in the long run, machines are going to do better and cheaper," says University of Toronto professor Geoffrey Hinton. The increasing humanization of computers has led some experts to raise questions concerning privacy and security, while others, such as those at a recent Massachusetts Institute of Technology panel discussion, think the field has become too fragmented into specific problems and needs to refocus on fundamental AI questions. "It's not perfect, there's a lot more to be done, but it works much better than what we had before and it's also more similar to how we think humans get to be intelligent in the first place," says University of Montreal professor Yoshua Bengio.
Researchers Show Android Devices Susceptible to Eavesdropping
PhysOrg.com (05/18/11) Bob Yirka
Researchers at the University of Ulm, building on research by Princeton University's Dan Wallach, have found that they can use commercially available software to eavesdrop on an open Wi-Fi network and gain information from Google Calendar, user contact data, and Picasa images. The attack is possible because the Android system uses tokens, known as authTokens, that enable legitimate users to stay logged into certain applications for up to two weeks. However, the researchers found that malicious users can capture those tokens and use them for illegal purposes, such as acquiring calendar information, collecting contact email addresses, or viewing private images on Picasa, and then use that information to impersonate another user on Google.
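The core weakness described above is that the server accepts any unexpired token, so a token sniffed in cleartext on open Wi-Fi is as good as the original. The toy sketch below illustrates that replay window; the function names and token format are invented for the example, and the real Google authentication protocol differs in detail, though the two-week lifetime mirrors the article.

```python
import time

TOKEN_LIFETIME = 14 * 24 * 3600  # authTokens stay valid for up to two weeks

issued_tokens = {}  # token -> issue timestamp

def issue_token(user, now):
    """Hand a logged-in user a bearer token (illustrative, not cryptographic)."""
    token = f"authtoken-{user}-{now}"
    issued_tokens[token] = now
    return token

def is_valid(token, now):
    """The server only checks existence and age, not who presents the token."""
    issued = issued_tokens.get(token)
    return issued is not None and now - issued < TOKEN_LIFETIME

now = time.time()
legit = issue_token("alice", now)
sniffed = legit  # an eavesdropper copies the cleartext token verbatim
print(is_valid(sniffed, now + 3600))                # True: replay accepted
print(is_valid(sniffed, now + TOKEN_LIFETIME + 1))  # False: finally expired
```

Sending the token only over HTTPS, or binding it to the client, closes this replay window; the article's scenario arises because the token traveled in the clear.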
Better Passwords Get With the Beat
A new approach to key-pattern analysis (KPA) developed by researchers at the American University of Beirut promises to render a stolen password useless. The approach analyzes the speed at which a user types a login, measuring not only the gaps between keystrokes but also intra timing, the length of time each key remains depressed. In previous KPA efforts, inter timing, the lag between pressing one key and the next, proved inadequate for ensuring a password is usable only by the legitimate user; the researchers say adding intra timing captures the beat of typing, a much more robust parameter. The program learns how a user types a password by recording electronic signals from a standard keyboard as keys are pressed and released, then compares the pattern of the typed password with a pattern recorded when the account is initially set up. At the registration stage, the user types the password repeatedly to establish a reproducible typing pattern. The validation algorithm then examines parameters such as intra and inter timing for the relationships between two keys, three keys, and up to the number of keys in the password.
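The two timing features described above can be sketched concretely. This is a minimal illustration under simplified assumptions: the Beirut system's actual matching algorithm and tolerances are not described in the abstract, so the function names, the fixed tolerance, and the sample timings below are all invented for the example.

```python
def timing_features(events):
    """Build intra- and inter-key timings from (key, press_t, release_t) tuples.

    Intra timing: how long each key remains depressed.
    Inter timing: the gap between releasing one key and pressing the next.
    """
    intra = [release - press for _, press, release in events]
    inter = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return intra + inter

def matches_template(sample, template, tolerance=0.05):
    """Accept the login only if every timing is within `tolerance` seconds
    of the pattern recorded when the account was set up."""
    return all(abs(s - t) <= tolerance for s, t in zip(sample, template))

# Template recorded at registration (in practice, averaged over many tries).
template = timing_features([("p", 0.00, 0.08), ("a", 0.15, 0.22), ("s", 0.30, 0.37)])
genuine  = timing_features([("p", 0.00, 0.09), ("a", 0.16, 0.22), ("s", 0.31, 0.39)])
impostor = timing_features([("p", 0.00, 0.20), ("a", 0.50, 0.55), ("s", 0.90, 1.05)])

print(matches_template(genuine, template))   # True: same rhythm
print(matches_template(impostor, template))  # False: knows the password, wrong beat
```

A production system would use statistical distance over many enrollment samples rather than a fixed per-feature tolerance, but the principle is the same: the stolen password fails because the thief cannot reproduce the owner's beat.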
A Project Applies Neuroscience to Robot Vision
Asociacion RUVID (05/16/11)
The European Union-funded EYESHOTS project is developing robots with human traits, such as vision, the ability to grasp objects, and spatial perception. One robot has a three-dimensional visual system that could enable robots to observe and be aware of their surroundings and also to remember the contents of the images they see and act accordingly. For a robot to successfully interact with its environment and perform tasks without supervision, it is first necessary to refine these basic mechanisms, which are still not completely resolved, says Universitat Jaume I researcher Angel Pasqual del Pobil. The researchers began developing the system by studying how a monkey's neurons engage in visual-motor coordination. The researchers used the neural data to develop computer models showing which sections of the brain combine the images with movements of both eyes and limbs. The researchers wanted to prove that when humans make a grasping motion, the brain does not have to pre-calculate the coordinates. "Our findings can be applied to any future humanoid robot capable of moving its eyes and focusing on one point," Pobil says.
Fully Automatic Software Testing Now Possible
University of Twente (Netherlands) (05/16/11)
University of Twente researcher Machiel van der Bijl has developed Model-Based Testing, a system that automates all steps in the software testing process. Model-Based Testing eliminates the need to manually develop tests, run tests, and evaluate the results. If used properly, van der Bijl says, Model-Based Testing can completely eliminate the need for testing software manually. He says Model-Based Testing reduces the duration of the testing phase for new software by at least 30 percent, and is more accurate because in principle there is no limit to the number of tests that can be run. "If you want, you can even run a million tests," van der Bijl says. "Our automated method can improve product quality and significantly shorten the testing phase, thereby greatly reducing the cost of software development." He says the system can be used for any type of software.
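The general idea of model-based testing, generating and judging tests automatically from a model of the intended behavior, can be shown in miniature. This sketch is illustrative only: van der Bijl's system is not described in detail in the abstract, so the turnstile model, the toy implementation, and the exhaustive generator below are invented for the example.

```python
import itertools

# Model of a turnstile as a state machine: state -> {action: next_state}.
MODEL = {
    "locked":   {"coin": "unlocked", "push": "locked"},
    "unlocked": {"coin": "unlocked", "push": "locked"},
}

class Turnstile:
    """The implementation under test."""
    def __init__(self):
        self.state = "locked"

    def apply(self, action):
        if action == "coin":
            self.state = "unlocked"
        elif action == "push":
            self.state = "locked"
        return self.state

def generate_tests(length):
    """Derive test cases from the model: every action sequence of `length`."""
    return itertools.product(["coin", "push"], repeat=length)

def run_suite(length=4):
    """Run each generated sequence, checking the implementation against
    the model's expected state after every step."""
    failures = 0
    for seq in generate_tests(length):
        impl, state = Turnstile(), "locked"
        for action in seq:
            state = MODEL[state][action]      # expected next state
            if impl.apply(action) != state:   # compare with the real system
                failures += 1
                break
    return failures

print(run_suite())  # -> 0: the implementation conforms to the model
```

With the model in hand, scaling from 16 sequences to a million is just a larger `length`, which is what makes "in principle there is no limit to the number of tests" practical: no test is written or evaluated by hand.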
Researchers Hope to Build a Brain
Spiegel Online (05/13/11) Cinthia Briseno; Christoph Seidler
The Human Brain Project, led by Ecole Polytechnique Federale de Lausanne's Henry Markram, aims to build a complete computer model of the human brain, which the researchers say could lead to disease cures, new supercomputers, and intelligent robots. Markram expects practical applications soon after the project is launched, including design principles for energy-efficient computers and new robots. Markram already completed preparatory work for the computer modeling of the brain with the Blue Brain Project, which is modeling the molecular makeup of the mammalian brain. The researchers expect to produce a functioning total simulation of the brain by 2023. The Human Brain Project will use the Julich Research Center's supercomputers to help complete the research. Julich neuroscientist Katrin Amunts is creating a detailed atlas of the human brain by cutting a brain into 8,000 slices and digitizing each of them with a high-performance scanner. The research will produce three terabytes of data, and Amunts says a higher-resolution brain atlas would likely comprise more than 700 terabytes.
Lack of Training Hinders GPU in HPC
ZDNet Asia (05/12/11) Liau Yun Qing
Several factors, including the lack of training in parallel programming and the lack of support from independent software vendors, are major obstacles keeping general-purpose computation on graphics processing units (GPGPU) from being used in industries that utilize high-performance computing (HPC). For researchers, the main challenge is the lack of training in parallel programming, says NVIDIA chief solution architect Simon See. The use of GPGPU in HPC gained momentum after it helped propel China's Tianhe-1A to the top of the Top500 supercomputer list. GPUs provide the fastest computing capability and the best results in terms of performance and cost, says Nanyang Technological University professor Bertil Schmidt. However, Chinese Academy of Sciences researcher Ge Wei says GPU-based computing can be difficult to use if programmed incorrectly, and some algorithms may need to be changed to take advantage of the technology. See says research has shown how GPU-based computing can enhance enterprise databases, and he notes that researchers also are exploring whether it can be used to solve the database growth issue.
Abstract News © Copyright 2011 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.