Welcome to the January 9, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.
HEADLINES AT A GLANCE
Online College Courses to Grant Credentials, for a Fee
Washington Post (01/09/13) Nick Anderson
Free online college course providers are experimenting with security features that will enable students who successfully complete the courses to pay a small fee and obtain credentials. However, the credentials will not translate into course credit toward a degree because there are still questions about how much credit schools are willing to grant students who do not pay tuition. As major universities across the U.S. develop massive open online courses, it remains unclear how they will generate revenue from them. Classes sponsored by Duke University, the University of San Francisco, Georgia Tech, and the University of Illinois now will enable students to obtain a "verified certificate" that carries the university's logo by paying a small fee. The certificates are a "much more meaningful and valuable credential that they can use in their professional life or for their own personal reward," says Stanford University professor Daphne Koller. To qualify for the certificate, the student would submit, via Webcam, a picture and photo identification. During the course, samples of the student’s keystrokes would be checked as assignments are filed and tests are taken, and the patterns taken from those keystrokes can serve as a biometric identifier to verify the user.
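Verifying identity through typing patterns is an instance of keystroke dynamics. The following is a toy sketch of the idea, comparing the inter-key timing of a new typing sample against an enrolled profile; the timing feature, threshold, and sample values are illustrative assumptions, not the actual verification system described above.

```python
# Toy keystroke-dynamics check: compare the timing pattern of a new typing
# sample with an enrolled profile. Real systems use far richer features; the
# mean-interval feature and the threshold here are illustrative assumptions.
import statistics

def timing_profile(key_down_times):
    """Intervals (in seconds) between successive keystrokes."""
    return [b - a for a, b in zip(key_down_times, key_down_times[1:])]

def matches_profile(enrolled_intervals, sample_intervals, tolerance=0.05):
    """Accept the sample if its mean inter-key interval is close to the
    enrolled mean; a crude stand-in for a real biometric classifier."""
    return abs(statistics.mean(enrolled_intervals) -
               statistics.mean(sample_intervals)) < tolerance

enrolled = timing_profile([0.00, 0.18, 0.35, 0.55, 0.71])
sample = timing_profile([0.00, 0.20, 0.37, 0.58, 0.75])
print(matches_profile(enrolled, sample))
```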
Guaranteed Delivery--in Ad Hoc Networks
MIT News (01/09/13) Larry Hardesty
Massachusetts Institute of Technology (MIT) graduate student Bernard Haeupler has developed an algorithm that can relay messages so they reliably reach all the nodes in a decentralized network. Haeupler says the algorithm is deterministic rather than probabilistic, which means it will provably relay messages to every node in a network, and it also is faster than previous algorithms. "In the distributed community, solving problems without randomization is often a completely different problem, and deterministic algorithms are often drastically slower," he says. Haeupler envisions communication in the network as a game that proceeds in a series of rounds, where a round is the time required for two adjacent nodes to exchange information. As part of the algorithm, each node begins by selecting one neighbor and exchanging information with it for one round. The node then selects a second neighbor and exchanges information with both of the nodes it has selected for two rounds each. The algorithm continues in this way, selecting new nodes from which it has not heard and incrementally increasing the number of rounds it uses to send information to each of them. Each node continues this process until it has heard, either directly or indirectly, from all of its neighbors.
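The schedule described above can be illustrated with a toy simulation. The sketch below is a simplified reading of that description, not Haeupler's published algorithm: the exchange model, the per-phase round count, and the stopping rule are all assumptions made for illustration.

```python
# Toy simulation of the neighbor-selection schedule: in each phase every node
# selects one new neighbor it has not yet heard from, then exchanges what it
# knows with all selected neighbors for `phase` rounds.

def gossip(adjacency, source, message):
    """Spread `message` from `source` until every node has heard from all
    of its neighbors."""
    knows = {v: set() for v in adjacency}
    knows[source].add(message)
    heard = {v: set() for v in adjacency}      # neighbors heard from directly
    selected = {v: [] for v in adjacency}      # neighbors selected so far
    phase = 1
    while any(set(adjacency[v]) - heard[v] for v in adjacency):
        for v in adjacency:                    # pick one new, unheard neighbor
            fresh = [u for u in adjacency[v]
                     if u not in heard[v] and u not in selected[v]]
            if fresh:
                selected[v].append(fresh[0])
        for _ in range(phase):                 # exchange for `phase` rounds
            for v in adjacency:
                for u in selected[v]:
                    merged = knows[v] | knows[u]
                    knows[v], knows[u] = merged, set(merged)
                    heard[v].add(u)
                    heard[u].add(v)
        phase += 1
    return knows

# Example: a four-node path graph 0-1-2-3
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(gossip(adjacency, source=0, message="hello"))
```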
Vint Cerf: Nobody's Too Old for Tech
Computerworld (01/08/13) Sharon Gaudin
Technology has not only changed the way we communicate, it is also changing the way we live our lives, says Google chief Internet evangelist and ACM president Vint Cerf, speaking at the International CES show. Cerf says technologies such as smartphones and social networks are changing the way people of all ages, not just young users, communicate and manage their day-to-day lives. He notes that machines have become an integral part of human social interaction and should be embraced by seniors as well as teenagers. Cerf notes that social networks help families stay in touch no matter where they are, and computers have gone beyond mediating a conversation to becoming participants in conversations between people. "As part of our interactions, machines have become integral," he says. "This is pretty powerful. It changes the way we discover things. It changes how we think about communications. It changes how we communicate and who we communicate with." In the future, even clothes might become digital, according to Cerf. "Can you imagine if you lost a sock?" he says. "You could send out a search and sock No. 3117 would respond that it's under the couch in the living room."
U.S. Library of Congress Saving 500 Million Tweets Per Day in Archives
IDG News Service (01/08/13) Jay Alabaster
The U.S. Library of Congress expects to finish the initial stage of building a Twitter archive by the end of January. In April 2010, Twitter agreed to provide an archive of every public tweet since the company went live in 2006. The initial four-year archive contained about 21 billion tweets that take up 20 terabytes when uncompressed, including data fields. The Library of Congress is storing 500 million tweets a day, and has added a total of about 170 billion tweets to its collection. The focus will now shift to making the collection accessible to lawmakers and researchers. "It is clear that technology to allow for scholarship access to large data sets is lagging behind technology for creating and distributing such data," the library says. The full archive now requires 133.2 terabytes for two compressed copies, which are stored on tape in separate locations for safekeeping. The library already has received 400 inquiries from researchers studying citizen journalism, vaccination rates, stock market trends, and other topics.
Revolutionary Paper Tablet Computer Is Thin and Flexible as Sheets of Paper
Queen's University (Canada) (01/08/13)
Researchers at Queen's University, Intel, and Plastic Logic have developed PaperTab, a tablet computer that looks and feels like a sheet of paper. PaperTab is fully interactive with a flexible, high-resolution 10.7-inch plastic display and a flexible touchscreen. However, instead of using several apps or windows on a single display, users have 10 or more interactive displays or "PaperTabs," with one per app in use. "Using several PaperTabs makes it much easier to work with multiple documents," says Queen's University professor Roel Vertegaal. "Within five to 10 years, most computers, from ultra-notebooks to tablets, will look and feel just like these sheets of printed color paper." The PaperTab project demonstrates how digital tablets could be used in the future, says Intel's Ryan Brotman. PaperTab has an intuitive interface that enables users to create a larger drawing or display surface by placing two or more PaperTabs next to each other. In addition, PaperTabs keep track of their location relative to each other, and to the user, which the researchers say provides a seamless experience across all apps.
3-D Printers Could Bring Manufacturing to Your Home Office
Washington Post (01/07/13) Cecilia Kang
As their prices rapidly fall, 3D printers are gaining popularity among businesses, with far-reaching implications for manufacturing. In the near future, they may start showing up in home offices around the world, according to industry analysts. In just a few years, 3D printers have gone from refrigerator-sized machines costing hundreds of thousands of dollars to devices that fit on a desk and cost about $1,500. "You can argue this is the democratization of manufacturing," says Yankee Group's Carl Howe. Already, researchers and early adopters have made figurines, jewelry, and working bicycles using 3D printers. Some other possibilities are more controversial. "It is just a matter of time before these three-dimensional printers will be able to replicate an entire gun," says Rep. Steve Israel (D-N.Y.). "And that firearm will be able to be brought through this security line, through the metal detector, and because there will be no metal to be detected, firearms will be brought on planes without anyone’s knowledge." Three-dimensional printers are expected to eventually make even more complex parts and machines. "Now you can iterate on an idea many times in one day and create huge efficiencies," says MakerBot CEO Bre Pettis.
Chattanooga Tests Ultra-Fast System for Disaster Response
Government Computer News (01/07/13)
Chattanooga, Tenn., will test a system that will run computerized disaster scenarios using a detailed layout of the city, with the goal of training emergency workers and delivering real-time information to workers and the public during an event. Researchers at the University of Tennessee at Chattanooga (UT-Chattanooga) are using ultra-fast, high-bandwidth technology to design the disaster mitigation system. They are currently installing a small number of sensors that will link to the network and be able to detect potential hazards. The U.S. National Science Foundation is funding the development of the system. The researchers say Chattanooga is an ideal location for the pilot project, since it has the largest community-wide gigabit-capable network in the U.S. and an infrastructure able to maintain the communications and asset management needed for disaster mitigation. "You need big computers, but you can access the cloud and access large systems over high-speed bandwidth," says UT-Chattanooga's Henry McDonald. "If it works in Chattanooga, it can work anywhere."
UCSD's Robot Baby Diego-San Appears on Video for the First Time
Gizmag (01/07/13) Jason Falconer
Researchers at the University of California, San Diego's Machine Perception Lab have developed Diego-san, a robotic infant that is four feet three inches tall, weighs 66 pounds, and has 44 pneumatic joints. The robot was developed as a research platform for studying the cognitive development of infants. The research will include UCSD's work with natural communication, such as reading and mimicking facial expressions. UCSD's Javier Movellan says the project's primary goal is to understand the development of sensory motor intelligence from a computational point of view. "It brings together researchers in developmental psychology, machine learning, neuroscience, computer vision, and robotics," Movellan says. "We are trying to understand the computational problems that a baby’s brain faces when learning to move its own body and use it to interact with the physical and social worlds." He notes the researchers based Diego-san on previous robotics projects, making it one of the most realistic android infants ever built. "Diego-san was developed to approximate the complexity of a human body, including the use of actuators that have similar dynamics to that of human muscles," Movellan says.
A Motherboard Walks Into a Bar...
New York Times (01/04/13) Alex Stone
To understand humor, computers need to analyze linguistic tricks such as irony, sarcasm, metaphor, idiom, and allegory, aspects of language that do not normally translate well into binary code. Scottish researchers have developed the System to Augment Non-Speakers' Dialogue Using Puns (Standup), a program that generates punning riddles to help kids with language disabilities increase their verbal skills. In November 2012, Purdue University researcher Julia M. Taylor helped organize the first-ever U.S. symposium on the artificial intelligence of humor. To get around this cognitive complexity, computational humor researchers have focused on simple linguistic relationships, such as double meanings, instead of trying to model the mental mechanics that underlie humor. For example, Standup writes jokes by searching through a lexical database for words that fit linguistic patterns found in puns. Another method is to use machine-learning algorithms, which analyze massive amounts of data to identify statistical features that can be used to classify text as funny or unfunny. University of North Texas researchers have developed a computer system that separates humorous one-liners from nonhumorous sentences taken from Reuters headlines, proverbs, and other texts.
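The machine-learning approach mentioned above amounts to ordinary text classification. The sketch below shows that setup in miniature with a tiny, made-up training set and generic features; it is not the University of North Texas system.

```python
# Minimal funny/unfunny text classifier: generic TF-IDF features plus a
# logistic-regression model, trained on a toy, illustrative corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

one_liners = ["I used to be a banker, but I lost interest.",
              "Velcro: what a rip-off."]
plain_text = ["Stocks closed higher on Wall Street today.",
              "A stitch in time saves nine."]

texts = one_liners + plain_text
labels = [1, 1, 0, 0]  # 1 = humorous, 0 = non-humorous

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["I'm reading a book about anti-gravity; it's impossible to put down."]))
```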
New 2D Material for Next Generation High-Speed Electronics
CSIRO (Australia) (01/04/13) Simon Hunter
Researchers at the Commonwealth Scientific and Industrial Research Organization (CSIRO) and RMIT University have developed a two-dimensional material they say could revolutionize the electronics market. The material consists of layers of crystals known as molybdenum oxides, which have unique properties that encourage the free flow of electrons at ultra-high speeds. The researchers developed a technique that adapts graphene to create the new conductive nano-material, which consists of layered sheets. "Within these layers, electrons are able to zip through at high speeds with minimal scattering," says CSIRO's Serge Zhuiykov. "The importance of our breakthrough is how quickly and fluently electrons--which conduct electricity--are able to flow through the new material." The researchers were able to remove "road blocks" that could obstruct the electrons, an essential step for the development of high-speed electronics, says RMIT professor Kourosh Kalantar-zadeh. "Quite simply, if electrons can pass through a structure quicker, we can build devices that are smaller and transfer data at much higher speeds," Kalantar-zadeh says.
Researchers Use Data From Traffic App to Identify High Frequency Accident Locations
Ben-Gurion University of the Negev (Israel) (01/04/13)
Geosocial networks such as the GPS-based traffic app Waze could aid the police in their effort to deploy resources to high-frequency accident locations, including in real time, according to Ben-Gurion University of the Negev researchers. Waze records location data and enables its 30 million users worldwide to upload and share reports on traffic alerts, accidents, police presence, and other details. The team used Waze data and Google Earth as part of its study, and was able to determine that 75 percent of the locations in Israel with the highest number of accidents were intersections. The researchers also analyzed references to a police presence to determine whether the police were present at the locations that had the worst traffic accidents. "There were numerous instances where the police were manning quieter intersections, while busier intersections went unmonitored," says researcher Michael Fire. The data also showed that police response times varied from 20 to 40 minutes in some situations.
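One simple way to surface high-frequency accident locations from geotagged reports is to bin them into a coarse grid and count. The sketch below illustrates that approach with made-up coordinates; it is not the Ben-Gurion team's actual method or data.

```python
# Flag accident hotspots by snapping crowd-sourced report coordinates to a
# coarse latitude/longitude grid and counting reports per cell.
from collections import Counter

accident_reports = [          # (latitude, longitude) of sample reports
    (32.0853, 34.7818), (32.0851, 34.7820), (32.0852, 34.7819),
    (32.0700, 34.7800), (31.7683, 35.2137),
]

def grid_cell(lat, lon, cell=0.001):
    """Snap a coordinate to a roughly 100-meter grid cell so that nearby
    reports are counted together."""
    return (round(lat / cell) * cell, round(lon / cell) * cell)

hotspots = Counter(grid_cell(lat, lon) for lat, lon in accident_reports)
print(hotspots.most_common(2))
```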
Google Researcher Finds Most-Used English Words, Letters
TPM Idea Lab (01/07/13) Carl Franzen
Google Research's Peter Norvig recently published the results of a study that updates the 1965 Bell Labs survey by researcher Mark Mayzner, which covered about 20,000 words gathered from a variety of printed sources. Using the Google Books Ngram Viewer, Norvig examined a corpus of 743.8 billion words and found that "etaoin srhldcu," in that order, are the most common letters used in the English language. Norvig's new dataset contains more than 97,000 unique words, collectively repeated 743.8 billion times, roughly 37 million times as many word occurrences as in the 20,000-word sample that Mayzner organized. The most frequently used words include "the," "of," "and," "to," "in," "that," and "for." The survey also found that there are an average of 7.9 letters per English word, and that 80 percent of English words are between two and seven letters. The most frequent two-letter combination is "th," and the most common seven-letter combination is "present."
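The counts Norvig reports come from straightforward frequency tallies over a very large corpus. The sketch below shows the same kind of tally on a toy sample string rather than the Google Books Ngram data.

```python
# Count word, letter, and two-letter-sequence frequencies in a small sample.
from collections import Counter
import re

text = "The quick brown fox jumps over the lazy dog, and then the fox rests."

words = re.findall(r"[a-z]+", text.lower())
word_counts = Counter(words)                           # word frequencies
letter_counts = Counter("".join(words))                # letter frequencies
bigram_counts = Counter(w[i:i + 2] for w in words
                        for i in range(len(w) - 1))    # two-letter sequences

print(word_counts.most_common(3))
print("".join(letter for letter, _ in letter_counts.most_common()))
print(bigram_counts.most_common(3))
```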
Measuring Image Quality Made Easier With New Computational Methods
Aalto University (12/31/12)
Aalto University researcher Mikko Nuutinen has developed algorithms for computationally measuring the quality of digital and printed natural images. The best possible quality is sought by comparing objective computational values with how the final image is actually perceived. Nuutinen's algorithms assess low-level quality attributes such as sharpness, graininess, and color contrast in printed images, and sharpness, color noise, and color difference in digital images. "Instead of scanning prints, we digitize the images by taking several pictures of them in different exposures with high-quality reference cameras," Nuutinen says. "The computational methods utilize these to calculate objective quality measurements." The research has allowed computational quality analysis to take a step toward subjective evaluations made by users about the naturalness of images, among other qualities. In the future, Nuutinen wants to develop no-reference methods involving robust algorithms, which would be able to assess the quality of natural images without reference images. "With the help of image databases currently under construction, no-reference algorithms should be taught to recognize all kinds of distortion types in images," Nuutinen says.
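As a rough illustration of what a no-reference quality attribute can look like, the sketch below computes a common sharpness proxy, the variance of the Laplacian, on synthetic data. This is a generic technique chosen for illustration, not one of Nuutinen's algorithms.

```python
# No-reference sharpness proxy: variance of the Laplacian of a grayscale image.
import numpy as np
from scipy import ndimage

def sharpness_score(gray_image):
    """Higher variance of the Laplacian roughly indicates a sharper image."""
    return ndimage.laplace(gray_image.astype(float)).var()

# Synthetic example: a random texture versus a blurred copy of it
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
blurred = ndimage.gaussian_filter(sharp, sigma=2)
print(sharpness_score(sharp), sharpness_score(blurred))
```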
Abstract News © Copyright 2013 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe