Association for Computing Machinery
Welcome to the October 2, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


IBM Scientists Find New Way to Shrink Transistors
The New York Times (10/01/15) John Markoff

IBM scientists on Thursday reported they have found a method for making transistors from parallel rows of carbon nanotubes, based on a new way to link ultrathin metal wires to the tubes. They say this makes it possible to continue miniaturizing the width of the wires without boosting electrical resistance, which may be key to restarting the currently stalled speed gains of computer processors. The researchers anticipate the contact point between the two materials could be shrunk to just 40 atoms in width sometime after 2020, and to only 28 atoms three years later. In their normal state, carbon nanotubes form a giant mass of interwoven molecules, but researchers have coaxed them to align closely in regularly spaced rows on silicon wafers so they can function as a semiconductor. IBM Research's Dario Gil says carbon nanotubes are a leading candidate to replace silicon as the favored base material for chip manufacturers. Over the last decade, the chip industry has faced physical limitations such as heat buildup that limits switching speed, while the decline of transistor costs with each new chip generation has halted. The promise of carbon nanotube field-effect transistors is rekindling optimism in the industry, and the IBM researchers say they have modeled microprocessors optimized either for high performance or low power consumption.


Thought Process: Building an Artificial Brain
The Washington Post (09/30/15) Ariana Eunjung Cha

Microsoft co-founder Paul Allen, who has long been fascinated by the brain and the possibility of creating an artificial mind every bit as capable as that of a human being, has been funding a pair of parallel projects to understand the nature of intelligence. In the early 2000s, Allen founded the Allen Institute for Brain Science, seeding it with $100 million and the mission of trying to better understand the human brain. Over the course of more than a decade, the institute has had remarkable success, using a data-driven methodology to map the human brain and to pursue research into the nature of disorders such as autism and schizophrenia. Last year, Allen turned his fascination with the idea of creating an artificial mind into the Allen Institute for Artificial Intelligence. The first product of that institute's research is Aristo, an artificial-intelligence (AI) program the researchers are trying to teach to pass basic biology tests. So far, Aristo has passed the first- through third-grade tests, but it could be years before it can pass a high school test. Allen sees the research of his two institutes converging at some point down the road, and although his efforts are focused on creating an AI that could serve as a brilliant assistant to humans, he is curious and optimistic about what more it could become.


A Peek Inside Google's Efforts to Create a General-Purpose Robot
Bloomberg (09/30/15) Jack Clark

Google is establishing a separate division for robotics research and development, within which robot manufacturer Boston Dynamics will work with some autonomy, according to an anonymous source. The group aims to develop general-purpose robots with applications ranging from security to domestic help. Fetch Robotics CEO Melonee Wise says Google's effort seeks generic solutions to challenging problems, and image and object recognition is one area in which Google is ahead of the competition. "[Google has] many of the world's experts in [artificial intelligence (AI)] on their payroll," notes IDC analyst Scott Strawn. "You can look to them to be at the forefront of that technology, and it's that which will enable a robotics program." Google's DeepMind AI team is applying its software learning system to robots, and so far it has learned how to solve more than 20 simulated tasks, such as driving a car and walking, by watching low-quality video footage. In partnership with the University of Washington, Google also has taught robots to grasp objects. In addition, Google has started collaborations with other groups on robotic projects, with the Open Source Robotics Foundation's Nate Koenig noting the Robot Operating System has been adopted by certain groups at the company.


This Car Knows Your Next Misstep Before You Make It
Technology Review (10/01/15) Will Knight

Researchers at Cornell and Stanford universities have demonstrated how an experimental dashboard computer can predict a driver's maneuvers--in this instance a lane change--a few seconds in advance with more than 90 percent accuracy. The researchers say machine-learning algorithms were used to train the system to read the motorist's body language and behavior preceding certain maneuvers. The training data used in the Brain4Cars initiative was obtained from 10 different people who drove a cumulative 1,180 miles throughout California. The work involves integrating data from a video camera with global-positioning system data and information from the car's computer systems. Many luxury vehicles are now equipped with sensors that trigger safety warnings, as well as automatic braking and steering. Brain4Cars project leader Ashesh Jain notes monitoring activity both inside and outside the car could make such systems smarter. "Suppose the driver is distracted for a second," he says. "If there's nothing in front, the car should be smart enough, and not alert the driver. It's about how you use information from all these sensors."
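
The article does not include the Brain4Cars code; as a rough sketch of the general approach, a classifier could be trained on fused in-cabin and vehicle features to estimate the probability that a lane change will follow in the next few seconds. The feature names and numbers below are hypothetical, not from the project.

    # Minimal sketch (not the Brain4Cars system): predict an upcoming lane change
    # from a handful of fused driver and vehicle features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical features per time window:
    # [head_turn_angle_deg, mirror_glances, steering_angle_deg, speed_mps, gps_heading_change]
    X = np.array([
        [25.0, 2, 3.0, 28.0, 0.5],   # driver checked mirrors, slight steering drift
        [ 2.0, 0, 0.5, 30.0, 0.0],   # driving straight, eyes forward
        [30.0, 3, 4.0, 27.0, 0.8],
        [ 1.0, 0, 0.2, 31.0, 0.1],
    ])
    y = np.array([1, 0, 1, 0])       # 1 = lane change followed, 0 = no maneuver

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(model.predict_proba([[28.0, 2, 3.5, 29.0, 0.6]])[0][1])  # P(lane change)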


Are Datasets Truly Anonymized? Two Well-Suited Researchers Are Going to Find Out
Computerworld (09/29/15) Erika Morphy

Using a Faculty Research Awards grant from Google, Cornell University professor Vitaly Shmatikov and Pennsylvania State University professor Adam Smith will attempt to determine whether very large datasets can be protected from de-anonymization without compromising deep-learning services. The goal of deep learning is to enable computers to recognize items of interest using a trial-and-error learning process to extract patterns or specific conclusions. However, there is uncertainty whether the datasets used for deep learning are stripped of any identifying features prior to release, as researchers and dataset providers purport. This assumption was discredited when Shmatikov and University of Texas at Austin researcher Arvind Narayanan demonstrated that Netflix's supposedly anonymized user data could be cracked using a relatively small amount of information about a particular person's film-watching preferences and habits. With the new project, Shmatikov and Smith will develop new deep-learning approaches designed to shield the privacy of individual users' data while still upholding the valuable services deep learning facilitates. Methods Shmatikov says they will investigate include collaborative learning, in which participants keep their data private and train independently--instead of pooling the training--while sharing only a small amount of information. Another technique they will focus on is differential privacy, a mathematical model of privacy Smith co-created.
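
The article does not describe the project's constructions in detail; as a minimal illustration of the differential-privacy idea, the standard Laplace mechanism releases an aggregate statistic with noise calibrated to the query's sensitivity and a privacy budget epsilon. The count and parameter values below are invented for the example.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon):
        # Add Laplace noise with scale = sensitivity/epsilon, so that changing any
        # one individual's data shifts the output distribution only slightly
        # (epsilon-differential privacy for this single query).
        scale = sensitivity / epsilon
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # Example: release how many users watched a given film (sensitivity 1,
    # because one person can change the count by at most 1).
    true_count = 1234
    private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
    print(private_count)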


New Tech Automatically 'Tunes' Powered Prosthetics While Walking
NCSU News (09/28/15) Matt Shipman

Powered prosthetic legs require regular tuning by a prosthetics expert to ensure the amputees using them can walk normally. Moreover, Helen Huang, a professor in the biomedical engineering program at North Carolina State University (NCSU) and the University of North Carolina at Chapel Hill (UNC-Chapel Hill), says powered prosthetic legs require frequent adjustment for a variety of reasons, such as a patient becoming more comfortable with the prosthetic, or a change in weight. Such frequent retuning can be expensive and time-consuming. To address this, Huang and a team of researchers at NCSU and UNC-Chapel Hill have developed an algorithm that can automatically tune powered prosthetic legs. The algorithm allows not only for retuning to adjust for longer-term changes such as weight gain, but also for shorter-term changes such as shifts in gait. Huang says the algorithm could even "provide more power to a prosthesis when a patient carries a heavy suitcase through an airport." Huang notes the algorithm has outperformed human prosthetists in achieving proper joint angle, which enables the prosthetic to mimic natural limbs during walking. However, Huang says the algorithm has not yet beaten prosthetists' ability to help patients develop a comfortable posture while using the prosthetic.


Soft Robotic Hand Can Pick Up and Identify a Wide Array of Objects
MIT News (09/30/15) Adam Conner-Simons

Massachusetts Institute of Technology (MIT) researchers have developed a three-dimensionally (3D) printed robotic hand made of silicone rubber that can lift and handle a range of delicate objects. In addition, the hand's three fingers have sensors that can estimate the size and shape of an object accurately enough to identify it from a set of multiple items. "Grasping is an important step in being able to do useful tasks; with this work we set out to develop both the soft hands and the supporting control and planning systems that make dynamic grasping possible," says MIT professor Daniela Rus. When the robot gripper senses an object, the fingers send back location data based on its curvature. The robot uses this data to pick up objects and compare them to the existing clusters of data points that represent past objects. The robot's algorithms need just three data points from a single grasp to distinguish between objects. With further advances, the researchers say the system could identify dozens of distinct objects, and be programmed to interact with them differently depending on their size, shape, and function. "Our dream is to develop a robot that, like a human, can approach an unknown object, big or small, determine its approximate shape and size, and figure out how to interface with it in one seamless motion," Rus says.
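
MIT's identification code is not reproduced in the article; a minimal sketch of the clustering idea, assuming three bend readings per grasp and stored mean "signatures" for previously grasped objects, might look like the following. The object names and sensor values are hypothetical.

    import numpy as np

    # Hypothetical data: each grasp yields three sensor readings, one per finger.
    # Known objects are represented by the mean signature of past grasps (a cluster center).
    known_objects = {
        "cup":    np.array([0.82, 0.79, 0.80]),
        "egg":    np.array([0.35, 0.33, 0.36]),
        "bottle": np.array([0.60, 0.58, 0.61]),
    }

    def identify(grasp_reading):
        # Nearest-centroid match: pick the stored object whose signature is
        # closest (Euclidean distance) to the new three-point reading.
        return min(known_objects,
                   key=lambda name: np.linalg.norm(known_objects[name] - grasp_reading))

    print(identify(np.array([0.61, 0.57, 0.62])))  # -> "bottle"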


Disney Research's Smart Lightbulbs Connect Toys & Devices
Product Design & Development (09/28/15) Megan Crouse

Disney is considering using light-emitting diode (LED) lightbulbs to create an Internet of Toys in which toys could be accessed, monitored, and acted on remotely. In a recent study, Disney's team of scientists at ETH Zurich in Switzerland examined the idea of an LED-to-LED communication system that could more seamlessly integrate household items with the Internet of Things. The lightbulbs use visible light to send data at up to 1 kbps, and visible light communication (VLC) technology enables them to receive data without full Wi-Fi connectivity. The team has combined off-the-shelf LED bulbs with a Qualcomm Atheros system-on-a-chip running Linux, a VLC controller module with Internet Protocol software, and an additional power supply. "Communication with light enables a true Internet of Things as consumer devices that are equipped with LEDs but not radio links could be transformed into interactive communication nodes," says Stefan Mangold, head of Disney Research's wireless research group. The team wants to improve the lightbulb's transmission speed and distance, as well as add support for multiple colors.
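
Disney's actual modulation scheme is not given in the article; purely as an illustrative sketch, a roughly 1-kbps visible-light link could use simple on-off keying, mapping each payload bit to a one-millisecond LED on/off interval. The function and timing below are assumptions, not the Disney Research protocol.

    # Illustrative on-off keying sketch: one bit per millisecond (~1 kbps),
    # where "1" means LED on and "0" means LED off for that interval.
    def to_ook_schedule(payload: bytes, bit_duration_ms: float = 1.0):
        schedule = []  # (time_ms, led_on) pairs a VLC controller could drive
        t = 0.0
        for byte in payload:
            for bit in range(7, -1, -1):          # most significant bit first
                schedule.append((t, bool((byte >> bit) & 1)))
                t += bit_duration_ms
        return schedule

    print(to_ook_schedule(b"Hi")[:4])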


Twitter Behavior Can Predict Users' Income Level, New Penn Research Shows
Penn News (09/28/15) Michele Berger

A new study led by University of Pennsylvania post-doctoral researcher Daniel Preotiuc-Pietro links the online behavior of Twitter users to their income brackets. The researchers started by looking at Twitter users' self-described occupations, using the U.K. job code system, which sorts occupations into nine classes, to determine the average income for each code. The team sought a representative sampling from each class, which resulted in 5,191 Twitter users and more than 10 million tweets to analyze. They then created a statistical natural language processing algorithm that identified words people in each code class use distinctively. The results validated previous findings that a person's words can reveal age and gender, which in turn are tied to income. However, the researchers note the results also offered some surprises, such as the finding that high earners tend to express more fear and anger on Twitter, while perceived optimists have a lower mean income. The University of Pennsylvania's World Well-Being Project is exploring the use of social media as a potential surveying tool, which could support, or even replace, expensive, limited, and potentially biased surveying.
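
The study's actual model is more sophisticated than this, but as a toy sketch of surfacing class-distinctive words, one could compare smoothed word frequencies in a target income class against all other classes. The example tweets and class labels below are invented.

    from collections import Counter

    # Hypothetical miniature corpora: tweets grouped by occupational income class.
    tweets_by_class = {
        "higher_income": ["reviewing the quarterly board deck", "flight to the conference"],
        "lower_income":  ["so tired after my shift", "cannot wait for the weekend"],
    }

    def distinctive_words(target, corpora, smoothing=1.0, top_n=5):
        counts = {c: Counter(w for tweet in docs for w in tweet.split())
                  for c, docs in corpora.items()}
        target_counts = counts[target]
        other_counts = Counter()
        for c, cnt in counts.items():
            if c != target:
                other_counts.update(cnt)
        # Score each word by how much more often it appears in the target class.
        def score(word):
            return (target_counts[word] + smoothing) / (other_counts[word] + smoothing)
        return sorted(target_counts, key=score, reverse=True)[:top_n]

    print(distinctive_words("higher_income", tweets_by_class))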


Meet SeeMore: Maker Faire's Mesmerizing 256-Node Computer Cluster
Inverse (09/28/15) Neel V. Patel

One of the centerpieces of the recent World Maker Faire in New York City was a giant cylindrical object studded with hundreds of translucent green electronic panels that waved around like leaves in the wind. The object was SeeMore, an animatronic sculpture designed by sculptor Sam Blanchard and Virginia Polytechnic Institute and State University computer scientist Kirk Cameron to illustrate the concept of parallel computing. Blanchard calls the project a "physical data visualization that demonstrates the changes occurring." The name is a reference to supercomputing pioneer Seymour Cray. SeeMore is itself a parallel computer: each of SeeMore's 256 translucent green "leaves" is a Raspberry Pi single-board computer attached to the main structure with a 90-degree reticulating motor. The Raspberry Pis are all networked together to work through a database of New York City public records. When an individual Raspberry Pi is idle, its "leaf" is held stationary against the main structure, and it extends outward when carrying out computations. In this way, SeeMore embodies the process of parallel computing. Blanchard says the goal of the installation was to "get us to stop thinking of computers in a black box."
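
SeeMore's control code is not shown in the article; the parallel pattern it dramatizes--split a query across many nodes, compute partial answers, then combine them--can be sketched in a few lines. The records and search term below are made up for illustration.

    # Minimal sketch of the data-parallel pattern SeeMore embodies: each worker
    # handles one slice of the records; in the sculpture, a Raspberry Pi doing
    # this work would swing its panel outward while busy.
    from multiprocessing import Pool

    def count_matches(chunk_and_term):
        records, term = chunk_and_term
        return sum(term in record for record in records)

    if __name__ == "__main__":
        records = ["parking ticket 2014", "noise complaint", "parking permit", "tree removal"]
        chunks = [records[i::4] for i in range(4)]            # one slice per node
        with Pool(processes=4) as pool:
            partials = pool.map(count_matches, [(c, "parking") for c in chunks])
        print(sum(partials))                                  # combine partial results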


New Method to Predict the Workload for Online Services
Umea University (Sweden) (09/30/15) Ingrid Soderbergh

Umea University postgraduate student Ahmed Hassan says he has developed a method that could help prevent overloads of online services. The server resources websites depend on can be better managed with cloud computing, Hassan says. His approach relies on algorithms that automatically add and remove resources for a Web service based on actual demand. Hassan says renting more cloud capacity than needed to run a Web service is a costly choice, while "renting too little capacity will result in server overloads and service disruptions." Hassan's tool is capable of predicting the capacity requirements of various services running in the cloud by using more than one prediction algorithm, improving the overall performance of multiple services at the same time. Hassan developed the prediction algorithms and method for his dissertation, which also includes an analysis of server workloads from major Web services. "First, I analyzed the usage of Wikipedia for a period of five-and-a-half years looking at what happens when major events occur such as Michael Jackson's death," Hassan notes. "We also analyzed how people use the premium services of TV4 video-on-demand. Some interesting findings is how impatient users are when streaming a video, with 50 percent of the users abandoning streaming a video after watching less than 12 minutes."
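
Hassan's actual prediction algorithms are not detailed in the article; a minimal sketch of the underlying idea--forecast the next interval's demand and provision just enough capacity, plus some headroom--might look like the following. The requests-per-server figure and traffic numbers are assumptions.

    import math
    from collections import deque

    class Autoscaler:
        # Minimal sketch: a moving-average-plus-trend forecast drives the server count.
        def __init__(self, requests_per_server=100, window=5):
            self.history = deque(maxlen=window)            # recent request rates
            self.requests_per_server = requests_per_server

        def observe(self, request_rate):
            self.history.append(request_rate)

        def predicted_load(self):
            if len(self.history) < 2:
                return self.history[-1] if self.history else 0.0
            trend = self.history[-1] - self.history[-2]    # last step's change
            avg = sum(self.history) / len(self.history)
            return max(avg + trend, 0.0)

        def servers_needed(self, headroom=1.2):
            # Keep ~20% headroom above the forecast to absorb sudden spikes.
            return max(1, math.ceil(self.predicted_load() * headroom / self.requests_per_server))

    scaler = Autoscaler()
    for rate in [200, 240, 310, 400]:                      # requests per second
        scaler.observe(rate)
    print(scaler.servers_needed())                         # -> 5 with these numbers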


Legacy Algorithms Could Expedite Device-to-Device Discovery
IEEE Xplore (09/28/15)

Smartphones will soon enable one person to call another with their devices connecting directly via low transmission power, according to a study by IEEE researchers. Device-to-device (D2D) capability promises to reduce interference jams and delays caused by third-party infrastructure, but the act of locating the proper devices in the same vicinity consumes a lot of memory and bandwidth. IEEE researchers in Korea and Canada propose the use of hash functions and Bloom filters, which are methods for compressing information into concise, easy-to-digest bits. When applied to D2D communications, these statistical-coding approaches can convert data into virtual "fingerprints" for an improved discovery process. The researchers say the technique would improve the speed and efficiency of D2D discovery, and they note smartphones with built-in D2D capabilities would offer more effective app sharing, game playing, and geo-targeting. "We had to learn coding typically used in more conventional database settings and apply it to wireless engineering," says lead researcher Ekram Hossain. "Everyone wants devices to get smarter, and D2D is the next step to more intelligent and resourceful mobile communications." The researchers employed stochastic geometry, a mathematical method for analyzing random spatial patterns, to show their discovery protocol's performance is significantly superior to that of nonfiltering approaches.
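
The paper's exact discovery protocol is not reproduced here; as a generic sketch of the underlying data structure, a Bloom filter lets a device advertise a compact "fingerprint" of its identifiers or services that a peer can query locally, at the cost of a small false-positive rate. The identifiers and sizes below are illustrative.

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1024, num_hashes=3):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = [0] * size_bits

        def _positions(self, item):
            # Derive k bit positions from salted SHA-256 digests of the item.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = 1

        def might_contain(self, item):
            # False positives are possible; false negatives are not.
            return all(self.bits[pos] for pos in self._positions(item))

    # A device could broadcast this compact bit array; a nearby peer checks
    # membership locally instead of exchanging full identifier lists.
    nearby = BloomFilter()
    nearby.add("device:alice-phone/service:photo-share")
    print(nearby.might_contain("device:alice-phone/service:photo-share"))  # True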


Will Machine Learning Become Part of Our Everyday Lives?
CIO Australia (09/28/15) Rebecca Merrett

University of Washington professor Carlos Guestrin predicts machine learning will become embedded within the core of every application people use within five years. "Machine learning is what's going to make an app truly useful and different to other things out there," he says. Driving this trend is more than just mounting volumes of collected data and technology advances; changing consumer expectations also play a role, according to Guestrin. He envisions machine learning becoming an essential element in Internet of Things apps, such as home-based appliance monitoring and scheduling. "With a home automation system, we would want it to predict our needs or react ahead of time to our needs and what our interests are and what the current situation is," Guestrin says. "The only way to do that is to gather the data, automate the adaptation process, and continuously adapt to how things are evolving." Guestrin emphasizes prioritizing privacy is critical if machine learning is to proliferate, and he cites users' push for clear and transparent terms of use. A dearth of resources and skills is a large obstacle to broad industry adoption of machine learning, and Guestrin anticipates machine-learning tools and application programming interfaces will mature to address this challenge.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]