Welcome to the May 31, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.
HEADLINES AT A GLANCE
Computer Scientists Oppose Oracle's Bid to Copyright Java APIs
IDG News Service (05/30/13) James Niccolai
In a court brief, nearly three dozen computer scientists voiced concerns over Oracle's plan to copyright its Java application programming interface (API), which they believe would hinder the computer industry and limit end users' access to affordable technology. The group, which includes MS-DOS author Tim Paterson and ARPANET developer Larry Roberts, signed the amicus brief in support of Google in its copyright lawsuit with Oracle. Oracle accuses Google of infringing the copyright on its Java APIs in the development of Google's Android operating system, and is seeking billions of dollars in damages. Google argues that software APIs are not eligible for copyright protection under U.S. law. Last year, a district court in California mostly agreed with Google and ruled against Oracle in the case, but Oracle appealed the decision. "The freedom to reimplement and extend existing APIs has been the key to competition and progress in the computer field--both hardware and software," the brief states. "It made possible the emergence and success of many robust industries we now take for granted--such as industries for mainframes, PCs, peripherals, workstations/servers, and so on--by ensuring that competitors could challenge established players and advance the state of the art."
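To make concrete what "reimplementing an API" means, here is a small Python sketch (hypothetical names, not the Java APIs at issue): the API is the set of names and signatures callers depend on, and independent parties can supply their own implementations of it.

```python
class StackAPI:
    """The 'API': the method names and behavior a caller relies on."""
    def push(self, item): ...
    def pop(self): ...

class ListStack(StackAPI):
    """One vendor's implementation, backed by a Python list."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class LinkedStack(StackAPI):
    """An independent reimplementation, backed by linked pairs."""
    def __init__(self):
        self._head = None
    def push(self, item):
        self._head = (item, self._head)
    def pop(self):
        item, self._head = self._head
        return item

# Code written against the API works with either implementation,
# which is the compatibility the brief argues copyright would block.
for stack in (ListStack(), LinkedStack()):
    stack.push(1)
    stack.push(2)
    assert stack.pop() == 2
```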
Universities Team With Online Course Provider
New York Times (05/30/13) Tamar Lewin
Coursera is teaming with 10 major U.S. public university systems and public flagship universities to offer online courses for credit, as the massive open online course (MOOC) provider broadens its focus beyond elite universities. The new offerings, which could reach as many as 1.25 million students, include entirely online courses as well as blended courses that pair online materials with faculty-led classroom sessions or proctored exams on campus. Although some courses will offer existing Coursera materials from top universities, others will feature newly created content from participating universities' faculties, available across the entire Coursera platform. "Our first year, we were enamored with the possibilities of scale in MOOCs," says Stanford University professor and Coursera co-founder Daphne Koller. "Now we are thinking about how to use the materials on campus to move along the completion agenda and other challenges facing the largest public university systems." Similarly, MOOC providers Udacity and edX also have recently partnered with public universities. Addressing faculty resistance to online offerings, Koller says the partnerships do not aim to diminish the role of faculty, and that the project's success hinges on faculties making their own courses available via Coursera and tailoring their content.
HTML5 Webpage Locks 'Would Stifle Innovation'
BBC News (05/30/13)
Innovation could be held back by the World Wide Web Consortium's (W3C) plans to include ways to digitally lock media in the Web's core technology. These locks also could limit the ability to share images and videos, according to the Electronic Frontier Foundation (EFF), which has formally objected to plans to include rights management in the HTML5 formatting language. EFF says the proposed rights management system, Encrypted Media Extensions (EME), would create a "black box" that the entertainment industry could use to control what is done with media online. EFF's Danny O'Brien warns that accepting EME could change the free and open way the Web currently works, possibly generating a "Web where images and pages cannot be saved or searched, ads cannot be blocked, and innovative new browsers cannot compete without explicit permission from big content companies." W3C CEO Jeffrey Jaffe says EME is necessary to provide users with a rich Web experience. "Without content protection, owners of premium video content, driven by both their economic goals and their responsibilities to others, will simply deprive the open web of key content," Jaffe says.
Running Stochastic Models on HTCondor
HPC Wire (05/30/13) Ian Armas Foster
Research by Brigham Young University's Spencer Taylor applied the open source HTCondor software to a water resource model called Gridded Surface Subsurface Hydrologic Analyst (GSSHA), which requires the kind of computationally intensive stochastic simulation found in many scientific fields. HTCondor executes jobs on a local network by harnessing computing power from idle systems. The resulting tests demonstrate that HTCondor is a viable alternative to obtaining extra high-performance computing (HPC) resources for mid-level research institutions. The purpose of the project was to enable such institutions to combine their computing base with existing HPC resources both on site and in the cloud. "We found that performing stochastic simulations with GSSHA using [the] HTCondor system significantly reduces overall computational time for simulations involving multiple model runs and improves modeling efficiency," Taylor contends. He notes that because HTCondor scavenges whatever machines happen to be idle, each stochastic model ran on a different number of processors, ranging from roughly 80 to 140. "As expected, with about 100 times the computational power of normal circumstances, I was able to essentially reduce the runtime by [a] factor of 100," Taylor reports. He also cites the possibility of utilizing commercial cloud resources as part of the HTCondor base.
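The speedup Taylor describes comes from the fact that stochastic realizations are independent, so they can be farmed out to idle machines and gathered at the end. A minimal Python sketch of that pattern, using a hypothetical stand-in for a single model run (not GSSHA or HTCondor itself):

```python
from multiprocessing import Pool
import random

def run_model(seed):
    # Hypothetical stand-in for one GSSHA realization: each
    # stochastic run perturbs its inputs with its own seed.
    rng = random.Random(seed)
    return sum(rng.gauss(0, 1) for _ in range(1000))

def stochastic_ensemble(n_runs, workers=4):
    # HTCondor-style farming in miniature: independent runs are
    # scattered across worker processes and results collected.
    with Pool(workers) as pool:
        return pool.map(run_model, range(n_runs))

if __name__ == "__main__":
    results = stochastic_ensemble(100)
    print(len(results))
```

With runs this embarrassingly parallel, wall-clock time shrinks roughly in proportion to the number of processors available, which matches the factor-of-100 reduction Taylor reports.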
Our Ambiguous World of Words
University of Cambridge (05/30/13)
Decoding the ambiguity of language is a formidable challenge for computers, and several research projects led by the University of Cambridge's Stephen Clark aim to address this obstacle. Clark points to the need for a new language processing method if the computer is to truly understand text, and he is tapping quantum mechanics and a longstanding project with several researchers at other universities to provide such a technique. "In the same way that quantum mechanics seeks to explain what happens when two quantum entities combine, [University of London's] Mehrnoosh [Sadrzadeh] and I wanted to understand what happens to the meaning of a phrase or sentence when two words or phrases combine," Clark says. He notes the compositional approach for modeling linguistic meaning resolves the fundamental issue of how humans can produce an infinite number of sentences with a finite vocabulary. "We would like computers to have a similar capacity to humans," Clark says. The second, distributional strategy concentrates on the words' meanings, and the precept that such meanings can be deduced by considering the contexts in which words appear in text. Clark has embarked on a multi-university effort to leverage these two approaches' strengths via a single mathematical model that draws on quantum mechanics.
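One common way to make the combined idea concrete (a toy sketch under assumed representations, not Clark and Sadrzadeh's actual model) is to give nouns distributional context vectors and treat an adjective as a matrix, so that composing "old" with "dog" is a linear map acting on the noun's meaning:

```python
def mat_vec(m, v):
    # Apply a word-as-operator (matrix) to a word-as-vector.
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

# Distributional noun vectors: counts over three invented
# contexts (eats, barks, purrs).
dog = [5, 9, 0]
cat = [6, 0, 8]

# "old" as a linear map that dampens activity-related contexts.
old = [[1.0, 0.0, 0.0],
       [0.0, 0.5, 0.0],
       [0.0, 0.0, 0.5]]

old_dog = mat_vec(old, dog)  # composed meaning of the phrase "old dog"
```

The composed phrase lives in the same vector space as the nouns, so a finite stock of word meanings can generate meanings for unboundedly many phrases, mirroring the compositional capacity the article describes.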
Apple Co-Founder Outlines 'Human' Computer Vision
E&T Magazine (05/13) Edd Gent
Apple co-founder Steve Wozniak envisions a future human-like computer that enables one-on-one teaching, and he recently detailed that vision at the European Business Network's annual conference. "We are moving closer to where a computer is like a person and we can have normal conversations with it," Wozniak says. "A computer is an awful cheap teacher, it has to get more human in its characteristics; anything another human being can understand is what I want my phone to understand." Wozniak says computers are becoming more human-like in their ability to solve complex problems, pointing out that "we did not invent the Internet to be a brain, we stumbled on it by accident." Wozniak also projects that mobile devices will become increasingly human-like in the next several decades. He notes this process has been going on for years, with people "lifting" data into virtual trash cans on desktops and using mice designed to emulate the two-dimensional human experience. Wozniak cites foldable light-emitting diode displays as a key objective for many years, and his desire to see a glowing globe to zoom into for Google Earth-type apps. He also says people are becoming increasingly drawn to wearable technology such as Google Glass.
NSF and NICT of Japan Announce Partnership in Next-Generation Networking
National Science Foundation (05/29/13) Lisa-Joy Zgorski; Sachiko Hirota
The U.S. National Science Foundation (NSF) and Japan's National Institute of Information and Communications Technology (NICT) recently signed a memorandum of understanding to collaborate on next-generation networking technology. The agreement will enable NSF and NICT to collaborate on joint funding opportunities for U.S. and Japanese researchers in specific areas. NSF and NICT will work to support research and development (R&D) in networking technology and systems that will enable future Internet and new-generation networks. At the third Director General-level meeting of the U.S.-Japan Policy Cooperation Dialogue on the Internet Economy held in Tokyo last March, U.S. and Japanese researchers said there was a need for R&D on a new architecture enabling more robust and evolvable future Internet designs. They also expressed mutual interest in optical networking, mobile computing, and network design and modeling. "This agreement will create new opportunities for collaboration between top researchers in the United States and Japan, and forge new pathways to future global networks," says NSF's Farnam Jahanian.
'Blue Waters' Supercomputer Helps Crack HIV Code
CNet (05/29/13) Elizabeth Armstrong Moore
University of Illinois at Urbana-Champaign researchers used the Blue Waters supercomputer to discover the structure of the HIV capsid. The researchers developed molecular simulations that used data from lab experiments performed at the University of Pittsburgh and Vanderbilt University. "The work of matching the overall capsid, made of 64 million atoms, to the diverse experimental data can only be done through computer simulation using a methodology we have developed called molecular dynamics flexible fitting," notes Illinois professor Klaus Schulten. "You basically simulate the physical characteristics and behavior of large biological molecules, but you also incorporate the data into the simulation so that the model actually drives itself toward agreement with the data." The researchers found that the HIV capsid consists of 216 protein hexagons and 12 protein pentagons arranged just as the experimental data suggested. The proteins that comprise these pentagons and hexagons are identical, but from one region of the capsid to another, the angles vary. "The sustained petascale performance of Blue Waters is precisely what enabled these talented researchers to explore new methods combined with structural and electron microscopy data to reliably model the chemical structure of the HIV capsid in great detail," says the U.S. National Science Foundation's Irene Qualters.
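The "model drives itself toward agreement with the data" idea can be caricatured as adding a data-fit penalty to the physical energy being minimized, so the simulation settles at a compromise between physics and experiment. A one-dimensional Python sketch (illustrative only, not the actual flexible-fitting implementation):

```python
def energy(x, x_phys=0.0, x_data=2.0, w=0.5):
    # Total energy = physics-based potential + a penalty for
    # disagreeing with the experimental measurement x_data.
    physical = (x - x_phys) ** 2
    data_fit = w * (x - x_data) ** 2
    return physical + data_fit

def minimize(x=5.0, lr=0.1, steps=200, h=1e-6):
    # Gradient descent with a numerical gradient: the model
    # coordinate x is pulled toward agreement with the data.
    for _ in range(steps):
        grad = (energy(x + h) - energy(x - h)) / (2 * h)
        x -= lr * grad
    return x
```

Here the minimum lands at x = 2/3, between the purely physical optimum (0) and the measurement (2), with the balance set by the weight w; the real method plays the same game with millions of atomic coordinates and electron-microscopy density maps.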
Researchers Develop Software for New Cancer Screening Method
South Dakota State University (05/29/13)
South Dakota State University researchers are developing software for new breast-imaging technology that could make cancer screening more accurate and comfortable for women. The software is designed to first identify a tumor using microwave tomography imaging technology, compare the image to a database of more than 100,000 magnetic-resonance images, choose the cases that are most similar, and extract the image along with the case files, says South Dakota State University's Wei Wang. The researchers say this will tell the doctor what treatments were used, and how successful they were in combating the cancer. Based on the patient's history, doctors will have the information they need to determine the best plan of action for the patient. The researchers have devised several algorithms "to optimally identify the tumor," Wang says. The research team is working with scientists at Chung Nam University in Daejeon, South Korea, and the Electronics and Telecommunications Research Institute, which holds the patent on the microwave tomography machine. The software and imaging technique also should lower the cost of cancer screening.
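The retrieval step described above is, at heart, a nearest-neighbor search: represent each archived case as a feature vector, rank the database by distance to the new image, and return the closest cases with their treatment records. A hedged Python sketch with invented feature vectors and case files (the actual SDSU algorithms are not public in this summary):

```python
import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar(query, database, k=3):
    # database: list of (features, case_file) pairs; return the
    # case files of the k cases closest to the query image.
    ranked = sorted(database, key=lambda case: distance(query, case[0]))
    return [case_file for _, case_file in ranked[:k]]

db = [([0.9, 0.1], "case A: treatment X, good outcome"),
      ([0.2, 0.8], "case B: treatment Y, good outcome"),
      ([0.85, 0.2], "case C: treatment X, poor outcome")]

print(most_similar([0.88, 0.15], db, k=2))
```

Attaching the case files to the retrieved neighbors is what turns a similarity search into the decision support the article describes: the doctor sees which treatments were tried on look-alike tumors and how they fared.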
IBM's Vision for Cognitive Computing Era
InformationWeek (05/29/13) Jeff Bertolucci
Preparing for a new era of cognitive computing, IBM is developing computer systems modeled on the human brain that will leverage big data to significantly impact everyday life, especially when combined with the rise of social, mobile, analytic, and cloud (SMAC) technologies. The age of programmable computing is giving way to the new cognitive computing era, driven by SMAC, machine learning, and the Internet of Things, says IBM research fellow Kerrie Holley, who notes each era lasts about 40 to 50 years. He says innovation and invention will be critical for organizations, citing as an example the Internet of Things and the machine-to-machine communication it enables. Holley says IBM's cognitive technologies, particularly its Watson computer system, can transform industries such as healthcare by using big data to answer questions asked in natural language. Using Watson's evidence-based learning, hypothesis generation, and natural-language capabilities, medical professionals can make critical diagnosis and treatment decisions. IBM and Memorial Sloan-Kettering Cancer Center in March announced plans to create a cognitive system enabling Watson to use cancer patient treatment data to enable oncologists to diagnose and treat patients based on the most current available data. Holley notes cognitive computing also can play a key role in weather forecasting.
Why We Need to Build Sentient Machines
New Scientist (05/25/13) Celeste Biever
Building sentient machines may be key to unlocking human consciousness, and this potential is evident in the development of a leading theoretical model of consciousness--the global neuronal workspace model--inspired by artificial intelligence research. The model proposes that incoming sensory information and other low-level thought processes initially remain in the unconscious, but enter the conscious mind once information becomes sufficiently salient to penetrate the global workspace. The theory was informed by the Hearsay II system created in an attempt to develop computer speech recognition, designed to identify short sounds that could be linked into syllables, words, and sentences. The system involved multiple programs concurrently working at different stages of the problem, sharing the results likely to be of interest to others through a central database. Another stride toward machine consciousness is University of Illinois researcher Pentti Haikonen's experimental cognitive robot, which stores and manipulates incoming sensory data through physical objects rather than via software. This creates direct experiences for the system, Haikonen says. "The contents of the consciousness are limited, but the phenomenon is there," he notes. Haikonen doubts that a software-based feeling machine will ever be created, but there is the possibility of consciousness in a brain in a vat connected to a supercomputer simulation.
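The Hearsay II design described above is the classic "blackboard" architecture: independent knowledge sources watch a shared store, and each posts higher-level hypotheses when the partial results it needs appear. A minimal Python sketch of that control loop (invented toy knowledge sources, not the original system):

```python
def sounds_to_syllables(board):
    # Fires only once low-level sound hypotheses are on the board.
    if "sounds" in board:
        board["syllables"] = ["heh", "low"]

def syllables_to_words(board):
    # Fires only once syllable hypotheses are available.
    if "syllables" in board:
        board["words"] = ["hello"]

def run_blackboard(board, knowledge_sources):
    # Let sources fire opportunistically until the board is stable;
    # no source calls another directly -- they communicate only
    # through the shared board, as in Hearsay II.
    changed = True
    while changed:
        before = dict(board)
        for ks in knowledge_sources:
            ks(board)
        changed = board != before
    return board

board = run_blackboard({"sounds": ["h", "e", "l", "o"]},
                       [syllables_to_words, sounds_to_syllables])
```

Note that the word-level source is listed first yet still ends up firing: results become globally visible on the board and trigger whichever source can use them, which is the "salient information reaching the workspace" behavior the global-workspace model borrows.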
Facial Recognition Technology Proves Its Mettle
MSUToday (05/24/13) Tom Oswald
Michigan State University (MSU) researchers quickly identified one of the Boston Marathon bombing suspects from law enforcement video, using the latest automatic facial-recognition technology. MSU professor Anil Jain and research scientist Josh Klontz tested three different facial-recognition systems in the Pattern Recognition and Image Processing laboratory, and one system provided a rank one identification--a match--of suspect Dzhokhar Tsarnaev, the younger brother. The other suspect, Tamerlan Tsarnaev, could not be matched at a sufficiently high rank, partly because he was wearing sunglasses. The experiment shows that under controlled conditions, when the face is angled toward the camera and lighting is good, the technology can be up to 99 percent accurate. "Sometimes police get bad tips so innocent people are questioned," Jain says. "Such situations can be avoided with a robust and accurate face-recognition system."
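A "rank one identification" simply means the true identity tops the system's ranked candidate list. A hedged Python sketch of that idea (invented embeddings, not the MSU systems): compare a probe face embedding against a gallery by cosine similarity and report the rank of the true identity.

```python
def cosine(a, b):
    # Cosine similarity between two face-embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def rank_of(probe, gallery, true_id):
    # gallery: {identity: embedding}. Rank 1 = best match, i.e.
    # the true identity is the most similar gallery entry.
    ranked = sorted(gallery, key=lambda i: cosine(probe, gallery[i]),
                    reverse=True)
    return ranked.index(true_id) + 1

gallery = {"suspect": [0.9, 0.4, 0.1],
           "person2": [0.1, 0.9, 0.3],
           "person3": [0.5, 0.5, 0.7]}

print(rank_of([0.88, 0.35, 0.15], gallery, "suspect"))  # → 1
```

Occlusions like sunglasses corrupt the probe embedding, which pushes the true identity down the ranked list; that is why Tamerlan Tsarnaev could not be matched at a sufficiently high rank.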
Hip-Hip-Hadoop: Data Mining for Science
Texas Advanced Computing Center (05/24/13) Aaron Dubrow
In 2010, the Texas Advanced Computing Center (TACC) at the University of Texas at Austin began experimenting with Hadoop to test the technology's applicability for scientific problems. TACC researchers won a Longhorn Innovation for Technology Fund (LIFT) grant to build a Hadoop-optimized cluster on Longhorn. "The LIFT grant let us add local drives and storage to enable researchers to do experimental Hadoop-style studies on a current production system," says TACC's Weijia Xu. The system enables researchers to run 48 eight-processor nodes on TACC's Longhorn cluster for Hadoop in a coordinated way with accompanying large-memory processors. Intel also has been working with TACC to assess the impact of its newly developed hardware on the performance of Hadoop applications. For example, researchers at Intel and TACC recently described experiments using Intel's 10GBASE-T network adapters on Hadoop. Xu currently is applying data-mining and machine-learning techniques to study health communication. "While connecting users to those whom they may never be able to connect to otherwise, online communities present a new information environment that does not operate under the old publishing paradigm," says University of Texas researcher Yan Zhang. "This creates new challenges for users to access and evaluate information."
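For readers unfamiliar with the programming model behind these experiments, Hadoop jobs follow the MapReduce pattern: a map phase emits key-value pairs, the framework groups them by key, and a reduce phase aggregates each group. A pure-Python toy illustration (no Hadoop involved), using the classic word-count example:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every record.
    for line in records:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group emitted values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values into a final result.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big compute"])))
print(counts["big"])  # → 2
```

Because map calls are independent and reduce works per key, both phases parallelize across a cluster's nodes, which is what makes the pattern attractive for the data-mining workloads TACC is studying.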
Abstract News © Copyright 2013 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.