ACM TechNews. Read TechNews online at: http://technews.acm.org

ACM TechNews
June 20, 2008


Welcome to the June 20, 2008 edition of ACM TechNews, providing timely information for IT professionals three times a week.


HEADLINES AT A GLANCE:

Carnegie Mellon Ties Knot (Again) with GM to Build Autonomous Cars
Diamonds Offer Cool Computer Solution
Chill Out, Your Computer Knows What's Best for You
Intel's Future Vision: Cars With Eyes, Processors With Engines
IBM Launches Green Supercomputer in Sweden
BGU Researchers Develop New Gesture Interface Device
Exciton-Based Circuits Eliminate a 'Speed Trap' Between Computing and Communication Signals
Google Sponsors Scholarships for Grace Hopper Celebration of Women in Computing Conference
Standardization of Rule Based Technologies Moving Forward to Enable the Next Generation Web
Microsoft Researcher Tackles Forest Data to Fight Climate Change
Right on Cue
Michigan Tech Physicist Models Single Molecular Switch
Researchers Develop Ultra Low-Cost Plastic Memory
New Intrusion Tolerance Software Fortifies Server Security
Robot Asimo Can Understand Three Voices at Once
As Chips Go Multicore, Researchers Strive to Parallelize More Apps
Melding Mind and Machine

Carnegie Mellon Ties Knot (Again) with GM to Build Autonomous Cars
Network World (06/19/08)

General Motors and Carnegie Mellon University recently announced that they will fund and build a $5 million lab dedicated to developing autonomous driving technologies. Carnegie Mellon and GM have a long history of collaborating on driverless technology, including the victory of Carnegie Mellon's Tartan Racing team, driving a GM SUV, at the DARPA Urban Challenge last fall. GM has also spent more than $11 million since 2000 on similar Collaborative Research Labs (CRLs) partnered with Carnegie Mellon. The new CRL will focus on developing a variety of autonomous technologies, including electronics, controls, software, wireless capabilities, digital mapping, and robotics. The lab will be located at Carnegie Mellon in Pittsburgh and operate as an extension of GM's Global Research & Development network. Many of the proposed technologies have already seen significant advancement: for example, effective navigation and collision-avoidance technologies were standard in the DARPA Urban Challenge, where vehicles constantly monitored the road ahead for other vehicles and obstacles. Other autonomous car projects are also in progress. Using technology created for the DARPA race, MIT's AgeLab is working on its AwareCar, a black Volvo with minicameras and infrared lights above the steering wheel that monitor the driver's eye and eyelid movements. Other sensors monitor the driver's heart rate, blood pressure, and respiration to look for changes the car should react to, such as stopping the car if the driver experiences heart pain. The AwareCar also has monitors in the trunk to watch for lane drifting.
Click Here to View Full Article
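
The article does not describe the AwareCar's software, but the pattern it sketches, fusing driver-state sensors into a safety action, can be illustrated with a minimal, hypothetical Python snippet; all sensor names and thresholds below are invented for illustration:

    # Hypothetical sketch of sensor-driven safety logic; not MIT AgeLab's code.
    from dataclasses import dataclass

    @dataclass
    class DriverVitals:
        heart_rate_bpm: float
        systolic_bp: float
        respiration_rate: float   # monitored similarly; unused in this sketch
        eyes_closed_ms: float     # from the camera/infrared eyelid tracker

    def choose_action(v: DriverVitals) -> str:
        """Map one reading of the vitals to a vehicle response."""
        if v.heart_rate_bpm > 160 and v.systolic_bp > 180:
            return "controlled_stop"   # suspected cardiac distress
        if v.eyes_closed_ms > 2000:
            return "alert_driver"      # prolonged eye closure suggests drowsiness
        return "continue"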


Diamonds Offer Cool Computer Solution
ABC Science Online (Australia) (06/20/08) Salleh, Anna

University of Melbourne physicist Steven Prawer says the current generation of computers is power-hungry and inefficient, but that quantum computers made using diamonds are a practical way to achieve a significant improvement in computing power without generating more heat. Prawer says quantum computing provides a new paradigm that delivers exponential processing power through a highly efficient process that does not create heat. Quantum computers will use "qubits" that can be on, off, or in both states at the same time depending on the electrons' spin, providing extremely high processing power because messages based on different states can be processed in parallel. Prawer says many quantum computer designs rely on very low temperatures and complex infrastructure to detect the electron spin and protect it from being influenced by the outside environment, but diamonds provide a unique platform for building quantum computers that can operate at room temperature. "All of the things that you would want from a quantum computer have been demonstrated in diamond," says Prawer. Tiny manufactured diamonds with a nitrogen atom at their center can act as qubits, and the spin of the electrons in the diamond can be manipulated using microwaves or laser pulses. Although true quantum computing is still years away, Prawer says diamonds can already be used for a variety of new engineering and research devices, and he notes that the first quantum device to be commercialized was a diamond-based single-photon source used for quantum cryptography.
Click Here to View Full Article
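
The "on, off, or both" behavior Prawer describes is quantum superposition. In standard notation (not taken from the article), a single qubit's state is

    \[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \]

and a register of n qubits is described by 2^n such amplitudes at once, which is where the claimed exponential processing power comes from.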


Chill Out, Your Computer Knows What's Best for You
ICT Results (06/18/08)

Computers are continually becoming more user-friendly and increasingly capable of anticipating users' needs and acting to meet them, thanks in large part to the European Union-funded Computers in the Human Interactive Loop (CHIL) project. The technologies developed by CHIL leave humans free to concentrate on their objectives instead of having to think about the computer and how to operate it. The CHIL researchers examined ways computers can serve humans better rather than forcing humans to adapt to how computers are designed, and through their focus on human-machine interaction they aimed to create a new paradigm of machine-supported human-to-human interaction. The CHIL team developed systems that could understand the context of meetings and proactively help participants by controlling the meeting environment. Project scientific coordinator Rainer Stiefelhagen says the project made "remarkable" achievements, highlighting its advances in building a new system of audio-visual components to monitor and analyze what people are doing and how they behave in different circumstances. One spin-off project that started two months after CHIL ended will look at how police or fire officials in a crisis management room handle incoming data during an emergency, while another proposed project will assign a CHIL partner to a building company to develop a smart house.
Click Here to View Full Article


Intel's Future Vision: Cars With Eyes, Processors With Engines
Computerworld (06/12/08) Gaudin, Sharon

About 70 research projects that could shape the future were on display during Research at Intel Day at the Computer History Museum in Mountain View, Calif. The demonstrations included using electric fields to enable machines to sense what is around them, a development that would allow a robot to determine how much pressure to apply with its fingers when picking something up. With such a sense of touch, a robot would be able to pick up glasses without breaking them or help a senior get up from a couch. Intel is also building tiny specialized core engines that perform a specific function, such as encryption or video acceleration. "If you can build a chip and add a tiny, tiny engine inside the processor, then we can actually use a lot less power, save electricity and lengthen battery life," says Manny Vara, the company's technology strategist. Intel also showed off a system of cameras and multicore processor-based computers that would enable cars to determine when they are getting too close to pedestrians or other vehicles and trigger a safety precaution. The company is also optimistic about the use of speech recognition and other interfaces to improve the input capabilities of mobile Internet devices.
Click Here to View Full Article
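
Intel's demos are not specified in detail, so the following is only a generic sketch of force-limited grasping, the capability the field-sensing work is meant to enable; read_force and step_motor are hypothetical stand-ins for real sensor and actuator interfaces:

    # Generic force-limited grasp loop; not Intel's implementation.
    def close_gripper(read_force, step_motor, target_newtons=2.0):
        """Tighten the fingers until the measured force reaches the target."""
        while read_force() < target_newtons:
            step_motor(1)   # close by one small increment, then re-measure
        # Hold here: enough force to lift a glass, not enough to crush it.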


IBM Launches Green Supercomputer in Sweden
InternetNews.com (06/17/08) Patrizio, Andy

IBM recently announced that it and Umea University in Sweden have installed the most powerful Windows-based supercomputer in Europe. The Akka supercomputer's raw performance is otherwise unremarkable, but two aspects stand out: it can run Windows, and it is extremely green, using about 40 percent less energy than a regular supercomputer of the same size, according to Andreas Ryden, Nordic sales manager for IBM's high-performance computing business. The system is based on 672 IBM HS21XM blades running Intel's 2.5 GHz L5420 Xeon processor, which draws only 50 watts. The mix of processors used in the supercomputer also allows researchers to use other technologies, says Ryden. The computer will be used by researchers from all over Sweden for projects in space science, materials science, bioinformatics, theoretical physics and chemistry, engineering science, basic research in parallel algorithms, library software, and middleware for grid infrastructure.
Click Here to View Full Article
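
The abstract gives the blade count and clock speed; assuming each HS21XM blade carries two quad-core L5420s (the standard configuration, not stated in the abstract) and the four double-precision floating-point operations per cycle typical of that Xeon generation, Akka's theoretical peak works out to roughly

    \[ 672 \times 2 \times 4 \times 2.5\,\mathrm{GHz} \times 4\,\mathrm{FLOP/cycle} \approx 53.8\ \mathrm{TFLOPS}. \]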


BGU Researchers Develop New Gesture Interface Device
Ben-Gurion University of the Negev (Israel) (06/18/08)

Ben-Gurion University of the Negev (BGU) researchers have developed a hand gesture recognition system that allows doctors to manipulate digital images during medical procedures using motion alone, eliminating the need to touch a screen, keyboard, or mouse, which compromises sterility and could spread infection. Helman Stern of the BGU Department of Industrial Engineering and Management says the Gestix gesture recognition system functions in two stages: an initial calibration stage, in which the machine learns to recognize the surgeon's hand gestures, and a second stage, in which surgeons learn and use eight navigation gestures, rapidly moving their hands in and out of a "neutral area." Gestix users can zoom in and out by moving a hand clockwise or counterclockwise, and to prevent unintentional signals from being read, users can enter a "sleep" mode by dropping their hands. The gestures are read by a camera positioned above a large flat-screen monitor, and the system runs on an Intel Pentium processor with a Matrox Standard II video-capture device. Stern and Yael Edan, another principal investigator, have used hand gesture recognition as part of an interface to evaluate how different aspects of interface design affect performance in a variety of telerobotic and teleoperated systems, and ongoing research aims to expand this work to include additional control modes, such as voice, to create a multimodal telerobotic control system.
Click Here to View Full Article
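
The two modes described above, an active state driven by navigation gestures and a "sleep" state entered by dropping the hands, amount to a small state machine. A minimal Python sketch, with gesture names invented for illustration and the recognition step itself omitted:

    # Toy state machine for the interaction style described; not BGU's code.
    NEUTRAL, SLEEP = "neutral", "sleep"

    def handle(state, gesture):
        """Return (new_state, action) for one recognized gesture."""
        if gesture == "hands_dropped":             # enter sleep: ignore input
            return SLEEP, None
        if state == SLEEP:                         # only waking is allowed
            return (NEUTRAL, None) if gesture == "hand_raised" else (SLEEP, None)
        if gesture == "clockwise":
            return NEUTRAL, "zoom_in"
        if gesture == "counterclockwise":
            return NEUTRAL, "zoom_out"
        if gesture in ("left", "right", "up", "down"):
            return NEUTRAL, "pan_" + gesture
        return NEUTRAL, None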


Exciton-Based Circuits Eliminate a 'Speed Trap' Between Computing and Communication Signals
University of California, San Diego (06/18/08)

University of California, San Diego physicists have demonstrated that excitons, particles that emit a flash of light as they decay, could be used for a new method of computing that is better suited to fast communication. Integrated circuits currently use electrons to send the signals needed for computation, but almost all communications devices use light, or photons, to send signals. The need to convert signals from electrons to photons limits the speed of electronic devices. UCSD physics professor Leonid Butov and colleagues have built several exciton-based transistors that could be used in a new type of computer. "Our transistors process signals using excitons, which like electrons can be controlled with electrical voltages but unlike electrons transform into photons at the output of the circuit," says Butov. "This direct coupling of excitons to photons bridges a gap between computing and communications." Excitons are created when light strikes a semiconductor such as gallium arsenide, separating a negatively charged electron from a positively charged "hole." If the pair remains linked, it forms an exciton, and when the electron recombines with the hole, the exciton decays and releases its energy as a flash of light. The researchers used a special type of exciton in which the electron and its hole are confined to different "quantum wells" separated by several nanometers, creating an opportunity to control the flow of excitons with voltages applied to electrodes. The voltage gates create an energy barrier that can halt the movement of excitons or allow them to flow; removing the barrier allows the excitons to travel to the transistor output and transform into light, which could be fed directly into a communication circuit, eliminating the need to convert the signal.
Click Here to View Full Article
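
In standard semiconductor terms (not taken from the article), the flash released when an exciton decays carries roughly the band-gap energy less the exciton's binding energy,

    \[ E_{\mathrm{photon}} \approx E_{\mathrm{gap}} - E_B, \]

which for gallium arsenide, with a gap near 1.42 eV, puts the output in the near-infrared range commonly used by optical hardware.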


Google Sponsors Scholarships for Grace Hopper Celebration of Women in Computing Conference
Business Wire (06/18/08)

The Anita Borg Institute for Women and Technology recently announced that Google will fund more than 50 sponsorships for women to attend the 8th Grace Hopper Celebration of Women in Computing Conference. The conference offers students, particularly female computer science students, a unique opportunity to meet and network with other computer science students and professionals who share a passion for their work and for increasing diversity in the field. The scholarships are available through a variety of programs, including the Google Women of Color Scholarship; the Google Global Community Scholarship, awarded to a group of international computer science students studying at universities outside the United States; the Change Agent awards, given to accomplished technical women from emerging countries; and the Google Anita Borg Scholarship, which provides recipients with $10,000 in addition to registration and travel to the Grace Hopper Celebration. Academic scholarships for computer science students, along with travel scholarships that allow students to attend conferences like the Celebration of Women in Computing, support the goal of retaining and increasing the number of women and other underrepresented groups in computer science.
Click Here to View Full Article


Standardization of Rule Based Technologies Moving Forward to Enable the Next Generation Web
Pressemitteilung Web Service (06/11/08)

RuleML is following up its first industry-oriented gathering last year with the International RuleML Symposium on Rule Interchange and Applications (RuleML-2008). This year's event is scheduled for Oct. 30-31 at the Buena Vista Palace in the Walt Disney World Resort in Orlando, Fla., and will give business and technology professionals, researchers, and standardization representatives another opportunity to focus on the growing performance and applicability of rule technologies. The international umbrella organization for Web rule research, standardization, and adoption will make every rule-related topic a focus of the symposium, from the engineering and use of rule-based systems, the integration of rules with other Web technologies, and languages and frameworks for rule representation and processing, to rule-related Web standards, the interoperation of rule-based systems, and the incorporation of rule technology into enterprise architectures. RuleML-2008 will offer peer-reviewed paper presentations, invited talks, software demonstrations, and social events. A challenge session will give participants the opportunity to show their commercial and open source tools, use cases, and applications, and to compete for best-application prizes. The symposium will be co-located with the Business Rules Forum to bring greater attention to the connection between rule and business-logic technologies. ACM is among the sponsors and partner organizations supporting RuleML-2008.
Click Here to View Full Article
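
As a concrete illustration of what "rule-based" means here, the sketch below evaluates simple if-then rules by naive forward chaining until no new facts appear; this is generic rule-engine logic written in Python, not RuleML syntax or any RuleML-2008 tool:

    def forward_chain(facts, rules):
        """rules: list of (body_relations, head_relation) over one entity."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                for entity in {e for _, e in derived}:
                    if all((rel, entity) in derived for rel in body) \
                            and (head, entity) not in derived:
                        derived.add((head, entity))   # fire the rule
                        changed = True
        return derived

    facts = {("customer", "alice"), ("premium", "alice")}
    rules = [(["customer", "premium"], "discount")]   # IF both THEN discount
    print(forward_chain(facts, rules))                # adds ("discount", "alice")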


Microsoft Researcher Tackles Forest Data to Fight Climate Change
IT News Australia (06/13/08) Tay, Liz

The computational challenges of environmental research have become a focus of Microsoft Computational Science Research, a new unit that has brought ecologists, biologists, neuroscientists, mathematicians, and computer scientists together to study climate change and other issues. "These computational challenges are huge, and large software companies are one of the only places where we're going to find the knowledge and resources to address them," says ecologist Drew Purves. A research scientist at Microsoft Research Cambridge, Purves believes new computer models are needed to better predict environmental change. Purves, ecologist Stephen Pacala at Princeton University, and research colleagues in Madrid, Spain, are the authors of two research papers that recently appeared in the international journal Science. They favor new algorithms that would perform large numbers of calculations on ecological data sets, and would account for biodiversity. "Of course there are lots of these algorithms used every day in all kinds of fields [such as] actuaries or search engines, but we have developed some ideas for new algorithms that we think will be useful for ecology and, hopefully, in other areas," says Purves.
Click Here to View Full Article


Right on Cue
The Engineer (06/15/08) Vol. 293, No. 7749, P. 7; Baker, Berenice

Researchers at Portsmouth University are using artificial intelligence software developed by Neuron Systems to identify sounds that might indicate a crime in progress and trigger CCTV cameras to swing toward the source. The three-year, EPSRC-sponsored project aims to adapt the software, which currently recognizes visual patterns, to sound cues. David Brown of Portsmouth's Institute of Industrial Research says the system would employ fuzzy logic to identify a type of noise, noting that "we are looking for templates of sound--riser response, shapes of sound." Among the project's major challenges is tuning the system to respond to anomalous sounds in real time, on a scale comparable to human response times; the software will also work in tandem with CCTV-based human motion analysis developed by Brown's institute. "If the camera is pointing in a direction because an aggressive sound has been identified, the motion software can identify whether a person is punching another or running away from the scene," Brown remarks. The project could also help address the challenge of sifting through hours of security video to identify a particular action or object. By the project's conclusion, the team hopes to have produced algorithms that can be used within a commercial software suite, with each generation of algorithms growing in sophistication as the project progresses.
Click Here to View Full Article
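
The abstract does not detail the fuzzy-logic design, so the following is only a toy illustration of scoring a sound against a template using fuzzy membership functions; the features, thresholds, and category are all invented:

    def tri(x, lo, peak, hi):
        """Triangular fuzzy membership in [0, 1]."""
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

    def aggression_score(loudness_db, rise_ms):
        loud = tri(loudness_db, 60, 90, 120)   # loud...
        sharp = tri(rise_ms, 0, 10, 80)        # ...with a sharp attack
        return min(loud, sharp)                # fuzzy AND of the two cues

    if aggression_score(95, 12) > 0.5:
        print("cue camera toward the sound source")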


Michigan Tech Physicist Models Single Molecular Switch
Michigan Technological University (06/16/08) Goodrich, Marcia

A team of Michigan Technological University researchers led by physicist Ranjit Pati has developed a model to explain the mechanism behind the single molecular switch, widely considered to be computing's Holy Grail. If worked out experimentally, the model could help explode Moore's Law and revolutionize computing technology. The fabled molecular switch would essentially involve replacing the current generation of transistors with molecules, allowing 1 trillion switches to fit on a square-centimeter chip. In 1999, a team of researchers at Yale University published a description of the first molecular switch, but scientists have been unable to replicate their discovery or explain how it worked; Pati believes he and his team have discovered the mechanism behind the switch. Applying quantum physics, Pati and his group developed a computer model of an organometallic molecule firmly bound by two gold electrodes. As the voltage was raised, the current increased with it until, at a minuscule 142 microamps, it suddenly, and counterintuitively, dropped, a phenomenon known as negative differential resistance (NDR). Up until the 142-microamp switch, the molecule's cloud of electrons had been whizzing about the nuclei in equilibrium, much as planets orbit the sun. That state fell apart under the higher voltage, however, and the electrons were forced into a different equilibrium, a process known as a "quantum phase transition." Pati says he never expected this result. A molecule capable of exhibiting two different phases when subjected to electric fields could function as a switch, with one phase acting as the "zero" and the other as the "one" that form the foundation of digital electronics. Pati and other scientists are now working to test the model experimentally.
Click Here to View Full Article
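
Negative differential resistance simply names the regime in which current falls as voltage rises, i.e., the region of the current-voltage curve where

    \[ \frac{dI}{dV} < 0, \]

which is the counterintuitive drop the model reproduces at 142 microamps.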


Researchers Develop Ultra Low-Cost Plastic Memory
University of Groningen (06/16/08)

Zernike Institute for Advanced Materials researchers at the University of Groningen in the Netherlands have developed a technology for a plastic ferro-electric diode, similar to the technology used in flash memory chips, which the researchers believe will lead to a breakthrough in ultra low-cost plastic memory material. Like flash, plastic memory can retain data without being connected to a power source, and the researchers expect the new technology to lead to products comparable with, and possibly even more significant than, flash memory. In 2005, a joint team of researchers from the University of Groningen and Philips successfully integrated a ferro-electric polymer into a plastic transistor. The ferro-electric material can operate as a non-volatile memory because it can be switched between two different stable states by a voltage pulse; the disadvantage of such a transistor is that three connections are needed for programming and reading the memory, making fabrication more complicated. The challenge the researchers faced was to create a memory component with only two connections: a diode. The breakthrough rests on a radically new concept that, instead of stacking a layer of semiconducting material on a layer of ferro-electric material, uses a mixture of the two. The ferro-electric characteristic of the mixture is then used to direct current through the semiconducting part of the mixture. The new memory diode can be programmed quickly, retains data for long periods, and operates at room temperature. Additionally, the voltages needed for programming are low enough for the diode to be used in commercial applications, and the material can be manufactured at low cost using large-scale industrial production techniques.
Click Here to View Full Article


New Intrusion Tolerance Software Fortifies Server Security
George Mason University (06/16/08)

Researchers at George Mason University are taking a different approach to intrusion detection and prevention. Arun Sood, professor of computer science and director of the Laboratory of Interdisciplinary Computer Science, and Yin Huang, senior research scientist in the Center for Secure Information Systems, accept that someone will eventually trespass on a computer server, but believe that limiting a server's time of continuous connection to the Internet can serve as an additional layer of defense. Sood and Huang have developed Self Cleansing Intrusion Tolerance (SCIT), which uses virtualization technology to create duplicate servers. The idea is to periodically cleanse an online server and restore it to a known clean state, regardless of whether an intrusion has been detected, with cleansings occurring at sub-minute intervals. "SCIT interrupts the flow of data regularly and automatically, and the data ex-filtration process is interrupted every cleansing cycle," says Sood. "Thus, SCIT, in partnership with intrusion detection systems, limits the volume of data that can be stolen."
Click Here to View Full Article
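
Conceptually, SCIT-style rotation keeps each virtual server online only briefly before wiping it, intrusion or not. A schematic Python loop (restore_from_clean_image, bring_online, and take_offline are placeholder methods, not GMU's interfaces):

    import time

    def rotate_forever(pool, exposure_seconds=45):
        """Round-robin a pool of virtual servers; cleanse each after its shift."""
        live = pool[0]
        live.restore_from_clean_image()
        live.bring_online()
        i = 0
        while True:
            time.sleep(exposure_seconds)      # sub-minute exposure window
            nxt = pool[(i + 1) % len(pool)]
            nxt.restore_from_clean_image()    # known-good state, every cycle
            nxt.bring_online()                # overlap so service never drops
            live.take_offline()               # any compromise ends here
            live, i = nxt, i + 1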


Robot Asimo Can Understand Three Voices at Once
NewScientistTech (06/10/08) Barras, Colin

Researchers in Japan have developed new software that enables the advanced humanoid robot Asimo to understand three voices at once. Hiroshi Okuno at Kyoto University and Kazuhiro Nakadai at the Honda Research Institute in Saitama were inspired by the legend of Prince Shotoku, who is said to have had the ability to listen to the petitions of 10 people at the same time. They note that the "Prince Shotoku Effect" is very different from the "cocktail party effect," which involves focusing on a single voice while one can hear other voices. The new software, HARK, makes use of an array of eight microphones to determine where each voice is coming from and isolate it from other sound sources, gauges the reliability of its extracted individual voice, and uses speech-recognition software to decode it. Asimo has used HARK to judge rock-paper-scissors contests, and has had an accuracy rate of 70 percent to 80 percent. Okuno and Nakadai plan to increase the number of voices and complexity of sentences that HARK can understand. They presented their research at the 2008 IEEE International Conference on Robotics and Automation in Pasadena, Calif., in May.
Click Here to View Full Article
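
HARK's eight-microphone pipeline is far more sophisticated than the abstract can convey, but the core idea of locating a voice from arrival-time differences can be shown with just two microphones and a cross-correlation (a toy sketch, not HARK's algorithm):

    import numpy as np

    def bearing_degrees(left, right, fs=16000, mic_dist_m=0.2, c=343.0):
        """Estimate a source's bearing from the inter-microphone delay."""
        corr = np.correlate(left, right, mode="full")
        lag = int(np.argmax(corr)) - (len(right) - 1)   # delay in samples
        s = np.clip((lag / fs) * c / mic_dist_m, -1.0, 1.0)
        return float(np.degrees(np.arcsin(s)))          # 0 deg = straight ahead

Once each source's direction is known, a beamformer can emphasize one direction and suppress the others, which is the separation step that feeds HARK's speech recognizer.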


As Chips Go Multicore, Researchers Strive to Parallelize More Apps
SearchDataCenter.com (06/11/08) Botelho, Bridget

Stanford University and some of the largest organizations in the computing industry recently announced the creation of the Pervasive Parallelism Lab (PPL), intended to find a way for software developers to easily parallelize applications for multicore processing. Sun Microsystems, Advanced Micro Devices, NVIDIA, IBM, Hewlett-Packard, and Intel will support Stanford computer scientists and electrical engineers in the research and development effort. The PPL has a budget of $6 million over the next three years to research and develop a top-to-bottom parallel computing system, from hardware to user-friendly programming languages, that will allow developers to exploit parallelism automatically. In a separate effort, Microsoft and Intel expect to invest a combined $20 million over the next five years to fund similar research at the University of California, Berkeley, and the University of Illinois at Urbana-Champaign. Until recently, parallel processing was too expensive for anything but supercomputers, where multithreading is common, and that limited use means few software programmers have learned to design software that uses parallelism to exploit multiple cores, according to Stanford's PPL research director Kunle Olukotun. "We are working with application developers to provide solutions for their applications," Olukotun says. "After we figure out how to parallelize specific applications, we will work on doing it in a more general context. My hope is that our efforts will pave the way for programmers to create software for applications such as artificial intelligence and robotics, business data analysis, virtual worlds and gaming."
Click Here to View Full Article
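
The kind of change the PPL wants to make automatic looks like this today: the programmer must notice that iterations are independent and split them across cores by hand. A minimal Python example (the simulate function is a placeholder for any independent task):

    from concurrent.futures import ProcessPoolExecutor

    def simulate(seed):   # any task whose iterations do not depend on each other
        return sum((seed * i) % 7 for i in range(1_000_000))

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:           # one worker per core
            results = list(pool.map(simulate, range(8)))
        print(sum(results))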


Melding Mind and Machine
The Institute (06/06/08) Vol. 32, No. 2, P. 5; Riezenman, Michael J.

Giving paralysis victims mobility and a means to communicate is one of the goals of research into brain-machine interfacing (BMI), in which an artificial system senses and analyzes neural signals, then translates those signals into movement. Ideally the sensory apparatus should be noninvasive, but IEEE members point out that such an approach can yield a poor signal-to-noise ratio. Conversely, while surgical implantation of the interface may produce more accurate readings, the risk of infection and tissue trauma is a clear concern. Efforts to refine both types of BMI are showing progress, and still another avenue of research involves an electrocorticographic method that positions a small electrode array on the cerebral cortex, yielding signals that suffer far less attenuation than EEG signals while offering a higher signal-to-noise ratio. Another significant BMI issue is keeping an implantable device's power consumption to a minimum so that battery life is maximized and heating of the cerebral tissue is negligible. Minimizing the bandwidth occupied by the data transmitted from the implanted device to the outside world is one possible approach, and a University of Florida professor devised a bandwidth-saving scheme that samples faster when the signal amplitude is large and slower when it is small.
Click Here to View Full Article
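
The bandwidth-saving scheme described above can be sketched as amplitude-dependent sampling; the rates and threshold below are invented, and read_signal stands in for the real analog front end:

    # Illustrative amplitude-dependent sampler; not the Florida scheme itself.
    def adaptive_samples(read_signal, fast_hz=2000, slow_hz=200, thresh=0.1):
        """Yield (time, value), sampling densely only when amplitude is large."""
        t = 0.0
        while True:
            v = read_signal(t)
            yield t, v
            # large excursions get the fast rate; quiet stretches the slow one
            t += 1.0 / (fast_hz if abs(v) > thresh else slow_hz)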


To submit feedback about ACM TechNews, contact: [email protected]

To be removed from future issues of TechNews, please submit the email address at which you receive TechNews alerts at:
http://optout.acm.org/listserv_index.cfm?ln=technews

To re-subscribe in the future, enter your email address at:
http://signup.acm.org/listserv_index.cfm?ln=technews

As an alternative, log in at myacm.acm.org with your ACM Web Account username and password, and follow the "Listservs" link to unsubscribe or to change the email where we should send future issues.


News Abstracts © 2008 Information, Inc.


© 2008 ACM, Inc. All rights reserved. ACM Privacy Policy.