The Turing Award Honors Frances Allen
BusinessWeek (02/23/07) Hamm, Steve
IBM Fellow Emerita Frances Allen became the first female to receive the
prestigious ACM A.M. Turing Award for her pioneering work in the "theory
and practice of optimizing compiler techniques." In the 1970s Allen helped
promote higher-level languages over machine languages and was responsible
for ways to improve compilers so programs could run on different machines.
"She's really the mother of customer-oriented computing," says IBM VP
Robert Morris. "She was an early proponent and practitioner of what has
become our innovation model. There was a time when we thought of
innovation as being just associated with invention. Now we see it as a
path from the invention through to where it has an impact on how people
live their lives." Allen sees the award as a way to further her efforts to
bring more women into computing. She says, "I have worked hard for women
to be recognized, and I'll use this as a platform to get more attention to
the role of women in computing." One of ACM's goals is to get more women
interested in computer science, as only 26 percent of U.S. IT workers are
female, down from 33 percent in 1990, and only 15 percent of undergraduate
CS degrees from major universities go to women. "It's essential for women
to participate," says Allen. "A diversity of people can bring a much more
creative environment and better results." For more information on the ACM
A.M. Turing Award, see
http://campus.acm.org/public/pressroom/press_releases/2_2007/turing2006.cfm
Panel Cites Voter Error, Not Software, in Loss of
Votes
New York Times (02/24/07) P. A9; Drew, Christopher
A team of computer experts from several universities has announced its
unanimous decision that there is no evidence supporting the argument that
voting machine malfunction was to blame for the undervote in the 2006
Sarasota County Congressional race. Possible explanations offered by the
team, which was led by Florida State University computer science professor
Alec Yasinsac and University of California, Berkeley professor David Wagner,
were that voters could have touched the screen twice, erasing their own
vote, or could have missed the race entirely, due to poor ballot design.
The race did not have the colorful heading that others did, was sandwiched
between long lists of candidates in the Senate and governor's races, and
was squeezed in at the top of a screen. Wagner said, "I'm persuaded that
this wasn't caused by machine failure." However, in the days following the
election, a local paper reported several voters complaining that although
an "x" appeared in the box for Christine Jennings when they touched it, the
"x" was gone when they reached the verification screen at the end of the
ballot. The team's report did indicate that aging hardware could have been
responsible for isolated problems, but such problems would have also
impacted other races. This investigation marks the first time software
code has been audited to resolve an election, but some computer experts are
not satisfied. "The study claims to have ruled out reliability problems as
a cause of the undervotes, but their evidence on this point is weak, and I
think the jury is still out on whether voting machine malfunctions could be
a significant cause of the undervotes," says Princeton's Ed Felten.
IT Heavyweights Plot the Green Grid
InfoWorld (02/23/07) Samson, Ted
Eleven of the biggest computing companies have formed the non-profit Green
Grid, with the intention of dealing with the impending data center power
crisis. One of the largest problems the industry faces is a lack of power
consumption metrics for data centers, so the group plans to develop
technology standards, metrics, and best practices for lowering computing's
power consumption. Currently, the organization is asking more companies
and government agencies to get involved. "No one player owns the kingdom,"
says AMD's Bruce Shaw. "It's not just a processor problem. It's not just
a server problem. It's not just a memory problem. Everyone in the
ecosystem needs to be involved." Three white papers have been released by
the group: "Green Grid Opportunity," which explains the need for standards
for energy-efficiency and practices and cites research on several
energy-use topics; "Guidelines for Energy-Efficient Datacenters," which
discusses best practices for creating more energy-efficient data centers
without going into much detail, but does go over system design, cooling,
and virtualization; and "Green Grid Metrics: Describing Datacenter Power
Efficiency," the most technical paper, which expresses support for the
Power Usage Effectiveness (PUE) metric, which measures the ratio of "Total
Facility Power" to "IT Equipment Power"; the Datacenter Efficiency (DCE)
metric, which reverses the ratio; and the Datacenter Performance Efficiency
(DCPE), which measures the ratio of "Useful Work" to "Total Facility Work"
and is described as a natural evolution of the two previous metrics. The
paper states, "In effect, [the DCPE] calculation defines the datacenter as
a black box--power goes into the box, heat comes out, data goes into and
out of the black box, and a net amount of useful work is done by the black
box."
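Since PUE and DCE are reciprocal ratios, a short sketch makes the relationship concrete; the power figures below are made up for a hypothetical facility, not taken from the white paper:

```python
def pue(total_facility_power_kw, it_equipment_power_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_power_kw / it_equipment_power_kw

def dce(total_facility_power_kw, it_equipment_power_kw):
    """Datacenter Efficiency: the reciprocal of PUE."""
    return it_equipment_power_kw / total_facility_power_kw

# Hypothetical facility: 1,500 kW drawn at the meter, 1,000 kW reaching IT gear.
print(pue(1500, 1000))  # 1.5 -- every watt of IT load costs 0.5 W of overhead
print(dce(1500, 1000))  # ~0.667 -- two-thirds of facility power does IT work
```

A lower PUE (and correspondingly higher DCE) means less power is lost to cooling, power conversion, and other non-IT overhead.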
The Future of Search: Reaching for a Piece of Google's
Pie
TechNewsWorld (02/25/07) Morphy, Erika
Researchers and entrepreneurs are confident that Google's ranking as the
No. 1 search engine will not last as other companies develop competing
technology. "Many of these firms are focused on solving niche or smaller
or industry-specific issues," notes SiteSpect President Erik Hansen. "They
are not necessarily big enough for Google to focus on now, but perhaps down
the road they will be developed for general use." One unique technology is
the WebSifter system, which lets individuals and companies specify a
conceptual taxonomy tree relating to their search needs and/or their line
of work. Search concepts are first expanded using synonyms, and then sent
to Google and other search engines as an alternative to the direct
transmission of keyword requests to Google; WebSifter's patented ranking
algorithm ranks and displays the aggregated results, and the user supplies
feedback so that WebSifter can adapt search results to the user's
preferences. "Storyteller" is a "creative discovery" search engine devised
by Virginia Tech researchers that finds links between pieces of information
that appear dissimilar at first glance, and the application has been employed by
Virginia Tech biochemistry faculty members Malcolm Potts and Richard Helm
to discover a unique protein related to chemical stress via an abstract
search of 140,000 publications about yeast. The university reports that
this research was detailed in the proceedings of the ACM Special Interest
Group on Knowledge Discovery and Data Mining's (SIGKDD) International
Conference on Knowledge Discovery and Data Mining in August 2006. "As more
and more resources are put on the Web, people will be
looking not only for that one piece of information, but also a whole body
of information that can be used to create knowledge," predicts WebSifter
creator and George Mason University professor Larry Kerschberg.
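As described, WebSifter's pipeline has three stages: synonym expansion of taxonomy concepts, fan-out of queries to multiple engines, and ranked aggregation of the results. A minimal sketch of the first and third stages follows; the synonym table, function names, and scoring rule are all illustrative assumptions, not WebSifter's patented algorithm:

```python
# Toy synonym table standing in for the taxonomy-driven expansion step.
SYNONYMS = {"compiler": ["translator"], "optimize": ["tune", "speed up"]}

def expand_terms(terms):
    """Grow each taxonomy term into itself plus its synonyms."""
    return {t: [t] + SYNONYMS.get(t, []) for t in terms}

def aggregate_and_rank(result_lists):
    """Merge per-engine result lists; score a URL by how many engines
    returned it and how highly each engine ranked it."""
    scores = {}
    for results in result_lists:
        for rank, url in enumerate(results):
            scores[url] = scores.get(url, 0) + 1.0 / (rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

expanded = expand_terms(["compiler", "optimize"])
merged = aggregate_and_rank([
    ["a.com", "b.com", "c.com"],   # hypothetical engine 1
    ["b.com", "d.com", "a.com"],   # hypothetical engine 2
])
print(merged[0])  # "b.com" -- ranked high by both engines
```

The user-feedback loop the article mentions would then adjust the scoring weights over time.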
Surveillance Cameras Get Smarter
Associated Press (02/25/07) Manning, Stephen
The next generation of surveillance cameras is incorporating "intelligent
video" technology in order not only to observe but also to interpret the images
they capture. Cameras used in Chicago and Washington, D.C., can detect
gunshots and call police, while cameras deployed in Baltimore can play a
recorded message and take pictures of illegal dumpers or graffiti artists.
Intelligent video technology can analyze a person's gait, for example, to
determine whether he or she may be concealing a weapon, and it can even
recognize faces. "If you think of the camera as your eye, we are using
computer programs as your brain," said the Army Research Lab's Patty
Gillespie. University of Maryland engineering professor Rama Chellappa and
a team of graduate students have created a system that can both watch for
changes in an environment based on what it is programmed to see as
"normal," and track people who cross established perimeters. The system
places a box around the suspicious person or object on a computer display
and alerts security to evaluate the threat. In one demonstration, the
system recognized a suspicious person who got out of his car in a garage
and went from car to car, looking in the windows, and placed a box around
him as he moved. However, before intelligent video technology makes it to
the market, the technology needs to be improved, and liability issues must
be worked out. Ultimately, Chellappa wants to develop systems that can see
what someone might be concealing and actually stop some threats.
Design on Diagonal Path in Pursuit of a Faster
Chip
New York Times (02/26/07) P. C5; Markoff, John
Cadence Design Systems believes computer chips designed using its new
tools can bring about a significant reduction in power consumption as a
result of diagonal wiring that decreases the total wire length needed to
run over the surface of the chip. This design approach, known as X
Architecture, is said to address problems of increasing complexity while
improving the speed, efficiency, and performance of the next generation of
chips that are beginning to emerge. "The math is clear--if you can go
diagonally, the wires will be 30 percent shorter," says Cadence's Aki
Fujimura. Some in the industry say that such innovation will become
necessary as 65-nanometer chip manufacturing becomes the standard. "The
risks associated with getting designs out are going up dramatically," and
new ideas that can reduce risk will be sought out, says IBS President
Handel Jones. However, Cadence competitor Synopsys does not agree that X
Architecture will make such an impact. "It's not a new concept--it's been
around since designers used litho paper and cut it by hand," said Synopsys'
Steve Meir. Synopsys also claims that its test showed a lower level of
performance and efficiency than Cadence claims for X Architecture.
Companies that have begun using X Architecture have reported savings in
power consumption, but Cadence admits it has had trouble convincing the
industry to convert to the new technology.
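Fujimura's 30 percent figure follows from plane geometry: with only horizontal and vertical (Manhattan) segments, a wire spanning offsets dx and dy must run dx + dy, while allowing 45-degree segments cuts the worst case (dx = dy) to sqrt(2) * dx, about 29.3 percent shorter. A quick check of the geometry, purely illustrative rather than a model of Cadence's router:

```python
import math

def manhattan_length(dx, dy):
    """Wire length with only horizontal/vertical (Manhattan) routing."""
    return abs(dx) + abs(dy)

def diagonal_length(dx, dy):
    """Wire length when 45-degree segments are also allowed: run the
    diagonal for the shorter leg, then straight for the remainder."""
    dx, dy = abs(dx), abs(dy)
    return math.sqrt(2) * min(dx, dy) + abs(dx - dy)

# Worst case for Manhattan routing: endpoints offset equally in x and y.
m = manhattan_length(10, 10)   # 20.0
d = diagonal_length(10, 10)    # ~14.14
print(1 - d / m)               # ~0.293 -- roughly the 30 percent cited
```

Purely horizontal or vertical connections see no benefit; the savings grow as a connection approaches a 45-degree offset.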
Monitoring With Minimum Power
USC Viterbi School of Engineering (02/15/07) Mankin, Eric
Researchers at the University of Southern California Viterbi School of
Engineering's Information Sciences Institute (ISI) have developed a
communication protocol for wireless sensors that improves on the energy
efficiency of previous models by a factor of 10. Sensor networks, or
sensornets, are already being implemented in wildlife parks and other
inaccessible or unwired areas, and are being considered for use in
industrial settings. Three years of research went into special Media
Access Control (MAC) rules known as the SCP-MAC protocol, which governs
the activities of battery-operated sensor units. The
protocol uses both "low power listening," in which units turn on for very
short periods, and "scheduled channel polling," which schedules and
synchronizes the listening. "The basic approach of SCP-MAC is to let units
alternate periods of sleeping with very brief periods of listening," says
lead researcher Wei Ye. "To minimize the listening cost, SCP-MAC utilizes
'low-power listening,' which detects channel activity very quickly."
Earlier protocols needed each unit to be active for about 29 to 45 minutes
of each day of sensornet activity, but SCP-MAC brought this number below
two minutes per day. "It further reduces the transmission cost by
synchronizing the listening schedules of nodes, so that a unit can wake up
its neighbors by transmitting a short tone," says Ye. The research was
presented in November at the Fourth ACM SenSys Conference in Boulder,
Colo.
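The improvement is easy to express as a radio duty cycle, the fraction of each day a node's radio is awake; the arithmetic below simply restates the article's figures:

```python
def duty_cycle(active_minutes_per_day):
    """Fraction of the day a sensor node's radio is on."""
    return active_minutes_per_day / (24 * 60)

# Figures from the article: earlier MAC protocols kept each unit active
# 29 to 45 minutes per day; scheduled channel polling brings it under 2.
old_low = duty_cycle(29)   # ~2.0 percent of the day
new = duty_cycle(2)        # ~0.14 percent of the day
print(old_low / new)       # ~14.5x, consistent with the order-of-magnitude gain
```

Because idle listening dominates a sensor node's energy budget, cutting awake time this way translates almost directly into battery lifetime.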
Emotion Robots Learn From People
BBC News (02/23/07)
British researchers are leading a team of roboticists, developmental
psychologists, and neuroscientists in a three-year, 2.3 million-euro
European project to create robots that are capable of emotional interaction
with people. The goal of the project is to construct robots whose behavior
can be shaped by sensory input from the people they are engaging with. "We
are most interested in programming and developing behavioral capabilities,
particularly in social and emotional interactions with humans," notes
project coordinator Dr. Lola Canamero. The hardware component of the
robots will be very simple, though Canamero says some of the machines will
boast specially-built expressive heads. The robots will receive feedback
from cameras, tactile sensors, audio, and proximity sensors, while
artificial neural networks are being employed because they can help the
machine adjust to changing inputs and spot patterns in movement, voice,
behavior, and so on. "The physical proximity between human and robot, and
the frequency of human contact--through those things we hope to detect the
emotional states we need," explained Canamero. She said the robots will be
designed to learn emotional cues much like a human infant does.
SIGDOC 2007 CFP
Kairosnews (02/21/07)
The ACM Special Interest Group for Design of Communication has issued a
call for papers for the 25th ACM International Conference on Design of
Communication. The ACM SIGDOC 2007 conference will focus on communication
processes, methods, and technologies, and on designing printed documents,
online text, hypermedia applications, and other communication artifacts.
More specifically, researchers and practitioners should address
multidisciplinary approaches to information design and information
architecture, processes for developing information, genre and discourse
analysis of information design and delivery, or agile documentation
processes for information design. Other topics include the methods,
methodologies, and approaches to participatory and user-centered design;
and the design, development, and impact of personalized information
systems, time management systems, and project management systems.
Participants have until June 1, 2007, to submit research papers, workshop
proposals, and experience reports, and notifications of acceptance will be
made by Aug. 1. A month later, final versions will be due. The ACM SIGDOC
2007 conference is scheduled for Oct. 22-24, in El Paso, Texas.
Enter 'Junior': Stanford Team's Next Generation Robot
Joins DARPA Urban Challenge
Stanford Report (02/23/07) Orenstein, David
Although robotic vehicles in the 2005 DARPA Grand Challenge could maneuver
simply by knowing what was around them and avoiding obstacles, those in the
DARPA Urban Challenge, to be held Nov. 3, 2007, must know exactly what is
going on around them all the time. The Urban Challenge will involve
simulated traffic and traffic signals, whereas the Grand Challenge was
simply a race across the desert. The Stanford team has equipped Junior, a
Volkswagen Passat station wagon, with a range-finding laser array that can
spin 360 degrees to provide nearly real-time 3D feedback, six video
cameras, radar, and GPS and inertial navigation hardware. Junior's brain,
a Core 2 Duo processor chip, has about four times the power of Stanley's,
Stanford's successful entry in the Grand Challenge. Software that handles
perception, mapping, and planning gives Junior machine learning abilities
that improve its driving and can translate sensor data into an
understanding of what is happening around it. This software has been in
the testing stage since the beginning of the year. "You could claim that
moving from pixelated perception, where the robot looks at sensor data, to
understanding and predicting the environment is a Holy Grail of artificial
intelligence," says Stanford project leader Sebastian Thrun.
Robot Swarms 'Evolve' Effective Communication
New Scientist (02/23/07) Simonite, Tom
The robotics community is excited about research conducted in Switzerland
that has shown communication abilities evolving artificially in robots.
Such evolution in robots could result in more sophisticated bots, says Noel
Sharkey, a robotics researcher at Sheffield University in the United
Kingdom. In Switzerland, roboticists Dario Floreano, Sara Mitri, and
Stephane Magnenat at the Swiss Federal Institute of Technology in Lausanne
teamed up with biologist Laurent Keller from the University of Lausanne to
create simulated robots and real bots with an attraction for food and an
aversion to poison, and randomly-generated genomes that determine movement,
processing of sensory information, and use of lights for signaling. They
were able to evolve 500 generations of colonies of robots in software and
real robots employing various pressure conditions, and sophisticated
communication occurred in some situations. "We saw colonies that used
their lights to signal when they found food and others that used signals to
communicate that they had found poison," says Keller. Some misleading
behavior, such as bots leading others away from food, also evolved. The
breakthrough could allow designers of robot swarms to employ simulated
evolution instead of writing behavior from scratch.
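The setup can be caricatured as a standard genetic algorithm: random genomes, a fitness function shaped by finding food and avoiding poison, then repeated selection and mutation over hundreds of generations. The toy below is an illustrative stand-in (bit-string genomes and a made-up fitness), not the Lausanne team's actual simulation:

```python
import random

random.seed(1)
GENOME_LEN = 16

def fitness(genome):
    """Stand-in fitness: count of enabled behaviors (imagine each bit
    switching on something useful, e.g. signaling at a food source)."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit independently with the given probability."""
    return [bit ^ (random.random() < rate) for bit in genome]

def evolve(pop_size=20, generations=100):
    """Elitist selection: keep the better half, refill with mutants."""
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(fitness(g) for g in pop)

print(evolve())  # best fitness climbs toward GENOME_LEN over the generations
```

In the Swiss experiments, fitness came from the robots' simulated foraging success, which is how honest (and deceptive) signaling strategies could emerge without being programmed in.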
Computer Majors Down to Bits
Ithaca Journal (NY) (02/24/07) Sanders, Topher
Hiring for technology-related jobs has returned to the level of the
dot-com period, but college students are still down on computer science
studies. For the first time since 2000, enrollment in the computer science
program at Cornell University has increased. Cornell computer science
professor Ken Birman says students do not seem to be aware that jobs that
would allow them to use computer science degrees are available in the
United States. In addition to memories of the dot-com crash, perceptions
about outsourcing are also a problem, university professors say. "The
people coming up to us and asking us about outsourcing are not the
students, it's the parents," says Cornell computer science professor
Lillian Lee. Birman says tech-related jobs are moving overseas, but he
adds that the positions that are exciting and creative remain in the
country. "The kind of work taking place in India really isn't the kind of
cutting-edge work you associate with a company like Google," he says.
Between 2007 and 2014, the United States will have nearly 1 million
computer science-related jobs, reports the U.S. Bureau of Labor Statistics.
Poet's Voice Used for Hi-Tech Speech Aid
Yorkshire Post (UK) (02/21/07)
Researchers at the University of Sheffield are developing a
computer-controlled device that is able to decipher the speech of someone
with a speech disorder, rearrange their words, and deliver the message in
new sentences so they can be more easily understood. A handheld computer
and a wireless Bluetooth headset comprise the Voice Input Voice Output
Communication Aid (VIVOCA), which is intended to be worn on the body of the
speaker. People who have problems controlling muscles used for speech
would be able to take advantage of such a device. The researchers enlisted
poet Ian McMillan to be the voice of VIVOCA, which they believe needs to
produce natural speech sounds that are recognizable locally to gain
acceptance. "If the person is using the device as their primary mode of
communication, it is important that the output voice is suitable to
represent that person," says Phil Green, a professor in Sheffield's
computer science department. "Eventually we would like to be able to
present a client with a choice of male and female voices, and perhaps also
adapt their chosen voice to resemble that of the client before their speech
deteriorates."
Homeland Security Cyber Czar Sees Challenges Ahead
National Journal's Technology Daily (02/22/07) Greenfield, Heather
Cyber threats reported to the U.S. Computer Emergency Readiness Team
increased from 5,000 incidents in 2005 to 23,000 incidents in 2006, and
there have already been 19,000 reports of cyber attacks in 2007. Homeland
Security cyber czar Greg Garcia said groups vulnerable to cyber attacks are
interdependent and need to establish a level of trust and share information
on vulnerabilities and threats. In an interview with National Journal's
Technology Daily, Garcia outlined three broad goals to increase security.
Those objectives are to strengthen information sharing, create stronger
network security within federal agencies, and establish sector-specific
infrastructure protection plans, as about 90 percent of critical
infrastructure belongs to organizations in the private sector. Garcia also
said U.S. CERT and the National Coordination Center, which is responsible
for monitoring disruptions in the telecommunications network, will soon be
housed under one roof, but the location was not disclosed for security
reasons. Garcia said that although globalization of the world's technology
industry will provide more opportunities, it also creates new security
challenges, as does the move to a single, integrated Internet protocol. He
says, "There are cost savings, productivity enhancements, but it also
introduces a new level of vulnerability in our networks."
The 'Intelligent' Web of Tomorrow
Hindu (02/23/07) Subramanian, Karthik
Researchers believe that having the human brain serve more as a model for
the Web will improve the online network of tomorrow. Wendy Hall, a leading
computer scientist in the United Kingdom, discussed the Semantic Web while
in Chennai to deliver a lecture on the "Science of the Web" at the Indian
Institute of Technology. Hall, head of the School of Electronics and
Computer Science at the University of Southampton, said the Semantic Web
would be based on an artificial intelligence system and that its primary
building blocks would be its association with data, and not documents.
Hall says universities and peer groups are already creating Web ontologies
to define the basic interoperable data blocks in which information would be
packaged on the Semantic Web. The different protocols and standards
required is one reason why it will take some time to create the Semantic
Web. The Semantic Web is a "web of actionable information--derived through
the semantic theory of interpreting symbols," according to a paper that was
co-authored by Hall, Web inventor Tim Berners-Lee, and Hall's Southampton
colleague Nigel Shadbolt.
The 309th Software Maintenance Group
Technology News Daily (02/21/07)
The software applications developed by the Air Force's 309th Software
Maintenance Group (SMG) handle ballistics calculations, communications
between units, and speed, target, and status displays in Air Force
vehicles, without requiring pilots to be aware they are even running. The
309th SMG's work is somewhat similar to video games, although the work is
less graphics-oriented and slower-paced. "With navigation pods, data
links, and guided weapons, F-16 (Fighting Falcon) pilots have been able to
put weapons directly on target time and time again to avoid collateral
damage, which helps limit enemy opposition and U.S. combat casualties,"
says 309th SMG technical program manager Kevin Tjoland. Development can
take as long as 18 months and involve as many as 500,000 lines of code,
with testing occurring throughout the process. Carnegie Mellon University
awarded the 309th SMG the Capability Maturity Model Integration Level
5 Rating for Software Development, signifying that the software
successfully integrates organizational features that are normally separated
and meets process improvement goals. In 2004, a transition from a previous
model for engineering best practices, known as Capability Maturity Model
(CMM), was undertaken, but "The changes themselves were not overly
difficult since we were already a process-oriented organization," explained
the 309th SMG's Mr. Cain. Perpetual software upgrades have allowed
20-year-old aircraft to remain effective.
All the Lab's a Stage
CITRIS Newsletter (02/07) Shreve, Jenn
CITRIS' December gala showcased 3D immersive technology that allowed
groups of dancers in two different locations to be displayed on the same
screen and collaborate in semi-real time. The performance was the work of
the Renaissance Project, which uses tele-immersion technology that enables
geographically separated groups to interact in a third environment as if
they were face-to-face. A group of dancers was recorded by 48 cameras
divided into 12 clusters placed all around them, and the images were then
sent by computers via Internet2 to a third computer, which combined the
images with those of a single dancer transmitted from another location onto
a 3D virtual stage. Glitches in the technology such as the lack of haptic
feedback and problems processing fast movements are still being worked out,
although the latter is said to give the images an "Impressionistic" look,
says Renaissance Project founder Lisa Wythe. The performance allowed
researchers to observe their work and consider possible improvements.
Improvements are expected to help the speed of image acquisition and
reconstruction, improve the ability to recognize and adjust photometrics
such as light levels and reflections, and decrease the number of computers
and cameras required. The technology could have a major impact on
geographically distributed collaboration of all sorts.