ACM Recognizes Leonidas John Guibas for Pioneering
Algorithms that Advanced Computer Graphics, Robotics, and Circuit Design
AScribe Newswire (03/04/08)
ACM Fellow and Stanford University computer science professor Leonidas
John Guibas has won the 2007 Allen Newell Award for his groundbreaking work
in applying algorithms across a wide range of computer science disciplines.
Jointly sponsored by ACM and the Association for the Advancement of
Artificial Intelligence, the award was created to honor career achievements
in computer science, or contributions that bridge the field and other
disciplines. Guibas' research into interactions with the physical world
and the development of efficient algorithms for geometric problems served
as the catalyst for computational geometry becoming a recognized discipline
with its own journals, conferences, and a large number of researchers. His
research has had a major impact on computer graphics, computer vision,
robotics, physical modeling, large-scale integrated circuit design, sensor
and communications networks, and computational molecular biology. Guibas
heads the Geometric Computation Group at Stanford, and is a member of the
Computer Graphics and Artificial Intelligence Laboratories and the
Institute for Computational and Mathematical Engineering. ACM will honor
Guibas at its annual ACM Awards Banquet on June 21 in San Francisco,
Calif.
'Chic Geek': Computer Science Major Rebounds
Inside Higher Ed (03/05/08) Jaschik, Scott
The number of newly declared undergraduate majors at doctoral-granting
computer science departments is on the rise for the first time in eight
years, and experts attribute this upsurge to an improving job market,
curriculum changes, and more effective marketing to prospective students
and their parents. Stuart Zweben, associate dean for academic affairs at
the Ohio State University College of Engineering, says the loss of jobs
from dot-com failures and the market's subsequent inundation of experienced
workers, concurrent with colleges churning out record numbers of graduates,
led to turmoil and generated a perception that it was very difficult to
land a job in computer science. The number of computer science majors at
the seven universities in the Minnesota State Colleges and Universities
System has risen from 1,425 to 2,105 over the last two years, while
programming majors have climbed from 815 to 1,561. Bruce Lindberg of the
Minnesota State Colleges and Universities System's Center for Strategic
Information Technology and Security says he envisions students being drawn
to the expansion of the computer science field to cover other disciplines
besides hardware and software. "We are thinking about how we portray
ourselves and what we do," says Cal Ribbens of Virginia Tech. "We do not
want to be seen as just offering a bunch of programming classes." The
Georgia Institute of Technology's computer science curriculum has been
retooled to focus more on the creative process and what career roles
computer science majors play. Georgia Tech's Giselle Martin notes that
undergraduate applications are significantly higher in 2008, partly thanks
to new strategies in communicating to parents the career opportunities that
exist for computer science majors.
Artificial Intelligence Research Simmers at University of
Memphis
Memphis Daily News (03/05/08) Vol. 123, No. 45, Shepard, Scott
The FedEx Institute of Technology and the University of Memphis recently
hosted the first Conference on Artificial General Intelligence. The
concept of artificial general intelligence (AGI) dates back to 1955, but
only recently has the technology advanced to the point that AGI is
feasible. Types of artificial intelligence exist in a variety of places in
society, and people interact with artificial intelligence on a daily basis
without ever realizing it.  "Artificial intelligence got away from its
initial goal of AGI primarily because AGI rapidly proved too hard a problem
to solve," says conference chair Stan Franklin, a University of Memphis
professor and co-director of the university's Institute for Intelligent
Systems. "AI researchers concentrated on narrow goals, building smart
machines in narrow domains." Franklin says creating AGI is now an
attainable goal through a convergence of computer science, cognitive
science, and neuroscience. Unlike specific AI applications, AGI is pure
research, Franklin says, and a goal unto itself that in a few decades will
have more applications than anyone has dreamed of. "No one would have
predicted that the microelectronics industry would grow out of space
flight," he says. "AGI will help me understand how minds work, perhaps the
single most interesting problem there is."
TechFest: Microsoft Researchers Show Off Future of
Computing
Computerworld (03/04/08) Gaudin, Sharon
At Microsoft's seventh annual TechFest, the company demonstrated some of
its research projects that go beyond the next Windows operating system or
Internet Explorer browser. One of the research projects on display was the
10TB World Wide Telescope project, which aims to combine images and
information from major telescopes, scientists, and astronomical
organizations from around the world, including NASA. Also on display was a
new programming language to study cell biology, work on new AIDS vaccines,
software to monitor and predict global epidemics, and sensors that monitor
the melting of glaciers in the Alps. Microsoft chief research and strategy
officer Craig Mundie says that Bill Gates has encouraged the company to
invest some of its assets in projects that will make a difference even if
they do not relate directly to a company product or brand. "That's partly
been the motivation to go beyond using computer science to just benefit us
as a company," Mundie says. "It's important that we not just make money,
but that we contribute to working on these other problems."
Survey: Developers Seek Web, Dynamic Languages
eWeek (03/03/08) Taft, Darryl K.
Developers want Web technologies and dynamic languages for new projects,
reveals a new Ziff Davis Enterprise survey, which found that over the next
18 months developers plan to use Web development and scripting or dynamic
languages more than traditional procedural languages.  The survey says the
majority of developers plan to start using AJAX in that period, with
JavaScript the second most frequently mentioned technology.  Open-source
JavaScript library jQuery creator John Resig says
the push toward using more Web development technologies and dynamic
languages shows that AJAX and JavaScript are the universal meeting ground
for Web development. "It doesn't matter if you're using ASP.NET, Ruby,
Perl, or PHP, if you need to make your page interactive in a
standards-based, accessible way, you turn to JavaScript," Resig says. He
says that as developers turn to developing their next application they will
realize that it is easier to deploy and distribute when using the Web as a
platform.  Dynamic languages are gaining ground largely because they offer
simplicity where many other languages are stricter, Resig says.  "Many
of them are easier to get started with, enforce less encumbrances, and
encourage community contributions, such as Ruby, Python, and PHP."
Computer Users Get Sense of Touch
Carnegie Mellon News (03/01/08)
Carnegie Mellon University Robotics Institute professor Ralph Hollis has
developed a haptic interface that could provide computer users with the
ability to sense texture and shape when manipulating virtual 3D objects.
Hollis' device uses magnetic levitation and has just one moving part.  Users
can feel textures, hard contacts, and even slight changes in position. "We
believe this device provides the most realistic sense of touch of any
haptic interface in the world today," says Hollis, whose research group
built the first working version of the device in 1997. Using a $300,000
National Science Foundation grant, Hollis and his colleagues have improved
the device's performance, enhanced its ergonomics, and lowered its
production cost. The researchers built 10 copies of the device that are
being distributed to haptic researchers in the United States and Canada.
Giving the device to other researchers is important to the developing field
of haptics, Hollis says. Research in magnetic levitation haptic interfaces
has been particularly lacking because researchers have not been able to
access the devices. "This is an affordable device that's also practical,"
Hollis says.
Tiny Etch-a-Sketch
Technology Review (03/04/08) Inman, Mason
Researchers led by Jeremy Levy of the University of Pittsburgh have
demonstrated a new technique that could be used to create rewritable logic
circuits and denser computer memory. The researchers used an atomic force
microscope (AFM) to draw nano-sized conductive paths that act like metallic
wires on a special two-layer material developed at the University of
Augsburg in Germany.  The interface between the two materials
can be switched between insulating and conducting by applying a voltage
across the interface. The lines drawn in the material were as thin as
three nanometers, making them significantly narrower than the lines that
can be drawn using electron beam lithography, currently one of the most
precise techniques for etching devices from silicon. The wires can be
erased by reversing the voltage and dragging the AFM tip across the wire,
or by exposing the material to blue light. Being able to draw conductive
patterns could allow researchers to create circuits that can be
reconfigured on the fly, and could be used for high-density memory. Levy
says it could be possible to integrate the new material with existing
silicon chips.
World-Wise Web?
Financial Times (03/04/08) P. 9; Waters, Richard
According to optimists in Silicon Valley, a revolution is on the horizon
in the way information captured on the World Wide Web is retrieved and
manipulated, one that will make Google's early breakthroughs seem archaic.
revolution will be facilitated by integrating core technologies that are
transforming the Web with approaches drawn from the field of artificial
intelligence. Powerset CEO Barney Pell says that "people are realizing
that the goals of AI may be way out, but in the field of AI the time is
here for really exciting applications." The core component of the new Web
3.0 technology movement is the semantic Web, in which data within documents
becomes machine-accessible so that computers can follow related links
between Web sites and draw together related information. Some semantic Web
advocates are saying that enough building blocks are in place to construct
the first true semantic Web services, but a major challenge lies in the
need to make information on the Web comprehensible to machines so that it
can be extracted, processed, and made usable.  This requires
the attachment of machine-readable "tags" to each piece of data to describe
what type of information it represents, a massive effort that could be
beyond the capacities of the human brain. The semantic Web attempts to
tackle this problem through the creation of dictionaries known as
ontologies. Meanwhile, other technologies originally developed for AI
applications, such as natural language processing, are being tapped to
establish practical Web 3.0 services, although their long-term viability is
a matter of contention. Most people expect the repercussions of the Web
3.0 technological wave to be incremental, starting with such things as an
increase in the intelligence of a broad spectrum of Web services and the
enhancement of search engines to return results of higher quality.
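The article's notion of machine-readable tags can be made concrete with a
small sketch.  The following Python fragment is an illustration only: the
rdflib library, the FOAF vocabulary, and the example resources are invented
stand-ins, not anything named in the article.

```python
# Minimal sketch: attaching machine-readable "tags" (RDF triples) to data
# so that software can follow links between related pieces of information.
# The vocabulary and resources here are illustrative, not from the article.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
EX = Namespace("http://example.org/")          # hypothetical vocabulary

alice = URIRef("http://example.org/people/alice")
g.add((alice, RDF.type, FOAF.Person))          # "this resource is a person"
g.add((alice, FOAF.name, Literal("Alice")))    # "its name is Alice"
g.add((alice, FOAF.knows, EX.bob))             # a machine-followable link

# A program that understands the FOAF ontology can now draw together
# related information without parsing free-form HTML.
for subj, pred, obj in g:
    print(subj, pred, obj)
```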
The New Art of War
Washington Post (03/03/08) P. A15; Pincus, Walter
Recent testimony before the Strategic Forces Subcommittee of the House
Armed Services Committee focused on preparing for war in space and
cyberspace. Space threats have received a significant amount of attention
in the past, so it was the possibility of cyberspace warfare that received
the most emphasis at the hearing. Head of U.S. Strategic Command Gen.
Kevin P. Chilton said cyberspace is an "emerging war-fighting domain" and
that potential enemies understand the U.S.'s reliance on the use of
cyberspace and are constantly probing the country's networks to find
competitive advantages, which is why the nation needs to develop defensive
and offensive cyberspace systems. Several strategies and institutions have
already been created to protect cyberspace, including the classified 2006
National Military Strategy for Cyberspace Operations, which concludes that
"offensive capabilities in cyberspace offer both the U.S. and our
adversaries an opportunity to gain and maintain the initiative." The
Strategic Command and Joint Chiefs of Staff personnel are developing
contingency plans and carrying out operations that protect the government's
computer networks through detection and coordinated counterattacks against
intruders. Chilton said the government is working "to operate, defend,
exploit, and attack in cyberspace."
From Pictures to Three Dimensions
Jacobs School of Engineering (UCSD) (02/29/08)
University of California, San Diego researchers have developed a 3D
reconstruction algorithm that could enable users to convert their digital
photos into 3D displays. Research in 3D reconstruction involves
"autocalibration," a computer vision process that strives to recover the 3D
structure of a scene using only images acquired from cameras with unknown
internal settings and spatial orientations. The researchers say that their
algorithm could be used for more informative e-commerce pictures, to
automatically align security camera networks, or to create
augmented-reality walkthroughs of cities, supermarkets, or other places of
interest. They say their algorithm provides a "theoretical certificate of
optimality," or the best possible 3D reconstruction possible from the
available data. "Our algorithm is guaranteed to provide the best 3D
reconstruction," says UCSD fifth-year PhD student Manmohan Chandraker.
"Our approach utilizes modern convex optimization techniques to globally
minimize the involved cost functions in a branch and bound framework."
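The article does not detail the algorithm, but the branch-and-bound idea it
cites can be shown in miniature: split the search space, compute a lower
bound on the cost in each piece, and discard pieces whose bound cannot beat
the best solution found so far.  A minimal Python sketch for a
one-dimensional function with a known Lipschitz constant follows; the
function, constant, and tolerance are invented for illustration and bear no
relation to the UCSD cost functions.

```python
# Toy branch-and-bound global minimizer for a 1-D function f with known
# Lipschitz constant L: on [a, b], f(x) >= f(mid) - L*(b-a)/2 everywhere,
# so intervals whose lower bound exceeds the incumbent value are pruned.
import heapq
import math

def branch_and_bound(f, a, b, L, tol=1e-4):
    best_x, best_val = a, f(a)
    mid = (a + b) / 2
    heap = [(f(mid) - L * (b - a) / 2, a, b)]   # (lower bound, interval)
    while heap:
        lower, a, b = heapq.heappop(heap)
        if lower > best_val - tol:              # cannot beat incumbent: prune
            continue
        mid = (a + b) / 2
        if f(mid) < best_val:                   # update incumbent solution
            best_x, best_val = mid, f(mid)
        for lo, hi in ((a, mid), (mid, b)):     # branch: split the interval
            m = (lo + hi) / 2
            heapq.heappush(heap, (f(m) - L * (hi - lo) / 2, lo, hi))
    return best_x, best_val

# Example: certified global minimum of a wiggly function on [0, 10]
print(branch_and_bound(lambda x: math.sin(3 * x) + 0.1 * x, 0, 10, L=3.2))
```

The "certificate" in this toy setting is the same as in the article's sense:
when the search ends, no discarded interval could contain a solution more
than tol better than the one returned.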
Sun, University of Tokyo Announce Research
Partnership
HPC Wire (02/27/08)
The University of Tokyo and Sun Microsystems announced two joint research
projects that will focus on high-performance computing and Web-based
programming languages. The research includes the development of a library
based on skeletal parallel programming in Fortress, and the implementation
of a multiple virtual machine (MVM) environment on Ruby and JRuby. The
Fortress library project will be led by professor Masato Takeichi and
professor Zhenjiang Hu at the Graduate School of Information Science and
Technology at the University of Tokyo, in collaboration with Dr. Guy Steele
at Sun Labs. Skeletal parallelism is a programming method that uses
pre-defined components extracted from general-purpose parallel processing
constructs to make parallelization simpler and more scalable while helping
programmers avoid difficult tasks such as communication and
synchronization. Fortress is designed to do for Fortran what Java-based
technologies did for C by enabling highly productive programming
constructs. The implementation of a MVM environment on Ruby and JRuby will
be led by University of Tokyo professor Ikuo Takeuchi and Sun director of
Web technologies Tim Bray. The MVM environment is expected to make Ruby
programs run more efficiently and allow multiple applications to run
simultaneously without requiring multiple interpreters, which leads to
excessive memory consumption. The research aims to solve technical issues
such as the definition of common interfaces for using MVM, and the
parallelization of VM instances and memory sharing.
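As a rough illustration of the skeletal idea (a Python sketch, not the
Fortress library under development at Tokyo and Sun), a "map" skeleton
writes the parallel machinery once so that programmers supply only the
sequential function; the worker count and example function are invented.

```python
# Sketch of a "map" skeleton: the parallel structure (worker pool, task
# distribution, result collection) is packaged once; the programmer never
# touches communication or synchronization directly.
from multiprocessing import Pool

def parallel_map(func, data, workers=4):
    """Apply func to every element of data in parallel."""
    with Pool(processes=workers) as pool:
        return pool.map(func, data)   # chunking/sync handled by the skeleton

def square(x):
    return x * x

if __name__ == "__main__":
    print(parallel_map(square, range(10)))   # [0, 1, 4, ..., 81]
```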
Tooling Up for Tomorrow's Clever Cars
ICT Results (02/28/08)
Cars are becoming increasingly complex, largely due to the growing number
of sophisticated, software-driven electronics that now come as standard
features. Automakers say today's cars contain as much electronics as
commercial airliners did two decades ago. In 2002, electronic parts
accounted for 25 percent of a vehicle's value, but by 2015 that could reach
40 percent. Researchers on the European ATESST project say a substantial
percentage of vehicle failures can be directly traced to embedded systems,
and research shows that electronic failures will continue to increase and
reach unacceptable levels if no preventive action plan is established. The
ATESST project wants to reverse this trend through the use of the
Architecture Description Language, a new computer language the project
developed to improve methodologies for handling component failures and
avoiding design flaws.  "New tools are needed to do a job which is becoming ever
more complex," says project manager Henrik Lonn. "The many components
which go into vehicles are being made by a host of manufacturers, often
using different processes and working to different standards." Lonn says a
common language at the highest level is needed to bind all of the
electronics together. Other initiatives have made similar efforts, such as
the European-developed AUTOSAR standard, and off-the-shelf modeling tools,
but Lonn says such efforts are not enough. "What we have developed is an
industry-specific system which works with these other standards and
dictates what part of the system is performing what function, and makes
sure the different components will work together," he says.
Krehbiel Receives Grant to Further Research on Digital
Media
Bethel College (02/28/08) Siebert, Aimee
The National Science Foundation is backing a series of cooperative
experiments between Bethel psychology professor Dwight Krehbiel and College
of Charleston computer science professor Bill Manaris. The researchers aim
to create a musical search engine based on aesthetic similarity. Manaris
has focused on identifying and analyzing musical metrics, such as the
intervals between notes or chords, and whether patterns of these metrics
affect the appeal of a song.  His research is based on Zipf's law, a
statistical law commonly applied in linguistics, which says that in a body
of text the second most frequent word should appear about half as often as
the most frequent word, the third about a third as often, and so on.
Manaris' work shows that the same pattern often exists in the musical
metrics he studies. The two researchers are working together to compare
the computer-based metrics with actual human, emotional responses. The
researchers tested the predictions of a search engine against the emotional
and physiological responses of human subjects.  Their research could lead to
a search engine that is capable of analyzing the Zipfian characteristics of
a person's favorite song and supplying a list of aesthetically similar
songs.
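For illustration, the Zipfian check described above looks roughly like this
when applied to word frequencies; the toy corpus is invented, and Manaris
applies the same kind of counting to musical metrics such as intervals
rather than words.

```python
# Sketch: rank words by frequency and compare against Zipf's prediction
# that frequency falls off roughly as 1/rank (rank 2 appears about half
# as often as rank 1, rank 3 about a third as often, and so on).
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"  # toy corpus
counts = Counter(text.split()).most_common()

top_freq = counts[0][1]
for rank, (word, freq) in enumerate(counts, start=1):
    predicted = top_freq / rank              # ideal Zipfian frequency
    print(f"{rank:>2} {word:<6} observed={freq} zipf={predicted:.1f}")
```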
UCL Computer Science Research Revolutionises Computer
Games Graphics
UCL News (02/28/08)
University College London researchers are developing a technique that
quickly adds indirect light to simulated scenes. The fast method developed
by Dr. Jan Kautz and colleagues has the potential to make computer games
seem more realistic by producing greater variation in shade on an object
and adding extra detail through the hues of reflected light.  Kautz has received a grant
from the U.K. government's Technology Strategy Board to develop a system
that can quickly simulate grades of shadows from indirect light bouncing
off objects in both moving and static scenes. Kautz will work with
software company Geomerics on the system. "I am excited about
collaborating with an industrial partner in an area where it has been
difficult to get new results from research into actual products," Kautz
says. "This grant will allow us to develop new lighting methods that will
have a direct impact on future computer games."
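Kautz's method is not described in detail, but the underlying effect is
easy to sketch: a point receives light directly from the source plus light
that has bounced off nearby surfaces, tinted by their color.  A toy
single-bounce Lambertian calculation in Python follows; all geometry and
values are invented for illustration.

```python
# Toy single-bounce illumination: the receiving point gets direct light
# from the source plus light reflected off a red wall, so its shading
# picks up a reddish hue -- the kind of detail the article describes.
def lambert(light_intensity, cos_angle):
    return light_intensity * max(cos_angle, 0.0)

source = (1.0, 1.0, 1.0)            # white light, RGB intensity
wall_color = (0.8, 0.1, 0.1)        # red diffuse reflectance

direct = tuple(lambert(c, 0.3) for c in source)     # grazing direct light
at_wall = tuple(lambert(c, 0.9) for c in source)    # the wall is well lit
bounced = tuple(lambert(w * k, 0.7)                 # wall re-emits, tinted
                for w, k in zip(at_wall, wall_color))

total = tuple(d + b for d, b in zip(direct, bounced))
print("direct only:", direct)       # flat grey
print("with bounce:", total)        # brighter, shifted toward red
```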
Prof Posits Metananocircuits as Electronics' Next
Frontier
EE Times (03/03/08) No. 1516, P. 16; Bains, Sunny
University of Pennsylvania engineering professor Nader Engheta theorizes
that nanotech circuits can function in a scheme where "current" is
reclassified as an electromagnetic wave, and he envisions the creation of
metananocircuit-based switches that could drive a new form of optical
information processing and possibly a new type of nanoscale computational
unit. Engheta's theory is founded on three fundamental notions:
Nanoparticles of various materials have characteristics that can be matched
to electronic counterparts; the nanoparticles can be regarded as "lumped
components" that can be linked into circuits through the use of additional
guiding structures; and the design of efficient devices is crucially
affected by the idea of metamaterials, in which composite materials
manifest properties governed by their nanoscale structures rather than
their chemistry. University of Toronto professor George Eleftheriades says
Engheta's work offers "a vision, consisting of building blocks, along with
instructions on how to arrange them together to enable transplanting
well-known passive inductor-capacitor-resistor [LCR] electrical networks to
the optical domain." The emergence of metamaterials could overcome the
absence in nature of ideal materials to implement such circuits at optical
wavelengths, but practical applications of Engheta's theory require the
creation of metamaterials to ensure the reliable operation of such devices.
Demonstrating basic nanocircuit principles is the focus of two research
teams, one of which is developing optical nanoantennas that Los Alamos
National Laboratory's Rohit Prasankumar says ought to function as lumped
nanocircuit elements at visible wavelengths. Meanwhile, University of
Pennsylvania physicist Marija Drndic says her team intends "to construct
specially designed grating structures with periods much less than the
operating wavelengths, and then experimentally verify the performance of
such nanostructures in terms of optical reflection and transmission."
Turning Disabled Into Gamers, MIT Aims to Spread Robot
Rehab
Popular Mechanics (02/26/08) Sofge, Erik
MIT researchers in the Newman Lab for Biomechanics and Human
Rehabilitation have developed a system that combines simple video games
with different robotic joysticks to help rehabilitating patients regain
lost skills and abilities. Designed for stroke, spinal cord injury, and
possibly cerebral palsy patients, the robotic therapists assist the
patients in playing video games when a patient responds too slowly, but the
goal is to avoid having the robot help.  As patients use the devices and
gradually improve their reactions, the devices come to expect faster reaction
times and step in to assist more quickly.  The Department of Veterans
Affairs is currently testing the therapeutic robots against traditional
techniques. Designing robots capable of interacting with humans without
hurting them, or without burning out their motors, was a significant
challenge, and Newman Lab director Neville Hogan sees these devices as the
first "contact robots." Developing contact robots is a crucial step toward
having robotic maids and health care workers. If the Veterans Affairs
trial is successful, Medicare could start reimbursing patients for the
costs of the machines, which could lead to in-home treatment and online
social networks where rehab patients can compete against each other. Hogan
believes that within two to three years some form of robotic therapy could
be available in every major rehab clinic in the United States.
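The adaptation rule is only loosely described in the article; a
hypothetical sketch of such assist-as-needed logic might look like the
Python below, where the deadline, floor, and decay rate are invented and
are not MIT's actual controller.

```python
# Hypothetical assist-as-needed loop: the robot steps in only when the
# patient's reaction exceeds the current deadline, and the deadline
# tightens after unaided successes, so expectations keep rising.
def update_deadline(deadline, reaction_time, floor=0.3, rate=0.9):
    if reaction_time < deadline:            # patient succeeded unaided
        return max(floor, deadline * rate)  # expect a faster response next
    return deadline                         # no change after an assisted trial

deadline = 2.0  # seconds allowed before the robot assists
for reaction in [1.8, 1.5, 1.6, 1.2, 2.5, 1.0]:
    assisted = reaction >= deadline
    print(f"reaction={reaction:.1f}s deadline={deadline:.2f}s assist={assisted}")
    deadline = update_deadline(deadline, reaction)
```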
Student Web Language Gains International
Recognition
Inside Rensselaer (02/28/08) Vol. 2, No. 4
Rensselaer Polytechnic Institute computer science doctoral student Gregory
Williams, who works in the Tetherless World Constellation, has developed an
implementation of a Web query language that allows Web sites to communicate
with each other.  Williams' implementation was given high marks by the
World Wide Web Consortium (W3C) and will serve as a foundation for other
companies and researchers.  In January, the W3C released SPARQL, a Semantic
Web-based query language standard designed to enable Web sites to
communicate.  The SPARQL standard is supported by several different
implementations developed by companies, university research teams, and
individuals.  Williams' SPARQL implementation was among the top five.  A
common standard base will allow programmers and researchers to develop Web sites and
technologies that can easily share data with one another. Williams began
working on his SPARQL implementation in 2005. "My motivating base was to
implement a version of SPARQL that was easy to use and access so
researchers can quickly introduce themselves to the language and then begin
playing with it," Williams says. "I hope this will allow researchers to
quickly extend the language and continue to do new things on the Web."
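The article does not show any SPARQL; as a small invented example, here is
a query running over a toy RDF graph using the Python rdflib library (not
Williams' implementation), with the data and query made up for
illustration.

```python
# Sketch: running a SPARQL query over a small RDF graph with rdflib.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:name "Alice" ;
                           foaf:knows <http://example.org/bob> .
<http://example.org/bob>   foaf:name "Bob" .
""", format="turtle")

# Find the names of everyone Alice knows.
results = g.query("""
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name WHERE {
    <http://example.org/alice> foaf:knows ?friend .
    ?friend foaf:name ?name .
}""")

for row in results:
    print(row.name)   # -> "Bob"
```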
Secure and Easy Internet Voting
Computer (02/08) Vol. 41, No. 2, P. 08; Beroggi, Giampiero E.G.
A modular and service-oriented architecture was tapped as the platform for
a fully scalable and portable Swiss e-voting system that allows people to
cast votes over the Internet or by cell phone, using two-step encryption and
redundant storage systems to maintain the authenticity and confidentiality
of votes, writes director of Canton Zurich's Statistical Office Giampiero
E.G. Beroggi. The system seamlessly integrates with traditional ballot-box
voting so that all citizens can vote, and no digital divide splits the
population. Six weeks prior to the vote, the communities in the
participating cantons enter the names of all citizens eligible to e-vote in
the electronic ballot box, which opens four weeks before the vote. To
vote, citizens use a password that they receive from the canton's
Statistical Office by mail as part of their voting forms. Citizens can
vote through the Internet by logging onto the e-voting Web site using ID
numbers and following the site's directions for vote casting, and the
system accepts the vote if it perceives a match between the security symbol
the voters enter and the one they got in the mail. The two-step encryption
process involves the voter's client computer first encrypting the votes and
ID and authentication characteristics, and the e-voting system then
checking the incoming votes for their structure and integrity before again
encrypting them, with the votes stored within a database by a pair of
redundant subsystems. On the day of the vote, the results from the regular
ballot box are fed into the vote registration software, and the e-voting
system transfers the e-votes to the voting system that manages the regular
votes once the regular voting ballot box is closed. Rather than making
source code available, the e-voting system depends on the ACM Statement on
Voting Systems, which recommends that e-voting systems "embody careful
engineering, strong safeguards, and rigorous testing in both design and
operation."