Google and I.B.M. Join in 'Cloud Computing' 
Research
	New York Times (10/08/07)  P. C8; Lohr, Steve
	
Google and IBM have announced a major research initiative to help 
universities provide the technical training needed for powerful, highly 
complex computing, training that even the most elite universities currently 
lack.  
The two companies will build large data centers that students will be 
allowed to access on the Internet for remote programming and research, 
known as "cloud computing."  Online services offered by Google, Yahoo, 
Microsoft, Amazon, and eBay are simple, consumer-based examples of cloud 
computing, but cloud computing is also being used to handle increasingly 
large computing challenges, which often involve searching the Internet and 
data sources for patterns and insights.  Corporations have been responsible 
for most of the innovation in cloud computing, but computer scientists say 
a lack of skills and talent in graduates could limit further growth.  "We 
in academia and the government labs have not kept up with the times," says 
Carnegie Mellon University's dean of computer science Randal E. Bryant.  
"Universities really need to get on board."  Six universities--Carnegie 
Mellon, MIT, Stanford University, UC Berkeley, the University of Maryland, 
and the University of Washington--will participate in the project.  Google 
and IBM will each set up a data center, which will run an open source 
version of Google's data center software and will be large enough to run 
ambitious Internet research.  "This is a huge contribution because it 
allows for a type of education and research that we can't do today," says 
University of Washington computer science professor Edward Lazowska.
	
	Data Storage Discovery Earns Nobel
	Washington Post (10/10/07)  P. A3; Vedantam, Shankar
	
	The Royal Swedish Academy of Sciences awarded the 2007 Nobel Prize in 
physics to Peter Gruenberg of Germany and Albert Fert of France for their 
discovery of giant magnetoresistance (GMR), the phenomenon by which stacks 
of ultra-thin metal layers sharply change their electrical resistance in a 
magnetic field, which led to the ultra-small hard drives that have made 
iPods and high-capacity laptops possible.  While Gruenberg and Fert worked 
independently, they will share the award of about $1.5 million.  GMR 
describes a phenomenon at the junction of electricity and magnetism.  When 
two layers of metal, such as iron, are separated by a thin layer of a 
second metal, such as chromium, the electrical resistance of the structure 
can be manipulated by a magnetic field.  In GMR devices, a mechanical 
reader "head" moves over the data, and the magnetic fields of the stored 
bits alter the resistance within the head, modulating the flow of 
electricity, which is translated into the ones and zeros used for digital 
information.  Without the discovery of GMR it 
would still be possible to store massive amounts of information in a tiny 
space, but it would be impossible to read it.  Gruenberg and others credit 
IBM researcher Stuart Parkin with discovering the practical application of 
GMR, and some had expected Parkin to share the prize.  IBM's vice president 
of research Mark Dean says Parkin was key to identifying the structures and 
materials used to create hundreds of devices that allow the computer 
industry to make dramatic leaps forward every year.  "The raw understanding 
of how nature works is a great thing," Dean says.  "The application of that 
knowledge of how nature works in the creation of something my mother can use is 
another great breakthrough--and as significant."
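
For reference, the size of the effect is conventionally quantified as the 
relative change in resistance between the antiparallel and parallel 
alignments of the magnetic layers (the standard textbook definition, not a 
figure from the prize citation):

    \mathrm{GMR} = \frac{R_{\uparrow\downarrow} - R_{\uparrow\uparrow}}{R_{\uparrow\uparrow}}

where the numerator compares the resistance with the layers' magnetizations 
pointing in opposite directions to the resistance with them aligned; a read 
head senses a stored bit as a swing between these two resistance states.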
	
	New HPC Ph.D. Fellowship Program Announced
	HPC Wire (10/08/07) 
	
	Full-time faculty members at Ph.D. granting institutions have until Oct. 
30 to nominate Ph.D. students for the new High Performance Computing Ph.D. 
Fellowship Program.  ACM, the IEEE Computer Society, and SC Conference 
Series announced the fellowship in response to recent reports that have 
stressed a need for highly trained HPC scientists and engineers.  "The 
ACM/IEEE-CS HPC Ph.D. Fellowship Program is designed to directly address 
this recommendation by honoring exceptional Ph.D. students throughout the 
world with the focus areas of high performance computing, networking, 
storage and analysis," says Bill Kramer, general manager of the National 
Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley 
National Laboratory.  "The ACM/IEEE-CS HPC Ph.D. Fellowship Program also 
supports our long-standing commitment to workforce diversity and encourages 
nominations of women, members of under-represented groups, and all who 
contribute to diversity."  Kramer is a member of the standing committee 
that will lead the fellowship program.  Selections will be based on Ph.D. 
students' research potential, how well their technical interests match 
those of the HPC community, their academic progress, and a demonstration of 
how they are likely to use HPC resources.  Complete information on the 
fellowship program is available at 
http://www.sigarch.org/HPC_Fellowships.html.
	
	Are America's Software Skills Getting Soft?
	Wisconsin Technology Network (10/06/07) DiRomualdo, Tony
	
Globalization represents a formidable challenge to the software industry, 
raising questions about whether American leaders are responding effectively 
enough to keep the U.S. economy on top.  An ACM study projects continued 
growth in software offshoring as IT, work and business processes, national 
policies, and education shift; the workforce will need a grounding in 
computing basics, an understanding of the global community, and up-to-date 
knowledge of business processes and platforms.  Villanova University 
professor Steve Andriole, 
a member of the ACM task force that collected and analyzed data for the 
study, says American dominance of IT research and development is 
"challenged" and the United States is in danger of losing its lead position 
to other nations if current trends continue.  Even more distressing is the 
low score Andriole gives U.S. policy makers in terms of their emphasis and 
support for technology-focused education and training, and he notes that 
the U.S. academic community has made little effort to address a major 
fall-off in undergraduate computer science and management-information 
systems majors.  In a policy brief, the Brookings Institution's Lael 
Brainard and Robert E. Litan suggest strategies for addressing U.S. 
leaders' failure to effectively deal with software globalization, including 
a careful examination of America's tax policies; an increase in funding for 
science and engineering education and training; greater enforcement of 
trade agreements; enforcement of regulations that shield against risks; 
more accurate government collection of official offshoring data; and above 
all else, the extension of wage insurance, adjustment aid, and training to 
cover permanently displaced workers.
	
	New Image Search Rules Out Guesswork
	New Scientist (10/05/07) Barras, Colin
	
	Southampton University computer scientist Jonathon Hare has developed an 
image search engine capable of returning results based on the content of 
the image rather than the words that are near the image on the Web page, 
which is how most image search engines currently work.  The search engine 
uses a small set of images that have been manually tagged with keywords.  
Off-the-shelf software then creates a virtual mathematical space of all the 
features in those images.  The search engine then finds images that contain 
similar geometrical shapes.  "The idea is you construct the space from a 
training data set and then apply it to new images," says Hare.  The search 
engine is able to find images based solely on their visual features, 
creating more accurate and extensive search results.  For example, a search 
for "water" using Hare's search engine would return more pictures of 
oceans, rivers, and lakes than a typical search engine.  Despite being able 
to return unique results without relying on keywords, Hare's search engine 
has several downsides: updating the engine's stock set of images would be 
difficult, and it may struggle with the huge volume of varied images a Web 
search engine must sort through.  
Hare is currently working on improving the search engine's ranking 
system.
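
The article gives no implementation details, but the general recipe it 
describes (map images into a common feature space built from a tagged 
training set, then retrieve neighbors by visual similarity) can be sketched 
roughly as follows.  The extract_features function and the array shapes are 
placeholders, not Hare's actual system.

    import numpy as np

    def extract_features(image):
        # Placeholder: a real system would compute visual descriptors
        # (local shape, texture, etc.); here we just flatten the pixels.
        return np.asarray(image, dtype=float).ravel()

    def build_index(training_images):
        # Map every training image into a common feature space and
        # normalize so that similarity reduces to a dot product.
        feats = np.stack([extract_features(im) for im in training_images])
        return feats / np.linalg.norm(feats, axis=1, keepdims=True)

    def search(query_image, index, top_k=5):
        q = extract_features(query_image)
        q = q / np.linalg.norm(q)
        scores = index @ q                    # cosine similarity to each image
        return np.argsort(-scores)[:top_k]    # indices of the best matches

Presumably the keyword tags on the training images are what tie a text 
query such as "water" to regions of that space; the sketch above covers 
only the visual-similarity step.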
	
	100 Gb/s Internet2 Completed
	TG Daily (10/09/07) Gruener, Wolfgang
	
	The Internet2 consortium announced that its updated infrastructure is 
ready to go online and will launch with an initial capacity of 100 Gbps in 
dedicated chunks of 10 Gbps next January.  The group demonstrated its new 
infrastructure during its Fall 2007 meeting, in which a third of a terabyte 
was transferred within five minutes over a 10 Gbps connection between the 
University of Nebraska-Lincoln and Fermilab in Batavia, Ill.  The optical 
improvements help to provide a "uniquely scalable platform on which to 
build side-by-side networks that serve different purposes, such as network 
research and telemedicine," according to the consortium.   Internet2 
President Doug Van Houweling adds, "More importantly, we believe the 
Internet2 network and its new capabilities will play an integral part in 
enabling our members to provide the robust cyber-infrastructure our 
community requires to support innovative research and education."  The 
network will continue to offer an advanced Internet Protocol network that 
supports IPv6, multicast, and other high-performance networking 
technologies.  The consortium also has plans to develop new 40 Gbps and 100 
Gbps technologies.
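
As a back-of-the-envelope check of the demonstration figure (the arithmetic 
is ours, not the consortium's):

    \tfrac{1}{3}\ \mathrm{TB} \approx 2{,}700\ \mathrm{Gb}, \qquad
    \frac{2{,}700\ \mathrm{Gb}}{300\ \mathrm{s}} \approx 9\ \mathrm{Gb/s}

so moving roughly a third of a terabyte in five minutes means the 10 Gbps 
channel was being driven at close to 90 percent of its rated capacity.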
	
	Intel Completes Photonics Trifecta
	Technology Review (10/10/07) Greene, Kate
	
	Intel researchers recently announced that they have developed a 
silicon-based light detector that can be combined with a silicon-based 
laser and a silicon modulator to form a completely silicon-based photonic 
chip.  The silicon-based light detector is less expensive than traditional 
detectors as well as more effective, and is able to detect flashes at a 
rate of 40 gigabits per second.  The silicon photonic chip could also be 
manufactured using techniques commonly used in the microchip industry, 
further reducing costs.  In most light detectors, light is detected when 
incoming photons strike a thin layer of gallium arsenide or indium gallium 
arsenide and knock electrons free, producing a measurable current.  
Silicon, however, does not absorb the infrared wavelengths used in optical 
communications, so a layer of germanium was added on top of the silicon to 
detect the light.  Germanium is also used in some current silicon devices, 
so adding germanium to the manufacturing process would not be exceedingly 
difficult.  Intel says the next challenge is developing a process for 
integrating the detector and the other silicon devices onto a single chip.  
The integration is not expected to create any significant problems, 
although the devices have so far been tested only in the lab.  
Director of Intel's silicon-photonics lab Mario Paniccia estimates that 
silicon photonic devices could be available to consumers in about five 
years.
	
	Tracing Computer History From "Ancient" Times to the 
Latest Technology
	University of Wisconsin-Milwaukee (10/09/07) Quirk, Kathy
	
University of Wisconsin-Milwaukee assistant professor of information 
studies Thomas Haigh has been studying how computers have changed business 
and society.  He has interviewed hardware and software pioneers, examined 
the development of databases, and written about the history of word 
processing.  Haigh has found that although companies are constantly trying 
to capitalize on the latest technology, this is not always the best 
approach.  "There's this feeling that anything more than five years old is 
irrelevant, but one of the things I've found is that people are facing the 
same types of problems now as they did in the mid 1950s--projects using new 
technology are usually late and filled with bugs, the return on investment 
is hard to measure, and computer specialists are expensive and speak an 
alien language," Haigh says.  Haigh is currently working on a social 
history of the personal computer.  "Despite the shelves of books on the 
history of the personal computer there has been no serious historical study 
of how people used their computers or why they bought them," Haigh says.  
Haigh has also examined how early explanations and promises of computers 
were exaggerations and did not accurately reflect the reality, which led to 
disappointment among the public.  "It's a platitude, but if we don't 
understand who we are and where we're coming from, how can we understand 
where we're going," Haigh says.  "That's true of religion, culture, Iraq, 
and it's equally true of science and technology."
	
	Adobe Shows Off 3D Camera Tech
	CNet (10/08/07) Shankland, Stephen
	
	Adobe Systems is demonstrating new technology that would allow users to 
reach into the scene of a photograph and adjust the focus.  Such a 
three-dimensional brush would depend on knowing the 3D nature of every 
pixel, according to Adobe's Dave Story.  The 3D camera technology makes use 
of a lens that projects many small sub-images of the scene, each from a 
slightly different vantage point, onto the camera's sensor.  Adobe's 
technology could also be used to remove items in the background of a 
photograph, without the artful selection that Photoshop and other robust 
software require.  The idea is to provide computers with an understanding 
of the depth of images by giving photographs multiple sub-views, which 
would enable computers to reconstruct a model of the scene in 3D.  
Computers would be able to make substantial transformations of an image, 
including an artificial shift in focus from the original photograph, based 
on the information taken from slightly different vantage points at the same 
time.  "With the combination of that lens and your digital darkroom, you 
have what we call computational photography," Story says.  "Computational 
photography is the future of photography."
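
Adobe has not published code for this, but the effect the article describes 
(combining many sub-views of the same scene into a new image focused where 
you choose) resembles the standard shift-and-add approach to light-field 
refocusing, sketched below with invented array shapes:

    import numpy as np

    def refocus(subviews, shift_per_view):
        # subviews: array of shape (U, V, H, W), one grayscale image per
        # vantage point on a U x V grid.  shift_per_view sets how many
        # pixels each view is displaced per unit of viewpoint offset,
        # which selects the virtual focal plane.
        U, V, H, W = subviews.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round((u - cu) * shift_per_view))
                dx = int(round((v - cv) * shift_per_view))
                # Circular shift is good enough for a toy; a real
                # implementation would pad or interpolate instead.
                out += np.roll(np.roll(subviews[u, v], dy, axis=0),
                               dx, axis=1)
        return out / (U * V)

Objects whose parallax between sub-views matches shift_per_view come out 
sharp; everything else blurs, which is what lets the focal plane be chosen 
after the picture is taken.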
	
	NSF Announces $26 Million Solicitation for Projects That 
Advance Innovative Computational Thinking
	National Science Foundation (10/01/07) 
	
	The National Science Foundation plans to spend at least $26 million for 
Cyber-Enabled Discovery and Innovation (CDI) research in fiscal year 2008, 
and up to $50 million on CDI projects in each of the next five years.  The 
NSF is soliciting science and research projects that will lead to even more 
innovative computational thinking about concepts, methods, models, 
algorithms, and tools.  Areas of focus include transforming data into 
knowledge to improve human understanding and produce new knowledge from 
heterogeneous digital data.  The NSF is also looking for multidisciplinary 
research into social systems that have multiple interacting elements, and 
virtual organizations that cater to people and resources across 
institutional, geographical, and cultural boundaries.  CDI proposals should 
describe how the impact of computational thinking on their project will 
lead to a paradigm shift in advances in several fields of science or 
engineering, and make a compelling case for how innovation in computational 
thinking will produce the anticipated results.  They should also make use 
of intellectual partnerships that will benefit from synergies of knowledge 
and expertise in several fields and from different types of organizations, 
including foreign and not-for-profit entities.
	
	Dragonfly or Insect Spy? Scientists at Work on 
Robobugs
	Washington Post (10/09/07)  P. A3; Weiss, Rick
	
	Researchers are developing insect-sized robots and insects augmented with 
robotic systems, or robobugs, that could be used for rescue missions, 
spying, or guiding missiles in combat.  Although no government agency has 
admitted to successfully creating such a system, several, including the CIA 
and the Defense Advanced Research Projects Agency, have admitted to working 
on projects that, if successful, would result in such spy devices.  In 
fact, the nation's use of flying robots has increased more than fourfold 
since 2003, with over 160,000 hours of robotic flight logged in 2006.  
However, creating insect-sized robots is a significant challenge.  "You 
can't make a conventional robot of metal and ball bearings and just shrink 
the design down," says University of California, Berkeley roboticist Ronald 
Fearing.  The rules of aerodynamics change at such small scales and any 
mechanical wings would need to flap in extremely precise ways, a huge 
engineering challenge.  Researchers at the California Institute of 
Technology have developed a "microbat ornithopter" that is capable of 
flight but smaller than the palm of a person's hand, and Vanderbilt 
University has developed a similar device.  At the International Symposium 
on Flying Insects and Robots, Japanese researchers revealed 
radio-controlled flyers with four-inch wingspans that look like hawk moths. 
A known DARPA project, the Hybrid Insect Micro-Electro-Mechanical Systems 
project, is inserting computer chips into moth pupae, the stage between 
caterpillar and adult moth, in the hope of producing a "cyborg moth" whose 
nerves grow around the internal silicon chip so the insect can be 
controlled and used to take surveillance photographs.
	
	'Google 101' Class at UW Inspires First Internet-Scale 
Programming Courses
	University of Washington News and Information (10/08/07) Hickey, Hannah
	
	A University of Washington pilot course designed to teach students how to 
program using massive numbers of computers has been turned into a national 
program by Google and IBM.  The new programming methods learned will help 
students and researchers manage Internet-scale applications.  "This is a 
new style of computing in which the focus is on analyzing massive amounts 
of data, using massive numbers of computers," says University of Washington 
computer science and engineering professor Ed Lazowska.  "Universities 
haven't been teaching it in part because the software is really complex, 
and in part because you need a big rack of computers to support it."  The 
success of the pilot class has led Google and IBM to donate and manage 
hundreds of processors that students will be able to access for large-scale 
computing on the Web.  The program was conceived when Christophe Bisciglia, 
a senior software engineer at Google and a graduate of the University of 
Washington, noticed while interviewing potential Google employees that 
applicants could solve difficult problems that fit on a single computer but 
were unable to solve more complex problems at larger scale.  Bisciglia 
designed the pilot program and served as course director during the test 
phase.  "One of my big intentions was to close the gap between how industry 
and academia think about computing," says Bisciglia.  The program will now 
be available to students at Carnegie Mellon, MIT, Stanford, UC Berkeley, 
and the University of Maryland, and has become a full-time job for 
Bisciglia.
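
The article does not name the software involved, but the style Lazowska 
describes, analyzing massive amounts of data using massive numbers of 
computers, is commonly taught through the MapReduce model, in which the 
programmer writes only a map function and a reduce function and the 
framework distributes the work.  A minimal single-machine sketch of the 
model (illustrative only, not the course material):

    from collections import defaultdict

    def map_phase(document):
        # Emit (word, 1) pairs; in a real framework this runs in
        # parallel over chunks of a huge document collection.
        for word in document.split():
            yield word.lower(), 1

    def reduce_phase(word, counts):
        # Combine every value emitted for the same key.
        return word, sum(counts)

    def run_wordcount(documents):
        groups = defaultdict(list)
        for doc in documents:
            for word, count in map_phase(doc):
                groups[word].append(count)
        return dict(reduce_phase(w, c) for w, c in groups.items())

    print(run_wordcount(["the cloud", "the data center"]))
    # {'the': 2, 'cloud': 1, 'data': 1, 'center': 1}

The point of the model is that the same two functions scale from this toy 
to thousands of machines, because the grouping and distribution are handled 
by the framework rather than the programmer.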
	
	MIT Research Helps Convert Brain Signals Into 
Action
	MIT News (10/02/07) Thomson, Elizabeth A.
	
	MIT researchers have developed an algorithm to enhance prosthetic devices 
that convert brain signals into actions.  The MIT approach unifies several 
different approaches used by experimental groups that created prototypes 
for neural prosthetic devices in animals or humans.  "The work represents 
an important advance in our understanding of how to construct algorithms in 
neural prosthetic devices for people who cannot move to act or speak," says 
Lakshminarayan Srinivasan, lead author of a paper on the technique 
published in the October issue of the Journal of Neurophysiology.  Previous 
efforts to create such devices have been divided along boundaries of brain 
region, recording modality, and application.  The MIT researchers used 
graphical models composed of circles and arrows that represent how neural 
activity results from a person's intentions for the prosthetic device they 
are using.  The diagrams represent a mathematical relationship between the 
person's intentions and the neural manifestation of that intention, which 
could come from a variety of brain regions.  Previously, researchers 
working on brain prosthetics have used different algorithms depending on 
what method they were using, but the new MIT model can be used no matter 
what measurement technique is being used, Srinivasan says.  Srinivasan 
emphasizes that neural prosthetic algorithms still need significant 
improvement before such devices are available for common use, and that the 
MIT algorithm is unifying but not universal.
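
The unified algorithm itself is not spelled out in the article, so the 
sketch below is not the MIT method; it is a standard Kalman-filter decoder 
of the kind long used in this field, shown only to illustrate what turning 
neural activity into an estimate of intention looks like in code.  The 
matrices A, C, Q, and R are assumed to have been fit to training data 
beforehand.

    import numpy as np

    def kalman_decode(firing_rates, A, C, Q, R, x0, P0):
        # Assumes intentions evolve as x_t = A x_{t-1} + noise and that
        # observed firing rates follow y_t = C x_t + noise.
        x, P = x0, P0
        estimates = []
        for y in firing_rates:
            # Predict the next intention from the previous estimate.
            x = A @ x
            P = A @ P @ A.T + Q
            # Correct the prediction using the observed neural activity.
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            x = x + K @ (y - C @ x)
            P = (np.eye(len(x)) - K @ C) @ P
            estimates.append(x.copy())
        return np.array(estimates)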
	
	UMass Amherst Researchers Improve Security for Credit 
Cards and Other Devices
	University of Massachusetts Amherst (10/03/07) 
	
	University of Massachusetts Amherst researchers Kevin Fu, Wayne Burleson, 
and Dan Holcomb have created a cheap and efficient methodology for 
augmenting the security of radio-frequency identification tags.  "We 
believe we're the first to show how a common existing circuit can both 
identify specific tags and protect their data," says Burleson, who 
presented the research at the annual Conference on RFID Security.  "The key 
innovation is applying the technology to RFID tags, since they're such tiny 
devices with very small memories."  Within the tags are passive systems 
that respond automatically to electromagnetic fields generated by radio 
antennas attempting to read the devices' memories, and this technology can 
be vulnerable to security breaches.  The UMass Amherst researchers' 
security technique relies on random numbers, which are used to encrypt the 
data transmitted by the tags; machines with the appropriate hardware and 
software can easily produce strings of random numbers.  
However, RFID tags are not designed for random number generation, so the 
researchers' work takes specific machinery committed to that function out 
of the equation and instead employs special software that allows the tag 
readers to siphon out unique data from the tags' existing hardware.  
Variations in each tag's cells can also be tapped as individual tag 
identifiers, generating a unique fingerprint, Burleson says.  The RFID 
Consortium for Security and Privacy is a collaborative effort between 
engineers and cryptographers that forms part of a research initiative 
underwritten by a $1.1 million National Science Foundation grant to enhance 
security for wireless "smart tag" gadgetry.
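
The article describes the technique only at a high level; the toy below 
illustrates the general principle of mining existing memory cells for both 
a fingerprint and randomness, and is not the UMass Amherst implementation.  
It assumes several 0/1 snapshots of the same cells captured at successive 
power-ups.

    import numpy as np

    def analyze_powerups(snapshots):
        # snapshots: 2D 0/1 array, one row per power-up, one column per cell.
        s = np.asarray(snapshots, dtype=bool)
        always_one = s.all(axis=0)
        always_zero = ~s.any(axis=0)
        stable = always_one | always_zero       # cells that settle the same way every time
        fingerprint = s[0, stable].astype(int)  # device-specific identifier
        # Cells that flip between power-ups are candidates for randomness;
        # a real design would post-process them to remove bias.
        raw_random = s[-1, ~stable].astype(int)
        return fingerprint, raw_random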
	
	UC San Diego Opens Doors to Internet2 Participants
	UCSD News (10/02/07) Froelich, Warren R.; Ramsey, Doug
	
	Some of the nation's leading researchers and educators in advanced 
networking are expected to gather at the University of California, San 
Diego for the fall 2007 Internet2 Network Performance Workshop, which will 
include tours of the San Diego Supercomputer Center and the California 
Institute for Telecommunications and Information Technology (Calit2).  The 
workshop will also have some of the latest innovations and applications in 
the Internet2 community on display.  Some of the demonstrations at the 
conference will include delivering remote sensing data through Internet2, 
enabling virtual organizations, and video communication using scalable 
video coding.  San Diego Supercomputer Center director Fran Berman will 
moderate a panel called "Cyberstructure: The Way Forward," and Calit2 
director Larry Smarr will deliver a keynote address focusing on "lambda 
networking."  The conference will run from Oct. 8 to Oct. 11.
	
	FSU Researchers' Material May Lead to Advances in Quantum 
Computing
	Florida State University (10/04/07) Ray, Susan
	
	Florida State University scientists in the National High Magnetic Field 
Laboratory and the university's Department of Chemistry and Biochemistry 
have developed a new material that could provide a technological 
breakthrough and rapidly accelerate the development of quantum computing.  
The material is a compound made from potassium, niobium, oxygen, and 
chromium ions and could potentially be as important to computers of the 
future as silicon is to today's computers.  High magnetic fields and 
radiation were used to manipulate the spins in the material to see how long 
the spins could be controlled.  Based on experiments, the material could 
allow for 500 operations in 10 microseconds before losing its ability to 
retain information.  "This material is very promising," says Naresh Dalal, 
an FSU professor of chemistry and biochemistry and an author of the paper 
describing the material.  "But additional synthetic and magnetic 
characterization work is needed before it could be made suitable for use in 
a device."
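
For scale, dividing the figures quoted above gives the time budget per 
operation (our arithmetic, not the researchers'):

    \frac{10\ \mu\mathrm{s}}{500\ \mathrm{operations}} = 20\ \mathrm{ns\ per\ operation}

before the spins lose the information they carry.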
	
	GTISC Releases Emerging Cyber Threats Forecast
	Georgia Institute of Technology (10/02/07) 
	
	The Georgia Tech Information Security Center has published its annual 
forecasting report, the GTISC Emerging Cyber Threats Report for 2008, which 
describes the five key areas of security risk for enterprise and consumer 
Internet users.  In 2008, cyber security threats are anticipated to grow 
and evolve in the areas of Web 2.0 and client-side attacks, such as attacks 
on social networking sites, and targeted messaging attacks, including 
malware spread through online video sharing and instant messaging.  
Botnets, particularly the expansion of botnet attacks into peer-to-peer and 
wireless networks, are another significant area of concern.  Threats aimed 
at mobile convergence, including vishing, smishing, and voice spam, are 
anticipated to be substantial, as are threats targeting RFID systems.  The 
primary driver behind all five major threat categories in 2008 continues to 
be financial gain.  GTISC recommends improved synchronization among the 
security industry, the user community, application developers, Internet 
service providers, and carriers.  GTISC director Mustaque Ahamad 
anticipates that enterprise and consumer technologies will continue to 
converge in 2008, making it even more essential to protect new Web 
2.0-enabled applications and the IP-based platforms they increasingly 
depend upon.
	
	Alternative Reality: UW Prof Touts Computer Game 
Learning
	Wisconsin Technology Network (09/29/07) Vanden Plas, Joe
	
	University of Wisconsin-Madison professor David Williamson Shaffer 
believes K-12 education is still centered around an industrial-era learning 
paradigm that teaches children basic skills for passing tests while 
short-changing them in terms of critical thinking and problem solving, and 
he advocates computer games as a tool for stimulating such thinking.  
"We've believed for 150 years that kids have to learn basic facts first, 
then do something more sophisticated, but computers allow us to do the 
sophisticated work before we master the basics," Shaffer explains.  "Kids 
should learn these basic 
things just in time and on-demand.  You need these skills, but you also 
need to learn them in a way that tells you why."  UW-Madison is channeling 
resources into the concept of computer game education via its Academic ADL 
Co-Lab and its Games, Learning, and Society conference.  Shaffer is a 
founding member of the GAPPS research group for games, learning, and 
society, and author of "How Computer Games Help Children Learn."  He has 
examined the impact of new technologies on people's thinking and learning 
processes, with a concentration on epistemic games, which are computer and 
video games where players become professionals and are given the chance to 
tackle challenges in virtual work environments.  Shaffer says the better 
kinds of computer games nurture a form of collaboration typical of the 
premier workplaces.  "Children need to do things that are valued in a 
high-tech economy, and our schools are not very good at fostering 
innovative thinking," he says.
	
	Baby's Errors Are Crucial First Step for a Smarter 
Robot
	New Scientist (10/06/07) Vol. 196, No. 2624,  P. 30; Reilly, Michael; 
Robson, David
	
Our understanding of artificial intelligence could be improved through 
research in which machines experience the same errors of cognition that 
human infants do.  For instance, University College London researchers 
recently announced their creation of a computer program that could be 
fooled by optical illusions, raising the possibility that robots imbued 
with human-like capabilities may consequently be prey to human weaknesses.  
Scientists are elated at the idea, as errors committed by humans performing 
one task often signify strong ability at another task.  Therefore, an 
important step toward building true AI could be producing software and 
eventually machines that make human-like cognitive mistakes.  "Intelligent 
behavior requires that you have stability--which you get from past 
experience--and flexibility so you can turn on a dime when you need to," 
observes Indiana University's Linda Smith.  Put another way, human 
intelligence comes from an equilibrium between memory of past experience 
and adaptability to changing conditions.  This theory lies at the root of 
the A-not-B error that babies make, in which an infant who has repeatedly 
found a toy at location A keeps reaching for A even after watching the toy 
being hidden at location B, and researchers attempted to recreate 
this error in software.  The results of their work, presented at the 
European Conference on Artificial Life in September, indicate that a robot 
with the ability to learn from past experience not only makes the same 
errors as a human infant, but can also learn to adapt.  The software 
programs with this ability were equipped with homeostatic networks, which 
may be the best tools for balancing stability and flexibility in robots.
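
As a purely illustrative toy (not the model presented at the conference), 
the stability-versus-flexibility balance behind the A-not-B error can be 
captured in a few lines: a reach decision that weighs an accumulated habit 
against the current cue keeps choosing the old hiding place when the habit 
weight is high, and switches promptly when it is low.

    def reach_sequence(cues, habit_weight, cue_weight=1.0):
        # cues: list of "A" or "B", where the toy is hidden on each trial.
        # Returns the location reached for on each trial.
        habit = {"A": 0.0, "B": 0.0}   # accumulated memory of past reaches
        reaches = []
        for cue in cues:
            score_a = habit_weight * habit["A"] + cue_weight * (cue == "A")
            score_b = habit_weight * habit["B"] + cue_weight * (cue == "B")
            choice = "A" if score_a >= score_b else "B"
            habit[choice] += 1.0       # each reach strengthens that habit
            reaches.append(choice)
        return reaches

    print(reach_sequence(["A"] * 4 + ["B"], habit_weight=0.5))  # perseverates on A
    print(reach_sequence(["A"] * 4 + ["B"], habit_weight=0.1))  # switches to B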