Software That Learns From Users
	Technology Review (11/30/07) Naone, Erica
	
CALO, a massive, four-year-old artificial intelligence project led by SRI 
International, is designed to help computers understand human intentions.  
The DARPA-funded project involves researchers from 25 universities and 
corporations, including University of Washington computer science professor 
Pedro Domingos, and spans many areas of artificial intelligence, including 
machine learning, natural-language processing, and Semantic Web 
technologies.  CALO, which
stands for "cognitive assistant that learns and organizes," tries to help 
users by managing information about key people and projects, understanding 
and organizing information from meetings, and learning and automating 
routine tasks.  For example, CALO can learn about projects and who is 
involved in those projects, so emails from those people can be given 
priority and categorized based on subject matter.  CALO can also be used to 
make transcripts of meetings through voice recognition, or perform routine 
tasks such as purchasing books online, searching for a hotel that meets 
specific criteria, scheduling meetings, and coordinating people's 
schedules.  The ultimate goal is an artificial intelligence that serves as 
a personal assistant, learning a user's needs and preferences and adapting 
to them without being reprogrammed.  "It's an
amazingly large thing, and it's insanely ambitious," Domingos says.  "But 
if CALO succeeds, it'll be quite a revolution."
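The article does not describe CALO's internals, but the email-triage 
behavior described above can be caricatured in a few lines.  The sketch 
below is hypothetical: the project names, members, and priority rule are 
invented for illustration and are not CALO's actual design.

```python
# Hypothetical sketch of project-aware email triage: mail from known
# project members is boosted and tagged by project.  All names and the
# scoring rule are invented examples, not CALO's actual design.

projects = {
    "petascale-proposal": {"alice@lab.edu", "bob@vendor.com"},
    "conference-demo": {"carol@partner.org"},
}

def triage(sender: str) -> tuple[int, str | None]:
    """Return (priority, project) for an incoming message's sender."""
    for project, members in projects.items():
        if sender in members:
            return 1, project      # known collaborator: high priority
    return 0, None                 # everyone else: normal priority

print(triage("alice@lab.edu"))     # (1, 'petascale-proposal')
print(triage("spam@example.com"))  # (0, None)
```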
	20 Percent of Election Printouts Were Unreadable
	Plain Dealer (Cleveland) (11/28/07) Guillen, Joe
	
	Recently discovered problems with the paper records produced by electronic 
voting machines in Cuyahoga County, Ohio, could make a recount after next 
year's presidential election a disaster.  More than 20 percent of the paper 
printouts from touch-screen voting machines were found to be unreadable.  
The recount of this month's election was necessary because the vote-counting 
software crashed twice on election night and the margins of victory were 
one-half of one percent or less.  Election workers found the unreadable 
ballots while conducting a
recount of two races, which involved only 17 of the county's 1,436 
precincts.  Board of Elections director Jane Platten says recounting the 
ballots for the entire county in the 2008 presidential election could take 
more than a week.  Cuyahoga County uses Premier Election Solutions 
(formerly Diebold) touch-screen voting machines that store votes on a 
memory card inside each machine.  During the election a paper record of 
each vote is printed on a long reel of paper that is stored inside the 
machine.  The paper record is used during recounts, but can be damaged or 
unreadable, usually because of a paper jam while printing.  Premier 
Election Solutions' Chris Riggall says the company will investigate the 
situation.
	General Motors, Virginia Tech Scientists Collaborate to 
Advance Neuroinformatics
	Virginia Tech News (11/28/07) Daniilidi, Christina
	
Advances in sensing technology make it possible to take more accurate 
measurements of brain activity, something computer scientists 
and neuroscientists say could lead to the discovery of the complex neuronal 
networks in the brain that allow for simple, automatic movements such as 
reaching for a glass of water.  Virginia Tech and General Motors Research 
are opening the Laboratory for Neuroinformatics for the purpose of creating 
algorithms that process the massive amounts of data neuroscientists collect 
from the brain.  The lab will be co-directed by Virginia Tech computer 
science professor Naren Ramakrishnan and General Motors research scientist 
K.P. Unnikrishnan.  "Neuroscientists are making the transition from 
studying neurons to studying networks--the sequences of firings and spikes 
of activity across big groups of neurons," Ramakrishnan says.  "What we are 
trying to do is analyze all this data and discover something about the 
network--the connections and relationships."  Unnikrishnan says the many 
possible applications of neuroscience-related research include analyzing 
data from cars and maintaining vehicle health.  But even greater 
applications are possible, Unnikrishnan says.  "Creation of brain-machine 
interfaces is the next frontier," Unnikrishnan says.  "Giving senses to 
people who have lost them--vision, touch, hearing, and motor--would be a 
contribution to humanity."
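The lab's algorithms are not detailed in the article.  As a toy 
illustration of the network-level analysis Ramakrishnan describes, the 
sketch below counts how often one neuron's spike is followed by another's 
within a short lag; the spike times and the 5 ms window are invented for 
illustration, not the lab's actual methods.

```python
# Toy spike-train analysis: count how often neuron A's firing is followed
# by neuron B's within a small time window, a crude hint of a connection.
# Spike times and the 5 ms window are invented for illustration.

from itertools import product

spikes = {                       # spike times in milliseconds per neuron
    "A": [1.0, 10.2, 25.7, 40.1],
    "B": [3.5, 12.1, 27.0, 55.9],
}

def lagged_cofirings(pre, post, window_ms=5.0):
    """Count post-spikes occurring within window_ms after a pre-spike."""
    return sum(1 for t_pre, t_post in product(pre, post)
               if 0.0 < t_post - t_pre <= window_ms)

for a, b in [("A", "B"), ("B", "A")]:
    print(f"{a} -> {b}: {lagged_cofirings(spikes[a], spikes[b])} lagged co-firings")
```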
	New Grant Program Designed for 'Transformative' Computing 
Research
	Chronicle of Higher Education (11/30/07) Vol. 54, No. 14,  P. A23; 
Carnevale, Dan
	
	The Cyber-Enabled Discovery and Innovation program will award $26 million 
in grants next year to support research into a wider range of uses for 
high-powered computing.  Recipients will be called on to apply 
computational thinking to real-world problems involving engineering and 
computer science, as well as biology, economics, and other 
sciences.  Sirin Tekinay, head of the National Science Foundation's new 
program, says advanced computational thinking is about the process of 
sorting out data, deriving knowledge, gaining an understanding of 
complexity, and then developing new sociotechnical systems.  All areas of 
science and engineering stand to benefit from this type of computing, which 
has helped clear the way for the decoding of the human genome, the 
production of complex real-time, satellite-aided maps, and the development 
of the Internet.  For the program, innovation is defined as research that 
has the potential to produce transformative outcomes.  Research 
institutions can apply for more than one grant, but a researcher cannot be 
named in more than two proposals during a competition cycle.
	Continued Growth in Science and Engineering Doctorate 
Production
	CRA Bulletin (11/28/07) Vegso, Jay
	
	The number of doctorates awarded in science and engineering (S&E) fields 
has risen for the fourth consecutive year, according to the National 
Science Foundation.  Last year the United States awarded 29,854 doctoral 
degrees in S&E fields, an increase of nearly 7 percent from the previous 
year.  Computer science doctorates led the way with a 28 percent increase 
to 1,452 degrees, following a double-digit increase in CS doctorates from 
the previous year.  CS doctorates are up 79 percent since 2002 and now 
represent a considerable share not only of S&E doctorates but of all 
doctoral degrees.  Non-U.S. citizens have been key to the growth in CS doctorate 
degree production.  In the mid-to-late 1990s permanent or temporary visa 
holders received about half of CS doctorates, but last year they accounted 
for 61 percent.  CS doctorates to U.S. citizens rose 42 percent from 2002 
to 2006, but jumped 115 percent for non-U.S. citizens over the same 
period.
	Software Strikes a Chord for Disabled Students
	eSchool News (11/29/07) 
	
	Rensselaer Polytechnic Institute's "Adaptive Use Musical Instruments for 
the Physically Challenged" program enables students with severe physical 
disabilities to make music by just moving their heads.  The system uses a 
digital video camera to track a student's head movements on a computer 
screen and then translates the movements into piano scales or drum beats.  
Zane Van Dusen, an RPI undergraduate in computer science and electronic 
media arts and communication, developed the head-tracking approach.  A 
cursor is digitally 
placed on a portion of the student's head, usually the tip of the nose, to 
follow the user's movements.  Sweeping the head fully in one direction 
produces a scale climb on the piano, a quick series of drum beats, or a 
drum roll.  The project's ultimate goal is to enable students to compose 
their own pieces, helping them learn the creative process and build 
communication skills.  "The client or patient doesn't have to be a 
musician to participate," says the American Music Therapy Association's 
Al Bumanis.  "The goal is not usually a performance, it's increasing 
communication skills, understanding, relearning lost skills."
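The article gives no implementation details; as a minimal sketch of the 
movement-to-sound mapping it describes, horizontal cursor position can be 
quantized into steps of a scale.  The screen width and MIDI note numbers 
below are assumptions for illustration, not RPI's actual code.

```python
# Minimal sketch: quantize a tracked cursor's x-position into notes of a
# C-major scale, so a full left-to-right head sweep climbs the scale.
# Screen width and MIDI note numbers are illustrative assumptions.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI notes C4..C5
SCREEN_WIDTH = 640                            # pixels

def note_for_position(x: int) -> int:
    """Map cursor x in [0, SCREEN_WIDTH) to a note of the scale."""
    step = min(x * len(C_MAJOR) // SCREEN_WIDTH, len(C_MAJOR) - 1)
    return C_MAJOR[step]

# A smooth sweep across the screen climbs the scale:
print([note_for_position(x) for x in range(0, SCREEN_WIDTH, 80)])
# -> [60, 62, 64, 65, 67, 69, 71, 72]
```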
	Robots Dazzle at Japanese Exhibit
	Associated Press (11/29/07) Tabuchi, Hiroko
	
	At the 2007 International Robot Exhibition, Japan's largest robotics 
convention, several revolutionary robots were on display, showing why Japan 
is a world leader in service and industrial robotics.  One robot, called 
Simroid for "simulator humanoid," is a human-like robot that dentistry 
students can practice procedures on.  Simroid has realistic skin, eyes, a 
mouth fitted with replica teeth, and sensors where nerve endings would be 
to alert the student when he or she is drilling too close to the nerve.  
Simroid's designers are still refining several features, including a 
function that allows students to inject anesthetic into the robot's gums.  Another 
robot, called Mr. Cube, uses color sensors and a pair of dexterous hands to 
solve a Rubik's Cube puzzle.  Although Mr. Cube is significantly slower 
than humans at solving the puzzle, the ability to quickly detect and 
differentiate between colors is a breakthrough in industrial robotics.  
Meanwhile, a panda-shaped robot developed by Waseda University uses a Web 
camera and software to scan a person's face for smiles to help relieve 
stress by making people laugh.  When a hint of a smile is detected, the 
robot joins in the celebration by giggling and wiggling its arms and legs.  
Japan had more than 370,000 robots in use in 2005, about 40 percent of the 
global total, or about 32 robots for every 1,000 Japanese manufacturing 
employees.
	Microsoft Preps Parallel Developer Tool
	eWeek (11/29/07) Taft, Darryl K.
	
	Microsoft has released an early preview of ParallelFX (Parallel Extensions 
to the .Net Framework), a set of programming tools designed to help 
developers approach issues related to coding for parallel environments.  
ParallelFX contains new APIs that make parallel programming on the .Net 
Framework simpler, along with supporting documentation and samples.  Microsoft's S. "Soma" 
Somasegar wrote in a blog post that ParallelFX runs on the .Net Framework 
3.5 and relies on features available in C# 3.0 and Visual Basic 9.0.  
ParallelFX also includes imperative data and task parallelism APIs, 
including parallel "for" and "foreach" loops, to make the transition from 
sequential to parallel programs simpler, as well as declarative data 
parallelism in the form of data parallel implementation of LINQ-to-Objects, 
which allows users to run LINQ queries on multiple processors.  A new MSDN 
center dedicated to concurrent programming was also launched with the 
ParallelFX release and features a collection of whitepapers, including one 
that describes the broader vision for parallel computing at Microsoft. "The 
shift to multi- and many-core processors that is currently underway 
presents an exciting opportunity for everyone in the software industry," 
Somasegar writes in his blog.  "With an expected increase of 10 to 100 
times today's compute power, the opportunities to deliver powerful and 
immersive new user experiences and business value are just awesome."
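ParallelFX's loops and PLINQ are specific to C# and the .Net Framework; 
purely as a stand-in illustration of the data-parallel loop pattern the 
article describes (independent iterations fanned out across cores), a 
rough Python analogue looks like this.  The work function and inputs are 
invented examples.

```python
# Stand-in illustration (not ParallelFX itself): distribute independent
# loop iterations across processes, the same pattern as a parallel "for".
# The work function and inputs are invented examples.

from concurrent.futures import ProcessPoolExecutor

def simulate(task_id: int) -> int:
    """CPU-bound work with no dependence on other iterations."""
    return sum(i * i for i in range(task_id * 10_000))

if __name__ == "__main__":
    inputs = range(32)
    # Sequential equivalent: results = [simulate(i) for i in inputs]
    with ProcessPoolExecutor() as pool:   # parallel "for" over inputs
        results = list(pool.map(simulate, inputs))
    print(len(results), "iterations completed in parallel")
```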
	Petascale Computers: The Next Supercomputing Wave
	IT News Australia (11/29/07) Tay, Liz
	
	Academics are focusing their attention on petascale computers that can 
perform 1 quadrillion, or 1 million billion, operations per second, almost 
10 times faster than today's fastest supercomputers.  Petascale computing 
is expected to create solutions to global challenges such as environmental 
sustainability, disease prevention, and disaster recovery.  "Petascale 
Computing: Algorithms and Applications," by Georgia Tech computing 
professor David A. Bader, was recently released, becoming the first 
published collection on petascale techniques for computational science and 
engineering.  Bader says the past 50 years have seen a fundamental change in 
the scientific method, with computation joining theory and experimentation 
as a means for scientific discovery.  "Computational science enables us to 
investigate phenomena where economics or constraints preclude 
experimentation, evaluate complex models and manage massive data volumes, 
model processes across interdisciplinary boundaries, and transform business 
and engineering practices," Bader says.  However, petascale computing will 
also create new challenges in designing algorithms and applications.  
"Several areas are important for this task: scalable algorithm design for 
massive concurrency, computational science and engineering applications, 
petascale tools, programming methodologies, performance analyses, and 
scientific visualization," Bader says.  He expects to see the first peak 
petascale systems in 2008, with sustained petascale systems following 
close behind.
	An Open Approach to Smarter Homes
	ICT Results (11/29/07) 
	
	Today's advanced electronic devices could become the foundation for the 
smart homes of the future if they could be designed to work together 
intelligently.  Home automation systems have become more common, and 
consumer electronics are increasingly network compatible, but so far no one 
has united all of the technology in a home.  Doing so could lead to 
fridges that send a message to the television announcing that the door has 
been left open, or heating systems that turn on or off automatically when 
someone enters or leaves the house.  "People are finding themselves with all these 
networkable devices and are wondering where the applications are that can 
use these devices to make life easier and how they could be of more value 
together than individually," says Philips researcher Maddy Janse.  The 
major obstacles preventing such smart homes are a lack of interoperability 
between individual devices and the need for context-aware artificial 
intelligence to manage the devices.  The European Union-funded Amigo 
project, coordinated by Janse, is developing a middleware platform that 
will allow all networkable devices in a home to communicate as well as 
provide artificial intelligence to control the devices.  The Amigo system 
consists of a base middleware layer, an intelligent user services layer, 
and a programming and development framework so developers can create 
individual applications and services.  Amigo's software is open source to 
encourage consumer electronics and telecom firms to develop products and 
services for home networks and to ensure interoperability with different 
brands.
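Amigo's middleware is far richer than anything this short, but the 
device-to-device messaging the article envisions can be sketched as a toy 
publish/subscribe bus.  The device names and topics below are invented for 
illustration.

```python
# Toy publish/subscribe bus illustrating fridge-to-television messaging.
# Device names and topics are invented; Amigo's real middleware adds
# discovery, interoperability, and context-aware intelligence.

from collections import defaultdict
from typing import Callable

class HomeBus:
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: str) -> None:
        for handler in self._subs[topic]:
            handler(message)

bus = HomeBus()
bus.subscribe("fridge/door", lambda msg: print(f"TV overlay: {msg}"))
bus.publish("fridge/door", "Fridge door has been left open")
```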
	Getting to Core of the Problem
	Chicago Sun-Times (11/28/07) Guy, Sandra
	
	A Ph.D. student at the University of Illinois at Chicago (UIC) has 
developed CoreWall, software that scientists at the Antarctic Drilling 
program in Antarctica are using to study rock cores more effectively.  When 
the researchers drill for core samples to determine what the climate of the 
Earth was like millions of years ago, they do not have much time to collect 
data because the cores shrivel up in about a week.  With CoreWall, 
developed by Julian Chen, the researchers are able to capture 
full-resolution digital images of the core and upload the images back to 
the United States overnight using a satellite Internet connection.  
CoreWall also features a visualization tool that is capable of enlarging 
the high-resolution core photos for closer examination as well as 
annotation.  "It's something new that commercial Photoshop packages don't 
have," says UIC computer science professor Jason Leigh, director of UIC's 
Electronic Visualization Laboratory.  "Now that we have low-cost computers 
and display screens, scientists can look at the cores in their perfect form 
when they are first dug out" and preserve the images.  The UIC team has 
received a grant from the National Science Foundation to add new 
capabilities to the software, and has proposed developing "mashup" software 
that would allow the scientists to gather and share data quickly over 
superfast networks.
	Canadian Student Maps Brain Power to Image Search
	Computerworld Canada (11/28/07) Schick, Shane
	
University of Ottawa master's student Kris Woodbeck is building a search 
engine for images that mimics how the human brain processes visual 
information, capitalizing on the parallel processing capabilities of 
graphics processors.  "The brain is very parallel.  There's lots of things going on 
at once," Woodbeck says.  "Graphics processors are also very parallel, so 
it's a case of almost mapping the brain onto graphics processors, getting 
them to process visual information more effectively."  Woodbeck believes 
his research has potential for use in medical and military applications as 
well as facial recognition.  Search engine specialist Guy Creese says 
vendors are struggling to find the right kind of artificial intelligence to 
extract the content of an image to create accurate metadata.  "In text, 
you've got a lot of metadata compared to images," Creese says.  "For 
images, it might be when you took it, with what camera, with what exposure, 
that's about it ... How do you surface that metadata so it becomes much 
more searchable?"  Creese says the biggest problem is that indexing image 
content is a manually intensive job that most organizations do not have the 
manpower to accomplish.  Woodbeck says he has been testing his search 
engine on academic datasets that include between 60,000 and 100,000 
images.
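Creese's point about thin image metadata is easy to demonstrate.  The 
sketch below uses the Pillow library to dump an image's EXIF fields, which 
are mostly camera housekeeping (time, model, exposure); the file path is 
an assumed example.

```python
# Sketch of Creese's point: without content analysis, an image's
# searchable metadata is mostly camera housekeeping.  Uses the Pillow
# library; "photo.jpg" is an assumed example path.

from PIL import Image
from PIL.ExifTags import TAGS

def readable_exif(path: str) -> dict:
    """Return EXIF tags keyed by human-readable names."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

for name, value in readable_exif("photo.jpg").items():
    print(f"{name}: {value}")   # e.g. DateTime, Model, ExposureTime
```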
	Argonne's Nuclear Energy Research Moves Toward Greater 
Reliance on Computer Simulation
	Innovations Report (11/29/07) Hardin, Angela
	
	The U.S. Department of Energy's Argonne National Laboratory is 
increasingly relying on computer simulation and modeling to carry out 
nuclear energy research.  "The traditional approach to developing nuclear 
energy technologies is to do a bunch of experiments to demonstrate a 
process or reaction," notes Argonne's program manager for the Global 
Nuclear Energy Partnership Mark Peters.  "What Argonne is doing is creating 
a set of integrated models that demonstrate and validate new technologies, 
using a smaller number of experiments."  Argonne's nuclear simulation 
project leader Andrew Siegel adds that virtual experimentation can 
substantially lower facilities' costs by improving the identification and 
targeting of the physical experiments underlying their design.  He says 
Argonne computational researchers are developing SHARP (Simulation-based 
High-efficiency Reactor Prototyping) software components that digitally 
emulate physical processes that transpire within a reactor core.  The SHARP 
toolkit has been devised to exploit the Argonne Leadership Computing 
Facility's IBM Blue Gene/P computer, which runs at a sustained rate of 1 
petaflops.  SHARP could ultimately supplant computer 
codes that are used to carry out safety assessments of aging nuclear 
reactors, and Siegel says simulation tools such as SHARP could potentially 
save millions of dollars in reactor design development and assembly.
	Futurologist Predicts Life in 2030
	VNUNet (11/26/07) Williams, Ian
	
	Futurologist and author Ray Hammond predicts that by 2030 the Internet 
will evolve into a super-intelligent network, our bodies will contain 
neurological interfaces, robots will play a major role in our daily lives, 
and replacement organs will help extend the average life span to 130.  
Hammond's predictions are part of a report, "The World in 2030," which was 
produced independently following a year-long study.  "If you think this 
picture of life in 2030 sounds unrealistic, consider this: how many people 
in 1985 would have thought that computers and mobile phones would play such 
a central role in our lives today?" Hammond says.  He says that no one can 
accurately predict the future, but that the report identifies key trends 
that are likely to shape the coming decades leading to 2030.  "One thing is 
certain: the rapid change that we have seen since the 1980s will not slow 
down," Hammond says.  "It will speed up so much that, in some ways, our 
lives in 2030 will be unrecognizable today."  People will be wirelessly 
tagged for their own protection, with data on location and health 
constantly being transmitted so help can be called in case of emergency or 
sudden illness.  The Internet will develop into a "super combined Web" that 
is always on and always connected to every device.
	Torvalds on Where Linux Is Headed in 2008
	InformationWeek (11/25/07) Babcock, Charles
	
Linus Torvalds believes Linux development is far more efficient than any 
commercial development method, not only for the kernel but also for 
satellite products surrounding the kernel.  Torvalds says that 
Linux virtualization efforts particularly benefit from an open source model 
because virtualization can mean many different things to different people.  
The open source model prevents one person's, or company's, interest from 
dominating the project.  Torvalds says that current work on upcoming 
kernels includes a lot of hardware-related work, both in terms of 
peripheral drivers and platform changes.  Graphics and wireless 
networking, both weaknesses in current Linux systems, are receiving a lot 
of attention, as are virtualization and the switch to solid-state drives 
(SSDs).  While SSDs are currently too expensive to drive a major shift, 
Torvalds expects them to play a bigger role in 2008 because they can make 
a significant difference in reducing latency.
	Digital Preservation: Alliance Set to Tackle Science's 
New Frontier
	European Science Foundation (11/22/07) 
	
	The creation of a European digital information infrastructure that 
maintains accessibility to scientific works is the goal of the Alliance for 
Permanent Access, a coalition of major national and international 
scientific organizations dedicated to digital preservation that was 
launched at the Second International Conference on Permanent Access to the 
Records of Science.  The alliance includes such groups as the European 
Science Foundation, CERN, ESA, the Max Planck Society, and libraries.  
Meeting the coming challenges of creating the digital information 
infrastructure requires the support of scientific communities, according to 
the alliance.  The organization will also focus on the development of 
funding models and economic analyses to evaluate the cost of sharing and 
accessing data and find ways to embed these costs within all funding 
mechanisms for science.  Stakeholders generally concur that data must be 
retained in a manner that ensures open access and interoperability, so 
that datasets can be compared within and across scientific disciplines, 
and that repositories must be furnished to fulfill these requirements in a 
quality-controlled and sustainable way.  The European Union realizes that a 
cultural shift is required, and the European Commission has assumed the 
role of leveraging stakeholders and devising policy efforts on a strategic 
and technical level, with an emphasis on digitization and digital 
preservation.  Projects the commission is undertaking include establishing 
economic incentives for preserving data, and a proposed study on the 
socio-economic drivers and ramifications of longer-term digital 
preservation is underway.  Up next for the alliance is 
the creation of a forum on preservation and access, and the development of 
a manual of good practices.
	Computer Simulations Advance Beyond Hollywood
	New Scientist (12/01/07)No. 2632,  P. 28; Marshall, Jessica
	
	University of California, San Diego computer scientist Henrik Wann Jensen 
says photorealistic computer graphics have advanced to the point where they 
can be used in other industries besides gaming and special visual effects 
for movies.  For example, the software developed for realistic hair 
simulation in "King Kong" could be used to virtualize hair product 
applications, sparing companies the expense of manufacturing the products 
before trying them out.  Skin rendering has also made enormous progress, 
and an important advance was the recognition of skin's translucency, which 
led to the development of software that takes this property into account.  
Jensen is now boosting the realism of skin models with a version that 
divides the skin's sub-surface into an epidermis layer containing models 
of two kinds of melanin and a dermis layer containing hemoglobin.  "Without this, 
there's a uniformity to the skin that may not be quite right," he notes.  
"Things start to look a bit like wax."  Jensen plans to embed individual 
fibers and cells within the skin layers, and he says such models could be 
employed by the cosmetics industry to generate more natural-looking 
foundation.  Jensen adds that a future version of the skin model could be 
used to simulate the propagation of light of particular intensities through 
the skin of cancer patients, which could be used to ascertain the proper 
dosage of laser or radiation therapy.
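Jensen's models rest on a much richer diffusion approximation of 
subsurface scattering, but the layered-absorption idea behind the 
epidermis/dermis split can be caricatured with simple Beer-Lambert 
attenuation.  All coefficients and thicknesses below are invented for 
illustration.

```python
# Crude caricature of layered skin absorption: Beer-Lambert attenuation
# through an epidermis (melanin) and a dermis (hemoglobin).  Coefficients
# and thicknesses are invented; Jensen's models use a diffusion
# approximation of subsurface scattering, not this.

import math

def transmitted(intensity: float, absorption_per_mm: float,
                thickness_mm: float) -> float:
    """Light remaining after one absorbing layer (Beer-Lambert law)."""
    return intensity * math.exp(-absorption_per_mm * thickness_mm)

light = 1.0
light = transmitted(light, absorption_per_mm=2.5, thickness_mm=0.1)  # epidermis
light = transmitted(light, absorption_per_mm=0.8, thickness_mm=1.5)  # dermis
print(f"fraction of light reaching below the dermis: {light:.3f}")
```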
	Hire Learning
	Redmond Developer News (11/07) Richards, Kathleen
	
	A major decrease in U.S. computer science enrollment is leading to a 
paucity of enterprise-level graduates, sparking concerns and projections 
about the makeup of the future IT workforce.  Experts such as Northwest 
Cadence's Jeff Levinson note that CS majors' coursework generally fails to 
equip students with the real-world experience and business skills that are 
an increasingly critical component of IT positions.  "When CS graduates 
come out of school, 95 percent of the time they haven't seen or heard of 
use cases, have never written or read a requirements document, and don't 
possess any soft skills or understanding of business consequences," he 
laments.  Some institutions are working with tech companies such as Google 
and IBM to overhaul curriculums, bolster research programs, and draw a 
wider range of students, particularly women and minorities.  Dorota 
Huizinga, associate dean at California State University, Fullerton, 
maintains that there has been very little change in the software 
engineering curriculum over the years.  In many undergraduate programs 
students are exposed to a few descriptive courses that do not adequately 
train them for the discipline, and there is little concentration on 
designing user interfaces or improving the friendliness of software and 
services.  
Sailesh Chutani, director of Microsoft Research's External Research and 
Programs Group, believes CS enrollment can be boosted by placing computing 
in a socially relevant context or in the context of interesting fields such 
as robotics and gaming.  Some people are suggesting that students should 
receive at the very least a Master of Science in CS or software engineering 
in order to qualify as a professional.  "Once you've gone to the master's 
level, chances are you have more depth and you're more likely to fit right 
into what the industry is trying to do," says Chutani.