Intel Prototype May Herald a New Age of Processing
New York Times (02/12/07) P. C9; Markoff, John
Intel today will demonstrate its 80-core Teraflop Chip, which the company
believes will serve as the model for chips used in desktops, laptops, and
servers within five years. A commercial version, compatible with current
Intel chips, is in the works and is expected to have tens or hundreds of
microprocessors. A manufacturing breakthrough has enabled Intel to shrink
transistors down to sizes that allow for higher speeds and lower power
consumption. At a briefing, an air-cooled computer based on the chip
was shown to run basic scientific calculations at speeds over one trillion
calculations a second, equal to the world's fastest supercomputer a decade
ago. Systems with so many cores offer enormous potential, but it remains
unproven that such chips can be programmed effectively for many
applications. University of California, Berkeley computer scientist and
former ACM President David A. Patterson says, "If we can figure out how to
program thousands of cores on a chip, the future looks rosy. If we can't
figure it out, then things look dark." A group of Berkeley computer
scientists made a formal request that microprocessor manufacturers begin
producing chips with thousands of cores, claiming that if software is not
given the chance to catch up with hardware advances, the chip companies
will find themselves up against a wall of diminishing returns. In response
to this request, Intel CTO Justin R. Rattner said the Teraflop chip was the
best solution for problems such as "recognition, mining, and synthesis."
He added that the "network-on-chip" processor would be ideal for
heterogeneous computing in the corporate world. Intel researchers were
able to move data between tiles in as little as 1.25 nanoseconds, meaning
80 billion bytes per second could be transferred among internal cores. The
chip could also have a memory chip stacked directly on top of the
microprocessor, which would allow data to be moved back and forth between
memory and processor at a much faster rate than in today's chips.
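A quick back-of-envelope check of the figures above, as a minimal Python
sketch; the derived per-core and per-hop quantities are illustrative
arithmetic, not Intel specifications:

    # Only the three constants come from the article; everything printed
    # is derived arithmetic, not an Intel specification.
    CORES = 80
    TOTAL_FLOPS = 1e12           # "over one trillion calculations a second"
    ON_CHIP_BYTES_PER_S = 80e9   # "80 billion bytes per second" between cores
    TILE_HOP_SECONDS = 1.25e-9   # reported tile-to-tile transfer time

    print(f"per-core share of 1 TFLOPS: {TOTAL_FLOPS / CORES / 1e9:.1f} GFLOPS")
    print(f"bytes moved in one 1.25-ns hop at the aggregate rate: "
          f"{ON_CHIP_BYTES_PER_S * TILE_HOP_SECONDS:.0f}")
    print(f"hops per second per link: {1 / TILE_HOP_SECONDS:.2e}")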
Report Predicts Job Losses to Offshoring
Mercury News (02/12/07) Wong, Nicole C.
A report released by the Brookings Institution is the first of its kind to
focus on expected job losses in specific locations due to offshoring, and
shows that over the next decade Silicon Valley is at risk of losing one in
five of its computer programming, software engineering, and data-entry jobs
that existed in 2004. "The offshore phenomena is not something like a
peanut-butter sandwich--spread evenly across the country," says Information
Technology & Innovation Foundation President Robert Atkinson. "It's very
spiky. Federal, state, and regional policy hasn't caught up to that fact,
and we need to take that seriously." The report predicts jobs lost to
offshoring in 246 U.S. metropolitan areas between 2004 and 2015. Silicon
Valley was shown to have the highest potential for job losses with 20 to 24
percent; San Francisco, Boulder, Colo., Lowell, Mass., and Stamford, Conn.
were also among the cities with the highest percentage of predicted job
loss. However, many are not taking the report as a reason for alarm: due
to San Jose's recent restructuring, "it's not clear that what's left are
those easily offshorable jobs," says UC Berkeley economist Cynthia Kroll.
Others are skeptical of the report's regional accuracy since sample sizes
were small. The study is meant to be taken as "the Ghost of Christmas
Future," says co-author Howard Wial. "It's not a prophecy that all these
jobs will be lost ... It's an opportunity while there's still time to use
public policy to do something about it."
Wireless Sensors Extend Reach of Internet Into the Real
World
Associated Press (02/12/07) Chang, Alicia
The growth of wireless sensor networks presents the possibility of
connecting people with physical locations and conditions in the same way
the Internet connects people with computers. Thanks to an NSF grant, a
UCLA building has been converted into a testing ground for wireless sensor
technology, monitoring traffic, weather, and acoustics, among other things.
"I see this as the next wave of extending the Internet into the physical
world," says Deborah Estrin, a UCLA computer scientist and head of the
Center for Embedded Networked Sensing, a six-school consortium. Many
companies are beginning to manufacture cheap and reliable sensors, while
other groups are focusing on the privacy and security issues that would be
brought about by large-scale sensor networks. Today's sensors range in size
from one square inch to the size of a matchbox, but some have envisioned
"smart dust," or sensors the size of dust particles. The global market for
sensor network technology could rise from several hundred million dollars
currently to $8 billion by 2010, on the strength of home, agricultural, and
health care use. However, "If poorly secured networks are deployed and
exploited, people may have significant concerns about sensor technology,"
explains Carnegie Mellon University electrical and computer engineering
professor Adrian Perrig. The ZigBee Alliance, which has about 150
companies as members, has been formed to make network interoperability
rules, although such standards are still years from being completed.
The Human Factor in Gadget, Web Design
CNet (02/12/07) Olsen, Stephanie
The success of products like the iPod and Web services such as YouTube has
made it very clear that effective user design is every bit as important as
technical advancement. Gone are the days where "If I had a better
algorithm, I would win," according to NASA scientists Alonso Vera. When
the field of usability was pioneered by Jakob Nielsen in 1983, there were a
few hundred, obscure usability consultants, and now there are several
thousands of them, and more corporations are beginning to hire them. NASA
usability experts were able to reduce the time it takes scientists to plan
the Mars rover's activities from 90 minutes to 10 minutes, simply by
redesigning the interface. Effective design "must be optimized for body or
brain, it has to be deeply human, something that you desire and aspire to,"
says MIT Media Lab computer scientist John Maeda. The field of
human-computer interaction began as "human factors," the post-WWII effort to
improve airplane cockpits for ease of use, but has taken off as computers
have become ubiquitous in society, and the demand is skyrocketing for
computing professionals who understand the way people interact with
technology. Imagination and insight have proven incredibly profitable in
products such as the Nintendo Wii, which owe more to creative design than
to technological breakthrough. AnnaLee Saxenian, dean of the School of
Information at the University of California at Berkeley, says the role of
technology in modern society requires engineers with broad skills. She
says, "U.S. engineers need a broader training than simply programming and
engineering. They increasingly need to have an understanding of working
with multicultural teams and being able to understand the social components
of the products. We believe those types of people will add the most value
in the coming decades."
Supercomputing's Super Storage Issue
InternetNews.com (02/09/07) Schiff, Jennifer
Phase III of IBM and Cray's work on DARPA's High Productivity Computing
Systems (HPCS) program will address performance, programmability,
portability, robustness, and, especially, cutting-edge petascale storage
systems. "Petascale applications require a very high-performance scratch
file system to ensure that storage is not a bottleneck preventing these
applications from obtaining full system performance," Cray's Rigsbee
explains. "The permanency of this storage requires it be cost-efficiently
archived and protected for many years." Cray will address these concerns
by making use of technologies such as NFSv4, pNFS, scalable NAS, and
MAID-based virtual tape, according to Rigsbee. IBM aims to build a
highly scalable, multi-tiered petabyte storage system. Primary storage will
be handled by IBM's General Parallel File System (GPFS), and backup duties
will be handled by its High Performance Storage System (HPSS). IBM
believes that this "integration of its GPFS and HPSS products" should
address "the most significant problems of traditional archive systems,"
according to IBM engineer Rama Govindaraju. "As files age, migration
policies move them to cheaper storage automatically, transparently and in
parallel," explains Govindaraju. The company predicts that this technology
will make its way into low-end enterprise computing. The supercomputers
being designed by Cray and IBM will fall short of the DARPA HPCS goal of
two petaflops of sustained performance, but will be more capable than
today's supercomputers.
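As a rough illustration of the age-based migration Govindaraju describes,
the Python sketch below moves files untouched for 90 days to a cheaper
tier; the paths and threshold are hypothetical, and real GPFS/HPSS
deployments express such policies in GPFS's built-in policy language
rather than in external scripts:

    import os
    import shutil
    import time

    AGE_THRESHOLD_S = 90 * 24 * 3600  # hypothetical cutoff: 90 days unused

    def migrate_cold_files(fast_tier, cheap_tier):
        """Move files not accessed within the threshold to the cheaper tier."""
        now = time.time()
        for name in os.listdir(fast_tier):
            path = os.path.join(fast_tier, name)
            is_cold = now - os.path.getatime(path) > AGE_THRESHOLD_S
            if os.path.isfile(path) and is_cold:
                shutil.move(path, os.path.join(cheap_tier, name))

    # Hypothetical mount points for the fast scratch and archive tiers.
    migrate_cold_files("/scratch/fast", "/archive/cheap")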
Building the Cortex in Silicon
Technology Review (02/12/07) Singer, Emily
A Stanford University project to construct a model of the cerebral cortex
in silicon could help scientists gain a better understanding of the brain,
in order to create more capable computers and advanced neural prosthetics.
"Brains do things in technically and conceptually novel ways--they can
solve rather effortlessly issues which we cannot yet resolve with the
largest and most modern digital machines," says Rodney Douglas, a professor
at the Institute of Neuroinformatics in Zurich. In the 1980s Caltech's
Carver Mead had the idea of using transistors to construct computer chips
that could replicate the electrical properties of neurons, which
communicate using electrical pulses. The Stanford project, led by
neuroengineer Kwabena Boahen, will first build a circuit board consisting
of 16 chips, each with a 256-by-256 array of silicon neurons. Researchers
will be able to model different types of neurons as well as different areas
of the cortex. Where past work has used hundreds of thousands of neurons,
the project will use a million-neuron grid with the equivalent of 300
teraflops of processing speed, enough for real-time operation. Douglas
compares Boahen's work of assembling a structure on a scale never before
attempted to the transition from using logic gates to building computer
chips. Boahen plans to make his chips available to other scientists to
test theories on the cortex's functioning and use the information to create
the next generation of computer chips.
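The scale of the grid follows directly from the figures above; a short
sketch (the per-neuron rate is derived here for illustration and is not a
number from the project itself):

    CHIPS = 16
    NEURONS_PER_CHIP = 256 * 256   # one 256-by-256 array per chip
    EQUIVALENT_FLOPS = 300e12      # the 300-teraflop equivalent cited above

    total = CHIPS * NEURONS_PER_CHIP
    print(f"silicon neurons in the grid: {total:,}")        # 1,048,576
    print(f"equivalent FLOPS per neuron: {EQUIVALENT_FLOPS / total:.1e}")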
The Graying of the IT Workforce
Network World (02/07/07) Musthaler, Linda
As the American IT workforce ages, the percentage of IT workers over the
age of 55 is expected to increase from 13 percent in 2000 to 17 percent in
2010, according to the U.S. Department of Labor. Former U.S. Secretary of
Labor Robert Reich predicts that while 21 million new IT workers will be
needed in the next five years, the field will come up 4 million workers
short. This is largely due to the 39 percent decrease in the number of
students choosing computer science as a field of study between 2000 and
2004. On a positive note, 70 percent of workers between the ages of 45 and
74 told an AARP survey that they want to continue working. Companies can
prepare for the impending shortage by offering current employees options
such as telecommuting to keep them contributing for longer; recruiting
entry-level employees who can be prepared to step into critical roles when
needed; establishing mentoring programs so more experienced workers can pass
along their skills; training current employees, especially older workers,
since they are more likely to stay longer if their skills are kept
up-to-date; and consolidating technologies so fewer people are required to
operate and maintain them. Competition for skilled IT workers will only
get more intense in coming years, so companies should not be caught off
guard when older employees begin retiring.
'Augmented Reality' Helps Kids Learn
eSchool News (01/31/07) Devaney, Laura
The researchers behind a project that incorporates 'augmented reality'
(AR) into an educational setting believe it could change the way students
learn in the future. The Handheld Augmented Reality Project (HARP) is a
joint effort between Harvard, MIT, and the University of Wisconsin that
allows students to traverse an actual landscape, gathering information at
specific "hot spots." The idea is the result of "trying to think about
where society is going, what students will need, what the educational
properties of these devices are, and how we can design something
interesting with these devices," says Harvard professor of learning
technologies Chris Dede. AR, which layers virtual images over actual
images on a portable device, can either be place-dependent or
place-independent. For the pilot "Alien Contact!" project, the researchers
designed a place-independent system, reasoning that it is much easier for
schools to implement if students don't have to travel. High school students
were put into groups that walked around the school's athletic field using
an AR map on a handheld computer that showed different "hot spots." Each
of these locations presented them with puzzles and math problems via AR.
Their goal was to use the information gleaned from the "hot spots" to form
a theory as to why the aliens have come to Earth. The game-like approach
of "Alien Contact!" is thought to grab students' interest. While most
schools do not own such handheld devices, Dede is
confident that AR technology will be incorporated into cell phones in the
near future.
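The hot-spot mechanic itself is simple to sketch: the handheld compares
the student's position on the virtual map against a list of spots and
triggers content within a radius. Everything below (coordinates, radius,
payloads) is hypothetical:

    import math

    # Hypothetical hot spots on the virtual overlay of the athletic field.
    HOT_SPOTS = [
        {"pos": (10.0, 42.0), "radius": 3.0, "payload": "math puzzle #1"},
        {"pos": (55.0, 17.0), "radius": 3.0, "payload": "alien artifact clue"},
    ]

    def triggered(player_pos):
        """Return the payload of every hot spot within its trigger radius."""
        return [s["payload"] for s in HOT_SPOTS
                if math.dist(player_pos, s["pos"]) <= s["radius"]]

    print(triggered((11.5, 41.0)))  # near the first spot -> ['math puzzle #1']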
Work Visas May Work Against the U.S.
Business Week (02/08/07) Elstrom, Peter
Information from the federal government suggests that the H-1B temporary
visa program is doing more good for Indian outsourcing firms than for U.S.
companies. The firms often recruit workers from India to train in the
United States for jobs waiting for them back home. It is the
opinion of some experts that although the H-1B program may have been set up
to help U.S. companies hire workers with much-needed skills, what it is
actually doing is helping the offshoring of domestic jobs. Some prominent
American tech companies are concerned that the visa program could be abused
by outsourcers and impede their ability to draw overseas talent. The
Indian outsourcing firms counter that the program enables them to help U.S.
companies improve their flexibility and competitiveness in the global
economy. The H-1B program has no provision requiring employers to try to
hire American employees first before looking overseas, but there is a
requirement that companies pay the prevailing wages and benefits for
specific jobs in specific markets; the government says this creates a
financial incentive to hire Americans. But government officials admit that,
in theory, there is nothing to prevent companies from giving preference to
foreign workers. Major technology companies as well as the president
want the cap on H-1B visas to be raised, but Kara Calvert of the
Information Technology Industry Council says that "it's important to ensure
that the visas are used for the purpose for which they were intended."
New U.S. Cybersecurity Chief Lays Out Guidance
IDG News Service (02/09/07) McMillan, Robert
Gregory Garcia, the new assistant secretary for cybersecurity and
telecommunications at the U.S. Department of Homeland Security (DHS), said
at the RSA Conference that U.S. companies and federal agencies need to do
more to correct problems in their computer networks. Garcia said the
majority of world communications will probably be handled by the Internet
within the next 10 years, and outlined two objectives for the coming year.
The first is for all federal agencies to adopt common security practices,
and the second is for his office to get private companies to adhere to a
process called the National Infrastructure Protection Plan. Garcia was
adamant that the DHS expects U.S. companies to participate in the
industry-by-industry effort to evaluate security risks and develop a
process to eliminate them. "There are a lot of plans in Washington. This
one is going to
stick," Garcia said. "The private sector owns and operates 90 percent of
the critical infrastructure, and it's up to you all, not just the DHS, to
secure this infrastructure."
Nice Talking to You, Machine
New Scientist (02/10/07) Logan, Tracey
Researchers are working to overcome the irritation people feel toward
artificial voices by developing natural-sounding synthesized voices.
Stanford University communication professor and author Clifford Nass says
the lack of emotion in even the most advanced artificial voices is the
reason people are so turned off by them, and he conducted experiments
between 2000 and 2005 to determine whether modifying the qualities of
artificial voices--pitch, intonation, volume, speed--could make them more
friendly-sounding and acceptable to people. In one experiment Nass and
colleagues ascertained that people are more likely to follow the advice of
an artificial voice that sounds like their own gender, while another study
determined that a voice's "personality" can matter more than its actual
speech when it is making a sales pitch. A third experiment showed that
motorists were less likely to have accidents, at least in the virtual
realm, if the in-car digital voices that help them navigate more closely
match their emotional state. Research indicates that the most successful
artificial voice is one similar to the user's own, but systems will need
to be capable of detecting and recognizing human moods in order to furnish
such a voice. Work on mood-detection software has focused on the emotions
of anger and stress, and the former is slightly harder to detect than the
latter. T-Systems' Felix Burkhardt, whose company supplies businesses with
communications systems, says an effective anger-detection system must be
right 98 percent of the time. "Faithful mimicry of human speech, while
helpful, is not sufficient to overcome the annoyance of [this kind of]
service," says Ben Shneiderman of the University of Maryland's
Human-Computer Interaction Lab.
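For a sense of what anger detection examines, here is a deliberately naive
sketch that thresholds two common cues, pitch variability and loudness;
the thresholds are invented, and real systems train statistical
classifiers on labeled speech to approach the accuracy Burkhardt cites:

    import statistics

    def looks_angry(pitch_hz, energy):
        """Flag a segment with high pitch variability and sustained loudness.
        Thresholds are illustrative, not from any deployed system."""
        return (statistics.pstdev(pitch_hz) > 40.0
                and statistics.fmean(energy) > 0.6)

    # Wide pitch swings plus high energy trip the naive detector.
    print(looks_angry([180, 260, 150, 290], [0.7, 0.8, 0.75, 0.9]))  # True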
Robots Draw Girls to Science
Victoria Times Colonist (BC, CAN) (02/08/07) Wilson, Carla
The upcoming Lego Robotics Festival will give the University of Victoria
in Canada an opportunity to show young girls that math, science, and
technology are fun. At the Feb. 17, 2007, event, 30 girls from grades six
to 12, working in teams of five, will assemble a Lego Mindstorms NXT robot
and program it for a hovercraft rescue mission across a simulated river.
Anissa Agah St. Pierre, coordinator for Women in Engineering and Computer
Science at the university, says the one-day event will introduce some
technical skills and programming concepts to the girls. "They learn that
math is cool and important," she says. St. Pierre adds that by the sixth
grade girls may no longer have a favorable opinion of math, but on Monday a
group of 25 sixth-graders had the opportunity to run robots through
obstacle courses at the university and really enjoyed it. "You should have
heard the noise level," says a laughing St. Pierre. Women only account for
about 17 percent of computer science students and 11 percent of engineering
students at universities, she says. Career opportunities exist, but there
are not a lot of role models for girls.
Let the Games Begin
Sacramento State Bulletin (02/05/07)
Gaming could become a focus of the computer science department at
Sacramento State in the near future, but currently the university offers
only one course, "Computer Game Architecture & Implementation." Taught
by professor John Clevenger, the class recently wrapped up the fall
semester by having its 11 computer science students present games they
developed as part of small teams. Clevenger says playing games was not the
focus of the course, considering students needed to have a working
knowledge of advanced data structures, 3D computer graphics, artificial
intelligence techniques, sound, animation, and Newtonian mechanics. Some
students were more interested in the science, but others were thinking
about a career in the booming 3D computer game industry. However, 3D games
are not limited to the entertainment industry, considering the various
applications corporations, the military, the medical profession, and
emergency response groups have found for 3D games. One student, Tyler
Karaszewski, believes his game could become a commercial product with five
more years of improvement. "I've got enough knowledge after taking this
class to get started working in the industry," he says.
Beyond the Box
CIO Insight (02/07) Alter, Allan E.
An entirely new way of thinking will be necessary for personal computing
to take a long overdue evolutionary step, according to Turing Award winner
and computing pioneer Alan Kay. Technical limitations are not solely
responsible for the PC's status as a chronic underachiever, Kay says. Also
playing a role are a dearth of imagination and fascination among computer
scientists; users' resistance to making an effort to employ computers; the
dulling bombardment of popular culture; and the push to instill more ease
of use in the technology. A disregard for or disinterest in past
research--such as the work of computing innovator Doug Engelbart--is for
Kay a major indicator of the stagnation of the computing profession and the
Web, and he observes that this too is a symptom of pop culture. "People who
live in the present often wind up exploiting the present to an extent that
it starts removing the possibility of having a future," Kay warns. He says
the PC's operating systems are not all they could be, because developers'
comfort and familiarity with the layered OS architectures run so deep that
their acceptance is unconscious. Kay cites several projects his nonprofit
Viewpoints Research Institute is involved in, including MIT scientist
Nicholas Negroponte's One Laptop per Child initiative to build ultracheap
computers for the world's poor, a project funded by the National Science
Foundation that plans to encompass the entire spectrum of the
personal-computing experience in less than 20,000 lines of code, and the
construction of a new kind of user interface that helps people learn. "If
you were to change the approach to the user interface ... to a more
learning-curve-oriented system, then you would be able to accelerate the
acceptance of the newer ideas about what computers can do," Kay explains.
He believes the best way to ensure the adoption of this new approach to
computing is to cultivate a different kind of thinking in young, questing
minds.
Split Decision
Government Technology (02/07) Vol. 20, No. 2, Opsahl, Andy
Direct-recording electronic (DRE) voting machines have been fraught with
controversy because of allegations that they do not, as advertised, provide
adequate security or reliability. Advocates claim that e-voting systems'
primary advantage is their ability to substantially reduce voter error, but
observers say there are still lingering vulnerabilities that must be
addressed before the systems can be widely accepted by election officials.
Critics blame the lack of openness of the systems' technology and
procedures for the inability to determine the cause of irregularities such
as mass undervoting recorded in a recent congressional race in Florida.
Electronic Frontier Foundation attorney Matthew Zimmerman says it is
difficult to hold DRE machine vendors and election officials liable for
errors because vendors are permitted to shield the systems' proprietary
code so competitors cannot duplicate their work, and this allowance ruins
transparency in government. "It's only by going through public record
requests and fighting election officials across the country that we get a
better idea of what kind of performance these machines have," he notes.
Vendors have responded to claims that elections could be rigged by
undetectable malware with counter-arguments that no real-world election
environment offers sufficient system access for such a breach to be
successful, and they believe a test election prior to actual voting could
tell whether the DREs have been compromised. Zimmerman cites inadequacies
in the certification awarded to e-voting systems, maintaining that "There
isn't a very substantive review of the code and the components that go into
these systems." There is much support for the inclusion of a paper trail
in DRE machines, but the existing models need reliability-boosting design
improvements, according to Doug Lewis with the National Association of
State Election Directors.
Chipping In
Scientific American (02/07) Vol. 296, No. 2, P. 18; Griffith, Anna
Scientists are working on a "brain chip" designed as a memory aid,
especially in cases where the patient has suffered neural damage. A team
from the University of Southern California is getting ready for live tests
of a neural prosthesis in brain-damaged rats, which may be carried out in
the spring. In January 2006 USC researcher Theodore W. Berger and his team
engineered a silicon chip that imitates biological neurons in tissue slices
of rat hippocampus, serving as a replacement for a surgically removed
section of brain and restoring function by processing neural input into
appropriate output with 90 percent accuracy. The expense and timeframe of
constructing the brain chip mean the spring test will actually use a
mathematical model of the chip implemented on a field-programmable gate
array (FPGA). One of the study's collaborators, Wake
Forest University professor Sam Deadwyler, has shown that stimulating the
hippocampus of living rats with a specific activity pattern can boost
performance on a memory task, and in several months he will employ the FPGA
model to predict hippocampal activity; memory restoration in rats with
drug-induced amnesia via the neural prosthesis should be possible if the
model is correct. USC physicist Armand Tanguay thinks a module that uses
light beams to transmit signals between neuron units on multiple chip
layers may be necessary for more complex animal models. Factors that may
need to be addressed include the avoidance of rejection by the immune
system and neural plasticity, according to USC chemist Mark Thompson.
There is also the possibility that such implants could make memory
indelible because circumventing damaged hippocampal neurons might also
circumvent connections with other areas of the brain that filter memory.
Open vs. Closed
ACM Queue (02/07) Vol. 5, No. 1, Ford, Richard
Richard Ford of the Florida Institute of Technology writes that resolving
the debate over whether open source or closed source software is more
secure involves defining open source, closed source, and, perhaps most
critical of all,
security. The traditional definition of security is the maintenance of
confidentiality, integrity, and availability (CIA) of information, but Ford
notes that this offers little guidance in terms of measuring security; he
cites the two obvious measurement approaches of quantifying the
vulnerabilities in a product and estimating the chances of a CIA component
being compromised, neither of which offers an objective measure of
security. The current inability to measure the deep-seated security
outcomes of open/closed source processes in an ordinal manner "means that
our 'experimental' approach to determining which approach leads to better
security is off the table: Until the science matures, we will have to
examine the pros and cons of each approach independently and try to balance
them ourselves," the author reasons. Ford makes the case that closed
source, simply put, does not allow access to source code while open source
does; similarly, most open source advocates support the legal modification
and redistribution of distributed source code, while closed source
proponents tend to oppose derivative works. Open source has advantages to
both software hackers and defenders--for hackers, open source offers
complete disclosure on the implementation of software features and
transparent discussion of vulnerabilities and design decisions, while
defenders can inspect the code to determine how secure features are.
Meanwhile, closed source provides code access to only a small segment of
a given community, meaning hackers must undertake an arduous process of
reverse engineering, while users have little choice but to trust the vendor
as to the product's security. Ford observes that software vendors lack
inherent trustworthiness, and notes that in such a scenario open source at
least provides the means by which an entity can check that all is well.
The author concludes that "both development methodologies have intrinsic
properties: Which set of properties most appropriately fits for a
particular application is contextual."
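Ford's observation that vulnerability counts are not an objective measure
can be seen with toy numbers: a raw count says nothing about how much
scrutiny produced it. The figures below are invented purely for
illustration:

    # (reported vulnerabilities, rough count of independent reviewers)
    products = {
        "heavily_audited_open_project": (120, 500),
        "rarely_audited_closed_product": (15, 3),
    }

    for name, (vulns, reviewers) in products.items():
        print(f"{name}: {vulns} reported, {vulns / reviewers:.2f} per reviewer")

    # The closed product "wins" on raw count yet yields far more findings
    # per reviewer -- neither figure is an objective measure of actual risk.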