Behind Bush's New Stress on Science, Lobbying by
Republican Executives
New York Times (02/02/06) P. C4; Markoff, John; Leary, Warren E.
(Access to this site is free; however, first-time visitors must register.)
President Bush's call for doubling the federal funding of basic scientific
research comes as a response to several meetings that White House officials
held with technology executives and educators. Bush's plan to request $910
million in the first year and $50 billion over 10 years was welcome news
for computer scientists who have long warned against the destructive impact
of eroding federal funding. Bush identified nanotechnology,
supercomputing, and alternative energy sources as long-term initiatives
that the administration would now support, in a departure from its
traditional focus on short-term research. Under Bush's plan, spending
would likely increase by 7 percent annually, roughly doubling over 10
years. While the details remain vague, ACM President David Patterson is
excited: "This is really a huge deal and I'm very encouraged," though he
noted with concern that many legislators attending the State of the Union
address were not moved to applause by Bush's announcement. "It just shows
the challenge we have." In two high-profile discussions where the
administration was urged to heed the warning of the National Academy of
Sciences that science and technology education are eroding rapidly, Intel
Chairman Craig Barrett met with Vice President Dick Cheney, and Charles
Vest, the former president of MIT, met with OMB director Joshua Bolten.
The executives and educators who had attended those meetings were still
unsure if the administration would act on their recommendations, so Bush's
announcement came as a welcome surprise. "We haven't seen this interest in
basic research from this president before," said the American Association
for the Advancement of Science's Albert Teich. The growing problem of
funding for research and education has also attracted Congressional
interest, as two bipartisan bills addressing the matter have recently been
introduced.
Data Mining Tells Government and Business a Lot About
You
Knight-Ridder Wire Services (02/01/06) Boyd, Robert S.
Data mining is at the center of the debate over the government's
warrantless eavesdropping program, as the same complex algorithms that
identify consumer preferences and search tendencies are being used to
ferret out suspicious relationships that could point to terrorist activity
within the nation's borders. The Senate Judiciary Committee will open its
investigations into the government's eavesdropping program on Monday. Data
mining is dependent on computers to cull through an ocean of meaningless
data to arrive at one potentially meaningful relationship, at which point a
human investigator takes over. Determining whether two members of a list
of 10 suspects have ever stayed in the same hotel at the same time would
require a search of at least 250,000 records, well beyond the capacity of a
human, notes Stanford University computer science professor Jeffrey Ullman.
"Before data aggregation and data mining came into use, personal
information on individuals contained in paper records stored at widely
dispersed locations, such as courthouses or other government offices, was
relatively difficult to gather and analyze," according to a Congressional
report issued by the Government Accountability Office (GAO) last year. The
GAO estimates that 52 government agencies had implemented or were planning
data mining initiatives in 2004. Should a data mining initiative be
employed on a national scale, it might include a search engine that would
scan for suspicious words, such as "bin Laden" or "nuclear plant" in
telephone and email communications. Data visualization techniques could
plot out these words, giving due weight to their frequency. Data mining is
also widely used in the insurance industry to detect fraud, as well as in
marketing, politics, and finance. Increased data mining usage has
naturally drawn criticism from privacy advocates, who argue that while the
program cloaks itself in national security concerns, it nonetheless is
overly invasive.
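Ullman's hotel example hints at the basic mechanism: bucket records by a shared attribute, then look for co-occurrences within each bucket. A minimal sketch in Python (illustrative only; the record layout and field names are assumptions, not drawn from any actual government system):

```python
from collections import defaultdict
from itertools import combinations

def colocated_pairs(stay_records, suspects):
    """Find pairs of suspects who stayed at the same hotel on the same night.

    stay_records: iterable of (guest, hotel, date) tuples (hypothetical layout).
    suspects: set of guest names to screen.
    """
    # Group suspect stays by (hotel, date), so each bucket holds the
    # suspects present at one hotel on one night.
    buckets = defaultdict(set)
    for guest, hotel, date in stay_records:
        if guest in suspects:
            buckets[(hotel, date)].add(guest)

    # Any bucket containing two or more suspects yields candidate pairs,
    # at which point a human investigator would take over.
    pairs = set()
    for present in buckets.values():
        for a, b in combinations(sorted(present), 2):
            pairs.add((a, b))
    return pairs
```

The bucketing step is what makes the search tractable: rather than comparing every pair of records, the computer only compares guests who already share a hotel and a date.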
U.N. Tech Summit Promotes Middle East
TechWeb (01/31/06) Jones, K.C.
This April, Oman will host the first World Summit on Innovation and
Entrepreneurship, which will bring together more than 1,000 leaders in
education, business, science, and policy to promote technology and
innovation in the Middle East. The summit, which plans to develop a
workable model that can be applied to other emerging markets such as Africa
and Eastern Europe, will draw representatives from Microsoft, Google, MIT,
Hewlett-Packard and many other organizations. One summit participant,
Cisco, has already made significant inroads in the Middle East, having
established all-women's technical academies in countries such as Jordan and
Saudi Arabia. Oman was selected to host the summit for the progressive
policies of its monarchy, which has pursued a program of "Omanization" to
deliver education and job opportunities to its citizens. Global Leadership
Team Chairman Sam Hamdan said the summit would focus on empowering
developing societies with technology and overcoming the digital divide.
The summit will address six principal issues: improving existing
infrastructure; accelerating knowledge partnerships through empowerment;
inspiring human talent to sustain development; legal reforms and improved
governance through innovation; information sharing to spur growth; and
examining the effect of innovation on future generations. Among the other
issues on the summit's agenda are bringing women and children into the
technology constellation and a host of environmental and geopolitical
issues, such as renewable resources, managing oil wealth, and intellectual
property rights. Technological topics will include nanotechnology,
e-government, telecommunications, and open source applications.
The Open-Source Programmer Who Means Business
CNet (02/02/06) Marson, Ingrid
In a recent interview, Red Hat's Alan Cox outlined his thoughts on a host
of open-source issues, including the Linux kernel, software patents, and
the new version of the GPL. Cox applauds the GPL for its compatibility,
and describes the update as a more sensible version of GPL 2.0. In the
wake of the Sony rootkit scandal, Cox believes that digital rights
management may ultimately be determined by government regulation, as a
consensus has yet to emerge on the relationship of computers and private
property. An ardent opponent of software patents, Cox expresses shock that
the European Commission would consider altering the patent process after
the European Parliament flatly rejected software patents last year.
Meanwhile, pressure from IBM and Microsoft to reform the U.S. patent system
is progressing slowly, but Cox believes that it is evolving in the right
direction. He is pleased with the development of the Linux kernel, noting
that it has evolved to a point where it offers all the functionality that
its users require, and that the modifications and updates that are
currently being issued are mostly tweaks--subtle improvements on speed and
efficiency. Cox praises the unstructured development process of the Linux
kernel, though he notes that every change is reviewed, and that there are
few prominent independent developers anymore. Migration to Linux is
gaining steam, particularly in larger operations and environments where PCs
are used mostly for word processing. Cox also notes that OpenOffice has
been a key driver of Linux adoption.
U.S. High-Tech Jobs Waking From Long Slumber
Investor's Business Daily (02/02/06) P. A4; Much, Marilyn
The productivity gains of the U.S. technology sector are prompting the
industry to hire more tech workers, according to Mark Zandi, chief
economist of Moody's Economy.com. Job growth rose to 125,000 positions
last year, up about nine times from 14,000 in 2004, says Zandi, who expects
high-tech companies to add 217,000 jobs this year. Going forward, he
anticipates a significant run up in the demand for tech products globally
over the next few years, prompting him to forecast the creation of 126,000
jobs in 2007, 123,000 in 2008, and 150,000 or more a year in 2009 and 2010.
Meanwhile, the creation of 2,000 tech jobs in Silicon Valley in 2005 marks
the first time the region added new jobs in four years, according to the
trade group Joint Venture: Silicon Valley Network. Russell Hancock, chief
executive of the organization, says most of the new tech jobs in the region
were high-end positions, and that growth is likely to continue in areas
such as biotechnology and nanotechnology. In another good sign,
outplacement consulting firm Challenger, Gray & Christmas reports that the
number of tech layoffs fell to 174,744 in 2005, down from 176,113 the
previous year. However, the firm believes mergers and acquisitions,
including private equity buyouts, will provide an element of uncertainty in
the years to come.
Georgia Tech, Oak Ridge and UT-Battelle Collaborate on
High-Performance Computing
EurekAlert (02/01/06)
Large-scale research using supercomputing technology in the United States
should receive an enormous boost as a result of a collaboration between the
College of Computing at Georgia Tech (COC), Oak Ridge National Laboratory
(ORNL), and UT-Battelle. The partnership calls for the research bodies to
share their facilities, scientific resources, and staff. COC is a leader
in using computer technology to advance social and scientific research,
ORNL is the largest multipurpose laboratory of the U.S. Department of
Energy, and UT-Battelle is a nonprofit partnership between the University
of Tennessee and Battelle that manages ORNL. Dr. Thomas Zacharia,
associate laboratory director for ORNL's Computing and Computational
Sciences Directorate, is set to become a professor at COC. In the months
to come, there will be other appointments involving faculty and staff
members, and computer resources will continue to be shared. Also, COC's
Computational Science and Engineering Division will open a campus at ORNL.
"We firmly believe that this partnership with ORNL and UT-Battelle will
create a one-of-a-kind environment for high-performance computing research
and help reinvigorate U.S. capabilities in supercomputing," says Richard A.
DeMillo, dean of the College of Computing at Georgia Tech.
Spyware Poses a Significant Threat on the Net, According
to New Study
UW News (02/02/06)
A recent University of Washington study of more than 20 million Internet
addresses has found that spyware, while slightly less prevalent today than
last spring, is nonetheless alive and well. The researchers found that
more than one in 20 executable files carried spyware, while one in 62
domains attempts to install spyware on the machines of users who merely
visit the site. Piggybacked spyware appeared most frequently on celebrity and game
Web sites. "For unsuspecting users, spyware has become the most 'popular'
download on the Internet," said computer science and engineering professor
Hank Levy. Spyware severity runs a wide spectrum, as some programs simply
inundate users with popups, while others can steal personal information,
install malicious programs without the user's knowledge, or even corrupt the
computer beyond use. Of the piggybacked spyware that the researchers
found, 14 percent carried potentially malicious applications, while the
rest was relatively harmless adware. Drive-by download attacks were down
93 percent from May to October of last year, which could be explained by
the increasing use of antivirus software and automatic update programs.
The researchers recommend that users have at least one anti-spyware program
installed on their computers and that they keep it up-to-date, and that
users exercise common sense and only download from reputable sites.
High Tech, Under the Skin
New York Times (02/02/06) P. E1; Bahney, Anna
(Access to this site is free; however, first-time visitors must register.)
The convergence of man and machine is taking a bold step forward as people
implant RFID devices under their skin to log on to their computers, unlock
their cars, and open doors with a wave of the hand. Some enthusiasts argue
that the cell phone has essentially become an appendage of the human body,
and implanting a chip is simply the logical extension of our intertwined
relationship with technology. RFID tags have been implanted in livestock
for years, enabling owners to scan the animal from two to four inches away
to determine if it belongs to them. While the blending of humans and
computers has long been the dystopic fantasy of science fiction
visionaries, a growing body of pro-convergence technologists see its
reality in the redesigns of cameras, MP3 players, and storage devices to
resemble jewelry and blend into a user's wardrobe, as well as the jackets
and sunglasses that now come with Bluetooth capabilities, enabling them to
function as digital devices. For the many people who see the cell phone as
an extension of themselves and carry flash drives on their key chains and
iPods with their entire music library, an implanted RFID chip is no
different than having a filling put in, says the Institute for the Future's
Alex Soojung-Kim Pang. RFID tags can be obtained on the Internet for as
little as $2, and devices such as computers and car doors can be modified
with wire scanners to interface with the chip. Implanting RFID chips is
not a new practice, as the Florida company VeriChip has implanted more than
2,000 people with chips to link to their medical records since 2004. The
practice has drawn criticism from privacy advocates, who claim that while
the technology may be an appealing novelty today, the future could bring an
RFID-dependent climate where people will be required to have the chips
implanted. There are also health concerns about the procedure, as many
have had their chips implanted in non-medical settings.
PC Industry Looks to Transform Firmware
eWeek (01/31/06) Spooner, John G.
The computer industry is moving toward the Unified Extensible Firmware
Interface (UEFI) to standardize the interaction between a PC's firmware and
its operating system. UEFI also standardizes the operating system's
loading method and the execution of pre-boot applications, which promises
to reduce the software conflicts that undermine system stability. Firmware
is an oft-neglected sector of the PC industry that typically does not get
the same press that rollouts of new processors and operating systems
attract. In the first major revision of the method for writing firmware,
UEFI will shoulder much of the load that has long been carried by the
existing BIOS software. A UEFI 2.0 specification is due out in the near future,
and will likely pervade the PC landscape by 2007. The Unified EFI Forum, a
group that draws support from Intel, AMD, and Microsoft, produced the new
specification, building on Intel's Extensible Firmware Interface
specification 1.1. The transition is expected to gather steam this summer,
when Microsoft releases Windows Vista, which is expected to be compatible
with firmware based in both BIOS and EFI. UEFI 2.0 is not likely to catch
on in earnest until 2007, however, given that Vista is expected to support
booting from EFI firmware only for 64-bit systems. Once it is standardized, Dell is expected to
convert to UEFI, said the company's Dick Holmberg. UEFI also allows
incremental changes to be made bit by bit, while the rest of the package
remains unchanged, minimizing PC instability. The new interfaces also have
such appealing features as a boot manager, which enables PCs to boot from
multiple devices and toggle between operating systems, as well as a network
stack that links a PC to a network before its operating system has
loaded.
DOE's Office of Science Awards 18 Million Hours of
Supercomputing Time to 15 Teams for Large-Scale Scientific Computing
U.S. Department of Energy (02/01/06)
The Energy Department has granted researchers 18.2 million hours of
computing time at some of the fastest supercomputers in the world to
advance a host of research projects through its Innovative and Novel
Computational Impact on Theory and Experiment (INCITE) program. "Through
the INCITE program, the department's scientific computing resources will
continue to allow researchers to make discoveries that might otherwise not
be possible," said Energy Secretary Samuel Bodman. INCITE has already
funded research in fields such as chemistry, astrophysics, and genetics,
and will support new initiatives in disease research, aerospace, molecular
simulations, and atomic-level protein structuring throughout the coming
year. INCITE is also encouraging proposals from the private sector for the
first time in its three years. Each of the last two years, the Energy
Department has only issued three grants, though it is offering 15 this year
thanks to an expansion of its program to include five supercomputers in the
Argonne National Laboratory, Lawrence Berkeley National Laboratory, Oak
Ridge National Laboratory, and Pacific Northwest National Laboratory. The
department's call for proposals received responses from 11 scientific
disciplines requesting more than 95 million hours of computing time.
Boeing, DreamWorks Animation, General Atomics, and Pratt & Whitney were the
four grant recipients from industry, with the remaining 11 grants awarded
to colleges and universities. Of the grant recipients already using their
allocation, a University of Chicago group is simulating cosmos accretion,
another is studying numerical simulations of flame, and a team from the
University of Washington is cataloging the dynamical shapes of proteins
using an IBM supercomputer.
Self-Improving Software
Technology Research News (01/30/06)
Software that is able to learn on its own, without a separate training
process, could result from new algorithms developed by researchers at
Princeton University. The algorithms are able to learn from data that they
do not know anything about beforehand, and then make adjustments to handle
such data. The researchers designed the algorithms to focus on the way
pieces of data represent a range of possibilities, instead of the details
of the data. The algorithms learn from individual pieces of data and can
improve after processing only a few samples. The algorithms
have the potential to allow software to alter its default configurations on
its own while it learns how it is used. The research, "Self Improving
Algorithms," was presented at the ACM-SIAM Symposium on Discrete
Algorithms, January 22-24, 2006.
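The paper's constructions are more involved, but the core idea--a routine that tunes itself to the distribution of its inputs while in use, with no separate training phase--can be illustrated with a classic self-adjusting structure (an illustrative stand-in, not the Princeton algorithms themselves):

```python
class SelfAdjustingList:
    """Linear-search list that moves each found key to the front, so
    frequently queried keys migrate toward the head and lookups get
    cheaper as the query distribution is 'learned' during normal use."""

    def __init__(self, keys):
        self._keys = list(keys)

    def lookup(self, key):
        # Returns the number of comparisons used, or None if absent.
        # For keys queried often, this cost shrinks over time.
        for i, k in enumerate(self._keys):
            if k == key:
                # Move-to-front: the self-adjusting step.
                self._keys.insert(0, self._keys.pop(i))
                return i + 1
        return None
```

No training pass is ever run: the structure simply reorganizes itself as it serves queries, which is the flavor of adaptation the article describes.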
Words Help Us See and Talk
University of Chicago, Ill. (01/31/06)
Research in January's Proceedings of the National Academy of Sciences
suggests that language may shape what we see in half of the visual field.
The study wades into the controversial issue of whether language influences
perception, but adds to the discussion by suggesting that the language we
speak affects perception in the right half of the visual field. The left
hemisphere of the
brain is largely responsible for processing language. "So it would make
sense for the language processes of the left hemisphere to influence
perception more in the right half of the visual field than in the left
half," says Terry Regier, associate professor of psychology at the
University of Chicago. Regier authored the paper, "Whorf Hypothesis is
Supported in the Right Visual Field but not in the Left," along with Paul
Kay, professor emeritus of linguistics and a senior research scientist at
the International Computer Science Institute in Berkeley, Calif.; Richard
Ivry, professor of psychology and director of the University of California
at Berkeley's Institute of Cognitive and Brain Sciences; and Aubrey
Gilbert, a graduate student in the Helen Wills Neuroscience Institute at UC
Berkeley. The study was based on experiments on Berkeley undergraduates
who were shown a ring of colored squares, which were the same color except
for an "odd man out." Participants were charged with identifying which
side of a circle (right or left) the odd-man-out was on, as the odd-man-out
was given either the same name as the other squares (e.g. a shade of
"green" while the others were a different shade of "green") or a different
name (e.g. a shade of "blue" while the others were a shade of "green").
Responses came more quickly when the odd-man-out had a different name and
was in the right half of the visual field.
Ontology Construction From Online Ontologies
University of Southampton (ECS) (01/30/06) Alani, Harith
Harith Alani of the University of Southampton's Electronics and Computer
Science Department posits that the high cost of building ontologies is a
major obstacle toward their wide acceptance, and notes that existing
ontology reuse, though less expensive, is a challenging proposition because
of the relative newness of ontology reuse tools. Alani notes that the
emergence of ontologies and libraries for storing and indexing ontologies
on the Web, along with search engines that can expedite the search and
retrieval of online ontologies, is a positive development, and he proposes
a system for automatically building ontologies through the identification,
ranking, and integration of online ontological fragments. The first step
toward constructing an ontology is to identify terms to be represented in
the ontology, and Alani's proposed system would analyze the ontologies
retrieved in a search query to obtain as much representational data as
possible about the given term. If an excessive number of ontologies is
found, the system would rank them by certain criteria, such as the degree
to which they represent concepts in the search query, user ratings of the
ontologies, or how well they address the needs of specific assessment
tests. The system would either take the ontology as a whole or only the
segment describing the term in question, and then compare the various
ontologies it finds to identify any additional representations that can be
blended into the first ontology, resulting in an ontology that is richer
and more refined than any existing ontologies. Evaluating the resulting
ontology may be necessary to ensure it fulfills some minimum quality
requirements. "Facilitating reuse of other people's ontologies should
encourage more individuals and organizations to participate in the Semantic
Web," Alani reasons.
Forum Tackles Internet Regulation
ISN--Security Watch (01/31/06) Lyman, Eric J.
Representatives from leading high-tech companies around the globe met at
Italy's Ministry of Culture in Rome in late January to discuss online
security, reaching a general consensus that the less the government gets
involved, the better. The International Conference on the Future of the
Digital Economy was hosted by the Organization for Economic Cooperation and
Development. Attendees noted that companies have taken security-related
steps independently to avoid government intervention. "If companies take
steps to assure security and legal issues, then there will be no need for
government regulators," said David Sifry, the president of Technorati, a
company that monitors traffic on Internet blog sites, on the sidelines of
the event. Representatives from Google were on hand to discuss their
much-maligned Google Book Search initiative, noting that only books in the
public domain will be fully available. "We are operating on the assumption
that regular copyright law is the regulatory structure that is relevant
here," said Patricia Moll, Google's European policy manager. "We are not
breaking any copyright laws and we don't see there being a need for
additional regulation in this area." Jung Ju Kim, the CEO of South Korean
multiplayer game developer Nexon Corporation, weighed in, saying, "We
believe the government should step in only when there is no other way."
But speaking at the event, Italian Innovation and Technologies Minister
Lucio Stanca said that the government will indeed step in when needed.
"The government's job is to protect its citizens in every way possible,"
Stanca said. "If the need arises in the future, that could include
increased regulation of the online world." Italy in the past has shown a
willingness to get involved in Internet affairs, strictly limiting the use
of its .it domain, policing defamatory Web sites hosted in Italy, and
limiting the sale of illegal products, as well as goods linked to extreme
political groups over the Internet.
The Race Is on to Debug Dual-Core Deadlocks
Software Development Times (02/01/06) No. 143, P. 1; DeJong, Jennifer
Deadlocks or race conditions that are hard to detect can cause a
multithreaded application to halt in its tracks, but dual-core desktop
processors from Intel and AMD are renewing interest in developing an
antidote to such errors. Multithreaded applications contain threads that
frequently vie for shared resources such as variables and functions; in
one example, a deadlock can crop up when a multithreaded application locks
a resource and does not unlock it once its task is finished, causing a
system freeze, according to Coverity's Andy Yang. Competition between
threads for resources when the application fails to specify the order in
which the threads can access the resources gives rise to race conditions,
and Intel engineer James Reinders says synchronization can address this
flaw. Multithreaded applications perform tasks concurrently, and avoiding
errors requires the developer to know a program's resource-sharing
principles. "Don't write multithreaded code if you don't know what you are
doing," recommends Fortify CTO Roger Thornton. Collisions can be reduced
through the use of "thread-safe" languages such as C# and Java, while Ken Cowan
of Compuware says concurrency issues should be tackled in the design stage.
Thornton believes multithreading should be used to increase performance,
not just for its own sake, and adds that "you have the responsibility to
weigh the trade-offs." Even when concurrency is necessary, developers need
to consider a way to distribute the work among multiple processors without
generating excessive synchronization, and tasks that do not exhibit a high
degree of interdependence are the best candidates for parallelization.
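The lost-update race Reinders describes, and the synchronization that cures it, can be reproduced in a few lines of Python (an illustrative sketch; the article's context is C#, Java, and native code on dual-core hardware):

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Race condition: a read-modify-write on a shared variable with no
    # ordering between threads, so concurrent updates can be lost.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # Synchronization: the lock serializes access to the shared
    # counter, so no update is lost.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n_threads=4, n=100_000):
    # Launch n_threads copies of worker and return the final count.
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

With `unsafe_increment`, interleaved read-modify-write sequences can silently drop updates; with `safe_increment`, the final count is always `n_threads * n`, at the cost of the serialization the article warns about.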
Robot Special: Walk This Way
New Scientist (02/04/06) Vol. 189, No. 2537, P. 40; Ball, Philip
Only recently has the concept of passive walking robots--machines that can
imitate a human's entire gait cycle by relying on leg motion and gravity
rather than motors--gained prominence, and an important sign of the
technology's increasing distinction was the unveiling of three passive
walkers at an American Association for the Advancement of Science
conference last February. A team led by Cornell University's Andy Ruina
built a robot that mimics people's ability to pump their gait by pushing
off their back foot at the beginning of each step by using a spring in each
lower leg; a small motor stretches the spring, which causes the ankle joint
to flex when released. Another robot, built by MIT researchers supervised
by Russ Tedrake, can sense the tilt of its body and other factors in order
to "learn" how to walk, using an on-board computer that adjusts command
signals transmitted to electric motors that flex the ankles. The third
robot was the product of the Delft University of Technology in the
Netherlands: The device is driven by compressed-air actuators in the hips,
and features an ankle design influenced by skateboard suspension principles
that adds stability. All three machines represent a significant advance
because they simplify leg design and control, and lower energy consumption.
Passive walkers' inability to steer or avoid obstacles, as well as their
difficulty in dealing with uneven or pliable surfaces, makes them
inappropriate models for robust walking robots, according to Carnegie
Mellon University roboticist Chris Atkeson. He believes the best solution
will mix principles of both passive dynamic walkers and powered walkers,
and expects practical robots will be driven by most if not all of their
joints. Passive-walking robots are also inspiring research into advanced
prosthetics that could perhaps reduce the energy cost to the wearer and
amplify human performance.
Unified--and Agile
Software Development (01/06) Vol. 14, No. 1, P. 49; Ambler, Scott W.
Ambysoft software process improvement consultant Scott Ambler believes
software development could be revolutionized by combining agility with the
Unified Process (UP), and his work in this area has yielded the Agile
Unified Process (AUP), a simplified approach to software development based
on the Rational Unified Process (RUP). AUP shares with RUP a serial,
iterative nature and an architecture-centric scheme, but the chief
difference is AUP's simplicity. AUP possesses a small number of
deliverables--source code, a regression test suite, and other artifacts
that must be produced as part of the system--which can speed up time to
market. AUP's model discipline consolidates RUP's Business Modeling,
Requirements, and Analysis and Design disciplines, and less up-front
modeling is required in AUP. In AUP, RUP's Configuration and Change
Management discipline is supplanted by a Configuration Management
discipline, while RUP's change management activities are incorporated into
Model and Project Management disciplines. Ambler notes that agile
techniques such as test-first programming, code and database refactoring,
continuous builds, and continuous regression testing are adopted by AUP's
Implementation discipline. The need for close collaboration among
developers, stakeholders, and agile modelers is explicitly depicted, as are
database administration initiatives. AUP encompasses an evolution
of the Project Manager's role to reflect agile project management
strategies.