Task Force Advocates Innovation Focus in Military
National Journal's Technology Daily (11/16/06) Greenfield, Heather
The Task Force on the Future of American Innovation presented President
Bush with a request for basic defense research to be included in his
American competitiveness initiative. Although military R&D spending is at
a record high, recent expenditures have simply applied current technology
to new equipment: "We have been under-investing in the basic research
needed for the next-generation military technology," says the report. A
pattern revealed by a report last year was confirmed and updated, showing
further decreases in federal investment in physical sciences and
engineering, says task force chairman Doug Comer of Intel. Also cited is
the fact that China is now the largest technology exporter and that North
American companies accounted for only 41 percent of U.S. patent filings, while
59 percent came from Asian companies. Newt Gingrich, a member of the task
force, warns that the U.S. must aim to take the lead in science through
investment in national security advancements; "otherwise, we'll have
opponents that have scientific capabilities we don't understand," he says.
Although both President Bush and incoming House Speaker Nancy Pelosi
(D-Calif.) support more funding for basic research and programs to produce
more science, technology, and engineering graduates, Gingrich says the
bipartisan support has not produced visible gains, while the proposals are
not enough to keep pace with other countries.
CRA Calls for Advice on the GENI Science Council
Computing Research Association (11/20/06) Bernat, Andy
The National Science Foundation has invited the Computing Research
Association (CRA) to establish a Computing Community Consortium (CCC) to
help the computing research community build compelling long-term research
visions and the mechanisms to realize them. One of the first
responsibilities of the CCC is to create a council to help guide the design
of the science plan for the Global Environment for Networking Innovations
(GENI) initiative. The purpose of GENI is to enable the research community
to invent and demonstrate a global communications network and related
services that will be qualitatively better than today's Internet. The GENI
Science Council (GSC) will provide broad research community involvement for
GENI, and the CRA is now calling on the community to help set the GSC
agenda. The CRA is seeking input on such matters as the research areas the
GSC should address; overall characteristics the GSC should possess; and
recommendations of specific individuals deemed strong contributors to the
GSC. Responses should be submitted to CRA Executive Director Andy Bernat
at [email protected].
U.S. Technology Czar Says More IT Workers Needed
eWeek (11/17/06) Gibson, Stan
The United States needs to produce more engineering graduates and allow
more foreigners to work in the country in order to prevent a shortage in its
IT workforce, U.S. technology czar Robert Cresanti said
during an exclusive interview with eWeek. On the issue of foreign workers,
Cresanti, undersecretary of commerce for technology, expressed support for
the H-1B visa program and for making it easier for students from other
countries to obtain visas. A recent trip to China allowed Cresanti to see
first hand the enormous amount of money that has been spent on research
facilities and schools in the nation, and the observation helped him to
realize that "math and science are ingrained" in the culture. "Virtually
every senior government official I met was an engineer," he added. Trade
with China remains an option, but the issue of intellectual property still
needs to be addressed. He also answered questions on the topic of software
patents, and said the patents need to be clearly defined. Looking to the
future, Cresanti is very optimistic about nanotechnology and its potential
impact on how things will be made in the years to come. However, he
acknowledged that there needs to be further study of the potential health
risks of nanotechnology.
Hyperlinking Reality via Phones
Technology Review (11/20/06) Greene, Kate
Mobile Augmented Reality Applications (MARA) are being developed by Nokia
researchers as a way to superimpose virtual information on a real-time cell
phone video stream. Nokia Research Center engineer David Murphy says the
technology could be used to locate and learn about nearby restaurants,
hotels, or other MARA users, even providing hyperlinks to menus or blogs,
if available, as more people and places are incorporated into the MARA
system. The prototype uses GPS technology and a system of three sensors
that pinpoint the phone's exact location and orientation. Murphy says that
MARA's see-through annotation projects purely virtual objects on the
screen, such as a mural superimposed on the side of a building, an
improvement over past augmented reality technology. While Nokia does not
plan to develop the technology into a commercial product because of power
consumption problems, privacy issues, and other matters, it will open up
MARA to outside developers who will be able to tweak it for their own
purposes.
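The article does not describe MARA's internals, but the core annotation step can be sketched: given the phone's GPS position and the heading reported by its sensors, decide whether a nearby point of interest falls inside the camera's field of view and where its label belongs on screen. The Python sketch below is illustrative only; the coordinates, field of view, and screen width are assumptions, not Nokia's implementation.

```python
import math

# Minimal sketch (not Nokia's MARA code): decide whether a point of
# interest (POI) should be annotated on screen, given the phone's GPS
# position and compass heading. All names and parameters are illustrative.

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x_for_poi(phone_lat, phone_lon, heading_deg, poi_lat, poi_lon,
                     fov_deg=60, screen_width_px=320):
    """Return the horizontal pixel at which to draw the POI label,
    or None if the POI is outside the camera's field of view."""
    rel = (bearing_deg(phone_lat, phone_lon, poi_lat, poi_lon) - heading_deg + 180) % 360 - 180
    if abs(rel) > fov_deg / 2:
        return None          # POI is behind the user or off to the side
    return int((rel / fov_deg + 0.5) * screen_width_px)

# Example: phone in central Helsinki facing roughly north-east (45 degrees),
# annotating a hypothetical restaurant a few hundred metres away.
print(screen_x_for_poi(60.1699, 24.9384, 45.0, 60.1721, 24.9414))
```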
A Quantum (Computer) Step
University of Utah News (11/19/2006) Siegel, Lee
University of Utah physicist Christoph Boehme has accomplished "a
breakthrough in the search for a nanoscopic [atomic scale] mechanism that
could be used for a data readout device," as he explains, representing a
major step toward building a phosphorus-and-silicon quantum computer.
"We have demonstrated experimentally that the nuclear spin orientations of
phosphorus atoms embedded in silicon can be measured by very subtle
electric current passing through the phosphorus atoms," says Boehme, an
associate professor of physics. "For this concept, data readout is the
biggest issue, and we have shown a way to read data." Boehme's team was
able to "read" the net spin of 10,000 electron, and while a true quantum
computer would read the spin of a single electron, the research represents
a million-fold improvement over previous work, proves the possibility of
reading a single electron's spin, and most importantly according to Boehme,
shows that electrical techniques can read data stored on the more stable
spins of atomic nuclei. He was able to "measure the spins of the nuclei of
individual phosphorus atoms in a piece of silicon when the phosphorus is
close [within about 50 atoms] to the surface," and should soon be able to
"read a single phosphorus nucleus." However, Boehme says that if he were
to compare current progress toward quantum computing with the development of
the classical computer: "We would probably be just before the discovery of the
abacus."
Enrollment of Foreign Students Levels Off, Falls in
CIS
CRA Bulletin (11/17/06) Vegso, Jay
For the third straight year, the number of foreign students pursuing
degrees in computer and information science (CIS) in the United States has
fallen, according to the 2006 Open Doors report from the Institute of
International Education (IIE). In the 2005-2006 academic year, there were
34,418 international students enrolled in CIS programs, which represents a
12 percent decline from 38,966 foreign students in 2004-2005, and a 41
percent decline from 57,739 in 2003-2004. In the latest academic year,
international students in CIS programs accounted for 6.1 percent of all
foreign students pursuing degrees in the United States, which is down from
6.9 percent in 2004-2005, and 10.1 percent in 2003-2004. The decline in
foreign students in CIS programs comes even as the total number of
international students in the country held nearly steady from a year earlier,
at 564,766. The report also reveals that 88,460 foreigners were studying
engineering, down from 92,952 in 2004-2005 and 95,220 in 2003-2004.
International students in engineering programs accounted for 15.7 percent
of all foreign students. And there were 11,200 foreign students in
mathematics programs, which accounted for 2 percent of the total number of
international students.
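The reported figures can be checked with a few lines of arithmetic (values taken from the article; rounding explains the small differences from the quoted percentages):

```python
# Quick check of the Open Doors figures quoted above (values from the article).
cis = {"2003-04": 57739, "2004-05": 38966, "2005-06": 34418}
total_2005_06 = 564766

drop_1yr = (cis["2004-05"] - cis["2005-06"]) / cis["2004-05"]
drop_2yr = (cis["2003-04"] - cis["2005-06"]) / cis["2003-04"]
share = cis["2005-06"] / total_2005_06

print(f"One-year decline in CIS enrollment: {drop_1yr:.0%}")      # ~12%
print(f"Two-year decline in CIS enrollment: {drop_2yr:.0%}")      # ~40-41%
print(f"CIS share of all foreign students:  {share:.1%}")         # ~6.1%
```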
Did Florida Foul Another Ballot?
Wired News (11/17/06) Zetter, Kim
Critics contend that touch-screen voting machines may have lost over
18,000 votes cast last week in Sarasota, Fla., for a congressional seat,
and are calling the recount currently underway a joke because e-voting
systems lack a paper trail and questions about the missing votes have not
been addressed. A legal challenge, likely to be filed next week, could at
last clearly demonstrate the unreliability of e-voting machines, according to
critics. Voters who cast early ballots claimed the machines were not recording
their selections in the congressional race, noting that the screen appeared to
register the vote when it was cast but showed no vote on the review page. A potential
calibration problem with the touch screens was also indicated by reports of
vote-switching difficulties. "We're hoping this situation in Sarasota is
going to show how absolutely insane it is to have these machines recording
our votes...or not recording our votes," declared the Florida Fair
Elections Coalition's Susan Pynchon. Rep. Rush Holt (D-N.J.) and other
lawmakers are using the Florida debacle as an opportunity to support a bill
pending in Congress that would make voter-verified paper trails a
requirement for all e-voting systems in the United States.
Not YouTube, HUGETube: Purdue Researchers Stream Massive
Internet Video
Purdue University News (11/16/06) Tally, Steve
Purdue University researchers have developed a technique that has allowed
them to stream a 10 GB animation over the Internet in two minutes. The
researchers at the university's Envision Center for Data Perceptualization
demonstrated the project, which involved streaming cell structure animation
at 7.5 Gbps over the super-fast National Lambda research network, at the
SC06 conference in Tampa, Fla. They are unsure if there has been a larger
video streamed over the Internet, and they add that the animation was not
compressed. Moreover, the technique still allowed them to stop, replay, and
zoom in on the video in real time; at 4,096 by 3,072 pixels, the image is
roughly the size of 12 17-inch computer monitors arranged in a grid three
monitors high and four wide. With a peak speed of 8.4
Gbps, the researchers could have sent another 12 movie DVDs during the same
time. Laura Arns, associate director and research scientist at the center,
says "the equipment could have been purchased off the shelf for less than
$100,000." Though researchers could use the technique as a cost-effective
way to collaborate on scientific visuals in real time, the film industry
also could use it to produce movies and stream them to theaters.
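A little arithmetic makes the figures plausible. Assuming 24-bit color and a film-style 24 frames per second (neither is stated in the article), an uncompressed 4,096-by-3,072 stream works out to roughly the reported rate:

```python
# Back-of-the-envelope check of the figures above. The 24-bit colour depth
# and 24 frames/second are assumptions; the article gives only the
# resolution and the 7.5 Gbps streaming rate.
width, height = 4096, 3072
bytes_per_pixel = 3          # assumed 24-bit colour
fps = 24                     # assumed film-style frame rate

bits_per_second = width * height * bytes_per_pixel * 8 * fps
print(f"Uncompressed stream: {bits_per_second / 1e9:.1f} Gbps")  # ~7.2 Gbps
```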
Converging Virtualization With Distributed
Computing
HPC Wire (11/16/06) Vol. 13, No. 4
Argonne National Laboratory scientist Kate Keahey, who is currently
working on the Globus Toolkit and other aspects of Grid technology, took
some time to discuss developments in virtualization as it is used to
implement Grid computing before the first IEEE/ACM International Workshop
on Virtualization Technologies in Distributed Computing was held at SC06 on
Friday. Her work focuses mainly on techniques to "dynamically provision
well-defined execution environments--a.k.a., 'virtual workspaces'--and the
various resource and policy management issues that it entails," she says.
Keahey explains virtualization "as a vehicle to realize the dream of Grid
computing...virtualization introduces a layer of abstraction that turns the
question around from 'let's see what resources are available and figure out
if we can adapt our problem to use them' to 'here is an environment I need
to solve my problem--I want to have it deployed on the grid as described.'"
She even envisions portable "pluggable virtual environments" existing in
the future. Convergence would create an unprecedented degree of
"simplicity, making resource sharing easier, greater manageability--in other
words, things that improve your 'quality of life' as a Grid user--and make
on-demand resource provisioning applicable to a broader set of
applications." The Virtualization Technologies and Distributed Computing
workshop is her way of bringing together these two communities, which
rarely interact, into a common forum that will "lead to better understanding of
the challenges [facing the industry], increase the synergy and iterations
on solutions--and most of all, provoke new ideas," said Keahey.
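As a rough illustration of Keahey's "here is the environment I need" idea, a virtual workspace can be thought of as a declarative descriptor handed to a deployment service. The sketch below is hypothetical and is not the Globus Toolkit workspace API; all names and fields are invented.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "virtual workspace" descriptor in the spirit of
# Keahey's description: state the environment you need, then hand it to a
# deployment service. Illustrative only, not the Globus Toolkit API.

@dataclass
class Workspace:
    image: str                       # VM image holding the OS + software stack
    cpus: int
    memory_mb: int
    wall_clock_hours: float
    data_mounts: list = field(default_factory=list)

def deploy(workspace: Workspace, site: str) -> str:
    """Pretend to submit the workspace to a grid site; returns a handle."""
    print(f"Deploying {workspace.image} ({workspace.cpus} CPUs, "
          f"{workspace.memory_mb} MB) to {site}")
    return "workspace-0001"

handle = deploy(
    Workspace(image="bio-analysis-env-1.2", cpus=16, memory_mb=32768,
              wall_clock_hours=4.0, data_mounts=["/data/genomes"]),
    site="example-grid-site",
)
```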
Software Distinguishes Between Online Namesakes
New Scientist (11/16/06) Simonite, Tom
Researchers at the University of Tokyo in Japan have developed a new
search tool that has the potential to be beneficial for language processing
and the Semantic Web. The software is designed to conduct a search of
someone with a common name on the Web, and automatically analyze the
details of the results to determine which individual is the correct one.
The system would be able to distinguish the pop singer Michael Jackson from
the many other people with the same name in the search results, and suggest
the use of certain keywords such as "music" to obtain the results for the
entertainer. Developed by Danushka Bollegala and colleagues Yutaka Matsuo
and Mitsuru Ishizuka, the program analyzes the first 100 results in a
Google search, looks for common words in summaries, clusters results
together for each individual, and finds words and phrases in full-page
results that are relevant to each person. As a result, ambiguous
information does not confuse the program. "We are working to extend the
method to disambiguate other types of named entities, such as products,
organizations, and geographical information," says Bollegala. The Semantic
Web needs computers to understand the meaning of information online, adds
Victoria Uren of Britain's Open University. The accuracy rate of the
algorithm is between 70 percent and 95 percent so far.
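The pipeline described above, gather roughly 100 result snippets, find shared vocabulary, and group snippets that appear to describe the same person, can be illustrated with a toy version. The greedy term-overlap clustering below merely stands in for whatever method the Tokyo group actually uses; the snippets and threshold are invented.

```python
# Toy illustration of the pipeline described above: cluster search-result
# snippets for one ambiguous name by shared vocabulary. The greedy
# similarity threshold stands in for the authors' actual clustering method.
import re

STOPWORDS = {"the", "a", "an", "of", "and", "in", "is", "was", "for"}

def terms(snippet):
    return {w for w in re.findall(r"[a-z]+", snippet.lower())} - STOPWORDS

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(snippets, threshold=0.2):
    clusters = []   # each cluster: (set of terms, list of snippets)
    for s in snippets:
        t = terms(s)
        best = max(clusters, key=lambda c: jaccard(c[0], t), default=None)
        if best and jaccard(best[0], t) >= threshold:
            best[0].update(t)
            best[1].append(s)
        else:
            clusters.append((t, [s]))
    return [c[1] for c in clusters]

results = [
    "Michael Jackson releases a new music album and tour dates",
    "Michael Jackson music video tops the charts",
    "Michael Jackson, beer expert, publishes guide to Belgian ales",
]
for group in cluster(results):
    print(group)   # the singer's snippets group together; the beer writer stands alone
```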
Cracked It!
Guardian Unlimited (UK) (11/17/06) Boggan, Steve
The UK Identity and Passport Service claims that the new passports it is
issuing are sufficiently encrypted to prevent fraudulent activity, but some
experts have found flaws in the new system. The International Civil
Aviation Organization (ICAO) set new standards for passports in 2003 that
mandated an RFID microchip that can be read only with a key consisting of the
passport number, the holder's date of birth, and the passport's expiration
date, all of which are printed in the passport's "machine readable zone."
When an immigration official swipes the passport, the key is fed into the
scanner, which can then read the RFID chip and display the holder's
information on the official's screen. Adam Laurie, technical director of
secure hosting company The Bunker, explains, "The information in the chip is
not encrypted, but to
access it you have to start up an encrypted conversation between the reader
and the RFID chip in the passport." He was able to write software in 48
hours that allowed him to communicate with the chip; Laurie says that
although the Home Office used state-of-the-art encryption technology, it
also used non-secret information (actually written in the passport) as a
"secure key," a potentially fatal, and foolish, flaw. The Home Office
points out that the information that can be extracted from the chip is that
which is already on the passport and in order to access it you need visual
access to the passport, but German DN-Systems Enterprise Solutions founder
Luke Grunwald has been able to create a RFID clone that could be used to
enter a country illegally, a technique others agree is a dangerous
possibility. Pictures on the RFID chip cannot be altered, but simple
visual confirmation of a person's appearance has not been proven as an
effective security measure.
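The "non-secret key" Laurie describes is the Basic Access Control key defined in ICAO Document 9303: the key seed is derived from exactly the three machine-readable-zone fields, each followed by a check digit. The sketch below shows that derivation as I understand the standard; the passport data is hypothetical sample data, and real inspection systems go on to derive 3DES session keys from the seed.

```python
import hashlib

# Sketch of why the printed data works as a key: ICAO 9303 Basic Access
# Control derives its key seed from the document number, date of birth, and
# expiry date (the fields in the machine-readable zone), each followed by a
# check digit. Simplified illustration only.

def check_digit(field: str) -> str:
    """ICAO 9303 check digit: weights 7,3,1 over character values, mod 10."""
    values = {c: i for i, c in enumerate("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")}
    values["<"] = 0
    weights = [7, 3, 1]
    total = sum(values[c] * weights[i % 3] for i, c in enumerate(field))
    return str(total % 10)

def bac_key_seed(doc_number: str, birth_yymmdd: str, expiry_yymmdd: str) -> bytes:
    mrz_info = (doc_number + check_digit(doc_number)
                + birth_yymmdd + check_digit(birth_yymmdd)
                + expiry_yymmdd + check_digit(expiry_yymmdd))
    return hashlib.sha1(mrz_info.encode("ascii")).digest()[:16]

# Hypothetical passport data: exactly the fields printed in the passport.
print(bac_key_seed("L898902C<", "740812", "120415").hex())
```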
What Do Robots Dream Of?
Science (11/17/06) Vol. 314, No. 5802, P. 1093; Adami, Christoph
A testbed for self-awareness models could be supplied by robots capable of
devising and updating internal models of their own physical structure,
which yields more robust navigation of their environment and better
autonomous injury recovery, according to Christoph Adami of the Keck
Graduate Institute of Applied Life Sciences. He cites the work of J.
Bongard et al., which seeks to show that a robot can be made more robust in
surroundings where the machine may suffer damage, using a four-legged robot
that builds an internal model of itself by first executing actions on a flat
surface and recording the responses. "The robot
then computationally tests candidate self-models, by re-imagining the
actions it just performed and comparing the behavior of the model with its
memory of the results--that is, the robot tries to explain the observed
relationship between sensory data and leg actuation by making assumptions
about its own configuration," Adami notes. The robot plays a dynamic role
in ascertaining its optimal self-model. Bongard and colleagues employed an
algorithm built on the information-theory tenet that minimizing entropy
yields maximum predictive power. Adami
reasons that through the use of such algorithms, the robot could "dream up"
strategies for successfully navigating its environment. "We ought to be
able to record the changes in the robot's artificial brain as it
establishes its beliefs and models about the world and itself, and from
those infer not only its cognitive algorithms, but also witness the
emergence of a personality," he writes. "Thus, perhaps the discipline of
experimental robot psychology is not too far off in the future."
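The loop Bongard and colleagues describe, act, keep the candidate self-models that best explain what was sensed, and choose the next action the surviving models disagree about most, can be sketched in the abstract. In the toy version below the "self-model" is a single hidden parameter; it only illustrates the estimation-exploration idea, not the actual four-legged-robot code.

```python
import random

# Abstract sketch of the estimation-exploration loop described above:
# candidate self-models compete to explain observed sensor data, and the
# next test action is the one the surviving models disagree about most.
# The "model" here is just a hidden scalar parameter, purely illustrative.

TRUE_PARAM = 0.7                     # the robot's real (unknown) morphology

def observe(action):                 # what the physical robot would sense
    return TRUE_PARAM * action + random.gauss(0, 0.02)

def predict(model, action):          # what a candidate self-model expects
    return model * action

models = [random.uniform(0, 1) for _ in range(50)]     # candidate self-models
actions = [0.1 * i for i in range(1, 11)]

for _ in range(5):
    # choose the action whose predictions vary most across surviving models
    def disagreement(a):
        preds = [predict(m, a) for m in models]
        return max(preds) - min(preds)
    action = max(actions, key=disagreement)

    outcome = observe(action)
    # keep the models that explain the observation best
    models.sort(key=lambda m: abs(predict(m, action) - outcome))
    models = models[: max(5, len(models) // 2)]

print("Surviving self-model estimates:", [round(m, 2) for m in models[:5]])
```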
Communicating Even When the Network's Down
Network World (11/16/06) Cox, John
Mobile disruption-tolerant networks (DTNs) are being developed by
researchers, who note that the sustained communications such networks
provide come at the cost of slower data transmission and reception. The Defense
Advanced Research Projects Agency is investing $8.7 million into BBN
Technologies' SPINDLE DTN research project; BBN has devised a network
protocol and code that transfers data between nodes as connections become
available, and can store data persistently until a connection is open. To
compensate for the unavailable or broken-down infrastructure that is
typical of a disrupted network, the BBN researchers are blending the new
routing protocol with the late binding method. A new DTN caching model is
also being studied by the researchers, so that cached content can be
tracked and information requests can be responded to even when disruptions
make search-and-access capabilities unavailable. A DTN scheme in which
information requests pass through the network and encounter information
advertisements is visualized by SPINDLE project manager Stephen Polit and
Internetwork Research Group chief scientist Rajesh Krishnan. Meanwhile,
the University of Massachusetts Amherst's Privacy, Internetworking,
Security, and Mobile Systems Lab has created a working DTN, DieselNet, that
integrates commercially available single-board computers, GPS receivers,
and radios on 40 UMass Transit System buses. Since DieselNet was launched,
the median data transmitted between buses fell from 1 MB in 10 seconds to
about 0.5 MB in eight seconds, and researchers are using stationary,
standalone wireless nodes or "throw boxes" to boost throughput.
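The store-and-forward behavior at the heart of a DTN can be sketched in a few lines: a node holds bundles until a contact with another node comes up, then hands them over. The class and names below are illustrative and are not BBN's SPINDLE code; a real node would persist its store to disk and run a routing protocol to decide what to forward.

```python
from collections import deque

# Minimal store-and-forward sketch of the DTN behaviour described above:
# bundles are held until a contact (an available link to another node)
# appears, then handed over. Illustrative only; not the SPINDLE protocol.

class DTNNode:
    def __init__(self, name):
        self.name = name
        self.store = deque()          # bundle store (in memory here; disk in practice)

    def send(self, bundle):
        self.store.append(bundle)     # no connectivity assumed yet: hold the bundle

    def contact(self, other):
        """A link to `other` has come up: forward everything we hold."""
        while self.store:
            other.receive(self.store.popleft())

    def receive(self, bundle):
        print(f"{self.name} received: {bundle}")

bus_a, bus_b = DTNNode("bus-a"), DTNNode("bus-b")
bus_a.send("sensor reading #1")       # stored while the bus is out of range
bus_a.send("sensor reading #2")
bus_a.contact(bus_b)                  # buses come into radio range; bundles move
```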
'To Microsoft, We're a Source of Smart People'
Guardian Unlimited (UK) (11/16/06) Schofield, Jack
Microsoft's Cambridge Research Lab is a unique and vital part of the
company's $6.6 billion-a-year R&D effort, says the lab's head, Andrew
Herbert. When Herbert began his new job at the lab, its areas of focus
were programming languages and tools, machine learning, and systems and
networks, but since his arrival, they have added applications. Among many
current projects in the lab is SenseCam, which can be worn all day to
record experiences, and is being tested with memory-loss patients.
Microsoft Research's technology transfer team is integral in making sure
researchers and product developers are aware of what each other are working
on and the challenges they face, allowing them to help each other when
needed, says Herbert. When Microsoft decides to get into an already
established market area, it is the Research lab that tackles the problem.
For example, when Microsoft decided to build a search engine, it was the
lab that said, "Here's how to build one," according to Herbert. "We
clearly a source of new technology and we're a source of smart people. We
also see one of our roles as to give the company agility, by researching
things that don't relate to current products. As a research lab, we're
allowed and encouraged to go off on those tangents, because you never
know."
Of VPNs and Peer-to-Peer SIP: IETF Chair Speaks
Out
Network World (11/02/06) Vol. 23, No. 43, P. 1; Marsan, Carolyn Duffy
Internet Engineering Task Force Chairman Brian Carpenter spoke with
Carolyn Duffy Marsan about recent projects and happenings before the
group's 67th meeting in San Diego. Carpenter identified peer-to-peer
session initiation protocol as "the most interesting thing" to be on the
table in San Diego; "SIP was originally designed as a session protocol, and
it assumes there is some sort of SIP service provider," Carpenter says.
"Skype came along, and people started asking, why can't we do SIP in
peer-to-peer mode? That's generating a lot of interest." On other topics,
a working group known as Network Endpoint Assessment was recently chartered
to determine, when a system shows up on the network, whether it has the
necessary security configuration, with the aim of defining a protocol for
exchanging information about the system's security status. In the past
year, the IPv6 working group announced that
it had finished its job, no longer needing to meet in person, and the
Lightweight Directory Access Protocol working groups have also finished
their task of creating Version 3 and extension documents. Currently,
tunneling and VPNs are receiving a lot of attention, but according to
Carpenter, "the work is spread over a bunch of working groups." In the
future, he said, IETF has "a lot of scaling ahead in routing and
addressing," and while the task will be a challenging one, requiring
extensive communication within the field, Carpenter is confident that it
will be handled.
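As a purely hypothetical illustration of what the Network Endpoint Assessment charter is after, an endpoint might report a set of security "posture" attributes when it joins the network, and a policy engine might decide whether to admit it. The attribute names and policy below are invented; the working group had yet to define the actual protocol.

```python
# Entirely hypothetical illustration of the idea behind the Network Endpoint
# Assessment charter: an endpoint reports security attributes when it joins
# the network, and a policy decides whether to admit it. This is not the NEA
# protocol itself, which the working group had yet to define.

REQUIRED = {"antivirus_running": True, "os_patch_level": 10}

def assess(posture: dict) -> str:
    if (posture.get("antivirus_running")
            and posture.get("os_patch_level", 0) >= REQUIRED["os_patch_level"]):
        return "admit"
    return "quarantine"            # e.g., a remediation VLAN until compliant

print(assess({"antivirus_running": True, "os_patch_level": 12}))   # admit
print(assess({"antivirus_running": False, "os_patch_level": 12}))  # quarantine
```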
Malware Goes Mobile
Scientific American (11/06) Vol. 295, No. 5, P. 70; Hypponen, Mikko
It was inevitable that increasingly sophisticated mobile phones or smart
phones would become susceptible to malware, writes F-Secure chief research
officer Mikko Hypponen. More than 300 kinds of malicious programs that
target smart phones, including worms, spyware, and Trojan horses, are at
large today. Hypponen says there must be a unified effort by the security
community, cellular network operators, smart phone designers, and phone
users to check the spread of mobile malware before it reaches epidemic
proportions. The decreasing cost and increasing sophistication of smart
phones are boosting their popularity to the point where such devices could
conceivably comprise most of the world's computers in the near future, and
this will offer an irresistible target to malware creators seeking to
exploit smart phone users' unfamiliarity with computers and their
vulnerabilities. "Carriers would be wise to begin educating cellular
customers now about how to identify and avoid mobile viruses, rather than
waiting until these infections become epidemic," Hypponen suggests. "Phone
makers should install antivirus software by default, just as PC
manufacturers now do. And regulators and phone companies can also help
avoid the monoculture problem that plagues PCs by encouraging a diverse
ecosystem for smart phones in which no single variety of software dominates
the market." Hypponen also supports the inclusion of firewalls into
phones, and argues that governments should play a more prominent role in
addressing the threat of mobile malware.
A Conversation With Douglas W. Jones and Peter G.
Neumann
Queue (11/06) Vol. 4, No. 9, Jones, Douglas W.; Neumann, Peter G.
Examining the security of electronic voting machines yields insights on
the challenges of developing and running trustworthy systems for other
applications, and advocates of election process integrity Douglas W. Jones
and Peter G. Neumann discuss the matter. Jones notes that "any attempt to
scientifically investigate elections has unavoidable political
implications" regardless of the technologies in use. He says the need for
a transparent election system lies at the root of much of the technological
difficulties inherent in assuring election integrity. Jones contends that
"the entire system must be sufficiently open and comprehensible that
nontechnical observers can believe the results." Redundancy itself offers
no assurance without carefully planned placement and transmission of
copies, and clear techniques for spotting and addressing discrepancies
between copies; Jones also calls for the support of auditability in voting
system design, secure authentication methods to prevent fraud as well as
accidental error, trusted ways to transport all system elements, and a way
to assess how well the systems fulfill design requirements. Jones observes
that while the Help America Vote Act has spurred migration to statewide
voter registration databases, the trade-off is statewide ramifications for
mismanagement. When asked by Neumann to elaborate on embedding
transparency into the electoral process, Jones cites the need to make
voting-system failures a matter of routine investigation and to publicize
the results of such investigations, as well as ensure that the
documentation needed to interpret any public records is also public. As
far as using the Internet is concerned, Jones thinks it is a viable option
for functions that currently employ wireless systems or other public
networks, but he urges more use of satellite voting places for early voting
rather than unrestricted postal voting.
Scaling System-Level Science: Scientific Exploration and
IT Implications
Computer (11/06) Vol. 39, No. 11, P. 31; Foster, Ian; Kesselman, Carl
System-level science involves the integration of heterogeneous sources of
knowledge concerning the constituent elements of a sophisticated system in
order to comprehend the system's properties as a whole, and the trend has
significant implications not only for many scientific disciplines, but also
for information technology; this is because system-level science usually
blends software systems, data, computing resources, and people in addition
to multiple disciplines, write Ian Foster of Argonne National Laboratory
and Carl Kesselman of the USC Information Sciences Institute. Addressing
the challenge of building the people, infrastructure, software, and
policies needed to accommodate a complex system-level problem entails the
determination of what can be done to scale system-level science in terms of
the extension of the problems addressed, the number of engaged resources,
the population of participants, and the span of its application.
System-level science's end-to-end nature calls for the provision of
independent and persistent science capability services that can be
incorporated into a wide array of larger-scale investigations, and the
authors note that extremely large amounts of computation and data are
required for certain questions and components. The performance of
scientific investigation in a system-level science context carries certain
consequences that impact the infrastructure undergirding the process: the
need for multidisciplinary, collaborative, and distributed team-oriented
exploration, which makes sharing essential; dynamic variance of the
resources the team employs over the investigation's lifetime; and the
dynamic range of the problem. The need to support these requirements is
fueling a migration toward service-oriented architectures (SOAs), according
to Foster and Kesselman. Successfully applying SOA methods involves
erecting a partition between the tasks of deploying a scientific capability
from the details of hosting that capability. "If researchers can adopt SOA
technologies successfully, then they can, in principle, decompose our
monolithic application over the network, with different groups developing
and operating different components," the authors write.
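A toy example may make the decomposition concrete: each scientific capability sits behind its own small service interface, so different groups can develop and host the pieces independently and an investigation simply composes them. The service names and interfaces below are invented for illustration.

```python
# Toy illustration of the decomposition Foster and Kesselman describe:
# instead of one monolithic analysis code, each capability sits behind its
# own service interface and can be hosted and operated by a different group.
# Service names and interfaces are invented for illustration.

class DataService:
    def fetch(self, dataset_id):
        return [1.0, 2.0, 3.0]                  # stand-in for a real data archive

class SimulationService:
    def run(self, data, resolution):
        return [x * resolution for x in data]   # stand-in for a real model run

class AnalysisService:
    def summarize(self, results):
        return sum(results) / len(results)

# A system-level investigation composes the services rather than owning them.
def investigation(dataset_id):
    data = DataService().fetch(dataset_id)
    results = SimulationService().run(data, resolution=0.5)
    return AnalysisService().summarize(results)

print(investigation("climate-2006-q3"))
```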