Hold Off on WiMax Investments, Gartner Cautions
Network World (07/18/08) Reed, Brad
A new report from Gartner Research says that businesses should not invest in
WiMax technology until it establishes a greater deployment base in the United
States and until vendors produce more dual-mode cellular/WiMax handsets.
Gartner predicts that WiMax networks in the
United States will start operating commercially over the next two years,
but that WiMax itself will remain a "niche technology" that will be best
used serving emerging or rural markets that do not already have access to
broadband services. One of the major hurdles WiMax faces is that it will not
be able to provide national coverage for some time, as only Sprint and
Clearwire partners will be launching commercial WiMax services for the first
time this September. Gartner analyst Phillip Redman says businesses will have
to wait until coverage extends beyond the cities that will be served by the
end of the year. Additionally, Redman says that
enterprises that want both WiMax data and cellular voice capabilities will
have to wait at least a few years until more dual-mode handsets are
available. Redman says that because WiMax is starting as a data-only
service, businesses will have to rely on VoIP for mobile voice needs, and
they should look elsewhere until WiMax devices include cellular coverage.
"In competitive markets, WiMax is going to have a very tough row since it's
starting from scratch," says Redman. "But WiMax still has great
opportunities in different markets. I think it makes sense in developing
markets and developing economies that don't have broadband competition from
wireline carriers."
Software Maps Rwandan Health
BBC News (07/16/08)
A Geographic Information Systems (GIS) project, led by Max Baber of the
University of Redlands in California, is using electronic mapping to layer
many different types of data onto a single image to track and predict disease
outbreaks in Rwanda. GIS can be used to help
developing countries best utilize their limited resources, such as drinking
water. Baber says roads, power lines, and buildings can be digitized in
GIS, along with attribute information on the buildings, such as whether they
are residential or commercial. Combining such information on a map reveals
correlations that might otherwise have been missed and allows them to be
exploited. Information collected in Rwanda includes the locations of
health services, water, and electricity supplies, and how many cases of
illnesses such as malaria have occurred in different parts of the country.
The interactive layers of the map can be used to plan where specific health
services should be deployed. "Once you start to gather the data and tie it
down to its location, then you can start to see relationships between
things like access to unclean water and the impact unclean water is having
on health in those locations," says Baber. So far, the system has allowed
Rwandan health workers to track the number of malaria cases at each health
facility, where malaria is increasing or decreasing, and where people are
most at risk. GIS can also be used to determine the energy needs of a
town, the effect of heavy industry on the environment, or the impact of
deforestation on carbon dioxide emissions. The biggest challenge of using
GIS is collecting enough information to make the databases reliable.
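The layering idea itself is simple enough to show in a few lines. The sketch
below is purely illustrative and assumes invented district names, field names,
and figures rather than anything from the Rwandan system; it joins a clean-water
layer and a malaria layer on a shared geographic key and computes a simple
correlation, the kind of relationship Baber describes (Python 3.10+ for
statistics.correlation):

    # Illustrative sketch only: joining location-tagged layers the way a GIS
    # overlay does, then checking for a relationship between them.
    # District names, field names, and figures are invented for the example.

    from statistics import correlation  # Python 3.10+

    # Layer 1: share of households without access to clean water, by district
    unclean_water = {"Gasabo": 0.42, "Nyagatare": 0.61, "Rubavu": 0.25, "Huye": 0.48}

    # Layer 2: malaria cases reported per 1,000 residents, by district
    malaria_rate = {"Gasabo": 31, "Nyagatare": 55, "Rubavu": 18, "Huye": 39}

    # "Overlay" the layers by joining on the shared geographic key
    districts = sorted(unclean_water.keys() & malaria_rate.keys())
    water = [unclean_water[d] for d in districts]
    malaria = [malaria_rate[d] for d in districts]

    # A simple statistic hints at the kind of correlation the article describes
    print(f"water/malaria correlation: {correlation(water, malaria):.2f}")

    # Rank districts by malaria burden to suggest where to place health services
    for d in sorted(districts, key=malaria_rate.get, reverse=True):
        print(f"{d}: {malaria_rate[d]} cases per 1,000, "
              f"{unclean_water[d]:.0%} without clean water")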
DNS Flaw Discoverer Says More Permanent Fixes Will Be
Needed
Computerworld (07/17/08) Vijayan, Jaikumar
Dan Kaminsky, a security researcher at IOActive who recently discovered a
previously unknown cache-poisoning vulnerability in the Internet's Domain
Name System (DNS) protocol, warned IT managers at a press conference on
July 17 that while patches have been released to address the flaw, more may
need to be done to address the issue over the next several months.
Kaminsky noted that the patches that were issued in the wake of the
discovery of the flaw earlier this month are at best a temporary measure
aimed at protecting the DNS infrastructure from hackers trying to exploit
the flaw, which exists in a transaction identification process that the DNS
protocol uses to determine whether responses to DNS queries are legitimate.
Kaminsky said that while DNS messages include what are supposed to be
random identification numbers, only about 65,000 different values are
currently being used as identifiers. Compounding the problem is the fact
that the process of assigning identifiers to packets is not especially
random and can be guessed, Kaminsky said. If hackers are able to identify
the identification numbers on DNS messages, they could introduce forged
data into the DNS system and redirect Web traffic and email to systems they
control. Although the patches that aim to correct this vulnerability
appear to be working, there are people who have gotten very close to
exploiting it, Kaminsky said. As a result, IT managers should expect to
see more security patches that aim to correct the flaw over the next
several months.
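A rough sketch of the arithmetic behind the weakness may help: a 16-bit
transaction ID allows only 65,536 values, so a blind attacker needs on the
order of tens of thousands of guesses on average, and far fewer if the IDs are
assigned predictably. The snippet below illustrates that arithmetic and the
source-port randomization the interim patches add; it is an illustration, not
a description of any actual exploit:

    # Back-of-the-envelope sketch of why a 16-bit DNS transaction ID is weak,
    # and why the interim patches add source-port randomization.
    # Figures are illustrative only.

    import secrets

    TXID_SPACE = 2 ** 16            # 65,536 possible transaction IDs
    PORT_SPACE = 2 ** 16 - 1024     # roughly the usable ephemeral port range

    def expected_guesses(space: int) -> float:
        """Average number of blind guesses before matching a uniformly random value."""
        return space / 2

    print(f"ID only:          ~{expected_guesses(TXID_SPACE):,.0f} guesses on average")
    print(f"ID + random port: ~{expected_guesses(TXID_SPACE * PORT_SPACE):,.0f} guesses on average")

    # Patched resolvers draw both values from a cryptographically strong source,
    # rather than using an incrementing (and therefore guessable) counter.
    txid = secrets.randbelow(TXID_SPACE)
    src_port = 1024 + secrets.randbelow(PORT_SPACE)
    print(f"example randomized query: id={txid}, source port={src_port}")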
Robots Deemed Less Intelligent Than Humans, Still
iTnews Australia (07/15/08) Tay, Liz
New research at Germany's RWTH Aachen University found that humans enjoy
interacting more with each other than with machines, and consider themselves
more intelligent than robots. As part of the study, the brain activity of
20 participants was monitored as they played a simple game against a
regular computer notebook, a functionally designed Lego-robot, the
anthropomorphic robot BARTHOC Jr., and another human. The participants
preferred to interact with the more human-like opponents, as they considered
them to have rational decision-making abilities and strategy, and they
thought the more human-like opponents were intelligent. Participants judged
an opponent's human-likeness by physical traits such as movement and facial
attributes. Robots that work in close contact with humans, serving as a
nurse or caretaker for example, would need to have human-like features, the
study suggests.
"Recent research proves that the more social a situation is the more
human-like features are demanded by the subjects," says RWTH psychologist
Soren Krach. "If the robot works within a factory where there is no direct
contact to human beings, the shape does not matter."
Immersed in Imagery, Analysts Get a Deeper View of
Intelligence Data
Federal Times (07/14/08) Vol. 44, No. 21, P. 15; Singer, Jeremy
Trend analysis is one of the applications of immersive visualization, a
system in which analysts examine imagery from various sources projected
onto four walls, wearing special eyewear that facilitates a
three-dimensional view. The Defense Intelligence Agency (DIA) uses
immersive visualization to study trends. Troy Gilbert, leader of DIA's
immersive visualization team, says immersive displays are currently being
used for 17 projects. He says other users of the technology include
automakers, ship builders, and companies seeking natural resources. Among
the sources of imagery that immersive displays incorporate are aerial
sensors, satellites, and products developed from intelligence collected by
human sources. The displays can be used to visualize past events as well
as simulate expected events before they take place, says Tom Cooke of the
National Geospatial-Intelligence Agency's support team at DIA. Advanced
geospatial intelligence (AGI) techniques are specially developed computer
processes and algorithms that can help intelligence analysts perceive
features that the human eye might miss, says Brian McIntosh of the National
Geospatial-Intelligence Agency's office of science and methodology.
Applications of AGI methods include searching for chemicals that may have
been leaked into a specific area, while the use of AGI tools with
commercial imagery could help civil agencies contend with wildfires and
other emergency situations by coordinating response strategies, McIntosh
says.
Bridging the CE-PC Gap: (Compute) Power to the
People
EE Times (07/14/08) Williston, Kenton
A new category of powerful Internet applications will be driven by
consumer computing devices that blend low cost and user-friendliness with
PCs' limitless flexibility, and most of these gadgets will be pocket-sized
and efficient enough to run on a battery for an entire day. In addition, the
devices will be ubiquitous through their
affordability, which will allow them to be used by children and in
developing countries. Challenges to making this vision a reality include
bolstering the hardware; de-fragmenting operating systems and making them
more user-friendly; vastly improving the Internet experience; and
instituting good business models. True Internet usefulness depends on
consumer computing devices relying on specialized applications, or widgets,
that meld device data with online data and services, and OS
fragmentation is one key reason for the dearth of widgets in contemporary
devices. Open OSes such as Symbian, Android, and Windows Mobile could be
more widely embraced than proprietary OSes thanks to their availability for
license to any manufacturer, while the biggest hurdle facing consumer
computing devices is the lack of a business model that encourages
wide-open, unrestricted Internet connectivity. Carriers are unsurprisingly
reluctant to accept such a model because they are fiercely protective of
their fee-based proprietary systems. The consumer computing revolution is
expected to proceed gradually and subtly as the Internet continues
penetrating more consumer devices.
IBM Slims Down the Web for Your Phone
IDG News Service (07/14/08) McMillan, Robert
Researchers at IBM's Almaden Research Center have developed Highlight,
software that enables mobile users to create slimmer Web pages that are
easier to view on small devices such as mobile phones. Highlight is an
extension for the Firefox browser that enables users to record the steps
required to perform simple tasks on the Web, such as looking up flight
arrival information on a Web site. Users can then "clip" sections from a
Web site and save them to another Web server, which sends the slimmer page
to a mobile device. Developer Jeffrey Nichols says Highlight works well
for task-driven jobs such as shopping or getting local restaurant
recommendations. However, not all Web sites are willing to have their
content copied onto other servers. Highlight, which has not been publicly
released, uses code from another IBM project called CoScripter, which
offers a way to record repetitive Web tasks and share them with others on a
Web page. CoScripter developer Allen Cypher says CoScripter still needs
more work before it will be ready for widespread use, but it could be ready
by the end of the year. Cypher says CoScripter uses "sloppy programming"
to turn a series of clicks into a script that can be shared and edited by
other CoScripter users, making repetitive Internet tasks much easier.
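The clipping idea can be sketched in a few lines. The example below is a
hypothetical illustration, not IBM's Highlight code: it uses the third-party
requests and BeautifulSoup libraries to fetch a page, keep only one fragment
selected by a CSS selector, and wrap it in a bare-bones page suited to a small
screen. The URL and selector are invented:

    # Minimal sketch of the "clip and re-serve" idea behind Highlight, not IBM's
    # actual implementation. The URL and CSS selector below are hypothetical.

    import requests
    from bs4 import BeautifulSoup

    def clip(url: str, selector: str) -> str:
        """Fetch a full page and keep only the fragment a phone user cares about."""
        html = requests.get(url, timeout=10).text
        fragment = BeautifulSoup(html, "html.parser").select_one(selector)
        if fragment is None:
            raise ValueError(f"selector {selector!r} matched nothing")
        # Wrap the clipped fragment in a bare-bones page for a small screen
        return f"<html><body>{fragment}</body></html>"

    if __name__ == "__main__":
        # e.g., the arrival-status box on an airline's flight-status page
        slim_page = clip("https://example.com/flight-status?num=123", "#arrival-status")
        print(slim_page[:200])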
DOD Aims for Supercomputer Upgrades Every Two to Three
Years
Computerworld (07/14/08) Thibodeau, Patrick
The U.S. Department of Defense's (DOD) latest supercomputer will be one of
the 20 fastest supercomputers in the world. The $12.65 million,
water-cooled IBM system has 4,700 Power processors, and will operate at
about 80 teraflops, or 80 trillion calculations per second. The new
supercomputer will have about four times the computing ability of the IBM
system it will replace. The new supercomputer "will provide the
computational capability needed by higher resolution ocean and atmospheric
models for improved accuracy of forecasts," says Dave Cole, assistant
director of the Naval Oceanographic Office Major Shared Resource Center at
the Stennis Space Center in Mississippi. Such an application "is essential
to more effectively support Navy flight and sea safety, search-and-rescue
operations, optimal aircraft and ship routing, and mission planning." DOD
plans to have its new IBM supercomputer up and running in October.
Google Is Watching, Perhaps Soon in Your Home
InformationWeek (07/11/08) Claburn, Thomas
A recent paper, co-authored by Google researcher Bill N. Schilit and
computer scientists Jeonghwa Yang from the Georgia Institute of Technology
and David W. McDonald from the University of Washington, proposes "home
activity recognition," a system that would track people's activities at
home through home network interactions. "Activity recognition is a key
feature of many ubiquitous computing applications ranging from office
worker tracking to home health care," the paper says. "In general,
activity recognition systems unobtrusively observe the behavior of people
and characteristics of their environments, and, when necessary, take
actions in response--ideally with little explicit user direction." Home
monitoring could be used to remind people to perform forgotten tasks, help
them remember information, or encourage them to act more safely. However,
the concept raises several privacy questions, including how the data will
be protected, who will have access to the data, and what will prevent the
data from being subpoenaed or stolen. The paper provides a sample of the
type of data that could be collected, similar to a Web history log that
records the use of devices attached to a home network. "Going forward we
are eager to find alternative sources for interaction event capture," the
paper says. "Rather than just waiting for the desktop operating systems to
accommodate user activity tracking, we see the Web platform as a potential
shortcut to a friendlier environment for activity capture."
Software Helps Developers Get Started With PIV
Cards
National Institute of Standards and Technology (07/09/08) Brown, Evelyn
Two software programs have been developed by the National Institute of
Standards and Technology (NIST) that demonstrate how Personal Identity
Verification (PIV) cards can be used with Windows and Linux systems to
perform logon, digital signing, verification, and other services. The
software is intended to assist software developers, system integrators, and
computer security professionals in the development of products and
solutions in response to Homeland Security Presidential Directive 12 and
the FIPS 201-1 standard. NIST collaborated with industry to develop
the standards for the PIV cards that will be used for the directive. Each
card contains a unique number, two of the employee's biometric fingerprint
templates, and cryptographic keys stored on an embedded chip. NIST's Donna
Dodson says the agency wanted to provide IT professionals with a model of
how PIV cards can be used to support authentication to federal information
systems. Each federal agency will implement the use of PIV cards on its
own schedule. NIST developed the demonstration software to show that PIV
cards can work with common computer activities. For example, a user name and
password can be replaced by the user inserting his or her PIV card in a
reader and entering a personal identification number, which could eliminate
the need for passwords in other applications and give authorized users access
to secure databases.
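Card-plus-PIN logon rests on a standard challenge/response step: the host
sends a random challenge, the card signs it with a private key that the PIN
unlocks, and the host verifies the signature against the public key from the
card's certificate. The sketch below simulates that step with the third-party
cryptography library, using a locally generated RSA key to stand in for the
card; it illustrates the general technique, not NIST's demonstration software:

    # Hedged sketch of the challenge/response step behind PIV card logon.
    # A real PIV card signs the challenge on-chip after the PIN unlocks its
    # private key; here a locally generated RSA key stands in for the card.

    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Simulated card key pair (in reality the private key never leaves the chip)
    card_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    card_cert_public_key = card_key.public_key()

    # 1. Host issues a random challenge
    challenge = os.urandom(32)

    # 2. "Card" signs the challenge once the correct PIN has been entered
    signature = card_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

    # 3. Host verifies the signature with the public key from the card's certificate
    card_cert_public_key.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())
    print("challenge verified: logon can proceed without a password")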
Carnegie Mellon Launches New Research Center to Grow
Mobile Device Technologies and Services
Carnegie Mellon News (07/11/08) Swaney, Chriss
The CyLab at Carnegie Mellon University recently launched the Mobility
Research Center, which is dedicated to studying business, organizational,
and technical issues surrounding mobility in managing systems in cell
phones, home appliances, and building infrastructures. The new center will
develop underlying technologies that will ensure privacy, security, and the
reliability of sensitive and valuable information. CMU's Information
Networking Institute has launched a new master's degree program in mobility
to complement the new research center and to educate and train students.
The ubiquity of handheld devices has made demand for new technologies to
manage data and streamline connections extremely high, and the Mobility
Research Center will focus on improving hardware and software technology
for mobile devices, including studies on how people work, play, shop, and
collaborate on mobile devices, and how new applications and services can
change their lives, according to CyLab founding director Pradeep K. Khosla.
Several mobile device manufacturers, including Motorola and Nokia, will
work with the center. The Mobility Research Center will also collaborate
with CMU's Human-Computer Interaction Institute and the School of Computer
Science. "This anywhere-anytime computing capability has prompted a need
for increased emphasis on how all this novel mobile technology will benefit
consumers," says Mobility Research Center co-director Martin Griss. "We are
moving from the plain old mobile phone to the truly mobile companion."
Open-Source Quality Tester Out in Alpha
IDG News Service (07/14/08) Kanaracus, Chris
Alitheia Core, an alpha version of a tool for measuring the quality of
open source software, is now available from the Software Quality
Observatory for Open Source Software (SQO-OSS) project. "Whilst core
functionality is provided, performance issues remain and customization is
currently disabled," says a SQO-OSS press release. SQO-OSS has made a Web
interface available, but also has plans to plug into the Eclipse IDE. "By
analyzing public data sources relating to open source projects, the system
utilizes metric-based assessment techniques to assess quality
characteristics," says the project's Web site. A group of academic
institutions, companies, and open source projects in Europe is behind
Alitheia Core. The European Commission has provided support for the
SQO-OSS project, which has made Alitheia Core available under the
two-clause BSD open source license.
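As a toy illustration of metric-based assessment, and not of Alitheia Core's
actual metrics or API, the snippet below computes one crude quality indicator,
comment density, across a tree of Python source files:

    # Toy example of the kind of metric-based assessment the article describes,
    # not Alitheia Core's actual metrics or API: compute comment density for a
    # tree of Python source files.

    from pathlib import Path

    def comment_density(root: str) -> float:
        """Fraction of non-blank lines that are comments across *.py files under root."""
        comment = total = 0
        for path in Path(root).rglob("*.py"):
            for line in path.read_text(errors="ignore").splitlines():
                stripped = line.strip()
                if not stripped:
                    continue
                total += 1
                if stripped.startswith("#"):
                    comment += 1
        return comment / total if total else 0.0

    print(f"comment density: {comment_density('.'):.1%}")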
Multithreaded Supercomputer Seeks Software for
Data-Intensive Computing
Pacific Northwest National Laboratory (07/14/08)
A multi-institutional group of researchers has been awarded $4 million to
develop software for supercomputers and to create the Center for Adaptive
Supercomputing Software, a joint project between the Department of Energy's
Pacific Northwest National Laboratory (PNNL) and Cray, Inc. PNNL director
of Computational Sciences and Mathematics Moe Khaleel says the new software
will allow for much faster analysis of complex problems, such as
understanding and predicting how the power grid behaves, one of the most
complex engineering systems ever built. Researchers from Sandia National
Laboratories, the Georgia Institute of Technology, Washington State
University, and the University of Delaware will also be working on the new
software. New supercomputers are being built with multithreaded processors
that can run multiple threads of execution simultaneously. In traditional
supercomputers, each processing chip gets a piece of memory to use for its
computations. A multithreaded system pools all the memory together, enabling
every processor to access the larger memory pool, and each processor runs
multiple threads. One thread allows the processor to perform a
calculation while another thread accesses the next piece of memory,
creating a faster and more power-efficient process. "Traditional
supercomputers are not well suited for certain kinds of data analysis, so
we want to explore this advanced architecture," says PNNL computational
scientist Daniel Chavarria.
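The latency-hiding idea can be mimicked, at a much smaller scale, with
ordinary threads: one thread stages the next block of data while another
computes on the block already fetched. The sketch below is a conceptual
illustration of that overlap, not of the Cray hardware; the block contents
and sizes are invented:

    # Conceptual sketch of the overlap the article describes: while one thread
    # computes on the current block of data, another thread fetches the next
    # block. This mimics hardware multithreading with ordinary Python threads.

    import queue
    import threading

    def fetch_blocks(block_ids, out: queue.Queue):
        """Stand-in for the memory-access thread: stage blocks for the compute thread."""
        for block_id in block_ids:
            data = list(range(block_id * 1000, (block_id + 1) * 1000))  # pretend memory fetch
            out.put(data)
        out.put(None)  # sentinel: no more blocks

    def compute(in_q: queue.Queue):
        """Stand-in for the compute thread: processes whichever block is ready."""
        total = 0
        while (block := in_q.get()) is not None:
            total += sum(block)
        print(f"checksum of all blocks: {total}")

    staged = queue.Queue(maxsize=2)          # small buffer: fetching runs ahead of computing
    fetcher = threading.Thread(target=fetch_blocks, args=(range(8), staged))
    worker = threading.Thread(target=compute, args=(staged,))
    fetcher.start(); worker.start()
    fetcher.join(); worker.join()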
Infusing Petascale Thinking
NCSA (National Center for Supercomputing Applications) (07/08/08) Jewett,
Barbara
The development of a petascale computer promises to revolutionize
education, a goal of the Great Lakes Consortium for Petascale
Computation (GLCPC), of which the nonprofit Shodor research and education
organization is a member. "The consortium education plan focuses on the
substantial transformation of the undergraduate experience to include not
only computational thinking but the computational thinking that leads to
the ability to work with petascale technologies," says Shodor executive
director Bob Panoff. He notes that the improvement of pre-college and
graduate education requires investment in undergraduate education, and
GLCPC's Blue Waters petascale computing project will concentrate on
alliances with the National Science Digital Library's Computational Science
Education Reference Desk, TeraGrid, and the National Computational Science
Institute. Panoff says the development of effective computational modules
designed to serve as a platform for multiscale modeling and petascale
computing will be facilitated through these partnerships. Another Blue
Waters component is the Virtual School for Computational Science and
Engineering co-founded by University of Michigan professor Sharon Glotzer
and University of Illinois National Center for Supercomputing Applications
director Thom Dunning as a site where students can learn about petascale
computing for science and engineering. "Many aspects of the nuts and bolts
of computational science ... fall between the cracks, and as a result, it
is not easy for today's students to learn all they need to know to become
tomorrow's innovators in high-performance scientific computing," observes
Glotzer. She says the mission of the virtual school is to fill in the gaps
in students' knowledge, particularly in petascale computing. "For many of
our most important scientific applications, petascale computing will force
us to rethink how we structure our codes to take full advantage of the
architecture of these new machines," Glotzer says.
The End of Theory: The Data Deluge Makes the Scientific
Method Obsolete
Wired (07/08) Vol. 16, No. 7, P. 108; Anderson, Chris
Thirty years ago, statistician George Box said "all models are wrong, but
some are useful." At that time imperfect models were the only option to
explain complex theories involving topics such as cosmological equations
and human behavior. However, researchers operating in today's era of
massively abundant data do not have to settle for imperfect models, and can
go without models completely. Speaking at the O'Reilly Emerging Technology
Conference, Google research director Peter Norvig updated George Box's
maxim to say, "All models are wrong, and increasingly you can succeed
without them." The massive amounts of data that are readily accessible in
today's high-tech, petaflop industry enable researchers to replace
traditional tools with actual data and applied mathematics. The new
information age is making the traditional approach to
science--hypothesizing, modeling, and testing--obsolete. Petabytes of
readily available information allow researchers to analyze data without
hypotheses about what the data might show, and to instead simply submit
massive amounts of information to the world's biggest computing clusters
and let statistical algorithms find patterns. The best example is the
shotgun gene sequencing done by J. Craig Venter. Using high-speed
sequencers and supercomputers to statistically analyze data, Venter went
from sequencing individual organisms to sequencing entire ecosystems. By
sequencing the air, Venter discovered thousands of previously unknown
species of bacteria and other life forms, without hypothesizing that they
were there. Experts say that such techniques are about to become
mainstream.