Taking Snooping Further
New York Times (02/25/06) P. B1; Markoff, John; Shane, Scott
Officials from the National Security Agency met with a group of venture
capitalists to outline their wish list for new data-mining systems that
would support and advance the Bush administration's surveillance efforts by
better uncovering connections between seemingly unrelated communications.
Privacy advocates have vigorously protested the surveillance program,
claiming that privacy is violated whether it is a human or a machine that
is doing the snooping. Data mining is not a new practice; insurance and
credit card companies have used it for years to conduct risk assessments and
detect fraud, though intelligence agency systems go a step further by
applying advanced software analysis tools. These tools, which can cost
millions of dollars for an agency-wide deployment in an organization such as
the FBI, enable investigators to compile and cross-reference financial data
and phone records to look for patterns of suspicious activity. Critics claim
that the government has misdirected its
surveillance activities, spending vast sums on intercepting the phone calls
of American citizens while neglecting to monitor obvious and available
resources such as chat rooms frequented by al Qaeda operatives. The
Electronic Frontier Foundation has filed a suit against AT&T, alleging that
the company's storehouse of phone records and information about Internet
messaging, the Daytona system, provides the foundation for the NSA's
surveillance, though a company official noted that the system has in place
strict access controls. Among the new technologies that the government is
developing are a technique to identify the physical location of an IP
address and an application that compiles a list of topics by analyzing
computer-generated text, while Virage has provided the government with a
program that captures up to 95 percent of the spoken content of television
programming, with potential applications for monitoring phone
conversations.
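As a rough illustration of the cross-referencing described above, the Python sketch below joins a handful of made-up phone records against made-up wire transfers and flags any phone number that appears in both within a 24-hour window. The record formats, field names, and time threshold are invented for the example and are not drawn from any actual agency system.

from datetime import datetime, timedelta

# Hypothetical sample records; a real system would draw on far larger stores.
phone_calls = [
    {"number": "555-0101", "time": datetime(2006, 2, 20, 14, 5)},
    {"number": "555-0199", "time": datetime(2006, 2, 21, 9, 30)},
]
wire_transfers = [
    {"account_phone": "555-0101", "amount": 9500,
     "time": datetime(2006, 2, 20, 16, 45)},
]

def flag_coincidences(calls, transfers, window=timedelta(hours=24)):
    """Return (call, transfer) pairs that share a phone number and fall
    within `window` of each other."""
    flagged = []
    for call in calls:
        for xfer in transfers:
            if (call["number"] == xfer["account_phone"]
                    and abs(call["time"] - xfer["time"]) <= window):
                flagged.append((call, xfer))
    return flagged

for call, xfer in flag_coincidences(phone_calls, wire_transfers):
    print(call["number"], "call near a transfer of", xfer["amount"])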
A 1,000-Processor Computer for $100K?
CNet (02/24/06) Kanellos, Michael
To address the time lag between hardware development and software design,
researchers from the University of California, Berkeley, Stanford
University, and MIT have launched the Research Accelerator for Multiple
Processors (RAMP) program to create a laboratory computer built from
field-programmable gate arrays (FPGAs). ACM
President David Patterson notes that while hardware simulators do exist,
software developers rarely use them to their fullest extent, but that an
FPGA-based computer could be easily and inexpensively assembled. "If you
can put 25 CPUs in one FPGA, you can put 1,000 CPUs in 40 FPGAs," Patterson
said, estimating the cost for such a system at around $100,000, while only
taking up about one-third of a rack and consuming 1.5 kilowatts of power.
A comparable cluster would cost around $2 million, spread over 12 racks,
and consume 120 kilowatts. "What we are not trying to do is build an FPGA
supercomputer," Patterson said. "What we are trying to do is build a
simulator system." An eight-module version of the computer is expected in
the first half of this year, with the complete 40-module version expected
in the second half of next year.
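The arithmetic behind Patterson's comparison can be checked directly from the figures quoted above (25 CPUs per FPGA, 40 FPGAs, roughly $100,000 and 1.5 kW for RAMP versus $2 million and 120 kW for a comparable cluster). The short sketch below only reproduces those numbers; the per-CPU breakdown is a back-of-the-envelope illustration, not a RAMP specification.

# Figures quoted in the article; the derived ratios are illustrative only.
cpus_per_fpga = 25
fpga_count = 40
ramp_cost_usd = 100_000
ramp_power_kw = 1.5
cluster_cost_usd = 2_000_000
cluster_power_kw = 120

total_cpus = cpus_per_fpga * fpga_count            # 1,000 emulated CPUs
print(f"Emulated CPUs: {total_cpus}")
print(f"Cost per emulated CPU: ${ramp_cost_usd / total_cpus:,.0f}")
print(f"Cluster costs {cluster_cost_usd / ramp_cost_usd:.0f}x more and draws "
      f"{cluster_power_kw / ramp_power_kw:.0f}x the power")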
Researchers Developing Precise Search Software
News-Gazette (02/26/06) Kline, Greg
Computer Science professor Anhai Doan and his colleagues at the University
of Illinois are developing new search techniques that aim to refine the
broad-brush approach of Google and Yahoo! through knowledge and patterns
embedded in the massive repositories of digital records. Privacy advocates
have raised concerns about how much data people should disclose about
themselves in order to improve search results, who controls the
information, and where and how it is stored. "All of these things add up
eventually in the public consciousness to sort of a justified paranoia
about releasing any kind of information," said John Unsworth, dean of the
Illinois Graduate School of Library and Information Science, noting the
Bush administration's controversial domestic surveillance program and the
highly publicized data breaches at major credit card companies and other
organizations with large databases of sensitive information. The
proliferation of data in competing digital formats confounds search
technologies, though increased processing power has led to the creation of
computers that are capable of synthesizing and applying that information in
novel ways. Computers can work through entire data sets without having to
make inferences from a statistical sample. Unsworth is helping to develop
an international test project called NORA (No One Remembers Acronyms) to
apply data mining techniques to a digital repository of 18th and 19th
century English and American literature. Another University of Illinois
project is the Evolution Highway, an international data mining system that
cross-references human and mammal genomes to detect the origins of
diseases. While new search technologies promise to improve data mining,
computers still have trouble using context to tell apart the different
meanings of identical words, and users are becoming more adamant that
they receive assurances of anonymity before giving out personal
information.
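The difficulty the item closes on, telling identical words apart by context, is often illustrated with a simplified Lesk-style overlap test: choose the sense whose gloss shares the most words with the surrounding sentence. The miniature sense inventory below is invented for the example; production search engines use far richer resources and statistical models.

# Toy sense inventory; real systems use resources such as WordNet.
SENSES = {
    "bank": {
        "financial institution": {"money", "deposit", "loan", "account"},
        "river edge": {"river", "water", "shore", "fishing"},
    }
}

def disambiguate(word, sentence):
    """Pick the sense whose gloss words overlap the sentence the most."""
    context = set(sentence.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(gloss & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bank", "she opened a deposit account at the bank"))
# -> financial institution
print(disambiguate("bank", "we went fishing on the bank of the river"))
# -> river edge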
Fast Chips, Kill, Kill, Kill
Wired News (02/24/06) Gain, Bruce
Representatives of some of the world's leading chip companies at last
week's SPIE Microlithography Conference in San Jose, Calif., announced a
host of new products that promise to sustain Moore's Law over coming
generations with smaller, faster processors. IBM intends to print circuits
with 30-nm features, one-third the size of those on the market today,
through existing lithography and imaging techniques, while ASML Holding
unveiled a production process for 42-nm chips, claiming that it has the
technology to scale to 35 nm. Having already reported the production of a
45-nm SRAM chip, Intel expects to produce 30-nm wafers next year. The
Semiconductor Industry Association's technology roadmap calls for the
number of transistors on a CPU to double to 2 billion within two years, and
to 4 billion in four years, with densities increasing and sizes shrinking
through 2020. AMD and Intel report that the increase of clock speeds will
fall off as power consumption and heat become increasingly prohibitive
factors, though increased chip densities will enable the manufacturers to
improve performance with multiple cores: Intel has said that a single
processor could hold up to 100 cores within 10 years. While 5 GHz clock
speeds are on the horizon within four years, Insight64's Nathan Brookwood
believes that multicore processors are the key to achieving performance
gains on par with previous years. "Processor makers will focus on
architectural approaches such as parallelism," Brookwood said. "They will
go from dual core, to quad core, to octal core." According to the roadmap,
DRAM per chip is also expected to double twice over the next four years,
reaching and potentially exceeding 4 GB per chip.
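The roadmap figures above follow straightforward doubling arithmetic, spelled out in the sketch below; the starting points (about 1 billion transistors and 1 GB of DRAM per chip) are inferred from the doublings the article cites rather than taken from the roadmap document itself.

def double_every(start, period_years, horizon_years):
    """Value after horizon_years if it doubles every period_years."""
    return start * 2 ** (horizon_years / period_years)

transistors_now = 1e9   # assumed current count implied by "double to 2 billion"
dram_now_gb = 1         # assumed current density implied by "reaching 4 GB"

print(f"Transistors in 2 years: {double_every(transistors_now, 2, 2):.0e}")
print(f"Transistors in 4 years: {double_every(transistors_now, 2, 4):.0e}")
print(f"DRAM per chip in 4 years: {double_every(dram_now_gb, 2, 4):.0f} GB")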
Municipal Mesh Network
Technology Review (02/27/06) Savage, Neil
The city of Cambridge, Mass., this summer will unveil a city-wide wireless
Internet project based on an experimental system called Roofnet, an
unplanned, multiroute mesh network developed at MIT's Computer Science and
Artificial Intelligence Laboratory. Roofnet has already been operating for
about three years across an area of roughly four square kilometers near
MIT, using a few dozen transmitting/receiving nodes and one wired Internet
connection through MIT. The nodes--which consist of a small box containing
a hard drive, software written by MIT researchers, a Wi-Fi card, an
Ethernet port, and a connection to a rooftop antenna--have been located in
the homes and offices of volunteers, most of whom are MIT students and
staff. The original idea behind the network was to take advantage of the
benefits of a random, unplanned network. With simple-to-use equipment that
requires minimal maintenance, the Cambridge-wide network could be
inexpensive and grow organically, although service can be unacceptably poor
in areas where a node is far away from its nearest neighbor. The city
plans to rectify these coverage problems by attaching antennas to as many
tall buildings as possible. In addition, some of Roofnet's nodes may suffer
radio interference from a passing truck, which can reflect radio signals, or
from water, which strongly absorbs the network's transmissions on the Wi-Fi
frequency of 2.4 GHz. In order to
circumvent these problems, Roofnet's nodes constantly broadcast status
reports that signal where they are and which nodes they are in
communication with. By tracking these status reports, the network can
select the best route between any two nodes at a particular moment.
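In spirit, picking the best route over a mesh like Roofnet reduces to a shortest-path computation over link costs that the status reports keep current. The sketch below runs Dijkstra's algorithm over a small invented topology; Roofnet's real metric (based on expected transmission counts) and its routing protocol are considerably more involved.

import heapq

# Invented link costs between nodes (lower is better); in Roofnet these would
# be derived from the status reports each node broadcasts.
LINKS = {
    "A": {"B": 1, "C": 3},
    "B": {"A": 1, "C": 1, "D": 4},
    "C": {"A": 3, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}

def best_route(src, dst):
    """Dijkstra's shortest path from src to dst over LINKS."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in LINKS[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

print(best_route("A", "D"))   # -> (3, ['A', 'B', 'C', 'D'])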
Tech Gurus Drum Up Ideas for a Better World
San Francisco Chronicle (02/26/06) P. A1; Abate, Tom
Leaders from the technology industry gathered in Monterey, Calif., for the
Technology, Entertainment, Design (TED) conference, which has served as a
discussion ground for cutting-edge technologies since it began in 1984, when
Apple showcased its new Macintosh. Attracting such influential attendees as
former Vice President Al Gore and Google founders Sergey Brin and Larry
Page, this year's TED kept its traditional focus on the environment and
social consciousness. Gore gave an hour-long presentation on global
warming, imploring the members of his influential audience to take up the
challenge of changing the world in their daily work. Hans Rosling of
Sweden's Karolinska Institute gave a presentation on U.N. income and
mortality statistics, calling for a search tool to make similar data
throughout the world available. He later met with Google executives, who
invited him to present his ideas to the company's engineers. The
conference also saw presentations from the evangelical minister Rick Warren
and the atheist scholar Daniel Dennett, both of whom exhorted the crowd to
pursue meaning beyond corporate interests in their work, albeit invoking
different philosophical justifications. Among the new gadgets unveiled at
the conference was New York University computer scientist Jeff Han's new
touch-screen technology that enables multiple users to manipulate maps and
images by running their fingers over the objects.
IANA Up for Grabs?
Computer Business Review (02/27/06) Murphy, Kevin
In a step that could foreshadow putting the contract out to bid, the U.S.
Commerce Department last week posted a request for information (RFI)
soliciting expressions of interest from any party willing to run the
Internet Assigned Numbers Authority (IANA), which is currently controlled by
ICANN. The IANA oversees functions that include maintaining the pool of
available IP address space, the protocol and port number database, and the
list of servers on which the domain name system's top-level domains operate.
The last function requires governments that
wish to change who runs the local ccTLD to ask ICANN's permission, and the
latest RFI from the Commerce Department defines that function as "receiving
requests for and making routine updates of the country code top level
domain contact and nameserver information." The reason for the
government's decision to look for other potential bidders after ICANN's
long-term stewardship of IANA remains unclear, though ICANN observer and
blogger Bret Fausett suggests that .us domain and North American numbering
plan operator NeuStar lobbied for the rebidding of the IANA contract.
However, a definite theoretical conflict of interest would arise should
NeuStar, a ccTLD operator, win the contract. Other entities likely to be
interested in bidding on the IANA contract include the Council of European
National Top-Level Name Registries, Afilias, and VeriSign. The possibility
also exists that the IANA functions could be split up among different
organizations.
DOD Funds National Information Fusion Center
University at Buffalo News (02/21/06)
The Department of Defense is establishing the National Center for
Multi-Source Information Fusion Research at CUBRC and the University at
Buffalo to advance the technology underpinning critical national security
endeavors in the area of information fusion, a technology that aids in the
understanding of complex situations confounded by disparate and at times
conflicting information. Through its $1 million grant, the Defense
Department is making the center the focal point for information fusion
research and development for national security and defense, in addition to
advancing applications for the field in business and medicine. Researchers
at the institution will devise algorithms and software programs to track
moving targets, such as a plane or a ship, and predict their destinations.
"These tools will be able to provide enhanced situational awareness to a
commander so he or she can make a decision, determining not only what a
particular object or target is, but what it might be trying to do," said
Michael Moskal, research associate professor of engineering and applied
sciences at Buffalo. The center intends to develop prototype software for
both military and civilian applications within the next 18 months, while
also addressing more long-term security issues, such as infrastructure
threats and the development of widespread computer networks. The "Event
Correlation for CyberAttack Recognition System" is under development to
improve analysts' ability to defend against wide-scale, organized attacks on
networked computer systems. Information fusion draws heavily on data
mining, giving it a broad range of applications, particularly in medicine,
where the center hopes to use it to predict and contain disease outbreaks.
The center is also developing new techniques for signal processing,
estimation and inference methods, and technologies for visualization and
human/computer interaction.
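A bare-bones version of the target-tracking problem described above is constant-velocity extrapolation: estimate a velocity from the last two position fixes and project the track forward. The coordinates below are made up, and real fusion systems layer far more sophisticated filtering (Kalman filters, multi-sensor correlation) on top of this idea.

# Invented track: (time in seconds, x position in km, y position in km).
observations = [(0, 0.0, 0.0), (60, 1.2, 0.4), (120, 2.4, 0.8)]

def predict_position(track, t_future):
    """Constant-velocity extrapolation from the last two fixes."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    dt = t_future - t1
    return x1 + vx * dt, y1 + vy * dt

print(predict_position(observations, 300))   # expected position at t = 300 s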
Universities Diffused Internet Technology in
Mid-1990s
University of Toronto (02/22/06) Hall, Jenny
The Internet shows how important universities can be in transferring
technology to the larger society, according to a researcher at the
University of Toronto. Avi Goldfarb, a professor at the university's
Joseph L. Rotman School of Management, credits students with bringing the
Internet to the public, in contrast to the traditional way universities
disseminate technology, through research journals and business
partnerships. Goldfarb's paper is published in the March issue of the
International Journal of Industrial Organization. Although there is little
empirical research on the subject, Goldfarb analyzed data from nearly
105,000 surveys, which paints a picture of how the Internet emerged as a
tool in university life in the mid-1990s. Moreover, Goldfarb says students
who graduated from universities began to introduce the Internet to their
families, and even notes that low-income households were over 50 percent
more likely to embrace the Internet if a family member pursued higher
education in the mid-1990s. He wonders whether universities are the unsung
heroes in the emergence of the Internet. "IBM invents a lot of things and
their employees might use them--but they stay at IBM, so it's harder for
those technologies to have a wide impact," says Goldfarb.
Carnegie Mellon Scientists Show How Brain Processes
Sound
Carnegie Mellon News (02/23/06)
New research into how the brain processes sound has the potential to lead
to improvements in digital audio files and contribute to the design of
brain-like codes for cochlear implants. The Carnegie Mellon University
study, which appears in the Feb. 23 issue of Nature, offers a new
mathematical framework for understanding the process, and implies that
optimization of signal coding is involved in hearing a range of sounds.
Signal coding refers to the process the brain uses to translate sounds into
information. The researchers abstracted from the neural code at the
auditory nerve and represented sound as a discrete set of time points, or
"spike code," that captures acoustic components and their temporal
relationships. "We've found that timing of just a sparse number of spikes
actually encodes the whole range of natural sounds, including components of
speech such as vowels and consonants, and natural environment sounds like
footsteps in a forest or a flowing stream," says Michael Lewicki, associate
professor of computer science at the university. Improvements in signal
processing could enhance the quality of compressed digital audio files.
"We're very excited about this work because we can give a simple
theoretical account of the auditory code which predicts how we could
optimize signal processing to one day allow for much more efficient data
storage on everything from DVDs to iPods," says Lewicki.
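One loose intuition for a "spike code" is greedy sparse encoding: repeatedly find the best match between the signal and a small kernel, record its time and amplitude as a spike, subtract it out, and stop after a few spikes. The sketch below applies that idea to a toy signal with a single Gaussian kernel; it illustrates the general concept only and is not the model in the Nature paper.

import numpy as np

def sparse_spike_code(signal, kernel, n_spikes=2):
    """Greedy, matching-pursuit-style encoding: return (time, amplitude)
    spikes and the residual left after subtracting them from the signal."""
    residual = signal.astype(float)
    half = len(kernel) // 2
    spikes = []
    for _ in range(n_spikes):
        scores = np.correlate(residual, kernel, mode="same")
        t = int(np.argmax(np.abs(scores)))
        amp = scores[t] / np.dot(kernel, kernel)
        spikes.append((t, amp))
        # Subtract the fitted kernel, clipped at the signal edges.
        lo = max(0, t - half)
        hi = min(len(residual), t - half + len(kernel))
        residual[lo:hi] -= amp * kernel[lo - (t - half):hi - (t - half)]
    return spikes, residual

kernel = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)
signal = 2.0 * np.roll(np.pad(kernel, (0, 179)), 50)  # one event centered at t=60
spikes, residual = sparse_spike_code(signal, kernel)
print(spikes)   # first spike lands at t = 60 with an amplitude of about 2.0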
Make Sure Your Android Went to Finishing School
New Scientist (02/18/06) Vol. 189, No. 2539, P. 30; Marks, Peter
As robots come to occupy a more prominent role in industry and begin
appearing in the office and home, leading edge manufacturers such as
Toyota, Honda, and Toshiba are working to ensure that the devices are built
with safeguards to protect the humans with whom they will interact.
Domestic robots, much like pets, will demand added vigilance from their
owners to avoid charges of negligence and other legal issues. While
autonomous robots base their navigation on maps created through data
gleaned from the environment, Stephen Sidkin, a lawyer for the Law Society
of England and Wales, says that manufacturers will have to do more.
"Designers will have to ensure a robot's software is capable of learning
how to avoid all problems like hot drink spills. The onus has to be on the
manufacturer to get it right." The Japanese government is also concerned
about robot safety, having commissioned a long-term study to form safety
standards for robots employed in health care and domestic settings. If
safety becomes a significant enough issue, it could entail costs that
undermine the commercial viability of the industry. While it is
acknowledged that robots, much like humans, will not be able to handle
every situation that they face, researchers are working principally to
improve sensors, develop more intelligent control strategies, and produce
robots with softer, more rounded designs. Robots could include a "Go Limp"
technique that halts the device's motion once it runs into an object with
sufficient force. Backup motors would give robots more fluid movement to
prevent sudden motion stoppages caused by a system failure. Given that
Honda and Toyota are at the forefront of robotics, some analysts look for
safety and compliance measures that roughly parallel those of the automobile
industry.
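Stated as a control rule, "Go Limp" is simple: if the measured contact force exceeds a threshold, stop driving the joint instead of pushing through. The loop below is a hypothetical simulation with an invented force limit; it is not based on any manufacturer's actual safety controller.

FORCE_LIMIT_N = 20.0   # invented threshold for the example

def control_step(commanded_torque_nm, measured_force_n):
    """Return the torque actually applied; go limp on excessive contact force."""
    if measured_force_n >= FORCE_LIMIT_N:
        return 0.0          # "Go Limp": cut torque so the arm yields
    return commanded_torque_nm

# Simulated force readings: the arm brushes an obstacle on the fourth sample.
for force in [0.5, 1.0, 3.2, 27.8, 30.1]:
    applied = control_step(commanded_torque_nm=5.0, measured_force_n=force)
    print(f"force={force:5.1f} N -> applied torque={applied:.1f} Nm")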
Search and Destroy
Software Development Times (02/15/06) No. 144, P. 30; Handy, Alex
The open-source Bugzilla tool comes highly recommended as a first step for
companies looking to deploy their first Web-based software defect-tracking
system, given its reliability. "With Bugzilla, it's a giant duct tape and
bailing wire tangle of Perl scripts, but it's solid duct tape and bailing
wire," says testing and quality assurance consultant Matt Hargett. He
notes that a major factor behind the weakness of many bug-tracking systems
is the back-end database. Exporting the database itself to a standard file
format can be a godsend in the event of a system crash or corruption.
Hargett also notes that the effectiveness of a widely accessible
defect-tracking system can be hindered by co-workers each requesting their
own custom fields, which can lead to excessively elaborate databases and
result in muddled information or slower searches. Breaking up defect listings into
tiers can help address this problem. The first recommended step following
a bug's listing in the defect-tracking system is replication, which should
then be written into an automated test. No one in the organization should
be shut out from submitting bugs easily and quickly into the system.
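The replicate-then-automate step Hargett recommends typically ends up as a small regression test tied to the bug's tracker ID. The pytest-style example below shows the shape of such a test for a hypothetical off-by-one defect; the function, bug number, and behavior are invented for illustration.

# test_bug_1234.py -- regression test written while reproducing a defect.

def paginate(items, page_size):
    """Split items into pages of at most page_size elements."""
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def test_bug_1234_last_partial_page_not_dropped():
    """Bug #1234 (hypothetical): the final, partially filled page was dropped."""
    pages = paginate(list(range(7)), page_size=3)
    assert pages == [[0, 1, 2], [3, 4, 5], [6]]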
Mobile Downloads Pick Up Speed
BBC News (02/27/06) Ward, Mark
Third-generation (3G) technologies on the horizon promise to significantly
enhance data transfer, although meeting the projected data demands of
future subscribers--such as delay-free mobile downloading--will require
basic improvements in 3G technology itself. The GSM Association's Mark
Smith said operators were comfortable with upgrading and acknowledged the
need to do so despite their sizable financial investments in license
acquisitions and network build-outs. High-Speed Downlink Packet Access (HSDPA) is the
most likely candidate technology to supply the bandwidth upgrade, and
installed HSDPA networks deliver speeds as high as 1.5 Mbps while on the
move; later versions will offer even faster speeds. In comparison,
available 3G technology offers a basic data rate of 384 Kbps, which is only
slightly higher than Enhanced Data rates for GSM Evolution (Edge)
technology's peak speed. Smith says the first HSDPA networks are beginning
to show up, and the United States will probably host the first complete
HSDPA network. Meanwhile, 3G-Long Term Evolution (LTE) technology could
yield as much as 100 Mbps in bandwidth, and Sound Partners research
director Mark Heath said 3G-LTE may be necessary if mobile TV explodes and
operators have to deliver shows to many people simultaneously. It is more
probable that operators will favor network hardware upgrades rather than
competing technologies such as Wi-Fi or Wimax, given that they could wield
greater control over the former.
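The practical gap between the data rates quoted above (384 Kbps for basic 3G, roughly 1.5 Mbps for early HSDPA, up to 100 Mbps for 3G-LTE) is easiest to see as download times. The sketch below works them out for a hypothetical 4 MB music file, ignoring protocol overhead and real-world signal conditions.

# Peak data rates quoted in the article, in bits per second.
rates_bps = {
    "3G (384 Kbps)": 384_000,
    "HSDPA (1.5 Mbps)": 1_500_000,
    "3G-LTE (100 Mbps)": 100_000_000,
}

file_size_bits = 4 * 1024 * 1024 * 8   # hypothetical 4 MB download

for name, rate in rates_bps.items():
    print(f"{name:18s}: {file_size_bits / rate:7.1f} s")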
Robot Migration
National Journal (02/18/06) Vol. 38, No. 7, P. 53; Munro, Neil
With business owners protesting the tightening of immigration restrictions
on the grounds that their supply of cheap labor will dry up, companies have
been exploring new ways to replace human labor with machinery. Subway has
replaced its cashiers with electronic kiosks at one of its sandwich shops,
enabling the staff to process 70 more sandwiches per hour during its lunch
rush. If immigration caps remain unchanged, companies will have to explore
ways to increase worker production while reducing costs. Technology lifted
the curbs on U.S. economic growth and productivity in the mid-1990s, when
retailers such as Wal-Mart and Home Depot began using back-office software
packages to control delivery and just-in-time inventory, ATMs began to
replace bank tellers, and Americans used online services to invest in the
stock market. Retail productivity growth increased at an average annual
rate of 3.8 percent from 1995 to 2002. Online sales, self-checkout
services, and wireless scanners will further increase workplace efficiency.
The construction and restaurant industries have also seen increased worker
productivity from new technologies. Economists have reached a general
consensus that higher labor costs, a product of tighter immigration
policies, lead companies to invest more heavily in technology, though the
relationship is difficult to quantify. Companies are advised to focus
first on technologies that produce revenue rather than on those that merely
cut costs; one car rental company, for example, has invested in wireless
devices that precisely measure the amount of gas used. The trucking industry has
enjoyed a 20 percent increase in productivity due to wireless email and
other technologies, with the next big jump coming in the form of robots to
handle cargo and drive the trucks as early as eight years from now,
according to iRobot CEO Colin Angle, who notes that the technology exists,
but that "the devil is always in the details."
Linux Joins the Consumer-Electronics Revolution
EDN Magazine (02/16/06) Vol. 51, No. 4, P. 57; Webb, Warren
As the user-interface, networking, and multimedia requirements of current
consumer-electronics (CE) products expand, designers are adopting the
open-source Linux operating system to address those requirements. Linux
offers considerable cost savings thanks to its free source code as well as
the lack of a licensing fee and per-unit royalties. Linux is applicable to
a wide range of next-generation CE devices as the price of 32-bit
processors and memory drops, while the Linux kernel can be configured for
small-footprint systems and offers many features of a powerful operating
system. In addition, designers can avail themselves of a broad online
community of Linux developers in order to get advice and solve problems
quickly. The Linux kernel, which usually takes less than 1 MB of RAM,
boasts a memory manager that facilitates the secure sharing of system
memory by multiple programs; a process scheduler to guarantee that programs
will have fair CPU access; a virtual-file system that conceals hardware
details and presents a common file interface; and a user network interface
whose complexity or simplicity can be tailored to the user's preferences.
Design teams that traditionally relied on in-house operating-system software
development are embracing Linux to tackle the challenge of increasing
device complexity, while Linux vendors make revenues by combining
subscription support, tools, and services with custom distributions.
Issues generating uncertainty include legal challenges contesting the
novelty of Linux code, the possibility of fragmentation via kernel
modification, and General Public License provisions requiring the isolation
of source code for modified GPL software.
The Social Side of Services
Internet Computing (02/06) Vol. 10, No. 1, P. 90; Vinoski, Steve
Technologists are reluctant to admit the substantial role nontechnical
issues play in the success of service-oriented architecture (SOA)
development projects, according to IONA Technologies' Steve Vinoski.
Success cannot be achieved just by making services operational and
interactive through a registry; the human factor must be taken into account
as well. "Pushing your organization to adopt service-oriented approaches
and create a production SOA network requires a multipronged effort that
depends somewhat on where you fit into the organization," reasons Vinoski,
who adds that SOA faces unique challenges on different corporate levels.
In the upper management echelon, SOA adoption hinges on understanding how
such an approach can contribute to the bottom line; middle managers and
technical leaders, meanwhile, may perceive SOA as a threat to their
authority, a challenge against their own trusted techniques, or a perilous
gamble if their reputation for delivering on time and on budget is already
established; finally, developers are known for resisting the adoption of
new technology unless there is no other option. Communication and
socializing are critical to SOA adoption at all levels of the company.
"Essentially, you have to employ marketing and sales tactics to socialize
your ideas and win over the key people who can help make your dreams a
reality," writes Vinoski. Methods and tools to consider using in pursuit
of this goal include wikis, blogs, and "elevator pitches." Ultimately,
face-to-face communication is the most important method, Vinoski
concludes.
The Trouble With the Turing Test
The New Atlantis (02/06) No. 11, P. 42; Halpern, Mark
The Turing Test is considered by many to be the ultimate measure of
machine intelligence, through its reasoning that a machine can be
considered capable of thought if a person cannot distinguish it from a
human during interrogation. But AI researchers have been unable to pass the
test because they cannot produce machines that reply to questions
responsively and thus demonstrate that they grasp the remarks that prompted
them. Mark Halpern finds the principle of the test to be
flawed: For one thing, people in general automatically regard each other
as thinking beings on sight rather than judge each other by conversational
skills, as the test assumes. For another, the test's inventor,
mathematician Alan Turing, suggested that computers would be accepted by
the public as thinking machines by the end of the 20th century, which has
not happened. Halpern notes that "while we continue to find it convenient
to speak of the computer as 'trying' to do this or 'wanting' to do that,
just as we personify all sorts of non-human forces and entities in informal
speech, more and more of us are aware that we are speaking figuratively."
The lack of a compelling methodology for rating the success or failure of
artificial intelligence, apart from the Turing Test, has thrown the field
into disarray. Halpern concludes that "it is becoming clear to more and
more observers that even if [successfully passing the Turing Test] were to
be realized, its success would not signify what Turing and his followers
assumed: Even giving plausible answers to an interrogator's questions does
not prove the presence of active intelligence in the device through which
the answers are channeled." He advocates a more agnostic perception of AI,
which accepts that AI has yet to be achieved, with no idea if it ever will
be.