U.S. Investigates Voting Machines' Venezuela Ties
New York Times (10/29/06) P. 1; Golden, Tim
The federal government is looking into last year's takeover of an American
electronic voting machine manufacturer by a Venezuelan company. Smartmatic
Corporation was a fledgling firm before being chosen by the Venezuelan
government to handle the country's election machinery. Several months
before this decision, another small voting machine company, owned by some
of the same people as Smartmatic, received a $200,000 investment from a
government agency and joined Smartmatic in its bid for Venezuela's
electronic voting contract. Smartmatic then acquired Sequoia Voting
Systems, which has voting equipment in place in 17 states and the District
of Columbia. Recent public documents do not clearly show involvement of
the young engineers who started Smartmatic, and the company has been
restructured into an intricate web of offshore companies and foreign
trusts. Carolyn B. Maloney, congresswoman from New York, said, "The
government should know who owns our voting machines; that is a national
security concern...There seems to have been an obvious attempt to obscure
the ownership of the company." The Miami Herald revealed that Bitza, the
company that received a $200,000 investment from the government, was
inactive before receiving the money from the Venezuelan Finance Ministry,
which took a 27 percent stake in the company. Only weeks before Bitza and
Smartmatic won their contract, Omar Montilla, former adviser to Chavez on
election technology, was appointed to Bitza's board. Sequoia's Mitch
Stoller insists that "no foreign government or entity, including Venezuela,
has ever held any stake in Smartmatic." Some Sequoia voting machines
experienced delays and irregularities in Chicago during the March primary.
Some of these problems were traced to a Venezuelan-developed software
component that transmits results to a central computer. For
information on ACM's many e-voting activities, visit
http://www.acm.org/usacm
Nanotube Computing Breakthrough
Technology Review (10/30/06) Bullis, Kevin
A major hurdle to the development of ultrafast computers that use carbon
nanotubes has been overcome. Researchers at Northwestern University have
found a way to take material that contains batches of nanotubes and
segregate the nanotubes into groups having the exact specifications needed
for high-performance electronics. While computers that utilize nanotubes
are still many years away, high-definition displays, solar cells, and
devices for nanotoxicity testing are among short-term applications for the
technology. Using this new technique, nanotubes are separated by metallic
or semiconducting properties, and by diameter. Sorting by diameter was
expected by the researchers, but the ability to sort by electronic type
surprised them. Techniques used to accomplish these distinctions and
create logic circuits from carbon nanotubes are "all pretty tedious,"
according to Mildred Dresselhaus, professor of physics and electrical
engineering at MIT. It is not yet possible to increase the scale of
production to manufacture chips with millions of transistors to compete
with present computers. The breakthrough occurred when surfactants added to
a batch of nanotubes were found to assemble in different concentrations,
creating density differences that could be measured. The nanotubes could
then be sorted by density using ultracentrifugation. Andrew
Rinzler, professor of physics at the University of Florida at Gainesville,
says this method has produced "the best data I've seen so far," resulting
in batches that are sufficiently pure for high-performance applications.
The ultra-centrifugation process could theoretically be scaled up for
industrial production, says Mark Hersam, a materials-science and engineering
professor and one of the Northwestern researchers. Hersam says he has
developed transistors using thin-film meshes of semiconducting nanotubes,
the type that could be used in controlling pixels in flat screen TVs.
Researchers Trying to Make Control Systems More Reliable,
Autonomous
News-Gazette (10/29/06) Kline, Greg
University of Illinois researchers have put together the Center for
Autonomous Engineering Systems and Robotics (CAESAR) in order to encourage
research across fields, with the goal of making progress in the reliability
of autonomous systems. By bringing together various areas of research
under one umbrella, "all the good problems these days [that] are at the
boundaries between disciplines" can be addressed, according to Mark Spong,
a University of Illinois electrical and computer engineering professor.
Today, autonomy is being incorporated into embedded systems, surgical
procedures, and even household chores, and the field is advancing both
theoretically and in practice. The UI center will be part of the trust
institute, which comprises 60 faculty and staff members and more than
200 graduate students collaborating to create essential systems that are
verifiably trustworthy and resistant to accidental malfunction or attack.
UI professor Bill Sanders, director of the trust institute, says the center
will broaden our idea of what autonomous systems, including robots, can
accomplish in fields once considered too critical for them. CAESAR will
take on issues such as interaction between one autonomous system and
another, or between an autonomous system and a human, and even which should
prevail when their assessments conflict. One of the first projects
being carried out under the umbrella is that of UI math professor Robert
Ghrist, computer science professor Steven LaValle, and colleagues. They
are developing methods to get networks of simple sensors, such as motion
detectors, to cooperate in assembling the smaller pieces of data each
gathers into a larger picture of their environment. Their results could be
used by the military to monitor troop movements, or by civilians for
assessing agricultural or weather conditions.
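The article does not spell out how the sensor readings are combined; as a
purely illustrative sketch, assuming a grid of binary motion detectors whose
individual readings are fused into a single activity map (the field size and
sensor positions below are invented), the idea might look like this in
Python:

    import numpy as np

    # Hypothetical 10x10 field watched by simple binary motion detectors.
    # Each sensor covers a small square patch and only reports motion/no motion.
    GRID = (10, 10)
    SENSORS = [(2, 3, 2), (5, 5, 2), (7, 8, 1), (8, 2, 2)]   # (row, col, radius)

    def coverage_mask(row, col, radius):
        """Cells a single detector can see."""
        mask = np.zeros(GRID, dtype=bool)
        mask[max(0, row - radius):row + radius + 1,
             max(0, col - radius):col + radius + 1] = True
        return mask

    def fuse(readings):
        """Combine per-sensor binary readings into one picture of the field."""
        activity = np.zeros(GRID)
        for (row, col, radius), fired in zip(SENSORS, readings):
            if fired:
                activity[coverage_mask(row, col, radius)] += 1.0
        return activity

    # Example: the second and fourth detectors report motion; higher numbers
    # mark regions corroborated by more than one sensor.
    print(fuse([0, 1, 0, 1]))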
'Gambits' Are a Risk to Internet Domain System
International Herald Tribune (10/29/06) Shannon, Victoria
ICANN Chairman Vint Cerf is cautioning against undue haste in integrating
non-Latin characters within the Domain Name System. Without pointing a
finger at anybody, though he mentions China and the International
Telecommunication Union (ITU), Cerf says that politics and allegations that the U.S. has too
much control of the DNS could lead to a splintering of the World Wide Web.
"My concern is the potential for suddenly choosing another path after ICANN
has already put in six years of work on this," says Cerf. "Either they
will fail, or they will break the Internet." Presently, only 37 Western
characters can be used in Internet addresses. ICANN has begun to implement
a plan that would allow tens of thousands of other characters from the
world's various languages to be used, but testing has shown the potential
for problems. "It is turning out to be quite difficult to integrate this
very large character set in a way that is safe and stable and will work
with many applications for many decades to come - to future-proof it," says
Cerf. The comments come on the eve of the first-ever U.N.-sponsored
Internet Governance Forum in Athens and a week before the ITU opens a
three-week conference in Turkey at which the internationalization of
Internet governance is sure to be a key topic.
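For context on what "integrating this very large character set" means in
practice: internationalized labels are mapped into the existing 37-character
repertoire (letters, digits, and the hyphen) through IDNA/Punycode rather
than by changing the DNS itself. A small sketch using Python's built-in
codec, offered only as an illustration of the mapping, not of ICANN's test
plan:

    # IDNA/Punycode maps a Unicode label into the letters-digits-hyphen
    # repertoire the DNS already accepts; resolvers never see raw Unicode.
    label = "bücher"
    ascii_form = label.encode("idna")   # b'xn--bcher-kva'
    print(ascii_form)

    # Round trip back to the display form.
    print(ascii_form.decode("idna"))    # 'bücher'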
Semantic-Web Technologies for Enhanced Knowledge
Maintenance
IST Results (10/30/06)
An IST project known as SEKT has set out to achieve greater effectiveness
in the knowledge management that is critical to navigating the growing
amount of information available through Internet technologies. In order to
lay the groundwork for Semantic Web development, SEKT has named its three
objectives: ontology and metadata technology, knowledge discovery, and
human language technology. Semantic Web software is being created to
semi-automatically learn ontologies, extract metadata, and maintain both.
Middleware will allow the SEKT components to be integrated and will support
development of a methodology for semantically based knowledge
management. "The ontology-learning software--which is based on knowledge
discovery techniques--will develop ontologies populated with metadata, by
using software employing human-language technology," says project
coordinator John Davies of British Telecommunications (BT). Three case
studies are being used to evaluate the software components and methodology,
and feedback has been "very positive," says Davies. Newly appointed judges
in Spain employ the SEKT technology to gain assistance from more
experienced judges. BT employees get a more powerful window onto the
company's digital library through SEKT, which allows them to share
information in a common framework. "It is clear that Semantic
technologies can help address the challenges that knowledge workers face in
accessing the right information at the right time," Davies says. While
SEKT ends in December 2006, several initiatives are in the early stages of
exploiting its results, in fields such as law and bid management.
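SEKT's own deliverables are not reproduced here, but the basic artifact the
project builds on, metadata expressed against a shared ontology, can be
illustrated with a few RDF triples. A minimal sketch using the rdflib
library; the vocabulary and document URIs are invented for the example:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    # Hypothetical mini-ontology namespace for a digital-library case study.
    LIB = Namespace("http://example.org/library#")

    g = Graph()
    doc = URIRef("http://example.org/docs/report-42")

    # Metadata extracted (in SEKT, semi-automatically) from the document text.
    g.add((doc, RDF.type, LIB.TechnicalReport))
    g.add((doc, LIB.topic, LIB.KnowledgeManagement))
    g.add((doc, LIB.author, Literal("J. Davies")))

    # Once metadata sits in a shared ontology, queries work across sources.
    for subject in g.subjects(LIB.topic, LIB.KnowledgeManagement):
        print(subject)

    print(g.serialize(format="turtle"))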
At 30, Crypto Still Lacks Usability, Experts Say
CNet (10/28/06) Evers, Joris
Thirty years of public key cryptography were recently celebrated and
remembered by experts in Mountain View, Calif. Much of the discussion
centered on the obstacles presented by the U.S. government, which were
lifted in 1996. Brian Snow, a retired technical director at the National
Security Agency, was present to provide the government's perspective.
"This, for us, was a weapon," Snow said. "And this was possible free
release of weapons and we needed to defend the nation against other nations who
could be opponents at the time." Jim Bidzos, who was chief executive of
RSA in 1986, recalled the difficulty presented by the NSA in moving
cryptography out of the research stage and into development: "We found
ourselves competing with NSA, especially in the 90s." One of RSA's first
customers, Ray Ozzie, currently chief software architect at Microsoft, was
working on securing what would become Lotus Notes in 1986 when he ran into
government restrictions. "I had no clue," he said. "Initially we had
wanted to use hefty keys...We had spent years working on it, and after the
third meeting (with the government), I thought we were dead."  With the
rise of the Web in 1994, borders were eliminated and the need for secure
electronic commerce arose. Government export regulations were eased by
1996, allowing widespread adoption of cryptography. While the government
has since taken the completely opposite view on cryptography, often requiring it,
"the remaining issue that is big today on the plate is lack of quality on
the products," said Snow. At Microsoft, Ozzie plans to incorporate
encryption into products, taking compliance issues into consideration. "In
early years, we as an industry could blame the system for controlling the
pace of innovation because the government was throwing up roadblocks,"
explained Ozzie. "At this moment in time, it's laziness on the part of the
industry in terms of not embracing architecture and the importance of human
interface in design of secure systems."
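As a refresher on the mechanics whose 30th anniversary was being marked,
here is a deliberately tiny, textbook-style RSA sketch in Python; it uses
toy primes and no padding, so it resembles nothing like a production
implementation:

    # Textbook RSA with toy numbers: shows the public/private key split at
    # the heart of public-key cryptography. Real systems use ~2048-bit
    # moduli and padding schemes; this is only the arithmetic skeleton.
    p, q = 61, 53                 # two small primes (secret)
    n = p * q                     # 3233, the public modulus
    phi = (p - 1) * (q - 1)       # 3120
    e = 17                        # public exponent, coprime with phi
    d = pow(e, -1, phi)           # private exponent: modular inverse of e

    message = 65                  # a message encoded as an integer < n
    ciphertext = pow(message, e, n)    # anyone with (e, n) can encrypt
    recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt

    assert recovered == message
    print(n, e, d, ciphertext, recovered)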
Embedded Microprocessor Design Requires System-Level
Approach
Electronic News (10/27/06) Steffora-Mutscher, Ann
Advancements in software-rich embedded systems and the changing
relationship between hardware and software were the topics of discussion
when Electronic News sat down with leading executives of companies in the
field at the Design Automation Conference in July. Serge Leef, general
manager of the system-level engineering group at Mentor Graphics, spoke of
"a design community that is partitioned between hardware and software teams
that don't really communicate all that well and their design flows are
completely disjointed." VaST Systems CEO Alain Labat claimed that "we are
facing an inflection point...where for the first time the ability for
traditional EDA to move out from the traditional hardware/shrinking
marketplace...and we have to realize that we have an enormous opportunity
to actually address the needs of the embedded software design community."
National Instruments CEO Dr. James Truchard says, "The idea is to create a
development environment that can do both design and the test that goes with
it." His approach aims to improve embedded systems the way the PC improved
the desktop: by developing standard platforms with substantial capability
that run the same software development tools in order to do away with the
present, difficult integration. A major problem has been that software
development takes much longer than hardware development, so a completed IC
has to sit and wait for the appropriate software. Concurrent
hardware/software development has been discussed for years, and the recent
growth in software content is being driven by the rising cost of hardware design. IC
designers no longer handle SoCs; software designers have taken over this
role. According to Truchard, "fundamentally, you're building a framework
or platform for the software then defining the hardware to match it. In my
mind, the hardware is what's left over when you finish the software."
Vision-Body Link Tested in Robot Experiments
New Scientist (10/27/06) Simonite, Tom
More robotics researchers are conducting experiments that combine motor
activity and sensory input with hopes of gaining new insight into how to
build more life-like machines. Researchers say physical movement factors
into what one senses from the environment, and that interaction is a key to
intelligence. Indiana University neuroscientist Olaf Sporns is pursuing
"embodied cognition" research with Tokyo University roboticist Max
Lungarella that involves a four-legged walking robot, a humanoid torso, and
a simulated wheeled robot, each with a computer vision system that is
designed to focus on red objects. The walking and wheeled robots head
toward nearby red blocks, while the humanoid robot clutches the red objects
and brings them closer to its eyes for a better view. Information is
gathered from the joints and field of vision in order to measure movement
and vision, and a mathematical technique is applied to discover whether
there is a causal relationship between sensory input and physical movement.
"Information flows from sensory events to motor events and also from motor
events to sensory events," says Sporns, who adds that taking advantage of
the information flow could allow researchers to develop better robots.
"Using similar approaches, it should be possible to produce more efficient
cognitive systems, like those in nature, without specializing on a
particular task," says Daniel Polani, an artificial intelligence expert at
Hertfordshire University in the United Kingdom.
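The article does not name the mathematical technique; analyses of this kind
commonly use measures such as transfer entropy or lagged mutual information
between discretized sensor and motor signals. A rough, purely illustrative
sketch of that style of analysis in Python, on synthetic data rather than
the researchers' recordings:

    import numpy as np

    def mutual_information(x, y, bins=8):
        """Mutual information (in bits) between two discretized signals."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(0)

    # Synthetic example: "motor" commands partly drive the "sensor" reading
    # one step later, so information should flow motor -> sensor.
    motor = rng.normal(size=2000)
    sensor = 0.8 * np.roll(motor, 1) + 0.2 * rng.normal(size=2000)

    lag = 1
    print("motor -> sensor:",
          round(mutual_information(motor[:-lag], sensor[lag:]), 3), "bits")
    print("sensor -> motor:",
          round(mutual_information(sensor[:-lag], motor[lag:]), 3), "bits")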
Rutkowska: Anti-Virus Software Is Ineffective
eWeek (10/26/06) Naraine, Ryan
Stealth malware researcher Joanna Rutkowska recently demonstrated a way to
infect Windows Vista with a rootkit and introduced Blue Pill, a new concept
that uses AMD's SVM/Pacifica virtualization technology to create "100
percent undetectable malware." Hardware virtualization, in her opinion,
"has been introduced a little bit too early; before the major operating
system venders were able to redesign their systems so that they could make
a conscious use of this technology, hopefully preventing its abuse." Blue
Pill operates by creating a hardware virtual machine and moves the native
operating system to this virtual machine, becoming a "hypervisor" itself.
The native system doesn't even realize it's been moved to a virtual
machine. Rutkowska explains that operating systems need to be aware of
such virtualization and have their own hypervisor. In her opinion, "we
need at least two to three years to implement a foolproof protection
against hardware virtualization-based malware." Her ideal solution would
be "integrity checking of all system components," but she realizes the
difficulties involved. Blue Pill is an example of such undetectable, Type
III malware, which "does not introduce a single byte modification into
kernel, or other processes' memory." The only chance for detection would
be finding side effects. Rutkowska believes it is better to have "a good
integrity-based scanner, even if it's not capable of detecting Type III
malware, rather than having a classic anti-virus product which only tries
to find the known 'bad things.'" Stealth malware can silently subvert an
operating system without being noticed, so to Rutkowska, the most pressing
concern is not the complete prevention of malware infections, but the
ability to detect them.
Motorola Shows Off Future Tech
PC Magazine (10/23/06) Segan, Sascha
Motorola's "Technology Innovation Showcase" in Chicago offered a first
look at what the company has been working on. Motorola sees the cell phone
at the center of the next generation of computing. "We're taking a broader
view of the cell phone...finally, the Internet business models, the
experimental lab of the Internet can come to mobile devices. The
technology world is beginning to get the protocols and standards together
for this," says Motorola CTO Rob Shaddock. On display was a customer
service avatar that uses a camera and facial recognition to identify
repeat customers. Retail shelves of the future may be filled with boxes
that illuminate when picked up, and are able to track how many times, and
for how long, they are picked up by customers, thanks to an RFID chip.
Motorola has decided to comply with a European Union directive to make
electronics more recyclable, and will create phone casings made from
recycled and biodegradable materials as part of its ECOMOTO initiative.
Motorola also plans to release TV set-top cable boxes, as part of its
Connected Home project, since the FCC has mandated a new CableCard standard
that will force cable providers to adopt more high-tech boxes. Social
TV, also being developed, is a way for people to talk about a TV show they
are watching from different locations, utilizing a technology similar to
instant messaging. Also on display was "Motorola Messenger Modem," a
PC-to-phone system that uses a PC's modem to route VoIP calls over a land
line to your cell phone.
World Wide Web Consortium Releases First Version of GRDDL
Specification
Business Wire (10/24/06)
The World Wide Web Consortium's (W3C) new Gleaning Resource Descriptions
from Dialects of Languages (GRDDL) specification will enable software to
automatically extract information from structured Web pages to make it part
of the Semantic Web. GRDDL will allow those accustomed to expressing
structured data with microformats in XHTML to increase the value of their
existing data by porting it to the Semantic Web, at a very low cost. The
recent wave of Web 2.0 activity involves applications known as mash-ups
that combine different types of information. GRDDL is an answer to the
demands of the Semantic Web-based communities that have been searching for
a way to increase quality and availability of data on the Web, which makes
way for a higher level of data integration and diversity of applications
that can scale to the dimensions of the Web, making more powerful mash-ups
possible. The Semantic Web stack, the set of standards on which this work
is based, complies with the formality requirements of
applications such as managing bank statements or combining volumes of
medical data. GRDDL allows microformat users to take advantage of their
existing data in more formal applications as they consider more uses that
require data modeling or validation. The recently published GRDDL Use
Cases document presents scenarios such as scheduling a meeting, comparing
information from various retailers, and extracting information from wikis
to facilitate e-learning. Data that has been made part of the Semantic Web
can then be merged with other data. W3C says that the changes to existing
documents required by GRDDL are minimal.
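Mechanically, GRDDL amounts to this: a page points at a transformation
(typically an XSLT stylesheet), and a GRDDL-aware agent runs that
transformation to obtain RDF. A hedged sketch of that step in Python using
lxml; the file names are placeholders, not W3C-published resources:

    from lxml import etree

    # page.xhtml is assumed to declare a GRDDL transformation in its <head>,
    # e.g. a <link rel="transformation" href="hcard2rdf.xsl"/> pointing at an
    # XSLT that turns hCard microformat markup into RDF/XML. Both files are
    # placeholders for whatever a real page would reference.
    page = etree.parse("page.xhtml")
    transform = etree.XSLT(etree.parse("hcard2rdf.xsl"))

    rdf = transform(page)   # the result is an RDF/XML document
    print(etree.tostring(rdf, pretty_print=True).decode())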
Planning for US Science Policy in 2009
Nature (10/19/06) Vol. 443, No. 7113, P. 751; Kalil, Thomas
Since the next president will begin creating his or her initial policy
priorities in early 2009, the science and technology community should begin
devising a plan to garner the attention and resources it sees as
appropriate, writes former Clinton administration science and technology
policy advisor Thomas Kalil. Becoming part of the initial fiscal policy
laid down by an administration is important, because winning new resources
becomes more difficult as time passes. Both political parties have shown concern for
U.S. economic competitiveness and look to investment in science and
technology as a remedy. However, simply asking for more money than was
previously granted is not as effective as having an organized approach.
One example of the effectiveness of a concerted effort is support for
nanoscale science and engineering technology that will benefit many
disciplines, through wider-ranging increases in the budgets of key science
agencies. Rather than policy makers setting a goal and deciding how to
reach it, they should leave the decision as to how the goal can be reached
to scientific research. The scientific community should identify a number
of candidate topics for new or expanded research initiatives, and fill in
agendas for each of these as necessary. Next, recommendations should be
created on issues affecting science and technology across the board,
specifically, how the major agencies can reverse the trend in which
researchers feel that a grant proposal can only be written once the research is
done, or how the peer-review system can be tweaked to encourage riskier
proposals. The Department of Defense (DOD) currently allocates about $1.5
billion a year, roughly 0.3 percent of its budget, to basic research, which
means the next president could very possibly request that this portion be
increased, at least by as much as Congress requests and subsequently
"earmarks." With such presidential support, this funding could be doubled
over a five-year period.
The Economics of Information Security
Science (10/27/06) Vol. 314, No. 5799, P. 610; Anderson, Ross; Moore,
Tyler
The economics of information security has recently emerged as a field
characterized by prosperity and rapid momentum, write University of
Cambridge researchers Ross Anderson and Tyler Moore. The assembly of
distributed systems from machines owned by principals with different
interests demonstrates the increasing value of incentives in assuring
reliability. Indeed, incentives are coming close to equaling technical
design in importance. Anderson and Moore note, for instance, that public
disclosure of vulnerabilities gives vendors an incentive to correct bugs in
subsequent product releases. "Consumers generally reward vendors for
adding features, for being first to market, or for being dominant in the
existing market--and especially so in platform markets with network
externalities," the authors write. "These motivations clash with the task
of writing more secure software, which requires time-consuming testing and
a focus on simplicity." The new information security economics discipline
offers key insights into general topics as well as into specific security
issues such as bugs, phishing, spam, and law enforcement strategy. General
issues include peer-to-peer system design, the best balance of effort by
programmers and testers, the reasons behind the erosion of privacy, and the
politics of digital rights management. Anderson and Moore point out that
the work of information security economics researchers has begun to reach
into other disciplines, including general security economics and
dependability economics.
The Next Voting Debacle?
IEEE Spectrum (10/06) Vol. 43, No. 10, P. 12; Cherry, Steven
Help America Vote Act (HAVA) guidelines disqualify people from voting in
all but one of the 50 states if they are not on the voting rolls, and this
could disrupt the November elections because the databases that contain the
rolls are new and were not all built according to database industry best
practices. The HAVA rules were set
up to address the lack of coordination between state and county governments
in maintaining voter rolls. HAVA gives states a variety of options in
responding to mismatches between a voter's registration information in the
database and the data in other databases, and state officials in Texas,
Washington, California, South Dakota, and Iowa have used this latitude to
jettison many registrants, according to the New York University School of
Law's Brennan Center for Justice. Most mismatches are related to new
voters, voters who change their name, and those who relocate; typos made by
election officials can also cause mismatches, which is frustrating to
people such as the Brennan Center's Wendy Weiser, who says such errors
could be avoided through automated techniques developed by database experts
that many states did not use. States are required by law to "verify" the
voter rolls, but they do not have to necessarily take action against
registrants whose names or addresses are unverifiable. The massive
mismatch purges in California and elsewhere may have been partly stimulated
by the opinion of a lawyer in the Justice Department's Civil Rights
Division, who told Maryland officials that mismatches between a
registration application and motor vehicle or Social Security records
should make the applicant ineligible for addition to the voter rolls. The
final decision over a mismatch highlights a basic tension in voter
eligibility and in HAVA itself: An excessively simple, law-mandated way to
register and vote makes it easy to cast multiple ballots and commit other
forms of voter fraud, while an overly difficult registration process
results in voter disenfranchisement. To view ACM's report on "Statewide
Databases of Registered Voters," visit
http://www.acm.org/usacm/VRD
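The automated techniques Weiser alludes to are record-linkage methods that
tolerate typos and formatting differences instead of demanding exact string
equality. A small illustrative sketch using Python's standard difflib; the
fields and threshold are invented for the example:

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Rough string similarity in [0, 1], case- and whitespace-insensitive."""
        return SequenceMatcher(None, " ".join(a.lower().split()),
                               " ".join(b.lower().split())).ratio()

    def likely_same_voter(reg, dmv, threshold=0.85):
        """Flag a registration/DMV pair as a probable match despite small typos.

        A real system would also block on date of birth, use better name
        comparators, and send borderline cases to a human reviewer."""
        return (similarity(reg["name"], dmv["name"]) >= threshold and
                similarity(reg["address"], dmv["address"]) >= threshold)

    registration = {"name": "Katherine O'Conner", "address": "12 Elm St Apt 4"}
    dmv_record   = {"name": "Katherine OConner",  "address": "12 Elm Street Apt 4"}

    print(likely_same_voter(registration, dmv_record))   # True: typo-tolerant match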
The Internet Sucks
Maclean's (10/30/06) Vol. 119, No. 43, P. 44; Maich, Steve
The Internet was envisioned as a free and universally accessible
repository of knowledge, but that idealistic vision has not been realized.
Instead, the Web is rife with scammers, pornographers, sexual predators,
and misinformation, which sadly comprise the tip of the iceberg.
Northwestern University economics professor Robert Gordon puts the state of
the Internet in perspective, commenting that, while useful, the Net has not
produced much of anything that is authentically novel or as far-reaching as
earlier innovations, such as the telegraph. It is a little known fact that
the Internet boom of the late 1990s--which was followed by an equally
precipitous implosion--was fueled by a widely propagated myth that Internet
traffic was growing twofold every 100 days between 1997 and 2000. Rapidly
improving connection speeds and computer storage capacity--and some
enterprising entrepreneurs--have cultivated rampant online piracy of
copyrighted material, while the accuracy of information on the Web ranges
from genuine to exaggerated to outright fabrication because people prefer
cheap and convenient answers. Hard journalism is being devalued and
undermined by an army of biased, amateur commentators and reporters.
According to experts, the majority of information people are looking up on
the Web ranges from salacious (porn, gambling, extra-marital affairs) to
trivial (celebrity gossip, consumer products, TV shows), and evidence
suggests that the Internet's anonymity and lack of consequences are
encouraging such behavior. But garden-variety gamblers, porn addicts, and
plagiarists barely scratch the surface: The Internet has also become a
sanctuary for criminals such as pedophiles, con artists, hackers, and
terrorists.
P2P: The Next Wave of Internet Evolution
Business Communications Review (10/06) Vol. 36, No. 10, P. 48; Waclawsky,
John G.
Motorola chief software architect John Waclawsky writes that future
Web-based innovation and development will center around peer-to-peer (P2P)
overlays. With P2P, new services and technology-facilitated experiences
can be quickly, easily, and cheaply delivered to edge devices on a wide
array of networks across the globe, and the opportunity is immense when one
considers that there could be a personal area network (PAN) for every
person on Earth, in addition to the number of edge devices that might
cooperate. Waclawsky thinks small-scale P2P environments might develop
into larger and more powerful overlay networks once users start playing
around with them at home or the office and investors start producing new
services and devices. Among the online benefits that can be delivered via
P2P are faster connectivity and development, more creative product
differentiation, and more potential for e-commerce. E-commerce may perhaps
emerge as the most promising area for P2P overlays because P2P will permit
producers and consumers to connect without the need for intermediaries such
as distributors. "Not only will P2P overlays bring new features and
services far faster than traditional service providers will be able to
deliver, but edge device manufacturers that exploit overlays also will beat
core equipment suppliers to new service functionality," Waclawsky predicts.
"P2P changes everything."
A Case for Peering of Content Delivery Networks
IEEE Distributed Systems Online (10/06) Vol. 7, No. 10, Buyya, Rajkumar;
Pathan, Al-Mukaddim Khan; Broberg, James
Open Content and Service Delivery Networks (CSDNs) that can scale well and
share resources with other CSDNs are possible through a system based on an
open, scalable, and service-oriented architecture presented by University
of Melbourne and RMIT University researchers. By teaming up, Content
Delivery Network (CDN) providers can slash costs and avoid negative
business consequences resulting from Service Level Agreement violations,
and this can be done when a provider establishes peering with other
providers that have caching servers near its clients. The researchers offer
a Virtual Organization (VO) model for assembling CSDNs that share Web
servers with other CSDNs in addition to within their own networks. They
also suggest the use of market-based models in resource allocation and
management to stimulate sustained resource sharing and peering schemes
between different CDN providers at a global level. Web servers are most
critical to CSDNs, as they store content and value-added services as
infrastructure services and deliver them reliably. With a
service registry, VO providers can register and publish their resources and
service details, while a policy repository is used for the storage of
policies generated by administrators. Another important element for the
CSDN is the coordinated VO scheduler, which guarantees collaboration and
coordination with other CSDNs via policy exchange and scheduling of content
and services.
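As a rough illustration of the kind of service registry and peering lookup
the VO model implies, here is a short Python sketch; the class and field
names are invented, not taken from the authors' design:

    from dataclasses import dataclass, field

    @dataclass
    class CachingServer:
        provider: str       # which CDN/CSDN operates this server
        region: str         # coarse location, e.g. "eu-west"
        capacity_mbps: int

    @dataclass
    class ServiceRegistry:
        """Where VO members publish their resources and service details."""
        servers: list = field(default_factory=list)

        def register(self, server: CachingServer) -> None:
            self.servers.append(server)

        def peers_near(self, region: str, exclude_provider: str):
            """Other providers' caching servers close to a client region,
            i.e. candidates for peering to avoid SLA violations."""
            return [s for s in self.servers
                    if s.region == region and s.provider != exclude_provider]

    registry = ServiceRegistry()
    registry.register(CachingServer("cdn-alpha", "eu-west", 500))
    registry.register(CachingServer("cdn-beta", "eu-west", 800))
    registry.register(CachingServer("cdn-beta", "us-east", 800))

    # cdn-alpha is overloaded in eu-west and looks for a peer with capacity there.
    for peer in registry.peers_near("eu-west", exclude_provider="cdn-alpha"):
        print(peer)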