Big Shift Seen in Voting Methods With Turn Back to a
Paper Trail
New York Times (12/08/06) P. A1; Urbina, Ian; Drew, Christopher
Federal election officials and legislators have indicated that major
changes will most likely be seen in the way ballots are cast and counted by
the 2008 elections, including the elimination of voting machines without a
paper trail. New federal guidelines issued this week and legislation
expected to pass next year are causing voting districts to either retrofit
touch-screen machines with printers or scrap them altogether and implement
optical scan machines. Paperless, touch-screen voting
machines were used by about 30 percent of voters in the 2006 mid-term
elections, but scientists and politicians have become increasingly
concerned about the security and reliability of those machines. However,
federal Election Assistance Commission Chairman Paul S. DeGregorio points
out that counties that used paper trails ran into problems of their own,
urging officials to think their decision through so that old flaws are not
simply replaced with new ones. Legislation Congress is expected to pass
next year will allocate $150 million to fund the necessary changes in local
voting procedures, but some claim this would not be enough money. As part
of the new election regulations, vote counting software code likely will
have to be made available so it can be checked for vulnerabilities,
although the manufacturers claim that doing so will only help hackers.
Voting machines will also be subjected to new federal tests prior to
elections. VoteTrustUSA's Warren Stewart says, "We're confident that the
accuracy and integrity of voting is going to take some big steps forward
with the legislation in Congress right now. But our big concern is to
avoid replacing old problems with new ones."
Click Here to View Full Article
Guest Lecturer Focuses on Cybersecurity Threats
UDaily (University of Delaware) (12/07/06) Hutchinson, Becca
ACM fellow Eugene Spafford on Wednesday spoke at the University of
Delaware concerning the precarious state of cybersecurity and the dangers
to come if safety measures are not improved. Spafford, Purdue University
professor of computer science and executive director for its Center for
Education and Research in Information Assurance and Security, made it clear
that cybersecurity is facing a crisis with "overwhelming vulnerabilities in
most commonly used software applications, and well over 130,000 known
viruses and worms." Mentioning that cybercrime in general has become
more sophisticated, "because we have not done a very good job of protecting
ourselves," Spafford identified botware as the newest and largest threat.
He stated that "Detection is doomed and the problem is getting worse...two
out of 40 individuals is a victim of identity theft, and [only] one out of
every 10 email messages is valid." The apathy toward the idea that "we're
not simply users, we're victims" is Spafford's biggest concern, and he
attributed blame to a lack of ownership of the Internet, the increasing
abilities of hackers, and the lack of funding for math and computer science
education in American public schools. He claimed that clinging to
yesterday's security measures, firewalls, and virus-protection software is
"insanity." Spafford is chair of ACM's U.S. Public Policy Committee
(http://www.acm.org/usacm).
Click Here to View Full Article
Computer Scientists Unravel 'Language of Surgery'
Johns Hopkins University News Releases (12/08/06) Sneiderman, Phil
Johns Hopkins University computer scientists are applying speech
recognition concepts to surgery in order to construct mathematical models
of the most effective techniques used by surgeons. Using the system, new
surgeons could be trained, and experienced surgeons could gain perspective
on their work, including the way that their actions could be made safer or
more efficient. JHU Whiting School of Engineering professor of computer
science and principal investigator for the project, Gregory D. Hager,
explains, "Surgery is a skilled activity, and it has a structure that can
be taught and acquired. We can think of that structure as the 'language of
surgery'...we're borrowing techniques from speech recognition technology
and applying them to motion recognition." Just as speech recognition
breaks down words into their most elemental sounds, the team is breaking
down surgical action to its most basic gestures, which software can
represent mathematically, Hager says. To teach the robot the "language of
surgery," the team is using data recorded by the da Vinci robotic surgery
systems to construct mathematical models for specific procedures. Hager says
that using the da Vinci data, their software has become able to go "from
words to sentences," and they are progressing toward their goal of creating
a "large vocabulary." The project is backed by a three-year National
Science Foundation grant.
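The speech-recognition analogy can be made concrete with a toy hidden Markov model, the standard machinery for scoring phoneme sequences. Everything below is invented for illustration and is not the JHU team's actual model: a few hypothetical surgical "gestures" are scored against hidden task phases, and a fluent gesture ordering comes out more probable than a jumbled one.

```python
# Toy HMM sketch: score surgical gesture sequences the way a speech
# recognizer scores phoneme sequences. All states, gestures, and
# probabilities here are invented for this illustration.

GESTURES = ["reach", "grasp", "pull", "release"]   # observable motions
STATES = ["approach", "manipulate", "withdraw"]    # hidden task phases

START = {"approach": 0.8, "manipulate": 0.15, "withdraw": 0.05}
TRANS = {  # P(next phase | current phase)
    "approach":   {"approach": 0.5, "manipulate": 0.45, "withdraw": 0.05},
    "manipulate": {"approach": 0.1, "manipulate": 0.7,  "withdraw": 0.2},
    "withdraw":   {"approach": 0.2, "manipulate": 0.1,  "withdraw": 0.7},
}
EMIT = {   # P(observed gesture | phase)
    "approach":   {"reach": 0.7, "grasp": 0.2,  "pull": 0.05, "release": 0.05},
    "manipulate": {"reach": 0.1, "grasp": 0.4,  "pull": 0.4,  "release": 0.1},
    "withdraw":   {"reach": 0.1, "grasp": 0.05, "pull": 0.05, "release": 0.8},
}

def sequence_likelihood(observed):
    """Forward algorithm: P(observed gesture sequence | model)."""
    alpha = {s: START[s] * EMIT[s][observed[0]] for s in STATES}
    for obs in observed[1:]:
        alpha = {s: sum(alpha[p] * TRANS[p][s] for p in STATES) * EMIT[s][obs]
                 for s in STATES}
    return sum(alpha.values())

fluent = ["reach", "grasp", "pull", "release"]
jumbled = ["release", "pull", "reach", "grasp"]
print(sequence_likelihood(fluent) > sequence_likelihood(jumbled))  # True
```

In a real system the states and probabilities would be learned from recorded da Vinci motion data rather than written by hand, and the observations would be continuous kinematic measurements, but the scoring machinery is the same.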
Click Here to View Full Article
Future Net: Expanding the Web From Pages to Data
Sources
eWeek (12/07/06) Taft, Darrel K.
IBM Research is developing a middleware platform code-named Infinity that
will allow universal access to the data stored on mobile devices,
potentially creating a boundless pool of data. The goal of the IBM team,
led by Stefan Schoenauer, a researcher at IBM's Almaden Research Center, is
to enable all types of mobile devices to connect to the network using
various methods, including device-to-device communication. The Infinity
project marks the beginning of discovering how mobile networks can provide
the backbone for an innovative information marketplace that would replace
Web pages with data sources. "We have all this data in a lot of different
formats, and it's the middleware's job to translate those different
formats," said Schoenauer. "What's there now is a scattered spectrum, and
with this middleware we can have a unified platform." Several different
types of mobile devices have already been successfully tested.
Applications could include traffic monitoring and disaster response, which
would utilize the network's device-to-device capabilities, or simple data
search functions. Co-workers could make data available only to each other,
allowing a higher degree of collaboration and interactivity. Some think
that for such an idea to be fully realized, embedded systems must also be
integrated into the network.
Click Here to View Full Article
More Trouble With Programming
Technology Review (12/07/06) Pontin, Jason
C++ inventor Bjarne Stroustrup lists exceptional examples of C++ code that
"cleanly separate concerns in a program," allowing parts to be developed
separately and simplifying comprehension and maintenance. Outstanding
examples of programs written in C++ he cites include Google and the Mars
Rovers' scene-analysis and autonomous driving systems. Stroustrup makes
few predictions about the next programming language design paradigm shift,
noting that aspect-oriented programming is not about to leave the "academic
ghetto" anytime soon. But he does think the next conceptual change will
tie into concurrency management somehow. The C++ creator is against the
idea of "dumbing down" the coding process to overcome a computer-language
learning curve, arguing that while programming languages should not be
unnecessarily complex, the tools should be designed to serve skilled
professionals. Stroustrup backs this statement with the assertion that the
programming languages are not so difficult to learn, and contends that the
difficulty lies in appreciating "the underlying techniques and their
application to real-world problems." It is his wish that evolutionary
changes in programming would accelerate, and doing so requires funding of
"advanced development," "applied research," and "research into application"
on a currently unheard-of scale. Stroustrup deems it critical that
language and library evolution be supported with tools that expedite
system upgrades and tools that permit older applications to run in
environments designed for newer systems.
Click Here to View Full Article
Scientific Remedies
Inside Higher Ed (12/06/06) Thacker, Paul D.
Federal government strategies for funding science and technology in the
immediate future were the focus of a recent discussion at the Brookings
Institution. Richard Freeman, professor of economics at Harvard
University, said the federal government should focus more on targeted
spending, such as on graduate research fellowships, which would empower
young students to determine the future direction of science. He noted that
today the National Science Foundation (NSF) sponsors about the same number
of students in the Graduate Research Fellowship Program as it did 30 years
ago. Freeman said the NSF should triple the number of fellowships to about
3,000, and increase each fellowship by $10,000, to $40,000 annually.
Meanwhile, Thomas Kalil, special assistant to the chancellor for science
and technology at the University of California at Berkeley, suggested that
the federal government rely more on prizes, which would allow it to
essentially fund a successful project rather than a research proposal. He
added that prizes are attractive to private investment and that they can
captivate the public as well. Experts are concerned that the United States
could lose its leadership position in competitiveness if it does not take
science and technology more seriously.
Click Here to View Full Article
Criminals 'Target Tech Students'
BBC News (12/08/06)
IT students are being targeted by criminal enterprises before they
graduate from college, and can even have their degrees paid for by these
criminals. McAfee security analyst Greg Day, co-author of the "Virtual
Criminology" report, which explores the digital criminal underground, says
the most successful cyber gangs were partnerships between experienced
criminals and people with technical computer skills. As cyber criminal
gangs have expanded, skilled hackers have become more difficult to recruit,
causing criminals to search Web sites, message boards, and chatrooms for
potential targets around the world, some as young as 14 years old. Day
says the glamour of being a "hacker" as well as the low risk of being
caught and the high reward offered make the job very attractive to young
people. Often, these hackers can end up being blackmailed by the criminals
they work for, because they have knowledge of the crimes they have
committed. Day adds that some groups even try to recruit within the very
companies they hope to exploit. Day says, "Cybercrime is no longer in its
infancy, it is big business."
Click Here to View Full Article
On Internet2, Innovating at Higher Speed
Chicago Tribune (12/06/06) Van, Jon
Internet2 members celebrated the 10th anniversary of the high-speed data
network by gathering in Chicago this week. Those in attendance received a
first-hand look at the 10-fold increase in the speed of Internet2 to 100
Gbps; the ultrafast link between Chicago, New York, and Washington will be
expanded to the rest of the digital network by the summer of 2007. They
also discussed how far Internet2 has come, as it was born out of the desire
of the academic community to have a network that could be used for research
at a time when the regular Internet was becoming a commercial smash.
Nearly two-thirds of colleges are connected to the superfast network that
is largely used by researchers to test new technology. But college
students also have access to Internet2, which is increasingly being used to
gauge the potential popularity and economic feasibility of new products.
For example, AT&T is planning to roll out Internet-based television in some
parts of the Chicago area next year, but students on the campus of
Northwestern University have been watching TV programs over the Internet
for the past four years. In addition to speed, Internet2 members continue
to focus on secure authentication and soon hope to allow users to go online
without having to identify themselves by name. "If you authenticate
someone's right to use a service without giving personal information, it
reduces concerns over privacy issues," says Charles Catlett, a senior
fellow at the University of Chicago and Argonne National Laboratory.
Click Here to View Full Article
Technology by Design
The Record (Ontario) (12/07/06) Aggerholm, Barbara
Michael Terry, an assistant computer science professor at the University
of Waterloo, challenges his fourth-year students to find real-world
applications for their technical knowledge that will help people with
real-world problems. Students in his course on human-computer interaction
have responded to the challenge by coming up with devices that help
parking-meter readers write citations in any weather or lighting condition,
help catering chefs call up information about similar events so they can
better plan menus, and help architects plan buildings and rooms. Terry,
who also holds a degree in psychology, says that "in the past, computer
technology was designed...without thinking of real-world needs," but
"industry is starting to recognize we have to understand what people do to
better design for them." He has his students interview professionals and
shadow them in real-world settings to get a good idea of their work
environment and what sorts of things will help them out while integrating
well with their work. The device they designed for parking enforcement can
be held in one hand, so the enforcement officer can hold a flashlight or
umbrella in the other; rather than a pen or stylus, a button and scroll
wheel control it, and it employs a built-in camera to photograph the
license plate, OCR software to read letters and fill in forms, and GPS to
identify the location. The system they created for
architects uses special tokens and visualization techniques to allow
architects to see more; for example, a token placed in a specific room on a
paper drawing enables the architect to see a detailed zoomed-in version and
all the drawings for that specific room. One token allows the architect to
see the mechanical engineer's drawings, another overlaps the electrical and
mechanical engineers' drawings, and another shows all the drawing revisions
over time. The system designed for catering chefs is a digital notepad
with pen with which the chefs can recall similar events, put together
estimates and lists of requirements, and jot down notes and let their
creativity flow.
Click Here to View Full Article
Image Analysis: Enhancing the Human Equation, or
Removing it Altogether
Advanced Imaging Pro (12/06) Reid, Keith
Automated image analysis technology, although mature, still encounters
difficulties as new demands are made of it. Software designed to study
pathology specimens using image analysis can observe single cells or
groups of cells, but runs into problems caused by cell and disease
diversity, as well as lighting and image quality. However, by identifying
and observing the nuclei of the cells in a sample, the software has been
able to perform the necessary calculations. The technology has been
further applied to the identification of cancer cells, and while it is not
meant to replace human pathologists, image analysis can be used before
human observation to draw a human's attention to certain features, or after
human observation to make sure nothing is missed. Military applications
for image analysis are currently concentrated in remote sensing, automated
targeting, and automated navigation, and have struggled with the same
problems as other applications, but in this case a lack of real-time
analysis can result in a life-or-death situation. DARPA wants to develop
automated, self-navigating supply vehicles that utilize many sensors, a
goal embodied in the unmanned race across the desert won by the Stanford
team in 2005. Another application is the automated refueling of aerial
vehicles, both manned and unmanned. Researchers successfully designed a
system of algorithms by which the fuel probe from an unmanned plane found
and calculated its position in relation to a drogue, the basket at the end
of an aerial tanker's refueling hose, and then connected to and disconnected from
the drogue, helping simplify one of the most challenging aspects of
unmanned military aircraft.
Click Here to View Full Article
DHS Passenger Scoring Illegal?
Wired News (12/07/06) Singel, Ryan
Privacy advocates charge that the Department of Homeland Security's
Automated Targeting System (ATS), which assigns terrorism scores to people
traveling in and out of the United States, is a violation of the limits
that have been placed on the department by federal lawmakers. Pointing to
a provision in the 2007 Homeland Security funding bill, Identity Project
members Edward Hasbrouck and James Harrison wrote, "By cloaking this
prohibited action in a border issue...the Department of Homeland Security
directly and openly contravenes Congress' clear intent." A DHS spokesperson
said the appropriations bill's language--which bars government agencies
from using appropriations funding to "develop or test algorithms assigning
risk to passengers whose names are not on government watch lists"--does not
cover the ATS, which harvests passenger data from international flights and
scores each passenger's risk based on watchlists, criminal databases, and
other government systems. High scorers are targeted by Customs and Border
Protection for extra screening at deplaning time, and the data and scores
can be kept for 40 years, broadly shared, and be used for hiring decisions;
in addition, travelers are not able to see or contest their scores.
According to congressional testimony by DHS official Paul Rosenzweig, the
system had "encountered 4,801 positive matches for known or suspected
terrorists," although it was not clear how many were correct matches.
Critics who say the ATS program is illegal under the law include Marc
Rotenberg of the Electronic Privacy Information Center and Jim Harper of
the Cato Institute. DHS spokesman Jarrod Agen argues that the
appropriations bill's language refers specifically to a program called
Secure Flight, a planned successor to the CAPPS II screening system, but
Rotenberg and Harper disagree with that interpretation.
Click Here to View Full Article
LinuxBIOS Ready to Go Mainstream
Linux.com (12/07/06) Byfield, Bruce
Though it faces major challenges such as a scarcity of resources and
protests from certain proprietary chipset makers and original equipment
manufacturers, the LinuxBIOS project is on the verge of standardizing a
free BIOS for computers. LinuxBIOS consists of the smallest amount of
code necessary to start a mainboard to the point where a payload can
complete the machine's boot, and the project's profile was raised via
its inclusion in the Free Software Foundation's high priority list and its
use in the One Laptop Per Child (OLPC) initiative. Debugging and updating
LinuxBIOS is faster and simpler because it is written in C instead of
assembly language, while OLPC BIOS release manager Richard Smith notes that
LinuxBIOS' cost is far lower than that of its proprietary counterparts; in
addition, customers concerned about security may find LinuxBIOS more
attractive because it is licensed under the GNU General Public License.
Perhaps the biggest benefit of LinuxBIOS, according to Los Alamos National
Laboratory researcher Ron Minnich, is its ability to store BIOS knowledge
that could prove invaluable to manufacturers and vendors later on. Among
the obstacles LinuxBIOS faces is lack of support for the development of an
alternative operating system and non-interoperability with Windows, says
Smith. But he points to the promising trend of "Manufacturers...getting
better about releasing specs on older boards," which could give a boost to
LinuxBIOS' support and credibility; the success of the OLPC project could
also be an advantage for LinuxBIOS. Minnich is optimistic about Google's
funding of an automated distributed testing environment for LinuxBIOS,
while momentum is building for major vendors to offer LinuxBIOS as an
option.
Click Here to View Full Article
Microsoft Research Fights Critics, Targets
Innovation
Network World (12/06/06) Fontana, John
Though Microsoft is criticized for falling short when it comes to
innovation, one thing to its credit is the establishment of Microsoft
Research (MSR), which currently commands a budget of over $250 million and
a staff of over 700 researchers; MSR has served as the incubator of technologies that
are incorporated into the Xbox 360, Exchange Server, and Windows Vista, to
name a few products. Nucleus Research CEO Ian Campbell says the problem
for Microsoft is its inability to complement its strong underlying
technical innovation with an equally strong marketing savvy. "Technology
transfer is a full contact sport," notes MSR's Rick Rashid. "It can happen
by accident, but mostly it is hard work." He adds that practically all
Microsoft products today stem from research in some capacity. Long-term
projects the lab is focusing on include the SenseCam virtual memory and the
TouchLight interface, which seeks to replace the mouse and keyboard with
new input mechanisms via computer vision and sensing. Other projects of
interest include Nocturnal, a social networking tool that embeds Web site
bookmark-sharing into instant messaging systems, and a privacy engine that
can filter data that is stored in statistical databases. Director of
Microsoft Research's Silicon Valley lab Roy Levin remarks that his lab
achieves a balance between near-term projects and long-term projects.
Click Here to View Full Article
Should the U.S. Increase Its H-1B Visa Program? CON:
Wages Belie Claims of a Labor Shortage
San Francisco Chronicle (12/07/06) P. B7; Matloff, Norman
The tech industry is pushing for an expansion of the H-1B visa program not
to forestall a labor shortage as it claims, but to save money by hiring
lower-paid foreign workers, argues UC Davis computer science professor
Norman Matloff. He says the industry's allegation of a labor shortage is
contradicted by a Business Week article noting that starting salaries for
new bachelor's degree graduates in electrical engineering and computer
science have been flat or declining in recent years, while
postgraduate-level wages have also remained level. Though H-1B holders are
required by law to be paid the "prevailing wage," there are plenty of
loopholes employers exploit to pay them less. "Employers who favor aliens
have an arsenal of legal means to reject all U.S. workers who apply,"
commented immigration lawyer Joel Stewart. Though Matloff is in favor of
importing the most talented people from overseas to maintain U.S.
innovation, he contends that there are few tech-oriented H-1B holders who
make the grade. Government data shows that most H-1B holders earn a
maximum wage of around $60,000, while the most talented techies earn more
than $100,000. The most common type of H-1B holder employers hire is the
software developer.
Click Here to View Full Article
ICANN Reviews Revoking Outdated Suffixes
Associated Press (12/07/06) Jesdanun, Anick
ICANN has begun accepting public comments at its meeting in Sao Paulo,
Brazil, this week concerning which outdated domain name endings should be
revoked and deleted, with the public comment period to remain open until
Jan. 31, 2007. ICANN also has launched a review of eligibility rules for
registration for .int, a domain in development designed for international
organizations. In terms of country code top-level domains (ccTLDs), one likely
candidate to be nominated and cancelled is .su, for the Soviet Union. While
.yu for Yugoslavia still has an abundance of Web sites, Yugoslav republics
Serbia and Montenegro are transitioning to their own ccTLDs. East Timor
once used .tp but now uses .tl, and Great Britain's residents use .uk far
more than the provincial .gb. East Germany's .dd and Zaire's .zr already
have been deleted. If ICANN makes deletions, there likely will be a
year-long transition period for Web site users to switch to another
domain.
Click Here to View Full Article
The Privacy Klatch
National Journal (12/02/06) Vol. 38, No. 48, P. 52; Harris, Shane
Nascent and current technologies that could be employed for the protection
of civil liberties during data collection and analysis are the focus of
"privacy workshops" sponsored by the Office of the Director of National
Intelligence (DNI). "It was clearly an effort to reach outside of the
intelligence community and reach outside of the classified environment,"
noted Jim Dempsey of the Center for Democracy and Technology. Alex Joel
with the DNI's Civil Liberties and Privacy Office explained that
participants recommended several technologies, such as tools for comparing
multiple databases without sharing data, and data-misuse prevention
technologies such as devices that generate "audit logs." Certain privacy
proponents and technology experts, including some very vocal critics of the
Bush administration, were not invited to the workshops, while attendees
said arguments over policy were avoided. "I think the overall aim is to
look at increasing the body of knowledge [on privacy protection], to
further technical research within the DNI," said Factiva director of
government services Tony Hall. It is no coincidence that the DNI's office
is taking the reins of research that was originally conducted under the
auspices of the Defense Department's now-defunct Total Information
Awareness (TIA) project, with certain TIA component programs folded into
the DNI and included in the Tangram program. The Tangram manager pledged
that civil liberties officials would be consulted prior to the deployment
of the new program. The advice of privacy workshop participants will be
used by DNI officials to establish a research agenda for guaranteeing
privacy in new counter-terrorism solutions and to bolster their cognizance
of the cutting edge.
Click Here to View Full Article
Me Translate Pretty One Day
Wired (12/06) Vol. 14, No. 12, P. 210; Ratliff, Evan
Meaningful Machines has spent four years designing automated translation
software, and Carnegie Mellon University professor Jaime Carbonell, who is
also Meaningful Machines' chief science officer, presented a paper last
summer that calls the software a major development as well as the most
accurate Spanish-to-English translation system in existence. Machine
translation (MT) made significant advances thanks to a shift from
rules-based systems to statistical-based MT, in which algorithms study
collections of previous translations to determine the statistical
likelihood of words and phrases in one language cropping up in another,
building a model from those probabilities that can be used to assess new
text. But such algorithms are only successful when applied to the same
type of text on which they have been trained. The system developed by
Meaningful Machines employs a large collection of text in the target
language, along with a small volume of text in the source language and a
vast bilingual dictionary; when translating a passage, the system examines
each sentence in consecutive five- to eight-word fragments, and uses the
dictionary and a process called flooding to produce and store all possible
English translations for the words in each fragment. The system ascertains
the most coherent candidates by scanning the English text and ranking
candidates by the frequency of their occurrence in the text. As the
software scans each successive text chunk, it rescores the candidate
translations according to the degree of overlap between each fragment's
translation choices and the ones before and after it. The system looks for
unknown words in the smaller source language text collection using a
synonym generator, and when they are found the system drops the original
word and looks for other sentences that employ the surrounding words. The
system's commercial viability hinges on dramatically boosting the speed of
translation.
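The flooding-and-ranking step described above can be sketched in a few lines. The bilingual dictionary and target-language corpus below are tiny invented stand-ins, and the overlap rescoring between adjacent fragments is omitted; the real system works at a vastly larger scale.

```python
# Toy sketch of dictionary "flooding": generate every candidate English
# rendering of a source-language fragment, then rank candidates by how
# often their word pairs occur in a target-language corpus. The dictionary
# and corpus here are tiny, invented stand-ins.
from itertools import product

DICTIONARY = {                      # each source word has several options
    "el": ["the"],
    "gato": ["cat", "feline"],
    "duerme": ["sleeps", "is sleeping"],
}

TARGET_CORPUS = ("the cat sleeps here . the cat sleeps all day . "
                 "a feline is sleeping .").split()

BIGRAMS = {}                        # word-pair frequencies in the corpus
for a, b in zip(TARGET_CORPUS, TARGET_CORPUS[1:]):
    BIGRAMS[(a, b)] = BIGRAMS.get((a, b), 0) + 1

def flood(fragment):
    """Every combination of dictionary options for the fragment's words."""
    return [" ".join(combo) for combo in
            product(*(DICTIONARY[w] for w in fragment))]

def coherence(candidate):
    """Frequency of the candidate's word pairs in the target corpus."""
    words = candidate.split()
    return sum(BIGRAMS.get(pair, 0) for pair in zip(words, words[1:]))

candidates = flood(["el", "gato", "duerme"])
best = max(candidates, key=coherence)
print(best)  # the cat sleeps
```

A production system would also rescore each fragment's candidates against the overlapping fragments before and after it, as the article describes, and would use a synonym generator to cope with words missing from the dictionary.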
Click Here to View Full Article
Why Multigrid Methods Are So Efficient
Computing in Science and Engineering (12/06) Vol. 8, No. 6, P. 12;
Yavneh, Irad
Multigrid computational techniques are established methods for rapidly
solving elliptic boundary-value problems, and are also regarded as an
efficient tool for solving other kinds of computational problems, writes
Technion-Israel Institute of Technology professor Irad Yavneh. Multigrid
methods are undergirded by the concept of employing a simple local
process but applying it at all scales, and there are several tasks that must
be completed in order to devise a multiscale solver for a given problem. The
appropriate local process must be selected; suitable coarse variables and
proper techniques for transferring information across scales must be
chosen; and the proper equations or processes for the coarse variables must
be developed. The level of ease or difficulty for each of these tasks can
vary in accordance with the application, Yavneh explains. Building
appropriate equations for coarse grids requires data from a fine grid, and
so care must be taken in selecting the coarse-grid variables, since coarse
grids are practical only for the representation of smooth functions.
Proper operators are necessary to facilitate the transfer of information
between the coarse and fine grids, and the multiscale algorithm can be
applied outside of the one-dimensional parameter. Solving the problem
approximately on a coarser grid and then interpolating the solution to a
fine grid as a representation of a first approximation is considered to be
an effective strategy, according to Yavneh.
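The interplay Yavneh describes — a simple local process (smoothing), transfer operators between scales, and a coarse-grid version of the equations — can be sketched for the 1-D Poisson problem -u'' = f. This is a generic textbook two-grid cycle written for illustration, not code from the article; the grid sizes and sweep counts are arbitrary choices.

```python
# Two-grid sketch for -u'' = f on [0,1] with u(0)=u(1)=0: damped-Jacobi
# smoothing (the local process), full-weighting restriction and linear
# interpolation (the inter-scale transfers), and an exact tridiagonal
# solve on the coarse grid. Textbook illustration only.

def jacobi(u, f, h, sweeps, omega=2/3):
    """Damped Jacobi: quickly damps oscillatory error components."""
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = (1 - omega)*u[i] + omega*0.5*(u[i-1] + u[i+1] + h*h*f[i])
        u = new
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
    return r

def restrict(r):
    """Full weighting: fine grid -> coarse grid (half the points)."""
    return ([0.0] +
            [0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
             for i in range(1, len(r) // 2)] + [0.0])

def prolong(e, n_fine):
    """Linear interpolation: coarse grid -> fine grid."""
    out = [0.0] * n_fine
    for i in range(1, len(e) - 1):
        out[2*i] = e[i]
    for i in range(1, n_fine - 1, 2):
        out[i] = 0.5 * (out[i-1] + out[i+1])
    return out

def solve_direct(f, h):
    """Thomas algorithm: exact solve of the coarse tridiagonal system."""
    m = len(f) - 2                       # interior unknowns
    lo = up = -1.0/(h*h)
    di = 2.0/(h*h)
    c = [0.0]*m; d = [0.0]*m
    for i in range(m):
        denom = di - (lo*c[i-1] if i else 0.0)
        c[i] = up/denom
        d[i] = (f[i+1] - (lo*d[i-1] if i else 0.0)) / denom
    u = [0.0] * len(f)
    for i in range(m - 1, -1, -1):
        u[i+1] = d[i] - c[i]*(u[i+2] if i < m - 1 else 0.0)
    return u

def two_grid_cycle(u, f, h):
    u = jacobi(u, f, h, sweeps=3)          # pre-smooth: kill oscillatory error
    e = solve_direct(restrict(residual(u, f, h)), 2*h)  # coarse correction
    u = [a + b for a, b in zip(u, prolong(e, len(u)))]
    return jacobi(u, f, h, sweeps=3)       # post-smooth

n, h = 65, 1/64
u, f = [0.0]*n, [1.0]*n                    # constant forcing
for _ in range(8):
    u = two_grid_cycle(u, f, h)
# exact solution of -u'' = 1 is x(1-x)/2; the error shrinks by roughly an
# order of magnitude per cycle
err = max(abs(u[i] - (i*h)*(1 - i*h)/2) for i in range(n))
```

A full multigrid method applies the same cycle recursively, so the "coarse solve" is itself another two-grid step; that recursion is what yields the cost-per-unknown efficiency the article's title alludes to.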
Click Here to View Full Article