Bush Looks to Beef Up Protection Against
Cyberattacks
Wall Street Journal (01/28/08) P. A8; Gorman, Siobhan
The Bush administration plans to include a $6 billion cybersecurity
initiative, designed to protect U.S. communications systems from
cyberattacks, in February's budget proposal, but with few details
available, many lawmakers are unsupportive and skeptical that it will pass.
Department of Homeland Security Secretary Michael Chertoff says
cyberterrorists make more aggressive attacks on government and private
networks now than in previous years, and in one recent case hackers even
shut down power equipment in several regions in an attempt to extort money.
The proposal would cost an estimated $30 billion over seven years with $6
billion in startup costs in 2009. Director of National Intelligence Mike
McConnell says the proposal could be cut back to cover only government
networks, though more than 90 percent of attacks occur in the private
sector. Lawmakers, however, are more concerned with the details of the
proposal, and how network security and monitoring would be implemented
without compromising civil liberties. One of the key sticking points will
be how much access intelligence agencies have to private networks, since
those agencies, particularly the National Security Agency, are expected to
have big roles in any protection scheme. "We don't want to
unconstitutionally infringe on the rights of private business under the
guise of this new program," says House Homeland Security Committee Chairman
Bennie G. Thompson (D-Miss.).
At Florida Polls, Touch Screens and Crossed
Fingers
Washington Post (01/27/08) P. A8; Whoriskey, Peter
While the punch-card voting systems that created the "hanging chad"
debacle in the 2000 presidential election are gone, some in Florida are
preparing for more ballot trouble from touch-screen machines during the
upcoming presidential primaries. Following a machine failure in a 2006
congressional election, the Florida legislature voted to ban touch-screen
machines, but replacement machines will not be ready until the general
election in November. In one 2006 congressional contest, there were 18,000
people who checked into the polls and chose candidates in other contests
but not in the congressional race. Some believe the undervotes were the
result of a confusing ballot page or a conscious choice to skip that
contest due to the negative tone of the race. Others suspect the machines
dropped the votes, and numerous voters claimed the machines did not
function properly. The cause of the undervotes has not been determined,
and an official investigation did not find a bug in the machines that would
have caused the votes to be dropped. The touch-screen machine failures
prompted state lawmakers to require voting machines to leave a paper trail,
forcing most counties to buy new machines. With the exception of Sarasota
County, none of the counties required to replace their machines will be
ready by the state's primary election. "Floridians have said they want to
be able to cast a ballot on a piece of paper," says state elections
officials spokesman Sterling E. Ivey. "We're moving to a paper system to
help restore confidence."
An Alternative Approach to NSF Funding of HPC
HPC Wire (01/25/08) Vol. 17, No. 4, Agarwala, Vijay K.
Vijay K. Agarwala, director of Research Computing and Cyberinfrastructure
Information Technology Services at Penn State University, has sent a letter
to all members of the Coalition for Academic Scientific Computing (CASC)
proposing an alternative strategy for National Science Foundation funding
in which a portion of the funding for large-scale computing at a single
center be redistributed to several smaller high-performance computing
systems in as many as 25 Tier 3 centers. "The science community and
industry will be well served if a portion of the federal funding for
large-scale computing systems is more evenly allocated rather than most of
it being concentrated in a few centers," Agarwala writes. "While the
national centers (Tier I and II) with their ultra-large systems will
continue to have an important role in meeting the capacity and capability
computing needs of U.S. scientists and engineers, support for a number of
university-based research computation centers will help fill existing
funding gaps and address many important policy objectives and goals such as
development of skilled HPC personnel, deeper university-industry
partnerships, increased adoption of HPC systems as a discovery tool by
a larger number of academic researchers as well as by industry, improved
industrial competitiveness, and economic revitalization." Agarwala argues
that by extending the same benefits and support that larger centers receive
from NSF to smaller centers it will make the two-way migration between
campuses and national resources far more common, increase the number of
participants and providers, and make grid computing more of a reality.
Engineer Unlocks Wii's Hidden Potential
CNet (01/28/08) Shankland, Stephen
Johnny Chung Lee, a Ph.D. graduate student from Carnegie Mellon
University's Human-Computer Interaction Institute, used the infrared remote
control of Nintendo's Wii to develop a virtual-reality head-tracker; a
virtual whiteboard on a wall, tabletop, and laptop screen; and a Minority
Report-style arm-waving and finger-pointing multitouch user interface. Lee
used a computer to process data from the "Wiimote" system, in which a
bar-shaped sensor unit emits infrared light and the remote's camera tracks
up to four infrared sources within a 45-degree field of view. He attached
the Wiimote to a TV and the sensor bar to his head to create the
virtual-reality head-tracker, which feeds the tracking data into an
algorithm that adjusts the perspective of an image on a monitor. Video
games could take advantage of the 3D feel that it produces. Lee used a pen
with an infrared LED in its tip to create the whiteboard application, which
only needed a quick calibration to enable a computer to track what he was
"drawing" on a wall, tabletop, and laptop screen. He built the
Wiimote-based user interface by attaching small reflectors to his
fingertips, which the sensor bar can track, and ultimately respond to
gestures such as pinching and swiping.
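The head-tracking demo rests on simple geometry: the apparent separation of the two infrared LEDs worn on the head tells the camera how far away the head is, and their midpoint tells it where the head sits relative to the screen axis. The sketch below illustrates that geometry; the constants and function names are illustrative assumptions, not Lee's actual code.

```python
# Hypothetical sketch of head-tracking from two infrared dots. The camera
# reports 2D pixel positions of two LEDs a known distance apart; their
# separation yields depth, and their midpoint yields lateral offset.
import math

CAMERA_FOV_RAD = math.radians(45)   # horizontal field of view
CAMERA_WIDTH = 1024                 # assumed sensor width in pixels
CAMERA_HEIGHT = 768                 # assumed sensor height in pixels
LED_SPACING_MM = 200                # assumed physical LED separation

def head_position(p1, p2):
    """Estimate head position (x, y, z in mm) from two tracked IR dots."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    separation_px = math.hypot(dx, dy)
    # Angle subtended by the LED pair, assuming a simple pinhole camera.
    angle = separation_px * CAMERA_FOV_RAD / CAMERA_WIDTH
    z = LED_SPACING_MM / (2 * math.tan(angle / 2))
    # Midpoint of the dots, re-centered so (0, 0) is the camera axis.
    mx = (p1[0] + p2[0]) / 2 - CAMERA_WIDTH / 2
    my = (p1[1] + p2[1]) / 2 - CAMERA_HEIGHT / 2
    # Convert pixel offsets to millimeters at distance z.
    mm_per_px = z * math.tan(CAMERA_FOV_RAD / 2) * 2 / CAMERA_WIDTH
    return (mx * mm_per_px, my * mm_per_px, z)
```

A renderer would feed the returned offsets into its view transform each frame, shifting the virtual camera to match the viewer's head.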
Thinking About Tomorrow
Wall Street Journal (01/28/08) P. R1; Vascellaro, Jessica E.
Looking ahead 10 years, it is almost certain that we will not be taking
jetpacks to work or relying on robot maids that cook and clean for us, but
there
will be some significant advancements during that time that drastically
change how we live our lives. Mobile devices will continue to get smaller
and become more powerful, connecting to the Internet through high-speed
links and eventually giving people the power and functionality of a full
desktop in a cellphone-sized device. Communication will become
increasingly Internet based, with social networks often replacing
traditional channels and becoming the main venue for communications such
as birthday greetings and wedding announcements. The lines between online
and real-world shopping will become increasingly blurred, as online
activity is tracked and relayed to stores that identify users and make
suggestions based on that user's online activities. Electronic
entertainment will be radically different as well. New video-game systems
will likely use cameras to track player motions, replacing any and all
forms of handheld controllers, and many Hollywood studios will start to
produce low-budget films that are released directly to the Internet. How
we access the Internet will also change, as mobile devices become more
user-friendly and better suited to Web use. Search engines will do a
better job of
anticipating what users are looking for by more closely tracking Internet
use. Google software engineer Matt Cutts says search engines might even
analyze data from people's real-world movement, if they agree to it, by
tapping into GPS devices in someone's car or phone. GPS technology will
also allow people to interact more easily with resources such as Google
Earth and Microsoft's Live Search Maps, which can provide information on
buildings and the environment in detailed aerial maps.
TV for the Visually Impaired
Technology Review (01/28/08) Sauser, Brittany
Schepens Eye Research Institute researchers have developed software that
enables users to manipulate the contrast on their televisions to create
specially enhanced images for TV viewers with macular degeneration, a
disease that can make the images on a screen appear blurred and distorted.
"Our approach was to implement an image-processing algorithm to the
receiving television's decoder," says project leader and Harvard Medical
School ophthalmology professor Eli Peli. "The algorithm makes it possible
to increase the contrast of specific-size details." Peli and his
researchers found that patients with macular degeneration cannot perceive
high spatial frequencies in an image, making fine details difficult or
impossible to see. To make images easier to view, the researchers designed
an algorithm that increases the contrast over the range of spatial
frequencies the visually impaired can still see, specifically the middle
and low frequencies. The researchers conducted a study using 24
patients with visual impairments, and six normal-sighted people, to
determine the amount of image enhancement people prefer. The subjects sat
in front of a television to watch four-minute videos, adjusting contrast
with a remote control. All subjects, even people with normal sight,
preferred some level of enhancement. Eventually, the system could make
watching television a more "rewarding experience" by making it easier for
people to pick out objects of interest, and Peli hopes the system will be
incorporated into the options menu on all televisions.
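The enhancement step can be illustrated with a toy one-dimensional version: isolate a band of spatial frequencies, here taken as the difference between a light and a heavy moving-average blur, and amplify it before adding it back to the signal. The window sizes and gain below are illustrative assumptions, not Peli's parameters.

```python
# Toy 1D band-boost contrast enhancement for a luminance signal.
def moving_average(signal, window):
    """Simple box blur with edge clamping."""
    half = window // 2
    n = len(signal)
    return [sum(signal[max(0, i - half):min(n, i + half + 1)]) /
            (min(n, i + half + 1) - max(0, i - half))
            for i in range(n)]

def enhance(signal, gain=2.0, narrow=3, wide=9):
    """Amplify mid-spatial-frequency detail in a 1D luminance signal."""
    fine = moving_average(signal, narrow)    # keeps mid + low frequencies
    coarse = moving_average(signal, wide)    # keeps only low frequencies
    band = [f - c for f, c in zip(fine, coarse)]   # mid-frequency band
    return [s + gain * b for s, b in zip(signal, band)]
```

Run on a step edge, the output overshoots on either side of the transition, which is exactly the exaggerated contrast that makes boundaries easier to see; the remote control in the study effectively adjusted the gain.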
Smile! You've Been Averaged
ScienceNOW (01/24/08) Bhattacharjee, Yudhijit
Researchers from the University of Glasgow in the United Kingdom say the
"average" image of an individual can be used to improve the accuracy of
face-recognition technology. The work of psychologists Rob Jenkins and A.
Mike Burton is based on the way in which the brain becomes more familiar
with a face upon repeat encounters. Jenkins and Burton developed a model
of how the brain constructs an image of a face by distilling the underlying
features into a reliable mental representation, and then applied it to a
face-recognition system. The baseline performance of the system in probing
20 different pictures of 25 famous celebrities was 54 percent, but when a
computer program was used to create a composite image of each celebrity,
faces were recognized with 100 percent accuracy. They also
constructed the average of the celebrity images the system failed to
recognize during the baseline performance test, and this image was
correctly recognized 80 percent of the time. An airport would be able to
use average images to improve the accuracy of matching passenger photos
taken by its camera, according to the researchers. Anil Jain, a
face-recognition expert and computer science professor at Michigan State
University, says the technique needs to be tested on larger data sets.
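The averaging step itself is simple: given several aligned grayscale photos of the same person, compute the per-pixel mean, which smooths out the lighting and pose variation that trips up matching on any single photo. A minimal sketch (the array sizes and values are illustrative, and real systems align the faces first):

```python
# Pixel-wise averaging of aligned grayscale face images, each represented
# as an equal-sized 2D list of intensities.
def average_faces(images):
    """Return the per-pixel mean of equal-sized 2D grayscale images."""
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n
             for c in range(cols)]
            for r in range(rows)]
```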
Haptics: Just Reach Out and Touch, Virtually
ICT Results (01/25/08)
European researchers have developed a haptic interface that allows users
to feel virtual textiles. The system combines a specially designed glove,
a sophisticated computer model, and the visual representation of cloth to
reproduce a realistic sensation. "It is a multi-modal approach that has
never been tried before," says professor Nadia Magnenat-Thalmann,
coordinator of the HAPTEX (haptic sensing of virtual textiles) project.
Modeling deformable textiles was the project's first significant
challenge, says Magnenat-Thalmann, and included taking precise
measurements of the tensile, bending, and stretching properties of the
material. "You also need very high
resolution," she says. "The visual system will give a realistic impression
of movement with just 20 frames a second, but touch is much more sensitive.
You need a thousand samples a second to recreate touch." The team
developed two models to create virtual textiles, one global model to track
the overall properties of the material, and a second, fine-resolution model
that maps the actual sensation on the skin. This information is then
combined with a detailed visual representation of the textile, which needs
to be in perfect synchronization to create a realistic sensation. To
combine a force-feedback device with a tactile device, the project
developed a powered exoskeleton with a pair of pin arrays that provide
tactile sensation to two fingers. The glove gives the sensation of bending
and stretching fabric, while the pin arrays convey texture.
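At the 1,000-samples-per-second rate quoted above, the fine-resolution model amounts to re-sampling a texture profile under each pin of the tactile array on every tick. A hypothetical sketch, with a sinusoidal weave standing in for a measured fabric profile (the function and constants are illustrative, not HAPTEX's models):

```python
# Sample a periodic weave texture under each pin of a tactile pin array.
# As the fingertip's contact point moves across the virtual cloth, each
# pin height is looked up from the texture at that pin's position.
import math

def pin_heights(contact_x_mm, pins=4, pitch_mm=0.5, weave_mm=2.0):
    """Normalized pin heights (0..1) under a fingertip at contact_x_mm."""
    return [0.5 * (1 + math.sin(2 * math.pi *
                                (contact_x_mm + i * pitch_mm) / weave_mm))
            for i in range(pins)]
```

Calling this once per millisecond while the coarse model updates the exoskeleton's force gives the layered coarse-plus-fine sensation the article describes.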
Bubble-Busting Sounds Could Keep Chips Cool
New Scientist (01/24/08) Palmer, Jason
Sound waves can be used to improve the performance of liquid cooling as a
solution for keeping computer chips from overheating, according to Ari
Glezer and his colleagues at the Georgia Institute of Technology. Glezer's
latest experiments involve placing an acoustic driver--which acts as a
speaker--opposite the heated surface, with cooling fluid in-between. When
the team projected a small amount of sound energy, at frequencies near 1
kilohertz, across the fluid, the gathering bubbles were dislodged. The
amount of heat they were able to dissipate increased by as much as 147
percent. Sound-enhanced liquid cooling delivered its best results when the
acoustic driver and the heated surface were just a few millimeters apart,
which means the approach could work in applications that have little space.
"The underwater jets solution is effective, but this way is more compact,
requires less power, and is, well, more elegant," Glezer says.
'Biometrics' Used to Identify Terrorists
Advertiser (AU) (01/22/08) Riches, Sam
Computer scientists and engineers, investigators, and lawyers gathered in
Adelaide, Australia, this week for the first international "e-Forensics"
conference, which addressed Internet and electronic crime and crime
prevention. "We're talking about the Internet, telephony, mobile phones,
mobile phone cameras, digital cameras--all of these are being used not only
to commit crimes but also to solve crimes," says conference chairman Dr.
Matthew Sorrell from the University of Adelaide. The United States is
currently working with Australia, the United Kingdom, Canada, Japan, and
China on a collaborative database that would use biometrics to identify and
trace terrorists and other persons of interest. Airports and corporations
have used such artificial intelligence tools for years to capture facial
features and match them to existing images or data. "There have been some
very minor achievements, but people still expect to spend more money and
time and to achieve a solution that cannot afford any more mistakes--aiming
for 100 percent accuracy," says Northeastern University professor Patrick
Wang.
Battlefields Will Be Big Test for 'Seeing' Robot
Christian Science Monitor (01/25/08) P. 3; Peter, Tom A.
The U.S. military could begin deploying "seeing" robots in Afghanistan and
Iraq within 12-18 months to test their ability to maneuver through unknown
terrain autonomously to perform such missions as removing bombs and
searching for casualties in contaminated sites. Until now, giving robots
the ability to see required computer-vision systems that were hard to mount
on anything much smaller than an SUV. Furthermore, systems designed for
factory robots in controlled environments did not work well outdoors in
conditions such as direct sunlight, fog, or dust. Also, the sensor systems
had a number of moving parts, making it hard for robots to guide
themselves. But new 3D flash laser radar (LADAR) technology eliminates all
these barriers, allowing sophisticated vision for small machines. "It's
one of the holy grails of robotics to be able to do that," says William
Thomasmeyer, president of the Pittsburgh-based National Center for Defense
Robotics. "It's like the smaller robots have been trying to navigate with
one arm tied behind their back when compared to larger robots... [Now] that
hand becomes untied for smaller robots, and they've got the same advantages
in terms of sensors and sensing as larger robots do."
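A flash LADAR frame is essentially a grid of per-pixel range readings captured in a single pulse, with no moving parts to fail; turning it into the 3D points a robot navigates by is a small pinhole-camera calculation. A hypothetical sketch (the field of view and function names are assumptions, not any vendor's interface):

```python
# Convert a 2D grid of flash-LADAR range readings (meters) into 3D points
# using a pinhole-camera model with square pixels.
import math

FOV = math.radians(45)   # assumed horizontal field of view

def to_points(range_image):
    """Return (x, y, z) points for each pixel's range reading."""
    rows, cols = len(range_image), len(range_image[0])
    f = (cols / 2) / math.tan(FOV / 2)   # focal length in pixels
    points = []
    for v, row in enumerate(range_image):
        for u, z in enumerate(row):
            x = (u - cols / 2) * z / f   # sideways offset at depth z
            y = (v - rows / 2) * z / f   # vertical offset at depth z
            points.append((x, y, z))
    return points
```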
Mouse of the Future? N.C. Students Navigate PCs With
Glove
LocalTechWire.com (01/24/08) Anselm, Amy
Students at North Carolina State University's College of Engineering have
developed the Manus Glove, a device that they believe will be the next
major innovation in controlling computers. "Basically, it is the mouse of
the future," says Ameir Al-Zoubi, a senior in computer science at N.C.
State and a member of the four-student team that developed the glove. The
Manus Glove uses motion-sensing technology that translates small hand
motions into acceleration data, tracking movement rather than position.
The device takes about
30 minutes to learn, says team member Matthew Crenshaw. During a
demonstration, Crenshaw flicked through application windows and browsed Web
pages. Cursor speed is controlled by the angle of the user's hand, while
movements such as flicking a finger backward or pinching fingers together
can send a browser window back a page or open documents and links. The
glove has sensors attached to the end of each digit so touching the thumb
to another digit can act as a click or activate hotkeys. One of the main
purposes of the glove, which can operate any Bluetooth device, is to unify
the control of cell phones, MP3 players, headsets, keyboards, and other
devices. "We're thinking we could make this like a leather driving glove,"
Al-Zoubi says. The researchers say the glove could also be used during
presentations since it is more natural, easier to use, and less distracting
than having to hold a mouse, remote, or a clicker.
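Acceleration-based tracking of this kind is usually implemented by integrating the accelerometer samples twice, once to velocity and once to displacement, so only relative motion is recovered, never an absolute position. A hypothetical sketch (the sample rate and pixel scaling are illustrative assumptions, not the team's values):

```python
# Dead-reckoning cursor control: integrate (ax, ay) acceleration samples
# to velocity, then to on-screen displacement.
def cursor_path(accel_samples, dt=0.01, scale=1000.0):
    """Turn acceleration samples into successive relative cursor positions."""
    vx = vy = x = y = 0.0
    path = []
    for ax, ay in accel_samples:
        vx += ax * dt          # integrate acceleration -> velocity
        vy += ay * dt
        x += vx * dt * scale   # integrate velocity -> displacement (pixels)
        y += vy * dt * scale
        path.append((round(x, 3), round(y, 3)))
    return path
```

Note how a burst of acceleration followed by an equal deceleration leaves the cursor displaced but at rest, which is why the glove tracks gestures well but cannot report where the hand is in space.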
Net Body Issues Plea for Liberty
BBC News (01/24/08)
In a lengthy report sent to the U.S. Department of Commerce, ICANN asked
to be freed from U.S. government control. In its report, ICANN said that
it has achieved the objectives the government said it must accomplish
before it can be released from official oversight. ICANN also noted that
now is the time to begin discussing its transition from an organization
overseen by the U.S. government to an independent organization. In remarks
to BBC News, ICANN President Paul Twomey said the government will still
have a role in the organization, even after it becomes independent. Twomey
said the U.S. government would keep the organization informed about public
policy developments, but would not dictate its agenda or development.
ICANN's report will be discussed with officials from the Department of
Commerce at a meeting in March.
Common Computers Gain Superpowers Through Student
Network
Michigan State University Newsroom (01/21/08)
More than one million people have joined the Berkeley Open Infrastructure
for Network Computing (BOINC), a computing network that allows anyone with
a computer to help search for extraterrestrial intelligence, fight AIDS and
other diseases, assist cancer research, or support a variety of other
research efforts. BOINC is an open-source computer program, created at
the University of California, Berkeley, that assembles the power of a
supercomputer from the unused processing power of personal computers. The
system breaks extremely large computing tasks, such as proving Einstein's
theories or modeling climate change, into small pieces, which are
downloaded to personal computers that process the
pieces while idle. Running BOINC does not slow down other applications
because BOINC is given the lowest processing priority, allowing other
programs to be processed before BOINC starts. Jonathan Brier, a Michigan
State University junior and founder of the Michigan State BOINC
Researchers, says the total computing power of BOINC now surpasses that
of the fastest supercomputer in the world, and because it uses only a
small fraction of the world's computers, it has the potential to be much
faster.
Decertification Dilemma
Government Technology (12/07) Vol. 20, No. 12, Douglas, Merrill
Following an expert review of California's electronic voting systems,
Secretary of State Debra Bowen decertified all of the machines and then
recertified them for use under specific circumstances.
She says ongoing, unresolved debates about the security and reliability of
various e-voting techniques, as well as documented security bugs, spurred
her concerns. Bowen's office now requires election officials to deploy
tougher security and post-election auditing procedures for all machines,
while Hart InterCivic direct recording electronic (DRE) systems may
continue to be employed by counties, provided that they are in compliance
with the stronger standards. Counties cannot use DRE systems from Sequoia
Voting Systems and Diebold Election Systems except to perform early voting,
and must supply one machine for disability access at each polling place.
California Association of Clerks and Election Officials President Stephen
Weir says this ruling will be especially significant for 21 counties where
Sequoia or Diebold DRE systems have been used for all Election Day voting,
adding that counties will probably be forced to use paper ballots for most
in-person voting in February. Votes can then be counted either by running
all ballots through the centrally located optical scanning systems that are
currently used to tally absentee ballots, or by purchasing new scanning
systems to count votes at the precinct level. Weir says the tallying
process will be slowed down with the addition of paper ballots to the
absentee ballots that counties already feed into the central scanners,
while installing new precinct-based scanners and associated training costs
could come to about $66 million. Bowen says the effects of the voting
system decertifications should be eased by the state's vote-by-mail
policies.
Web 3.0: Chicken Farms on the Semantic Web
Computer (01/08) Vol. 41, No. 1, P. 106; Hendler, Jim
A new class of semantic technologies is being explored by both new and
well-entrenched companies seeking to harness their power for new "Web 3.0"
applications, driving anticipation of commercialization, writes Rensselaer
Polytechnic Institute professor Jim Hendler. The World Wide Web Consortium
produced the first Resource Description Framework specification in 1999,
but semantic technologies did not begin to take off until the following
year when the Defense Advanced Research Projects Agency invested in RDF to
tackle interoperability issues the U.S. Defense Department was saddled
with. The W3C refocused on the development of Semantic Web tools in 2001
under the Semantic Web Activity umbrella, and within several years the
improvement of the RDF standard, the completion of RDF Schema
standardization, and the commencement of work on the Web Ontology Language
(OWL) were being stressed by new working groups. In 2004 new versions of
RDF and RDFS, along with the first version of OWL, earned recommendation by
the consortium as Web standards. Various chicken-and-egg problems plague
the Semantic Web, including Web 3.0 applications' requirement for data that
is available for sharing either inside or across an enterprise; the need
for machine-readable vocabularies for describing these data sets or
documents; and the need for extensions to browsers or other Web tools
augmented by Semantic Web data. The creation of three-tiered Semantic Web
applications that bear a similarity to standard Web applications--and thus
the presentation of Semantic Web data in a usable form to end users or to
other applications--has been made possible by the emergence of RDF query
languages such as SPARQL. The challenge lies in persuading companies or
governments to release data, ontology designers to construct and share
domain descriptions, and Web application developers to probe
Semantic-Web-based applications.
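The kind of question a SPARQL engine answers over RDF data can be illustrated with a hand-rolled triple-pattern matcher (with chicken-farm triples, in the spirit of the article's title). This is only a sketch of the idea; a real Web 3.0 application would use an RDF library and a SPARQL endpoint, and the data and names below are invented:

```python
# RDF data is a set of (subject, predicate, object) triples; a query is a
# pattern in which "?"-prefixed strings are variables to be bound.
TRIPLES = [
    ("ex:farm1", "ex:raises", "ex:Chicken"),
    ("ex:farm1", "ex:locatedIn", "ex:Iowa"),
    ("ex:farm2", "ex:raises", "ex:Duck"),
]

def match(pattern, triples=TRIPLES):
    """Return a variable binding for each triple matching the pattern."""
    results = []
    for triple in triples:
        if all(p.startswith("?") or p == t
               for p, t in zip(pattern, triple)):
            results.append({p: t for p, t in zip(pattern, triple)
                            if p.startswith("?")})
    return results
```

A SPARQL query such as `SELECT ?farm WHERE { ?farm ex:raises ex:Chicken }` corresponds to the pattern `("?farm", "ex:raises", "ex:Chicken")`; real engines add joins across multiple patterns, ontology-aware inference, and indexing.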