Cambridge to Host Wireless Sensor Network
IDG News Service (04/06/07) Ames, Ben
By 2011, remote researchers will be able to run experiments using a
network of wireless sensors installed on streetlamps throughout Cambridge,
Mass. The sensors, which will collect information on weather and pollution
at first, will connect to the central servers at Harvard and BBN
Technologies by forming a mesh with other sensors. The servers will then
post the database information on the Internet, where researchers can submit
programs to be run on the nodes. "Think of it like a virus infecting all
the nodes," says Harvard computer science professor Matt Welsh. "Every
node can talk to its neighbor and pass along the data, and eventually you
get your program up and running on all of them." The nodes consist of an
embedded PC, an 802.11a/b/g Wi-Fi interface, and a variety of weather
sensors. They will be placed on streetlamps and get electricity from them
to avoid the common problems of powering sensors. Thanks to the mesh
strategy, each node can download software or upload data to a distant
server hub using a radio with a range of one kilometer. A five-node
prototype is currently running in a Harvard lab. Once the system goes
live, Microsoft plans to overlay the data on maps, allowing researchers to
track pollution with higher resolution and a longer window of time for
monitoring than previously available. The network could eventually include
nodes attached to vehicles.
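Welsh's "virus" analogy corresponds to a flood-style dissemination pass: each node hands the program to its neighbors until every reachable node has it. A minimal Python sketch, in which the node IDs and topology are hypothetical (this is not the actual Harvard/BBN protocol):

```python
from collections import deque

def disseminate(neighbors, seed):
    """Flood a program hop by hop from `seed` until every reachable
    node in the mesh has received it."""
    received = {seed}
    frontier = deque([seed])
    while frontier:
        node = frontier.popleft()
        for peer in neighbors[node]:   # each node talks to its neighbors
            if peer not in received:   # pass the program along once
                received.add(peer)
                frontier.append(peer)
    return received

# Hypothetical five-node chain, loosely echoing the lab prototype.
mesh = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(sorted(disseminate(mesh, 0)))  # the program reaches every node
```

In a real deployment each hop would be a radio transfer with retries and version checks; the sketch only captures the neighbor-to-neighbor spread.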
Professor Lectures at U. Massachusetts on Electronic
Voting
Massachusetts Daily Collegian (04/06/07) Trull, Andrew
MIT electrical engineering and computer science professor Ron Rivest last
week spoke to a University of Massachusetts audience about the challenges
facing electronic voting. Voting system security can be split into two
sectors, according to Rivest, ensuring that votes are "cast as intended"
and that they are "counted as cast." The biggest challenge to the creation
of a secure system is voter anonymity, since "privacy is the most important
part of any voting system," but developing a system that runs on "hundreds
of thousands of lines of programming" and ensuring that all votes are
counted accurately without keeping any record of how specific people voted
is extremely daunting, Rivest said. Although several solutions were
brought up, Rivest suggested that paper will play a role in the next
election, despite complaints by election officials that "all that paper" is
too cumbersome. He said that voter verified paper audit trails are a way
to make sure that votes are "cast as intended" but cannot ensure that they
are "counted as cast." A system known as "mix nets" offers a way to make
sure that votes are counted as cast and that voters retain their anonymity
throughout the process. Mix nets would shuffle and encrypt the votes as
they pass through several proxy servers. Each voter would receive a copy of
their encrypted vote and be able to consult a public bulletin to check that
their vote was counted correctly without revealing who they voted for.
Rivest also stressed the importance of human poll workers and the need for
national standards. For information about ACM's e-voting activities, visit
http://www.acm.org
SIGGRAPH 2007 Announces Co-Located Technology Events in
San Diego
Business Wire (04/05/07)
The Graphics Hardware Workshop, the Symposium on Computer Animation (SCA),
and Sandbox: A Videogame Symposium once again will co-locate for SIGGRAPH,
which is in San Diego this year. And the International Symposium on
Non-Photorealistic Animation and Rendering (NPAR) and the Emerging Display
Technologies Workshop will take place during the International Conference &
Exhibition on Computer Graphics and Interactive Techniques for the first
time in 2007. ACM SIGGRAPH is the sponsor of SIGGRAPH 2007, which is
scheduled for Aug. 5-9, at the San Diego Convention Center in San Diego,
Calif. The five events will take place inside the site or nearby during
the gathering, which is expected to draw some 25,000 computer graphics and
interactive technology professionals from around the world. "This
partnership only enhances the overall SIGGRAPH experience in San Diego,"
says Joe Marks, SIGGRAPH 2007 Conference Chair from Walt Disney Animation
Studios. "Attendees will have a chance to delve deeper on select topics
through co-located events, while also getting broad exposure to the latest
developments in computer graphics and interactive techniques at the main
conference." For more information about SIGGRAPH, or to register, visit
http://www.siggraph.org/s2007/
Is the Internet Ready to Break?
CIO Insight (04/07) Cone, Edward
Predictions that demand for bandwidth will cause the Internet's collapse
in 2007 are meeting with disagreement, although it is widely accepted that
the Internet will need to evolve in order to survive. A report from the
Technology, Media & Telecommunications (TMT) group at Deloitte Touche
Tohmatsu cites the doubling of traffic at the Amsterdam exchange (AMS-IX)
since last February as proof that the increasing popularity of Web video
and broadband access could overwhelm the backbone of the Internet.
Although the core of the Net is in good shape, and can easily handle the
large increases in traffic, trouble could arise in the "last mile" of
fiber, between hubs and homes. "There's nothing all that alarming going
on," says TeleGeography researcher Eric Schoonover. "This whole idea that
the increase in traffic is going to break something or kill something, or
the providers won't keep up, seems foolhardy to me ... The network operators
know how to scale." The price of bandwidth is expected to increase for the
next several years, but the Internet displayed its resiliency when the
December 2006 tsunami that knocked out seven of the eight underwater
Internet cables carrying traffic to southern Asia failed to shut down
traffic completely, thanks to packets being rerouted through landlines and
satellites. Recent innovations leave existing infrastructure running at
rates as low as 1 percent of its capacity, and the ceiling will only get
higher as technologies such as 10-Gbps and even 100-Gbps equipment are
deployed.
Another possible innovation is an Internet that can identify packets and
decide which should be given a higher priority, since not all packets
require the same speed.
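Packet prioritization of that kind is, at bottom, a priority queue: latency-sensitive traffic is dequeued before bulk transfers. A minimal sketch, with traffic classes and weights that are illustrative assumptions rather than any deployed scheme:

```python
import heapq
import itertools

# Lower number = higher priority; classes are assumptions for illustration.
PRIORITY = {"voip": 0, "video": 1, "web": 2, "bulk": 3}
counter = itertools.count()  # FIFO tie-break within a class
queue = []

def enqueue(kind, payload):
    heapq.heappush(queue, (PRIORITY[kind], next(counter), payload))

def dequeue():
    return heapq.heappop(queue)[2]

enqueue("bulk", "iso-chunk")
enqueue("voip", "audio-frame")
enqueue("web", "http-get")
print(dequeue())  # the voice frame leaves first despite arriving second
```

A production router would use weighted scheduling so low classes are never starved; the sketch shows only the ordering principle.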
A Giant Leap Forward in Computing? Maybe Not
New York Times (04/08/07) P. BU4; Pontin, Jason
In February, D-Wave announced that it had built a "practical quantum
computer," but this claim has been met with much doubt. The February
demonstration, in which Orion, the "world's first commercial quantum
computer," figured out a seating chart for a wedding party, solved a Sudoku
puzzle, and searched for a protein in a database, showed that the computer
is about 100 times slower than today's PCs and even brought about doubts
that Orion was actually doing the work. D-Wave founder Geordie Rose said
the demonstration was meant "to run commercially relevant applications
on a quantum computer, which has never even been done before--not even
close." The skepticism from computing experts was based on the lack of
information provided by D-Wave about the inner working of its machine, as
well as the general consensus that quantum computing will not be possible
for a number of years, if not decades. Orion is nothing but hype and is no
better at solving problems than a "roast-beef sandwich," wrote University
of Waterloo theoretical computer scientist Scott Aaronson on his blog.
Other doubters have made the argument that D-Wave's machine is worthless
since the point of quantum computing is to achieve a tremendous increase in
speed over today's machines. "D-Wave is misleading the public by calling
their device 'a practical quantum computer,'" said University of
California, Berkeley computer science professor Umesh Vazirani. D-Wave
claims that Orion's niobium chip foundation could eventually power a much
faster machine, but experts mostly disagree. Despite the lack of
acceptance in the academic community, D-Wave plans to contract Orion out to
businesses as a Web service. The company has announced that third-party
researchers will have the opportunity to experiment with Orion later this
year.
Ease-of-Use Critics: Designers or 'Feature
Creeps'?
EE Times (04/05/07) Benjamin, David
A recent "ease of use" forum held in Palo Alto, Calif., brought together
experts to debate the merits of engineering for simplicity versus the
potential to maximize features. "Every possibility you add to an interface
increases your likelihood of failure" in the commercial world, said
Stanford University's Persuasive Technology Lab founder B.J. Fogg. Others
agreed that, in interaction design, making a device that everyone can use
is far more difficult than making something that only the highly skilled
can use. Ease of use has become a "grave issue" in engineering, said EE
Times editor Junko Yoshida, citing an "SOS from consumers." Bill
Moggridge, author of "Designing Interactions," a book focusing on
complexity and ease of use, said the best way to avoid ease-of-use issues
is to build a prototype and test it out on everyday users. In support of
complexity, Moggridge called attention to the status workers can achieve
through their mastery of intricate devices. "We feel proud that we've
gotten past a barrier of difficulty," he said. The forum then turned its
attention to Japan's I-Mode mobile phone platform, including a video of a
consumer spending 30 minutes using the phone's electronic payment system to
buy a can of tea from a vending machine. Yoshida said
there is a growing trend toward usability, but Fogg replied that the
tendency for designers to believe that "more is more" rarely leads to the
"best user experience [being] the initial winner." Moggridge explained how
Web 2.0 concepts could allow users to "go, converse, and manipulate,"
without the restrictions of hardware or software that is designed for a
specific device.
Getting in Touch: Virtual Maps for the Blind
Scientific American (04/07) Ross, Rachel
Researchers at the Aristotle University of Thessaloniki in Greece have
developed a way to transform video into virtual haptic maps that can
provide the blind with a better grasp of cities and building layouts.
Three-dimensional models are sometimes used as maps for the blind, but can
only accommodate one person at a time, and paper maps with ridges cannot
provide comprehensive information. With the Greek system, however, digital
dioramas are available to people around the world and are accompanied by
audio. After video of an architectural model is shot, each frame is
processed using software designed by lead researcher Konstantinos Moustakas
and his team. As the camera angle changes throughout the video, the
software analyzes each structure to determine its shape and orientation.
The result is a 3D grid of force
fields representing each structure. "Each point on the grid has an
associated force field," says Moustakas. The human interface consists of
the CyberGrasp glove, which pulls on the user's fingers, and the Phantom
Desktop, a wand that applies pressure to the user's hand in one direction.
Moustakas has also designed a system that converts paper maps into 3D maps;
users would run their finger or a wand down grooves representing streets as
the names are said aloud to them. In tests of 19 subjects, the dioramas
were found to be preferable for small groups of structures, and the 3D
street maps were preferable for large areas.
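The "3D grid of force fields" can be illustrated with a toy calculation: each occupied voxel pushes back on a probe point, so the user's hand is repelled as it approaches a structure. The inverse-square falloff and cutoff radius below are assumptions for illustration, not Moustakas's actual model:

```python
# Each occupied voxel exerts a repulsive force on a probe point.
def force_at(point, occupied, strength=1.0):
    fx = fy = fz = 0.0
    px, py, pz = point
    for (ox, oy, oz) in occupied:
        dx, dy, dz = px - ox, py - oy, pz - oz
        d2 = dx * dx + dy * dy + dz * dz
        if 0 < d2 <= 4.0:        # only nearby voxels exert force
            f = strength / d2    # inverse-square falloff (an assumption)
            fx += f * dx
            fy += f * dy
            fz += f * dz
    return (fx, fy, fz)

building = {(0, 0, 0), (0, 1, 0)}  # hypothetical two-voxel structure
print(force_at((1.0, 0.0, 0.0), building))  # net push away from the wall
```

A haptic device like the CyberGrasp or Phantom would render the returned vector as actual resistance on the user's hand.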
FSF Releases New Draft of LGPLv3
Linux-Watch (04/03/07) Vaughan-Nichols, Steven J.
The Free Software Foundation (FSF) has unveiled a new draft of the LGPLv3,
and hopes that the free software community will continue to provide
feedback on how to improve the Lesser General Public License before its
final release. "The license is currently written as a set of additional
permissions on top of GPLv3, a number of terms have been updated to reflect
changes in the GPLv3 draft released last week," explains Brett Smith,
licensing compliance engineer for FSF. LGPL primarily differs from GPL in
that software or a library can be "linked to" or "used by" a GPLed or a
proprietary program, and distributed to users without making the LGPLed
part of the code available to other developers. And a developer would not
be required to share the code that is not covered by the LGPL. Some other
small adjustments have been made for LGPLv3, says Smith, who adds that
comments received from previous discussion drafts are still being
evaluated.
Researchers Find GPS Significantly Impacted by Solar
Radio Burst
Government Technology (04/04/07)
Cornell University researchers have confirmed that solar radio bursts can
disrupt GPS and other communications that use radio waves. Solar radio
bursts start with a flare sending high-energy electrons into the solar
upper atmosphere; radio waves are then produced and propagate to the Earth
and cover a broad range of frequencies, across which they act as noise. "Space
weather cuts across many different federal agencies and is a particularly
fruitful area in which to develop sustained partnerships between government
agencies and academia," said Brig. Gen. David L. Johnson, U.S. Air Force
(Ret.). An unprecedented December 2006 solar radio burst caused many GPS
receivers to stop tracking; before this, the possibility of such an event
had only been extrapolated from data. "Now we are concerned more severe
consequences will occur during the next solar maximum," explains Cornell
electrical and computing engineering professor Paul Kintner. A New Jersey
Institute of Technology solar radio telescope showed that, at its peak, the
burst generated 20,000 times more radio emission than the rest of the sun.
The effect of this event was seen on the Global GPS Network and on the
civil air navigation system, the first time such a burst was detected by
the latter. Haystack Observatory's Anthea Coster said the December solar
radio burst showed that solar bursts have worldwide and instantaneous
effects on technology, and that since the burst was unexpected in both size and
timing, the frequency of such events cannot be predicted.
Microsoft Funds New Mapping Research
IDG News Service (04/07/07) Gohring, Nancy
Researchers at Harvard University, the University of Illinois at
Urbana-Champaign, and the Georgia Institute of Technology are developing new mapping
applications thanks in part to $1.1 million in grants from Microsoft. The
academic research programs will use the company's Virtual Earth, a mapping
product, and SensorMap, a tool for publishing and sharing sensor data and
browsing live data on an interface, to develop the applications.
Researchers at Illinois are building a mobile sensor application that may
be used to monitor the sounds, heartbeats, and location of birds, as well
as a social application that will allow someone wearing a jacket of sensors
to be monitored as an avatar in a virtual world. Meanwhile, the City
Capture project at Georgia Tech plans to integrate multiple cameras with
Virtual Earth as researchers attempt to document the transformation of a
city over time.
International Conference at Duke to Examine Social,
Educational Dimensions of Technology
Duke University News & Communications (04/03/07)
Digital technology and its impact on modern life will be the focus of the
"Electronic Techtonics: Thinking at the Interface" conference at Duke
University this month. Organized by the Humanities, Arts, Science, and
Technology Advanced Collaboratory (HASTAC), the April 19-21 event will
bring together specialists in computer science, art, education, law, and
other fields from all over the world. "It is a rare opportunity for
scientists who helped develop the Internet to exchange ideas with
adventurous and socially concerned educators, policymakers, artists,
theorists and practitioners," says HASTAC co-founder and Duke professor
Cathy N. Davidson. "Our mission is to inspire the future of digital
technologies." Former Xerox chief scientist John Seely Brown will give the
keynote address in a speech entitled "The Social Life of Learning in the
Net Age." Another speaker will be Dan Connolly, a research scientist at
the MIT Computer Science and Artificial Intelligence Laboratory, who worked
with Tim Berners-Lee on the development of the World Wide Web. The
semantic Web and the future of the Web will be among the issues discussed
in sessions, and there will also be demonstrations of new technology and
opportunities to learn about open source and proprietary software and
hardware.
DARPA Seeks Shape-Shifting War Robots
InformationWeek (04/05/07) Jones, K.C.
Developers have until July 2, 2007, to submit full proposals to the
Defense Advanced Research Projects Agency (DARPA) for developing chemical
robots that are able to change shapes and squeeze into small openings in
buildings, walls, or under doors as part of military operations in war
zones. DARPA is seeking soft, flexible, and mobile robots that are still
large enough to carry an "operationally meaningful payload." After the
robots have gained access to denied or hostile areas, they will need to
recover their original size, shape, and functionality. The agency refers to the
chemical robots as ChemBots, which will also need to sense small openings,
and withstand different temperatures, humidity levels, and precipitation.
The robots can be self-powered, self-consuming, or energy-scavenging;
autonomous or user-controlled; but not limited by controllers or power
sources. "Nature provides many examples of ChemBot functionality" such as
mice, octopi, and insects, DARPA said in its request for proposals.
Continuum: Designing Timelines for Hierarchies,
Relationships and Scale
University of Southampton (ECS) (03/31/07) Andre, Paul; Wilson, Max L.;
Russell, Alistair
Temporal visualizations are limited in that they only represent discrete
data points or single data types as they occur along one timeline, and the
authors present Continuum to address this limitation and represent faceted
properties of temporal events. Continuum is a timeline visualization tool
that facilitates the representation and exploration of hierarchical
relationships in temporal data, the expression of relationships between
events across periods, and user-focused control over the level of detail of
any relevant facet in order to allow the system user to ascertain a focus
point regardless of the degree of zoom over the temporal domain. The
important data is always presented without redundancy at all detail levels.
Through the use of histograms and detail controllers, Continuum can
visually represent and interrogate, at the relevant level of detail,
substantially larger data sets than currently available services can
handle. With Continuum, temporal information is visualized in either a
dynamic hierarchy, across concept relationships/associations, or in
large-scale overviews with relevant details. Continuum users commented
that the tool allows the richer, more innovative exploration of data, and
some were especially drawn to the control over the data facets they could
wield via Continuum.
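The hierarchy-plus-detail-level idea can be sketched as nested events filtered by a zoom depth: zoomed out, only the top of the hierarchy is drawn; zooming in reveals deeper levels. The event data below is hypothetical, and the code illustrates only the concept, not Continuum's implementation:

```python
# A temporal event: (name, start_year, end_year, child events).
events = ("Baroque era", 1600, 1750, [
    ("Bach's life", 1685, 1750, [
        ("Brandenburg Concertos", 1721, 1721, []),
    ]),
])

def visible(node, depth, max_depth):
    """Return (depth, name, start, end) rows down to the requested
    level of detail, as a zoom-dependent timeline renderer might."""
    name, start, end, children = node
    rows = [(depth, name, start, end)]
    if depth < max_depth:
        for child in children:
            rows += visible(child, depth + 1, max_depth)
    return rows

for d, name, s, e in visible(events, 0, 1):  # zoomed out: two levels only
    print("  " * d + f"{name}: {s}-{e}")
```

Raising `max_depth` exposes progressively finer events without ever duplicating the coarser ones, which mirrors the paper's "without redundancy at all detail levels" claim.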
Free Software Foundation's Richard Stallman: 'Live
Cheaply'
Linux Insider (04/04/07) Blangger, Tim
In a talk given at Lehigh University, Free Software Foundation founder
Richard Stallman explained his beliefs on the negatives of proprietary
software and the importance of free software. Stallman spoke about the
goal of creating an alternative to the computing landscape run by
corporations and their software, which is full of hidden features and
prohibitive rules. However, he did show support for the free market,
saying that free software is better for such an economy than the limiting
aspects of proprietary software. Choosing software or an operating system
for convenience, reliability, and ease of use is "a fundamental mistake
because they don't allow us to see what is important," according to
Stallman. Proprietary software leaves the computing community divided
"because we can't make copies to help our neighbors and helpless because we
can't see the source code," he added. Stallman went on to express his
disapproval of combining free software with proprietary, because this
confuses the issue and sends a message "that it's OK to use proprietary
software." He named Ututo, BLAG, and gNewSense as popular systems that are
free of proprietary software. Cell phones also came under attack, for
their ability to be used as "a tracking device, even when it is turned
off." In summing up a broader philosophy, Stallman suggested, "Don't buy a
house, a car, or have children. The problem is they're expensive and you
have to spend all your time making money to pay for them."
Putting Some Circuit Breakers Into DNS to Protect the
Net
CircleID (04/03/07) Auerbach, Karl
Smart criminals on the Internet are using viruses to take over computers
and are then hiding the location of these computers and preventing the PCs
from being shut down by rapidly changing the address data that domain names
represent, moving the domain's control point from minute to minute.
Changes to the address data normally take several months or longer to
occur. But the criminals are quickly changing the DNS records in the
authoritative servers for a given domain and then combining this technique
with low time-to-live values on DNS information, which causes cached data
to be eliminated quickly. In this manner, the criminals are protecting
themselves by eliminating a potential audit trail. The criminals are using
the same tactic on the name servers used for the domain, making it more
difficult to come to grips with the attack. One potential solution to this
problem offered by Karl Auerbach is an Internet "circuit breaker" that
calls for domain names, such as example.com, to be removed from their
zones, such as .com, during an emergency situation. This circuit breaker
would prevent domains from being resolved and would prevent the quick
shifting of A and NS records.
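The role of low time-to-live values can be seen in a toy resolver cache: a record survives in the cache only for its TTL, so minute-level TTLs leave almost no trail for investigators to follow. A minimal Python sketch, simplified far beyond real resolver behavior:

```python
class ResolverCache:
    """Toy DNS cache that honors a record's time-to-live."""

    def __init__(self):
        self.entries = {}  # domain -> (address, expiry_time)

    def put(self, domain, address, ttl, now):
        self.entries[domain] = (address, now + ttl)

    def get(self, domain, now):
        rec = self.entries.get(domain)
        if rec and now < rec[1]:
            return rec[0]
        self.entries.pop(domain, None)  # expired: the evidence is gone
        return None

cache = ResolverCache()
cache.put("example.com", "10.0.0.1", ttl=60, now=0)
print(cache.get("example.com", now=30))   # still cached
print(cache.get("example.com", now=120))  # None: record already purged
```

With a 60-second TTL, an attacker can repoint the authoritative records every minute and know that stale copies evaporate almost immediately, which is exactly the audit-trail erasure Auerbach describes.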
Easy on the Eyes
Economist (04/04/07)
An MIT researcher has developed a computer system that mimics activities
of the human brain to recognize broad categories of objects in images, such
as animals. When shown an image for only an instant, people often think
they would only be guessing if they answered whether or not they saw an
animal in the picture, but they usually answer correctly. The functions
that allow the brain to immediately carry out visual processes work
hierarchically, meaning that signals from the retina go to the first area
of processing, which sends a signal to the next area, and so on until the
final area is able to identify general categories of objects. Thomas
Serre's project aimed to reverse-engineer the brain's ventral visual
pathway, where the aforementioned processes occur. He first re-created the
period in infancy when nerve cells become tuned to visual features according
to how often those features are seen, by showing the computer a variety of
photographs. Once these sensitivities were established, he
showed more images, now indicating whether each contained an animal or not.
The system he created had processing units in different areas that were
sensitive to the same set of features as the nerve cells in analogous areas
of the brain. In tests, the computer outperformed human subjects in
answering whether or not photographs contained animals, with an accuracy of
82 percent compared to the humans' accuracy of 80 percent. The computer
and the humans often answered incorrectly for the same image. Aside from
more practical applications such as finding child-pornography sites, the
technology could provide insights into how the human brain operates, as
Serre has been able to create artificial "lesions" and observe their
effect.
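The hierarchical, feed-forward flow can be caricatured as repeated pooling stages, each responding to broader patterns than the last. The numbers and two-element windows below are arbitrary; this is an illustration of the layered idea, not Serre's actual model:

```python
# Each stage pools over the previous one, so later stages respond to
# ever-broader patterns, loosely echoing the ventral visual pathway.
def stage(signal, window):
    """Max-pool adjacent responses, as a cortical-like layer might."""
    return [max(signal[i:i + window]) for i in range(0, len(signal), window)]

retina = [0.1, 0.9, 0.2, 0.1, 0.8, 0.7, 0.1, 0.2]  # toy edge responses
v1 = stage(retina, 2)         # local features
v4 = stage(v1, 2)             # intermediate shapes
decision = max(stage(v4, 2))  # final category response
print(decision > 0.5)         # "animal present" if any pathway fired strongly
```

The point of the hierarchy is that no single unit sees the whole image; the category judgment emerges from successive local aggregations, which is why the response can be computed in a single fast feed-forward sweep.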
How to Read Signs of Safe Software
Government Computer News (04/02/07) Vol. 26, No. 7, Buxbaum, Peter A.
The development of metrics for software assurance is still in its infancy,
according to speakers at the DHS-DOD Software Assurance Forum in Fairfax,
Va. Tony Sager, chief of the vulnerability analysis and operations group
at the National Security Agency, said being able to pick the 10 best
software security solutions out of a field of thousands requires some sense
of measurement, an ability to determine which solutions are the best.
Microsoft has been developing metrics to reduce the level of
vulnerabilities in its software, including internal measurements for the
assurance of the software it ships, according to senior director of
security engineering strategy Steven Lipner. He said the metrics are aimed
at improving future product versions, and Microsoft wants to assess
products before they are shipped. Microsoft has developed two software
assurance metrics. The first, known as Relative Attack Surface Quotient,
measures such things as default configurations, open ports, permissions,
services, and the number of ActiveX controls available by default, Lipner
explained. The second metric is informally known as the "vulnerability
coverage method," and basically functions like an independent community of
researchers reporting vulnerabilities in new versions of Microsoft products.
Reported vulnerabilities are analyzed by a team at Microsoft, who then
determine if the vulnerability has been removed from updated versions of
the product; if the vulnerability has not, the team decides if it should be
removed.
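A metric in the spirit of the Relative Attack Surface Quotient is essentially a weighted count of exposed features, letting two versions of a product be compared on a single number. The categories and weights below are illustrative assumptions, not Microsoft's actual formula:

```python
# Illustrative weights: riskier exposure classes count for more.
WEIGHTS = {"open_port": 1.0, "default_service": 0.8,
           "weak_permission": 0.9, "activex_control": 0.5}

def attack_surface(profile):
    """Weighted sum over a product's exposed-feature counts."""
    return sum(WEIGHTS[kind] * count for kind, count in profile.items())

old_version = {"open_port": 6, "default_service": 10,
               "weak_permission": 4, "activex_control": 12}
new_version = {"open_port": 2, "default_service": 3,
               "weak_permission": 1, "activex_control": 4}
print(attack_surface(new_version) < attack_surface(old_version))  # True
```

Such a score says nothing about any single vulnerability; its value is in tracking whether a product's exposure is shrinking from one release to the next.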