Now Starring...Digital People
New York Times (07/31/06) P. C1; Markoff, John
Former Apple engineer Steve Perlman is finalizing a futuristic camera
system that could revolutionize Hollywood cinematography with
photorealistic three-dimensional effects. Contour, to be unveiled today at
the SIGGRAPH conference in Boston, will be able to capture the facial
movements of actors in unprecedented detail. "Instead of grabbing points
on a face, you will be able to capture the entire skin," said director
David Fincher, who plans to use Contour when he begins filming a movie next
year whose main character ages in reverse. "You're going to get all of the
enormous detail and quirks of human expression that you can't plan for."
The technology could lead to what observers in Hollywood are referring to
as "navigable entertainment," a new form of digital video where viewers
could control the point of view. For the system to work, actors cover
themselves in a phosphorescent powder that does not show up under normal
light. Two synchronized cameras then record the actors' movements in a
light-sealed room. Using fluorescent lighting that flashes at intervals too
rapid for humans to perceive, cameras capture the light and transmit images
to a group of computers that reproduce the three-dimensional shapes of the
glowing areas. Sophisticated software tools can then manipulate the images
and edit them into broader digital scenes. Until now, the creation of
realistic digital actors has required significant computing resources and
has been prohibitively expensive. "The holy grail of digital effects has
been to create a photorealistic human being," said Ed Ulbrich of Digital
Domain. Contour builds on existing motion-capture technology by increasing
the image resolution from a few hundred points on a human face to some
200,000 pixels. Perlman and a small team of engineers developed Contour in
his Palo Alto, Calif., garage using a small graphics supercomputer.
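The article does not describe Contour's algorithms, which are
proprietary. As a rough illustration of the underlying geometry, the
sketch below shows stereo triangulation, a standard way to recover a 3D
point from two synchronized, calibrated views; the projection matrices
and pixel coordinates are assumed inputs, not details from Perlman's
system.

    # Minimal stereo-triangulation sketch (not Contour's actual pipeline).
    # P1 and P2 are the two cameras' known 3x4 projection matrices; x1 and
    # x2 are pixel coordinates of the same glowing point in each view.
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # Linear (DLT) triangulation: each view contributes two rows of a
        # homogeneous system whose least-squares solution is the 3D point.
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]  # convert from homogeneous coordinates

Repeated over the roughly 200,000 captured points per frame, this kind
of reconstruction yields the dense skin surface Fincher describes.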
Tech Camp Targets Girls
Poughkeepsie Journal (NY) (07/27/06) Wolf, Craig
Now in its eighth year, IBM's EXITE (Exploring Interests in Technology and
Engineering) program brings middle-school girls to a week-long camp that
seeks to boost female participation in technical fields. "Over the last
decade, fewer women have pursued science and engineering at the university
level, despite the fact that during the next decade, one in 10 jobs will
be in the technology field," said IBM's Mary Murray. There are 1,700 girls
participating in EXITE camps around the world, and more than 5,000 have
attended since the program began. The hope is that some will go on to
careers in technology. Murray's assertion is disputed by some in the
field, including WashTech President Marcus Courtney, who cites a University
of Chicago study that found that more technical jobs are being lost than
created. "You can't criticize a company for wanting to promote a strong
foundation in math and science," Courtney said. "But the company
continually overstates the demand for students who have computer science
and engineering degrees, and the actual number of jobs being created in the
industry."
The Security Risk in Web 2.0
CNet (07/28/06) Evers, Joris
The hype surrounding the development of Web 2.0 has been so overwhelming
that the issue of security is being forgotten, say experts. Web 2.0 allows
Web sites to expand their features by becoming more interactive, letting
users write captions under online photos and offering experiences similar
to those of desktop applications. Amid the buzz, however, few are
thinking about safety. "We're continuing to make
the same mistakes by putting security last," says Billy Hoffman at SPI
Dynamics. "People are buying into this hype and throwing together ideas
for Web applications, but they are not thinking about security, and they
are not realizing how badly they are exposing their users." Advanced Web
sites use Asynchronous JavaScript and XML (AJAX), which helps make Web
sites more interactive, but may also be exploited by hackers. AJAX may
contain cross-site scripting flaws, which an attacker can use to hack user
accounts, download malicious code onto PCs, and spread phishing scams. Not
everyone agrees that Web developers neglect security. Ryan Asleson,
co-author of "Foundations of Ajax," says security was not as big a
problem 10 years ago as it is today, a view Google and AOL share. Yahoo
insists it does the best it can to protect its users' information. Asleson
says developers can prevent security issues through training and best
practices.
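The cross-site scripting flaws mentioned above arise when an application
echoes user input into a page unescaped. A minimal sketch, in Python for
concreteness; the function names and payload are illustrative, not drawn
from any product named in the article:

    import html

    # A user-supplied photo caption containing markup. Rendered verbatim,
    # the script would execute in every visitor's browser.
    payload = "<script>steal(document.cookie)</script>"  # steal() is hypothetical

    def render_caption_unsafe(caption):
        # Vulnerable: input is interpolated directly into the page.
        return "<div class='caption'>%s</div>" % caption

    def render_caption_safe(caption):
        # Escaping &, <, >, and quotes renders injected tags inert.
        return "<div class='caption'>%s</div>" % html.escape(caption)

    print(render_caption_unsafe(payload))  # script tag survives intact
    print(render_caption_safe(payload))    # &lt;script&gt;... shown as text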
Team Combing Internet to Track Terrorism
Arizona Republic (07/28/06) Carroll, Susan
Researchers at the University of Arizona have amassed the largest online
repository of intelligence on terrorist and extremist organizations in the
world. They hope that the Dark Web project will improve intelligence
agents' ability to track terror suspects on the Internet, which has long
been acknowledged as a shortcoming of the intelligence community. The Web
has become the primary tool for communication and recruiting among many
extremist and terrorist groups. Using supercomputers, the Arizona
researchers developed a virtual library that contains millions of Web pages
and intercepts chatter on terrorist Web sites. "Even the people we talk to
in the federal agencies are hampered by the amount of information that's
being collected. They don't know how to analyze it," said Hsinchun Chen,
the director of UA's Artificial Intelligence Lab, which launched the Dark
Web project three years ago. "It's a new virtual battleground." Dark Web
uses programs to find connections among different groups using
social-networking analysis, and also detects similarities in writing styles
and performs Web-matrix analyses to gauge the sophistication of the sites.
To avoid detection, terrorist groups often hold on to Internet addresses
for only a short time. Some of the sites contain detailed
instructional information, such as a guide on how to carry out a bombing or
a beheading. "The Web is the al-Qaida university. They season you, and
they recruit you, and they give you all the materials to train you," Chen
said. "It's a very significant international phenomenon."
How Can I Tell If I'll Be Any Good as a
Programmer?
Guardian Unlimited (UK) (07/27/06) Arthur, Charles
Saeed Dehnadi and Richard Bornat from Middlesex University's school of
computing have created a test that can help determine whether a student is
likely to become a good programmer. The test, which consists of a
three-line code example and multiple-choice questions, "predicts ability to
program with very high accuracy before the subjects have ever seen a
program or a programming language," according to a draft paper written by
Dehnadi and Bornat. Programming teachers often find out through exams that
they are teaching two groups of students with different abilities. "It is
as if there are two populations: Those who can, and those who can not,
each with its own independent bell curve," they say. Reports indicate that
30 percent to 60 percent of a group of incoming computer science students
will fail the first programming course. The draft paper argues that
rather than simply recruiting more programming students, the focus should
be on identifying people who are likely to succeed as students
and as professionals. The British Computer Society reports that
applications for computer-related degrees are down 50 percent, but the
dropouts could very well be people who are unlikely to be good
programmers.
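A hedged reconstruction of the style of question in the test (the paper
presents its items in Java-like syntax; Python is used below for
consistency with this digest's other sketches). Subjects who have never
programmed are asked what values the variables hold afterward:

    a = 10
    b = 20
    a = b
    # Multiple-choice answers correspond to different mental models, e.g.:
    #   a = 20, b = 20   (value is copied)
    #   a = 30, b = 20   (values are added)
    #   a = 20, b = 10   (values are swapped)
    print(a, b)  # 20 20 under actual assignment semantics

Dehnadi and Bornat's claim is that applying any one model consistently,
right or wrong, is what predicts success in the first programming course.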
CERT Seeks Secure Coding Input
Dark Reading (07/25/06) Higgins, Kelly Jackson
Next month, the Computer Emergency Response Team (CERT) will give
developers their first chance to look at the work behind its Secure Coding
Initiative (SCI). When completed, SCI will produce a set of rules for
developers to create safer and more reliable software, according to CERT's
Robert Seacord. "We're focusing on common programming errors that
developers can make. These are the sort of errors you put into code and
can lead to exploitable vulnerabilities," Seacord said. Buffer overflow is
one of the most common results of programmer error, he added. Though there
have been many attempts to improve the security of software code, none has
been articulated as a universal set of standards, Seacord said. By CERT's
latest figures, there were 3,997 vulnerabilities reported in the second
quarter of this year alone, while there were 5,990 for the entirety of
2005. In an effort to ensure that its standards will be adopted by the
community, CERT is partnering with developers and actively soliciting
input.
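CERT's rules target languages such as C and C++, where an unchecked copy
into a fixed-size buffer silently overwrites neighboring memory. The
sketch below mimics that class of error from Python via ctypes, purely
as illustration; the helper names are invented and do not come from the
SCI rules:

    import ctypes

    buf = ctypes.create_string_buffer(8)  # fixed 8-byte buffer

    def unchecked_copy(dst, src):
        # Analogous to an unchecked C memcpy: if src is larger than dst,
        # adjacent memory is silently corrupted.
        ctypes.memmove(dst, src, len(src))

    def checked_copy(dst, src):
        # The secure-coding discipline: validate sizes before copying.
        if len(src) > ctypes.sizeof(dst):
            raise ValueError("source exceeds destination buffer")
        ctypes.memmove(dst, src, len(src))

    checked_copy(buf, b"OK")          # fine
    # unchecked_copy(buf, b"A" * 64)  # would overrun: the exploitable case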
Navigation Guides Robotic Future
Research Australia (07/26/06)
Working under a $3.3 million grant, researchers at the University of
Queensland will launch a program to develop a new breed of robots that will
be able to acquire knowledge about their environment based on research
about animal navigation skills. They will study the way that bees,
rodents, and humans navigate in an attempt to learn how the hippocampus
functions. "One thing that makes us special as humans is that we might be
using this part of the brain not just to map physical space, which we do
very effectively, but also to map the space of ideas," said Janet Wiles, a
cognitive scientist at Queensland and the project leader. The researchers
could then use computer models to convert the results into maps of ideas.
Those same models could be used to give robots the ability to navigate.
"The study will look at how information is transmitted, received,
processed, and understood in biological and artificial systems," she
said.
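Hippocampal spatial coding of the kind the project studies is often
modeled with "place cells," neurons that fire most strongly near a
preferred location. A toy sketch of that idea, with all parameters
invented (the article does not describe the Queensland models):

    import numpy as np

    rng = np.random.default_rng(0)
    centers = rng.uniform(0, 10, size=(50, 2))  # each cell's preferred spot
    width = 1.0                                  # tuning-curve width

    def place_cell_activity(pos):
        # Gaussian tuning: activity falls off with distance from center.
        d2 = ((centers - pos) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * width ** 2))

    def decode_position(activity):
        # Population-vector readout: activity-weighted mean of centers.
        return (activity[:, None] * centers).sum(axis=0) / activity.sum()

    print(decode_position(place_cell_activity(np.array([3.0, 7.0]))))
    # prints approximately [3. 7.]: the population encodes the location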
Reducing the Dimensionality of Data With Neural
Networks
Science (07/28/06) Vol. 313, No. 5786, P. 504; Hinton, G.E.;
Salakhutdinov, R.R.
Teaching a multilayer neural network with a small central layer to rebuild
high-dimensional input vectors can facilitate the conversion of
high-dimensional data to low-dimensional codes, and the reciprocal decoding
of code into data. The weights in such "autoencoder networks" can be
refined via gradient descent, provided the initial weights are close to a
good solution. G.E. Hinton and R.R. Salakhutdinov detail how to initialize
the weights in such a way as to enable deep autoencoder networks to learn
low-dimensional codes that are a much more effective tool for data
dimensionality reduction than principal components analysis. They outline
a neural network composed of a stack of restricted Boltzmann machines
(RBMs), each with a single layer of feature detectors. The
learned feature activations of one RBM are employed as the "data" for
training the next machine in the stack. Following pretraining, the RBMs
are "unrolled" and create a deep autoencoder. The autoencoder is then
refined via backpropagation of error derivatives. "Unlike nonparametric
methods, autoencoders give mappings in both directions between the data and
code spaces, and they can be applied to very large data sets because both
the pretraining and the fine-tuning scale linearly in time and space with
the number of training cases," the authors conclude.
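A minimal numpy sketch of the pretraining step described above: one RBM
trained by a single step of contrastive divergence (CD-1). Layer sizes,
the learning rate, and the random batch are illustrative; the paper
stacks several such machines, unrolls them, and fine-tunes the whole
autoencoder with backpropagation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden, lr = 784, 256, 0.1
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_visible)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_weight_update(v0):
        # Up-pass: hidden probabilities, then a stochastic binary sample.
        h0 = sigmoid(v0 @ W + b_h)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        # Down-and-up: one step of the Gibbs chain (the "reconstruction").
        v1 = sigmoid(h_sample @ W.T + b_v)
        h1 = sigmoid(v1 @ W + b_h)
        # CD-1 gradient: data correlations minus reconstruction correlations.
        return lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

    batch = rng.random((32, n_visible))  # stand-in for real training vectors
    W += cd1_weight_update(batch)        # biases are updated analogously
    # After training, sigmoid(batch @ W + b_h) becomes the "data" for the
    # next RBM in the stack.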
Divided By a Common Language
Guardian Unlimited (UK) (07/27/06) McCarthy, Kieren
The Internet is not exactly global in scope for people in countries whose
languages do not use the Latin alphabet, such as many Asian and Middle
Eastern nations. People in these
countries who do not know English or another Latin-based language must at
minimum master the unfamiliar letters that make up .com, .net, .org, and
other domain endings. In addition, they often have to search through
volumes of English-language text to find a translation button, or have to
comb through blizzards of English-language Web sites to find an oasis of
Web sites in their own native language. ICANN, Internet expert Vint Cerf,
and the U.K. Parliament are among those who believe that support for
Internationalized Domain Names (IDNs) in non-Latin scripts is the single most
pressing concern for the future of the Internet as a global system. The
emergence of other Internets in non-Latin languages with their own DNS
roots is the main threat to the continued existence of a single World
Wide Web. China has already created Chinese-language equivalents of
.com, .net, and .org, announced this past February, though it has not
broken off from
the Internet. Israel has an internal system in Hebrew; Iran, Syria, Japan,
and Korea have similar systems; and Microsoft plans to debut its own IDN
system as part of its newest version of Internet Explorer. Right now all
these systems are added onto the Internet itself, but how long this status
quo may last without a global integration of IDNs remains to be seen.
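Mechanically, IDNs reach today's ASCII-only DNS through "Punycode": each
non-Latin label is converted into an ASCII-compatible form carrying the
xn-- prefix, which existing resolvers can handle. Python's built-in idna
codec shows the round trip; the domain below is illustrative:

    # Encode a non-ASCII label into its ASCII-compatible (Punycode) form.
    name = "bücher.example"
    ace = name.encode("idna")
    print(ace)                 # b'xn--bcher-kva.example'
    print(ace.decode("idna"))  # 'bücher.example'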
Interstate Intelligence
Popular Mechanics (07/06) Vol. 183, No. 7, P. 42; Ward, Logan
Experts at the annual World Congress on Intelligent Transportation Systems
(ITS) predict that the highways of the future will more efficiently manage
traffic through wireless technology. This is the goal of the Vehicle
Infrastructure Integration (VII) initiative, which Philip Tarnoff of the
University of Maryland's Center for Advanced Transportation Technology
characterizes as an effort to "manage vehicular traffic similar to the way
telephone companies manage telephone traffic." Intelligent highways, or
"smart roads," are expected to have a network of transponders and Global
Positioning System (GPS) units to detour specially equipped cars around
accidents and other obstructions to unsnarl congestion. The implementation
of VII involves collaboration between car manufacturers, the U.S.
Department of Transportation, and state DOTs; the state and federal DOTs
would install the transponders along the highways while the carmakers would
outfit vehicles with compatible telematic units. Privacy protection is a
hot-button issue associated with VII, since the initiative would rely on
the harvesting of information by both the government and megacorporations.
Privacy rules are being crafted by a subcommittee within VII, with concern
particularly focused on law-enforcement groups demanding data for criminal
investigations. "How do you protect people's privacy and still allow such
a ubiquitous network?" asks VII privacy committee Chairman Paul Barrett.
The project is currently in the test phase, which will continue until the
federal government decides whether to approve it in 2008.
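The article does not specify VII's message formats; the sketch below is
an entirely hypothetical illustration of the idea of a roadside
transponder broadcasting a GPS-tagged hazard that an equipped car checks
against its route. No field names here come from the actual VII
specifications.

    from dataclasses import dataclass

    @dataclass
    class HazardBroadcast:
        lat: float          # GPS latitude of the obstruction
        lon: float          # GPS longitude
        kind: str           # e.g. "accident", "stalled vehicle"
        lanes_blocked: int

    def should_reroute(msg, route_points, radius_deg=0.01):
        # Naive proximity test: reroute if any upcoming waypoint falls
        # within roughly a kilometer (about 0.01 degrees) of the hazard.
        return any(abs(lat - msg.lat) < radius_deg and
                   abs(lon - msg.lon) < radius_deg
                   for lat, lon in route_points)

    msg = HazardBroadcast(38.897, -77.036, "accident", lanes_blocked=2)
    print(should_reroute(msg, [(38.900, -77.040), (38.950, -77.100)]))  # True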
Metcalfe's Law Is Wrong
IEEE Spectrum (07/06) Vol. 43, No. 7, P. 34; Briscoe, Bob; Odlyzko,
Andrew; Tilly, Benjamin
Metcalfe's Law states that the value of a communications network--any
communications network--is proportional to the square of its user
population, but BT Networks Research Center researcher Bob Briscoe,
University of Minnesota professor Andrew Odlyzko, and Rent.com programmer
Benjamin Tilly contend that the law is wrong and that heeding this warning
will help ensure that the new wave of telecommunications growth driven by
broadband does not fall prey to the same mistakes that characterized the
dot-com bubble. The authors maintain that the increase in value that
accompanies the network's growth is a direct reflection of the type of
network involved, with proposed rules ranging from linear to exponential
growth; the alternative the authors favor grows as n log n. The basic
error Metcalfe's Law makes is assuming that
all connections or groups in the network have equal value, when in fact
most large network connections lie idle. Metcalfe's Law cannot account for
isolated communications networks, because by its reckoning all networks
depending on the same technology should have the incentive to integrate or
at least interconnect. Briscoe et al also point out that while Metcalfe's
Law would dictate that two networks of disparate size interconnect, in
reality only companies of approximately equal size are encouraged to
interconnect. "At a time when telecommunications is the key infrastructure
for the global economy, providers need to make fundamental decisions about
whether they will be pure providers of connectivity or make their money by
selling or reselling content, such as television and movies," state the
authors. "It is essential that they value their enterprises
correctly--neither overvaluing the business of providing content nor
overvaluing, as Metcalfe's Law does, the business of providing
connectivity."
Neuron-Microchip Interface
Futurist (08/06) Vol. 40, No. 4, P. 14; Tucker, Patrick
NACHIP project scientists say they have been able to create a "working
interface" by fusing mammalian neurons with a silicon chip, which they say
puts them closer to getting both groups to communicate. "Computers and
brains work electrically," says Peter Fromherz at the Max Planck Institute
for Biochemistry. "However, their charge carriers are different." The
research is helping scientists map how thoughts and emotions in the brain
develop into consciousness and, in turn, memory. Scientists are also
exploring using genes to
communicate with neurons. The research may also help create new,
implantable neural prostheses to fight neurological disorders and could
possibly lead to the development of a computer with a structure similar to
that of a human brain. Those kinds of advancements are still many years
away, according to researchers. "Pharmaceutical companies could use the
chip to test the effect of drugs on neurons, to quickly discover promising
avenues of research," says Stefano Vassanelli at Italy's University of
Padua. The NACHIP project receives funding through the European
Commission's Future and Emerging Technologies initiative.
The Evolution of an Internet Standard
IETF Journal (Quarter 3, 2006) Vol. 2, No. 1; Huston, Geoff
Author Geoff Huston outlines the route of an individual document through
the IETF's Internet Standards Process, stressing that the purpose of
Internet standards is "to address a practical need where a standard
specification can assist both vendors and consumers of a product or a
service to be assured that a standards conformant implementation will
undertake certain functions in a known manner, and that, as appropriate,
implementations of the standard specification from different vendors will
indeed interoperate in intended ways." The first step is to clearly answer
the question of what problem the standard is supposed to solve and why that
problem must be addressed via standardization. Incorporating the work item
within the agenda of an IETF Working Group (WG) is the next step, and while
some items may produce their own WGs, others may simply be added to an
existing WG, usually through the acceptance of an individual submitted
Internet-Draft as a WG document. Step three is to refine the document to
ensure that it is a peer-reviewed, neutral, and unbiased specification that
demonstrates a common comprehension of the space; that it feasibly and
pragmatically addresses the problem while being of practical use to the
Internet; and that it is reflective of a rough agreement of being a
high-quality spec. The fourth step is to evaluate the implementability and
interoperability of multiple independent spec deployments, which serves to
gauge the spec's practicality and usefulness. In the next step, the
document is handed over to the IESG for a detailed iterative review process
that concludes with its publication as an RFC. The final step is for
others to generate implementations of the technology or debate its
application within their specific domain as they employ the spec in their
area of expertise.
The Human Touch
Science & Spirit (07/06) Kuzma, Cindy
Increasingly sophisticated and intelligent robots are throwing the issue
of machine cognition into sharp relief. But consciousness in people and
animals is only just starting to be defined by neuroscientists, notes
Patricia Churchland of the University of California, San Diego. She doubts
that conscious machines can be constructed until humans understand the
mechanics of consciousness. Currently under development are robots that
integrate AI models based on the behavior of humans and animals with
neuroanatomy that enables the machines to gather data from the outside
world and alter tasks according to their perceptions of their surroundings.
Director of MIT's Computer Science and Artificial Intelligence Laboratory
Rodney Brooks has created a number of robots designed to mimic human
learning: One such robot is Obrero, a machine whose hand is equipped with
tactile sensors that allow it to sense when its grip on an object is
slipping and immediately respond to this input by making a grab for it.
Another robot from MIT named Mertz has a camera-outfitted humanoid head
that can record images of the people it encounters for later identification
via face-recognition software. Drawing a distinction between people and
objects is essential to creating interactive robots, according to Brooks;
"If we want to have robots that interact with people in the future, they
should know what things are people and what are not people," he explains.
One school of thought believes conscious robots will remove some of
humanity's uniqueness, but robotics expert Ray Kurzweil disputes this
assumption, especially as humans integrate with robotic technology to
augment their bodies and brains.
Wi-Fi's Five-Pronged Attack Alters the Wireless
Landscape
Electronic Design (07/20/06) Vol. 54, No. 16, P. 36; Frenzel, Louis E.
There are five reasons why Wi-Fi is proving to be so successful, the first
being the continuous development of standards both large and small. The
latest standard, 802.11g, delivers up to 54 Mbps in the 2.4 GHz band, and
it is the standard through which most wireless connectivity is currently
achieved. Ratification of the 802.11n standard, which promises more than
100 Mbps data rates via the use of multiple-input/multiple-output (MIMO)
technology, is pending; in the meantime "draft-n" offerings are available,
and though those early products are not guaranteed to interoperate
fully, the Wi-Fi Alliance's testing and certification process aims to
ensure that certified devices do. Also contributing to Wi-Fi's rapid
progress is the
proliferation of Wi-Fi hot spots (more than 100,000 globally, according to
the Wi-Fi Alliance) and the rollout of VoIP over Wi-Fi through converged
cellular/Wi-Fi handsets, which has facilitated the development of
unlicensed mobile access (UMA) systems. A fourth trend benefiting Wi-Fi is
the deployment of self-healing, self-configuring mesh networks, a scheme
that enables a few wireless nodes to supply blanket coverage for a very
large area. Wi-Fi mesh networks are very popular for municipal deployments
designed to enhance city government services, make local broadband markets
more competitive, and help cities close the gap between the Internet haves
and have-nots. The fifth development adding to the success of Wi-Fi is the
incorporation of the technology into many consumer electronics products.
Wireless video technology is expected to be the killer residential Wi-Fi
application.