Computerized Voter Registration Databases Need a Major
Overhaul
Technology Review (10/16/06) Bourzac, Katherine
University of Utah political scientist Thad Hall says the most pressing
concern facing voters in this November's general election is not voting machines being hacked into, but voters' names being deleted from the registry. Hall is co-author of the recent book "Point, Click, and Vote:
The Future of Internet Voting." The problem, Hall explains, is that there
is no standard format for creating voter registries, and thus comparison of
databases is very unreliable. "You want states to have common databases so
that at least within a state you should be able to know if a person has
moved, and you can keep records within a state accurate." Kentucky was sued
by its own attorney general earlier this year for attempting to delete
8,000 voters from the rolls, with no notice given to these voters. The
attempted removal was the result of a comparison of its database with those of Tennessee and South Carolina, intended to identify voters registered
in multiple states. The process of matching names to identify voters
registered in two states needs to be a dynamic one, explains Hall, so that registration in one state would lead to immediate removal of the voter's name from the rolls of his previous state of residence. Currently, the process
is done in a one-time bulk comparison of databases. The Organization for
the Advancement of Structured Information Standards (OASIS) and IEEE are
currently working on election standards that provide uniformity for
difficult issues such as how addresses are to be broken down. Another way
the voting process lacks standardization is that hardware from one
manufacturer and software from another cannot be used together, severely
limiting the choice officials have in creating the most reliable election
infrastructure. The Help America Vote Act, the federal government's first intervention into elections, does not give the four-year-old Election Assistance Commission the power to enforce federal standards; doing so would require an act of Congress, but Hall foresees increasing pressure for this power to be granted. For information about ACM's e-voting activities,
visit
http://www.acm.org/cacm
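Hall's point about unreliable database comparison is easy to see in miniature: with no standard record format, the same voter can look like two different people. The toy matcher below (invented records, a deliberately simple similarity rule, Python's standard difflib) is only a sketch of the general record-linkage idea, not any state's actual procedure.

    # Toy cross-state voter matching. Exact comparison misses records
    # that differ only in formatting, which is why the OASIS/IEEE work
    # on standardizing address formats matters. All records invented.
    import difflib

    kentucky = [("JOHN Q SMITH", "12 Oak St Apt 3"), ("ANA GARCIA", "9 Elm Ave")]
    tennessee = [("John Smith", "12 Oak Street #3"), ("Ana I. Garcia", "14 Pine Rd")]

    def normalize(s):
        return " ".join(s.upper().replace(".", "").split())

    def similar(a, b, cutoff=0.75):
        return difflib.SequenceMatcher(None, normalize(a),
                                       normalize(b)).ratio() >= cutoff

    for ky_name, ky_addr in kentucky:
        for tn_name, tn_addr in tennessee:
            if similar(ky_name, tn_name) and similar(ky_addr, tn_addr):
                exact = (ky_name, ky_addr) == (tn_name, tn_addr)
                print(f"possible duplicate: {ky_name!r} ~ {tn_name!r} "
                      f"(exact match would have caught it: {exact})")

Here the Smith records match only under fuzzy comparison; a naive exact comparison misses them, while an overly loose one risks purging distinct voters, the failure mode behind the Kentucky suit.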
Researchers Devise Algorithm to Prevent Information
Overload
InformationWeek (10/16/06) Babcock, Charles
Researchers at the University of Illinois at Urbana-Champaign are working
on what they believe will be a system capable of digging through massive
amounts of random data and highlighting that which is most important.
"Getting the information we need is not the problem; sorting it and
deciding what is useful without being overwhelmed is the challenge,"
explains Robert Ghrist, associate professor of mathematics at the
University of Illinois, who will co-lead the Stomp (Sensor Topology &
Minimal Planning) project, which won $8 million in funding from the Defense
Advanced Research Projects Agency (DARPA). Ghrist says the system being
devised could also be used for detecting and exploring holes in the
coverage of a wireless phone network. After locating these gaps, topology,
the study of abstract spaces, can provide guidance as to how they can be
fixed. The goal is to create "a global picture" by combining the readings
of numerous local sensors, explains Ghrist. Yuliy Baryshnikov, a lead engineer at Bell Labs, which is contributing to the project, described how such a
large number of sensors often turning up irrelevant results could cause a
human observer to miss a critical piece of information, but the topology
algorithms being developed would not let anything of importance go
unnoticed. A central aim of the project is to create the smallest sensor
network needed to perform a required task.
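The coverage-hole problem Ghrist describes can be caricatured with coordinates, even though the point of the topological approach is to find holes without them. The sketch below probes a grid of points against assumed sensor positions and radii (all values invented); the Stomp algorithms instead infer holes from connectivity alone, with no coordinates at all.

    # Naive coverage-gap finder: flag probe points not within sensing
    # range of any node. Illustrative only; the topological methods in
    # the article detect such holes without knowing node coordinates.
    import math

    sensors = [(1.0, 1.0), (3.0, 1.2), (1.2, 3.1), (3.3, 3.0)]  # assumed
    SENSE_RADIUS = 1.2   # assumed sensing radius
    STEP = 0.25          # probe-grid resolution

    def covered(x, y):
        return any(math.hypot(x - sx, y - sy) <= SENSE_RADIUS
                   for sx, sy in sensors)

    probes = [i * STEP for i in range(int(4 / STEP) + 1)]
    gaps = [(x, y) for x in probes for y in probes if not covered(x, y)]
    print(f"{len(gaps)} uncovered probe points, e.g. {gaps[:3]}")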
Anita Borg Institute Sets Plans for 2007 Grace Hopper
Celebration
Business Wire (10/11/06)
The success of the recent Grace Hopper Celebration of Women in Computing
(GHC) conference has led the Anita Borg Institute for Women and Technology
(ABI) to make some changes to the event. GHC, which has been a biennial event since its inception in 1994, will become an annual gathering in 2007,
and ABI will also co-locate and bridge the conference with the Richard
Tapia Celebration of Diversity in Computing conference. ABI has scheduled
next year's GHC for Oct. 17-20, in Orlando, Fla., and it will follow the
Richard Tapia conference, which is set for Oct.14-17. GHC is the largest
technical conference in the world dedicated to women in information
technology and computer science, and it is jointly sponsored by ABI and
ACM. This year's GHC, which ended Saturday in San Diego, saw a 49.9 percent increase over 2004 in attendance by technical professionals, and had a
record number of students attending on scholarship and a record number of
corporate sponsors. "ABI's Board of Trustees has recognized a groundswell
of support for Grace Hopper technical conferences among all our
constituencies, who believe that these events underpin their efforts to
attract, develop, and retain more women and underrepresented groups in the
technical and computer science professions," says ABI President Telle
Whitney.
IT Industry Lobby Group Says Germany Lacks
Specialists
Heise Online (Germany) (10/16/06)
The German Association of Information Technology, Telecommunications and
New Media (Bitkom) says that Germany does not have sufficient measures in
place for educating technology specialists. The argument is based on the
number of college graduates majoring in technical and natural-science
fields in several major industrial countries. "In a few years, Germany
will lack the critical mass of bright thinkers it needs to develop basic
innovations and turn them into marketable products and new services," says
Bitkom's Walter Raizner. He points to China and India as models for rising
educational innovation, and stresses the importance of improving Germany's
international competitiveness through education policy. China produces 820,000 engineers each year, 30 percent of whom can compete internationally, compared with Germany's 37,000; these engineers are making their way into German academia and industry, and China's emphasis on research continues to grow. While the number of computer science students
in Germany is expected to drop to 14,000 in 2010, from 17,000 in 2006,
India produces 200,000 computer science students a year. In order to
reverse this trend and protect the status of German specialists, Bitkom
feels that education policy must be the focus of the government's
"high-tech strategy." Student "coupons" could increase competition among
schools, while the schools must become more attractive to private investors
through promotion of research sciences. In addition, Germany must do its
best to import specialists, which Raizner feels would require universities
to improve and the country to adopt less bureaucratic immigration rules.
Monster Jellyfish? Mapping the Global Internet
IST Results (10/13/06)
Volunteers from around the world are participating in an effort to model
the Internet by running a software agent on their PCs that maps the online
network. Data collected thus far, together with the graph theory used to visualize the Internet's structure, reveal that the online network resembles a jellyfish, with a central nucleus of nodes, a highly interconnected group outside the nucleus, and another group of isolated clusters connected directly to the nucleus. "The largest, well-connected part is the outer
mantle of the jellyfish, the little nucleus is the brain, and the tendrils
hanging down are the least connected features that have to send their
messages to the nucleus before being fed out," explains Scott Kirkpatrick,
EVERGROW scientific coordinator. The project has revealed that the nucleus
of the Internet consists of about 100 nodes, the highly connected mantle
has about 15,000 nodes, and the simple tendrils contain about 5,000 nodes.
The EVERGROW team believes its research could help improve the routing of
Internet traffic in the future, and lessen network bottlenecks. For
example, the team says that sending information via nodes in the outer mantle, bypassing the nucleus altogether, may be a better way for a distant Web site to respond to requests from browsers.
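The nucleus/mantle/tendril decomposition is closely related to the standard k-shell (k-core) decomposition of a graph. The sketch below applies networkx's core_number to a toy topology (invented edges, not EVERGROW's data) so the three layers fall out of the shell indices.

    # k-shell decomposition of a toy graph: the highest shell plays the
    # role of the "nucleus," intermediate shells the "mantle," and the
    # 1-shell the "tendrils." Toy topology, not EVERGROW's measurements.
    import networkx as nx

    G = nx.complete_graph(6)                      # densely connected core
    nx.add_path(G, [0, 10, 11, 12])               # tendril off the core
    nx.add_path(G, [3, 20, 21])                   # another tendril
    G.add_edges_from([(1, 30), (2, 30), (30, 31), (31, 4)])  # mid-shell

    shell = nx.core_number(G)                     # node -> k-shell index
    kmax = max(shell.values())
    for name, keep in [("nucleus", lambda k: k == kmax),
                       ("mantle", lambda k: 1 < k < kmax),
                       ("tendrils", lambda k: k == 1)]:
        print(name, sorted(n for n, k in shell.items() if keep(k)))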
Identity Federation Getting Dose of Reality From
Internet2 Affiliate
Network World (10/12/06) Fontana, John
InCommon Federation, a facilitator and policy setter for identity sharing,
has added 10 universities, four service providers, and an independent
security provider to its membership. While some see federation as a
technology that will only be realized in the future, InCommon has already
displayed its ability to secure data access between partners while
maintaining individual privacy. "Federation is something that has been
envisioned by those with a long scope to the future as to how networking is
going to operate in an information and knowledge based world," says Tracy
Mitrano, director of IT policy at Cornell University and the chair of the
InCommon Steering Committee. "As we move to that world, we are seeing the
value of real federations among universities, information providers, and
service providers." The identity federation architecture used by InCommon,
known as Shibboleth, is the foundation for regulating resources maintained
by its members. Shibboleth is built upon the Security Assertion Markup
Language (SAML) and is a foundation technology for Internet2's Abilene
Network. "There is no question that higher education is already
participating in a flat world, so to speak, and federation makes that
possible," says Mitrano. The network requires members to share
authoritative and accurate identity information concerning their identity
management system, and universities can use the system to set privacy
policies that control what type of information is accessible at different
destinations. Two standards for trustworthiness must be satisfied by all
members: their identity management system must be under the purview of the
organization's executive management, and the system for issuing credentials to end users must include sufficient risk management.
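The privacy-policy mechanism described above, in which the home institution decides which attributes each destination may see, reduces to a per-destination filter. The sketch below uses real eduPerson attribute names but an invented policy table and invented service providers; actual Shibboleth deployments express this in SAML attribute-release policies, not Python.

    # Toy attribute-release filter in the spirit of Shibboleth: each
    # service provider is allowed a specific subset of a user's identity
    # attributes. Policy table and destinations are hypothetical.
    user = {
        "eduPersonPrincipalName": "jdoe@example.edu",
        "eduPersonAffiliation": "student",
        "displayName": "Jane Doe",
        "dateOfBirth": "1986-02-11",
    }

    release_policy = {                  # per-destination privacy policy
        "library.vendor.example": {"eduPersonAffiliation"},
        "collab.partner.example": {"eduPersonPrincipalName", "displayName"},
    }

    def released(destination):
        allowed = release_policy.get(destination, set())  # default: none
        return {k: v for k, v in user.items() if k in allowed}

    print(released("library.vendor.example"))
    # the vendor learns only the affiliation, never who the user is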
Say Hello to Your Robot Self
Globe and Mail (CAN) (10/14/06) P. F4; Hornyak, Tim
Hiroshi Ishiguro is a pioneering robotics designer whose latest creation
is a robotic puppet that could actually serve as a stand-in for a real
person. Ishiguro, a senior researcher at ATR Intelligent Robotics and Communications Laboratories in Keihanna, Japan, has created a replica of himself, which can perform eerily life-like gestures thanks to 46 air actuators. A motion-capture system transmits his movements, such as those of his upper body and lips, to the robot, known as Geminoid. He
claims to have had the idea because of his long commute, thinking that he
could simply leave the robot at his office to carry out his daily
interactions by proxy. Rather than simply projecting image and voice,
Geminoid allows Ishiguro to convey physical presence. Ishiguro calls the
type of robotic work he conducts "android science," an integration of
robotics and cognitive science by which human behavior can better be
examined. "A robot is a kind of simulator for expressing human functions,"
says Ishiguro. The human-looking robots he has designed in the past can detect human presence and conduct conversations, such as an interview for a TV broadcast. "Robots are information media, especially humanoid robots.
Their main role in our future is to interact naturally with people."
Japanese culture embraces robots as helpful, friendly companions that will play a large part in maintaining the society and economy of a country whose average age is rising rapidly as a result of low rates of birth and immigration. Ishiguro is currently planning cognitive science
experiments where the android will be placed in social situations to help
him gain insight into his driving curiosity: "why are we living, and what
is human?"
Brazil's Electronic Voting Has Safeguards Lacking in the
US
Associated Press (10/14/06) Lehman, Stan
Brazil began using electronic voting 10 years ago with great trust in the
system, but many computer experts think this faith has gone too far. The
voting machines operate using Windows CE, but Microsoft, which cites trade
secrecy, will not allow independent investigations to assure that malicious
programmers have not tampered with the software, and for this reason many
advocate switching to an open-source system. Amilcar Brunazo, a computer
and data safety engineer who is also the Democratic Labor Party's permanent
technical representative, founded the Safe Vote Forum to lobby for greater
transparency of the electronic voting process. "I agree the electronic
ballot box makes it more difficult to defraud the election process, but the
system is still not transparent enough, and the best way to address this is
by allowing an independent inspection of the operating system used in the
machine." A verification system was tried in 2002, where a slip of paper
appeared behind glass to assure the voter that their vote was counted
correctly, but the manufacturer, Diebold Procomp (Diebold's Brazilian
division), was opposed to this and favored a single printout from each
machine recording every vote registered. However, this "ballot box
bulletins" system cannot assure that the votes were not "flipped" by a
malicious program. The problem, according to Dr. Avi Rubin, director of
the Information Security Institute at Johns Hopkins University, is not that
elections have necessarily been rigged, but that no way to confirm whether
or not they were rigged exists. Brazil does conduct random tests of
machines hours before its elections, and an independent non-partisan
tribunal oversees every step of the election process. Antonio Dourado de Rezende, a computer science professor at the University of Brasilia, says,
"The main flaws are not in the software, hardware, or data transmission
systems, but in the human links that control the connections between the
three--connections held together by the myth of infallibility and
incorruptibility of those who run the system."
UCF Research Team Achieves Milestone Toward More Powerful
Computer Chips
University of Central Florida (10/11/06) Abney, Barb
A research team at the University of Central Florida has been very
successful in developing extreme ultraviolet (EUV) light as a way to power the manufacturing of the next generation of computer chips. Team leader
Martin Richardson, university trustee chair and UCF's Northrop Grumman
professor of X-ray optics, showed off an EUV light source that was 30 times
as powerful as any previous attempt, adequate for supplying power to the
stepper machine used to reproduce intricate circuitry images onto computer
chips. Currently, chips are built using longer-wavelength UV light
sources, but Richardson's successful use of EUV light is a landmark
achievement in the industry-wide effort to find the most economical power
source for creating the computer chips of the future. Richardson collaborated with Powerlase, a UK-based company, which provided him with a high-power laser to use in conjunction with the
specialized laser plasma source technology that his team has developed.
The combination eliminates the neutral and charged particles associated
with existing EUV plasma sources, which can harm the expensive optics of
EUV steppers if they are allowed to stream freely away from the source. In
order to keep up with Moore's law, Richardson says considerable changes
must be made in the way chips are produced, claiming "we must use a light
source with a wavelength short enough to allow the minimum feature size on
a chip to go down to possibly as low as 12 nanometers."
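The arithmetic behind Richardson's claim follows from the standard Rayleigh criterion for optical lithography, resolution = k1 x wavelength / NA. The quick calculation below uses assumed, typical values for k1 and the numerical aperture, purely to show the scale of the jump from 193 nm deep-UV sources to 13.5 nm EUV.

    # Rayleigh-criterion estimate of minimum printable feature size:
    #   resolution = k1 * wavelength / NA
    # k1 and NA values are assumed and typical, for scale only.
    def min_feature_nm(wavelength_nm, k1=0.35, na=0.35):
        return k1 * wavelength_nm / na

    for label, lam in [("deep-UV", 193.0), ("EUV", 13.5)]:
        print(f"{label} ({lam} nm): ~{min_feature_nm(lam):.1f} nm features")
    # EUV lands near the ~12 nm figure Richardson cites; deep-UV cannot.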
National LambdaRail President Explains Research
Focus
HPC Wire (10/13/06) Vol. 15, No. 41, West, Tom
National LambdaRail (NLR) CEO Tom West answers questions about why NLR is
so committed to the facilitation of network research and "big" science
applications, first explaining that the development and evolution of the
research and education (R&E) community has followed a cyclic pattern
similar to that of major cities, starting with an overwhelming focus on
research, in much the same way that cities' principal concentration has
always been location. West writes that revolutionary innovations in packet
networks and related technologies originated with university researchers in
the late 1960s. When coupled with the implementation of the ARPAnet and
the Defense Advanced Research Project Agency's funding, these advancements
yielded the Internet's initial core technologies and seminal deployment.
Since then, major networking advancements for the R&E sector as well as
society in general have been fueled by research inventions and the
researchers' specific requirements. The current needs of the R&E community
and society at large demand a major jump in advanced networking on the
basis of three factors: The need for increasingly manageable bandwidth for
research into "big" or specialized applications; the need for breakable and
researcher-controlled networks that include waves for network research; and
the need for underlying owned fiber. Fulfilling these requirements is the
mission of NLR, through the deployment of a national networking physical
infrastructure based on owned and lit fiber connected with multiple
Regional Optical Network (RON) physical infrastructure that is also
RON-owned, West notes. "By focusing on facilitating Research! Research!
Research!, NLR, in partnership with the RONs, continues the network
innovation cycle and ensures that all the participants in the research and
education community reap the benefits of big, fast, customizable networks,"
West concludes.
Photon Computers a Bright Idea
The Gateway (10/12/06) McClure, Sean
Mark Summers, an electrical and computer engineering graduate student at
the University of Alberta, has dedicated himself to making computers that
use photons to process and transmit information a reality. The concept,
known as an "all-optical" computer, would be capable of impressive speeds
and would avoid the problem of overcrowded and overheated circuits. "The
photon can be thought of as an ideal carrier and superior to the electron
in terms of transmitting data," says Summers. "In optics, you can overcome
the problems associated with heat dissipation, and you can fit a lot more
information at particular wavelengths in the same amount of space."
Photonic crystals, highly ordered structures made of silicon using a technique known as glancing angle deposition (GLAD), are Summers' proposed material for performing, with photons, the same function as today's transistors. "The process grows isolated columns which look like a field of grass. We use complex computer-controlled substrate motion algorithms to
nano-engineer a complicated three-dimensional architecture inside the
columnar film," explains Summers. He says three stages are required in
order to create photon computers: "the first stage will involve integrating
optical interconnects between the various chips... increasing the bandwidth
between the devices. The next stage is to integrate microelectronic
circuits with microphotonic circuits, and the final stage will be
everything optical, all the way to the human interface." Photonic crystals
will probably be utilized first in frequency filters or light-directing
devices, with the idea of an all-optical computer still existing only as a
dream for now.
Giant Screen, Bold Visions
Courier-Journal (10/11/06) Poynter, Chris
University of Kentucky computer engineers have created what they call "the
world's highest resolution seamless display." Christopher Jaynes, an
associate professor of computer science and a member of the Center for
Visualization and Virtual Environments, and Stephen Webb developed the
27-foot-wide, 15-foot-high, 60-million-pixel screen from off-the-shelf
products for about $100,000, compared to the millions of dollars normally
spent on such projects. Their creation, the result of eight years of
research, was revealed at the IdeaFestival, a four-day event in downtown
Louisville. The screen, which offers higher resolution than an IMAX
screen, was used to show images from the Hubble Space Telescope, National
Weather Service satellite images that show Katrina forming, and an inside
look at the cockpit of a space shuttle. Henry Fuchs, a computer science
professor at the University of North Carolina who is well versed in Jaynes'
and Webb's work, refers to it as "a milestone" for projection systems, and
acknowledges that it is "the largest, in terms of pixels." He says the
screen is the equivalent of having 30 high definition TVs tied together.
Jaynes believes the technology would be incredibly beneficial to scientists
who currently need to travel the world to get to screens with proper
resolution to view their work. "What we are trying to do," he says, "is allow that scientist to take a 15-projector display and put it in his lab and render high-resolution content without having to get on a plane and fly to Tokyo."
Five Questions With Alan Page
Dr. Dobb's Journal (10/10/06) Hunter, Michael
In an interview, Alan Page, a test architect on Microsoft's Engineering
Excellence team, says he continues to be amazed by how much there is to
know about software testing. Page says he started testing software about
15 years ago at a tech startup and joined Microsoft as a tester in 1995,
adding that it took a few more years for him to really begin to understand
what testing was all about. At Microsoft, Page teaches new and experienced
testers, creates and updates courses, and works with test teams. Page says
his testing philosophy is to have a big toolbox of techniques, and the
experience to know when to use the right tool for a specific situation. At
the same time, he does not think the approach of "breaking stuff" is
important, when a tester can focus more on finding a problem and its root
cause, and using the information to uncover other problems. Page says
testers will need to continue to enlarge their toolbox because testing will
not get any easier over the next five years. Developers are writing unit tests or using test-driven development (TDD), and software engineering teams are performing defect detection and early prevention, which should help with the easy bugs. But
testers will have to employ multiple testing strategies to find the more
difficult bugs.
Environmental Sensor Networks: A Revolution in the Earth
System Science?
University of Southampton (ECS) (10/12/06) Hart, Jane K.; Martinez, Kirk
Environmental sensor networks (ESNs) will significantly augment
environmental monitoring and broaden the array of methods for taking
measurements or deploying sensors, and Jane Hart and Kirk Martinez of the
University of Southampton expect ESNs to completely revolutionize earth
system and environmental sciences. "We suggest that ESNs are the next
step in the understanding of the environment, and a key component of
environmental analysis," the authors write. An ESN consists of a sensor
node array and a communications system that transmits the sensors'
autonomously collected data to a server. A comprehensive understanding of
the physical environment and implementation is necessary prior to the
design and installation of an ESN. The sensor nodes should be low-power,
low-maintenance, robust, pollution-free, and designed to blend into the
environment to keep human interference to a minimum. The scale and
function of ESNs are variable, depending on the environmental conditions
and what role the network is supposed to play: Large scale single function
networks (such as weather stations and the Global Seismographic Network)
usually monitor large geographic areas, are big and expensive, and use
large nodes that typically measure one or more variables; localized
multifunction sensor networks tend to monitor a smaller area in more
detail, typically with wireless ad-hoc systems; biosensor networks use
sensors with biological sensing elements linked to a physical transducer in
order to monitor environmental processes and develop proxies for immediate
use; and heterogeneous sensor networks monitor the environment at varying
scales using data from the other kinds of ESNs. Hart and Martinez believe
heterogeneous networks represent the future of the ESN. Among the issues
that are challenging the development of the ESN are power management,
management and usability, standardization, data quality, security, data
mining and harvesting, and new sensor development.
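At the node level, the low-power requirement above usually translates into a duty-cycled loop: wake, sample, transmit, sleep. The simulated sketch below stands in for microcontroller firmware; the sensor read, radio send, and sampling period are all invented.

    # Duty-cycled sensor node, simulated: wake, sample, transmit, sleep.
    # read_sensor and send are stand-ins for real hardware drivers.
    import random
    import time

    SAMPLE_PERIOD_S = 2.0   # illustrative; field nodes sleep minutes/hours

    def read_sensor():
        return 10.0 + random.random()      # simulated temperature (C)

    def send(reading):
        print(f"tx {reading:.2f}")         # stand-in for a radio uplink

    for _ in range(3):
        send(read_sensor())                # brief active period...
        time.sleep(SAMPLE_PERIOD_S)        # ...then low-power sleep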
Teenager Moves Video Icon Just By Imagination
Washington University (St. Louis) (10/09/06) Fitzpatrick, Tony
A 14-year-old boy was able to complete two levels of the two-dimensional 1970s video game Space Invaders simply by looking at an object on a screen and imagining it moving. A team of neurologists, neurosurgeons, and engineers at Washington University in St. Louis carried out the experiment, which was meant to test the feasibility of biomedical devices that patients could use to control prosthetics simply by thinking. The study used a grid placed atop the boy's brain, an invasive technique that records electrocorticographic (ECoG) activity directly from the surface of the brain. The
Atari game console software was programmed to accept signals from the
brain-machine interface. The grid was already in place because the boy has epilepsy, and scientists were hoping that his next seizure would reveal the part of the brain responsible, so that it could be removed. This type of brain-machine interface is an alternative to
non-invasive electroencephalographic systems that use electrodes attached
to the scalp. The boy was first instructed to move his hands so brain
function could be correlated with physical movement. He was then told to
play the game by moving his hand and tongue, and then to imagine performing these movements while keeping completely still. By looking at the cursor
(spaceship) on the screen he was able to direct its movement. "He learned
almost instantaneously," says Eric C. Leuthardt, MD, assistant professor of
neurological surgery at the school of Medicine.
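The control loop in the experiment, decoding a feature of the brain signal and mapping it to a game command, can be caricatured in a few lines. The band, threshold, and synthetic signals below are invented; real ECoG decoding involves careful per-patient screening of electrodes and frequency bands.

    # Caricature of a brain-computer game interface: estimate power in a
    # frequency band and turn it into a move/hold command. All numbers
    # are made up for illustration.
    import numpy as np

    FS = 1000          # sampling rate (Hz), assumed
    BAND = (70, 100)   # high-gamma band often used in ECoG work
    THRESHOLD = 2.0    # arbitrary detection threshold

    def band_power(window):
        freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
        power = np.abs(np.fft.rfft(window)) ** 2
        return power[(freqs >= BAND[0]) & (freqs <= BAND[1])].mean()

    def decode(window, baseline):
        return "MOVE" if band_power(window) / baseline > THRESHOLD else "HOLD"

    rng = np.random.default_rng(0)
    baseline = band_power(rng.standard_normal(FS))    # resting signal
    imagined = rng.standard_normal(FS) * 2.0          # crude "imagery"
    print(decode(imagined, baseline))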
Electronic Voting Machines May Not Eliminate Election
Problems
Ottumwa Courier (10/09/06) Milner, Matt
Despite the uproar over a need for electronic voting machines after the
2000 Florida "hanging chad" controversy, many are doubting the reliability
of these new machines. Some of them do not supply a printout of each vote, meaning that if the machines fail or fraud is suspected, there is no paper trail to consult. With so much riding on these machines, the
risk of a hacker or virus tampering with the election is a danger that must
be taken seriously. E-voting expert Dr. Douglas Jones, associate professor
in the University of Iowa's computer science department, says a tension
exists between transparency represented by the paper ballots and the secret
ballot process represented by the machines. When using machines, he says,
only computer experts can tell if anything has gone wrong, but anyone can
understand a paper ballot. Jones also points out that "far more frequent
than fraud in elections are mistakes." Jones notes that fraud drops off significantly if even 10 percent of voters review their ballots after making their marks. Poll workers are not professionals, are generally inexperienced, and pose as large a threat to a smooth election as any other element. Jones's problem with voting machines lies in the standards to which they are tested after fabrication and before distribution, which are no stricter than those for any other consumer good coming off an assembly line. Each
state has different laws on printouts from voting machines or if paper
ballots can be used at all. Iowa's laws, which provide paper ballots in case the machines malfunction, "come as close to perfect as you can get," says Jones. What Jones really thinks is needed is for election
officials to increase emphasis on research and development of voting
machines.
Leading the Blue Brain Project
sciencecareers.org (10/06/06) Pain, Elizabeth
A team of computational neuroscientists in Switzerland is currently
building a digital 3D model of the human brain, in its every detail. The
Blue Brain Project, which takes its name from the IBM Blue Gene Computer it
uses, is a joint effort between IBM and the Brain Mind Institute at the
Ecole Polytechnique Federale de Lausanne. The project confronts a problem in computational neuroscience: that "theoreticians [who] do not have a profound knowledge of neuroscience build models of the brain," says Henry Markram, founder of the Brain Mind Institute. The project's leader
is Max Schurmann, a German physicist with a background in computer
engineering and experience building hardware neural models, which Markram
calls "perhaps the most challenging task possible in computational
neuroscience." Schurmann says, "The brain is the computer in the world
that does the most fabulous things. Finding how computing can be done
differently will change our technological environment." He chose to work
with those developing hardware-implemented neural networks, because "they
are physicists who develop microchips" containing analog integrated
circuits, rather than less intricate digital microchips. By incorporating
experimental neuroscience data into complicated computer simulations, the team is better able to study the diverse array of cells, something an abstract model cannot provide. The team has "built and simulated 10,000 compartmental neurons
with over 30 million dynamic synapses and are fine tuning the biological
parameters. This is several orders of magnitude larger and more detailed
than any previous attempt," says Schurmann. The team must now carry out
the calibration phases, making sure every piece of the simulation is backed
up by experimental data.
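For a sense of what such a simulation loop looks like, the leaky integrate-and-fire neuron below is a drastically simplified stand-in for Blue Brain's compartmental neurons, which integrate detailed ion-channel dynamics across many compartments; all constants are textbook-style illustrative values.

    # Leaky integrate-and-fire neuron: a minimal caricature of the far
    # more detailed compartmental models the article describes.
    DT, T = 0.1, 100.0            # time step and duration (ms)
    TAU, V_REST = 10.0, -65.0     # membrane time constant (ms), rest (mV)
    V_THRESH, V_RESET = -50.0, -70.0
    R, I_IN = 10.0, 2.0           # resistance (MOhm), input current (nA)

    v, spikes = V_REST, []
    for step in range(int(T / DT)):
        v += (-(v - V_REST) + R * I_IN) / TAU * DT   # leaky integration
        if v >= V_THRESH:                            # fire and reset
            spikes.append(step * DT)
            v = V_RESET

    print(f"{len(spikes)} spikes, first at t={spikes[0]:.1f} ms")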
Princeton Establishes Leading Research Computing
Facility
Princeton University (10/02/06) Cliatt, Cass
Three supercomputers have been brought together in a single research
facility, asserting Princeton University's place at the forefront of university-based research computing. The need for such a facility has been known for
some time: "The community of scientists and engineers at Princeton whose
research and teaching depends on high-speed computing is very rapidly
growing," says Jeremiah Ostriker, director of Princeton Institute for
Computational Science and Engineering, and former provost from 1995-2001.
He estimates that the total number of faculty requiring this technology
doubles every three years, and includes those who were not heavily involved
in computing in the past. "Each of the three high-performance
machines...has a different performance profile suitable for handling
different kinds of computational tasks. Together these three machines
provide Princeton faculty a world-class computational research
environment," says Betty Leydon, Princeton's VP for information technology
and chief information officer. The computers include the Dell cluster
known as Della, which has 512 processors capable of very high speeds; an IBM Blue Gene machine known as Orangena, which has 2,048 processors; and the SGI Altix computer, Hecate, which has 64 processors with large
memory capabilities. "There are some scientific models which can't be
broken down in small enough pieces for Orangena, and there are some kinds
of problems that can't be broken down at all, and the entire problem needs
to fit into one big piece of memory, which is what the Hecate is good for,"
says Curt Hillegas, manager of computational science and engineering
support in the Office of Information Technology academic affairs
department. Princeton says its machines can handle a vast array of science
and engineering systems, and refers to them as "three legs of a stool,"
upon which the research needs of the entire facility may rest.
In the Beginning Was the Word
Economist Technology Quarterly (09/06) Vol. 380, No. 8496, P. 10
As text-to-speech synthesis technology improves, so does the number of
ways it can be put to use. A voice is recorded and the words are chopped
up and reconfigured by a computer. The larger the chunks that are put
together, the more natural the voice sounds, and vice versa. However,
smaller pieces, sounds such as "eh" or "ar," known as phonemes, require
less storage space, and are crucial in the development of text-to-speech
technology. SVOX, a Swiss company, is developing algorithms to gauge pitch, rhythm, and phrasing in order to perfect the sound of the voice in context, and to do so at a lower cost than the current $100,000 price tag. Dr. Jan van
Santen of Oregon Health & Science University is working on a system that is
meant to take into account neighboring phonemes, and actually be able to
change the pronunciation of phonemes to avoid a robotic sound. This
technique would require a relatively minute sample of the actual voice
being mimicked. A simpler and easier technique known as "voice
transformation" uses a synthetic voice model that can have any voice placed
onto it, like a costume. The uses for such technology range from making characters in a video game say whatever a player wants, to allowing people with medical conditions that make speech difficult or impossible to speak again in real time using earlier voice samples. Other uses include in-car navigation systems that could pronounce intricate place and street names, phones that can read text messages aloud, and even, as one researcher at IBM imagines, a personal Internet interface in your ear.
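Concatenative synthesis at its crudest is a lookup-and-splice operation. The sketch below glues phoneme-level units end to end with a short crossfade; the unit inventory here is synthetic noise, and the unit names, sample rate, and durations are invented, whereas real systems select among many recorded candidate units and adjust pitch and duration in context.

    # Crude concatenative synthesis: splice pre-recorded phoneme units
    # with a linear crossfade at each joint. Units here are fake noise.
    import numpy as np

    FS = 16000                  # sample rate (Hz), assumed
    XFADE = int(0.010 * FS)     # 10 ms crossfade at each joint

    rng = np.random.default_rng(1)
    units = {p: rng.standard_normal(int(0.08 * FS))   # stand-in recordings
             for p in ["HH", "EH", "L", "OW"]}        # "hello", roughly

    def synthesize(phonemes):
        out = units[phonemes[0]].copy()
        ramp = np.linspace(0.0, 1.0, XFADE)
        for p in phonemes[1:]:
            nxt = units[p]
            out[-XFADE:] = out[-XFADE:] * (1 - ramp) + nxt[:XFADE] * ramp
            out = np.concatenate([out, nxt[XFADE:]])
        return out

    audio = synthesize(["HH", "EH", "L", "OW"])
    print(f"{audio.size / FS:.3f} s of audio")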