Computing, 2016: What Won't Be Possible?
New York Times (10/31/06) P. D3; Lohr, Steve
Now that computers have proved their worth in science, business, and
culture, what's next? An October symposium titled "2016," held in
Washington by the Computer Science and Telecommunications Board, a part of
the National Academies, took aim at this question. Two themes emerged from
the discussions: the continuing penetration of computing into the sciences,
and the expansion of computing into the social sciences, with policy issues
becoming major topics of debate as the technology grows more pervasive.
Richard M. Karp, a professor at the University of
California, Berkeley, said that computing, the systematic study of
algorithms, allows us to describe biological processes: "In other words,
nature is computing." As electronic social networks grow, so do our
abilities to track and study them, on an increasingly large scale. "We're
witnessing a revolution in measurement," says Jon Kleinberg, professor at
Cornell, who points out that sociologists have been studying social
networks, which are pre-technological creations, for decades. Simply by
observing postings on MySpace or Facebook, recommendations on Amazon, or
the diffusion of news, opinions, and rumors, researchers can gather a
bounty of information for studying intricate social questions of interest
not only to sociologists but also to marketers, politicians, and many
others.
"This is the introduction of computing and algorithmic processes into the
social sciences in a big way, and we're just at the beginning," says
Kleinberg. Technology will allow people to record their entire lives and
either keep the record private or make it available to the entire world.
While technology will create the potential for pervasive surveillance, "it
will be up to society to determine how we use it," says Rick Rashid, a
computer scientist and head of Microsoft's research labs. "Society will
determine that, not scientists."
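For a sense of what such computational study looks like, here is a minimal
sketch (in Python, with an invented five-person friendship graph and
made-up probabilities; real studies operate on millions of users) of
simulating how a rumor might diffuse through a social network:

    import random

    friends = {                      # toy social network (adjacency list)
        "ana": ["bo", "cy"],
        "bo":  ["ana", "cy", "di"],
        "cy":  ["ana", "bo"],
        "di":  ["bo", "eve"],
        "eve": ["di"],
    }

    def spread_rumor(graph, seed, p=0.5, rounds=4, rng=random.Random(42)):
        """Independent-cascade-style diffusion: each newly informed node
        tells each neighbor once, succeeding with probability p."""
        informed, frontier = {seed}, [seed]
        for _ in range(rounds):
            nxt = []
            for node in frontier:
                for friend in graph[node]:
                    if friend not in informed and rng.random() < p:
                        informed.add(friend)
                        nxt.append(friend)
            frontier = nxt
        return informed

    print(spread_rumor(friends, "ana"))  # the set of users who heard it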
Does E-Voting Need Paper Trails?
CNet (10/31/06) Broache, Amy
As the November general elections near, officials in many states are
explaining why, despite widespread mistrust of completely electronic voting
systems, no action was taken to implement a paper trail. While 27
states have mandated paper records, not all of them have the system in
place. "The officials have spent gazillions of dollars to buy what they
have now," said Eugene Spafford, computer science director of the Center
for Education and Research Information Assurance and Security at Purdue
University. "Any additions will need to come out of local budgets, so they
are looking for ways to avoid incurring that expense. They can't return or
throw out the existing machines without huge expense, and modification
won't be cheap, either." New York has passed a law requiring electronic
voting machines to produce paper trails by September 1, 2007, but does not
want to spend a great deal on systems only to have them quickly become
outdated; the state currently uses lever-operated voting machines and is
testing electronic systems. Ohio has equipped all of its electronic voting
machines with paper trails. A study conducted by the Election Science
Institute found that nearly 10 percent of the paper print-outs made by
Ohio's machines were "compromised in some way," and would have been
uncountable had a recount been necessary, said director Steven
Hertzberg. He blames the mess the country finds itself in on a failure to
conduct proper testing, as would have been done in any commercial industry,
and suggests that governments hand out grants in order to inspire
innovation in competing companies. Still, many states claim that a paper
printout violates the principle of ballot secrecy, since a voter's
identity could be matched with his or her vote. An alternative system,
which complies with federal obligations, involves a ballot that is filled
in by voters and then read by an optical scanner that totals the votes.
Eugene Spafford is chair of
ACM's U.S. Public Policy Committee. To learn more about
USACM's many e-voting activities, visit
http://www.acm.org/usacm
How to Build Software? Henry Ford, Meet eBay
Christian Science Monitor (11/01/06) P. 1; Arnoldy, Ben
"Code Jam 2006," hosted by Google and put on by TopCoder, exhibited how
software development competition can be harnessed to create the best
software for customers. A group of 99 of the world's best programmers, all
male, a third Russia, devoid of Indians, and containing seven Americans,
was presented with the task of individually solving three problems in 75
minutes. This new sort of assembly line for code writing is called a "very
intriguing and attractive model," by Thomas Malone, director of the Center
for Collective Intelligence at the Massachusetts Institute of Technology.
Malone sees the approach as a logical extension of the progression in
computing power. "Things that are done today inside big companies will, in
the future, be done by temporary combinations of very small companies, in
many cases, independent contractors," he says. TopCoder receives a project
from a client, divides it into several components, and opens the creation
and development work to a series of online competitions; prize money,
usually a few thousand dollars, goes to the programmer with the
best finished product. TopCoder then combines the individual components
into the system the client had requested. "Our competition model drives up
quality in a way that no one can duplicate. No one else I know can get
four or five [versions] made of the same thing and take the best," says
Brendan Wright of TopCoder. Programmers benefit from being able to
hand-pick projects to work on and customers benefit from lower prices.
Critics of the competition-based system claim that it is unable to handle
concerns arising in the middle of production, or handle projects that
change over time.
Global Competition Sparks Spending Spree
Financial Times (10/30/06) P. 4; Cookson, Clive
The newly released international R&D Scoreboard shows a 7 percent increase
in R&D spending by the world's top 1,250 companies.
Norman Price, an industrialist at the UK Department of Trade and Industry,
which publishes the annual report, says that "in many sectors profits are
growing strongly and companies can afford to spend more on R&D."
Technology hardware and equipment shows the highest total R&D spending,
over $80 billion, but has experienced a 0.6 percent decrease since the
2001/2002 report. Electronics & IT tops both U.S. and Japanese investment
in R&D, and is second to engineering & chemicals in Germany, third to
engineering & chemicals and pharmaceuticals in France, and the lowest of
all reported sectors in the UK. Asia experienced the greatest growth of
all; the 44 Taiwanese companies that made the top 1,250 increased R&D
investment by 30.5 percent last year, with the bulk of investment focused
on electronics and computer companies; the 17 South Korean companies
increased R&D investment by 11.9 percent, with most of the nation's R&D
investment being accounted for by Samsung, Hyundai, and LG. Samsung's R&D
spending grew to $5.44 billion in the past four years, from $1.88 billion,
forcing competitors to increase R&D investment as well. The two most
important trends in corporate R&D, according to Prof. George Haour,
professor of technology and innovation management at IMD business school in
Lausanne, are "open innovation" and the move to Asia. Open innovation refers
to the end of the traditional R&D structure in which scientists work on
secret projects in isolation, and the beginning of businesses working with
one another in order to commercialize their own innovations and find
outside inventions to exploit. Many Western companies were found to have
set up
R&D centers in Asia, particularly China and India. "They want to take
advantage of all the talented people in Asia and the dynamic markets
there," says Haour. However, there is still a relative lack of indigenous
investment in R&D in these two nations.
E-Mail Voting Comes With Risks
Washington Post (10/31/06) P. A19; Nakashima, Ellen
Thousands of U.S. soldiers around the world will be given the chance to
vote in the November 7 general elections using fax, email, snail mail, or a
combination of methods. Computer security experts have warned that the
program, the Pentagon's Federal Voting Assistance Program (FVAP), is open
to the dangers associated with unencrypted email, which include
interception, hacking, and identity theft. "Email traffic can flow through
equipment owned and operated by various governments, companies, and
individuals in many countries," says Joel Rothschild, a Navy Reserve
captain who prepared a report on this topic for the Pentagon in August.
"It is easily monitored, blocked, and subject to tampering." Rothschild
noted in his report that encryption could be used to secure email
transactions, but the Pentagon has no plans for such measures. "No bank
would ask their customers to send Social Security numbers over unencrypted
email," said Rothschild's co-author, David Wagner. He calls the
combination of faxing and email "about as dangerous as you can get. It's
got all of the problems with unencrypted email, plus your ballot is being
routed through the Department of Defense. Will soldiers feel free to vote
their conscience when they know that the DoD may be able to see how they
voted? How do we know that the DoD or their contractors haven't modified
soldiers' ballots in transit?" Scott Wiedmann, deputy director of the
FVAP, claims that the federal facility is unable to alter email ballots,
which are sent as "read-only" files. However, soldiers must sign away
their right to a secret ballot when using this process. Email is currently
an option for soldiers voting in eight states, and soldiers are encouraged
to mail an original copy of their ballot, as a precautionary measure, if
they vote via fax or email.
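The safeguard Rothschild describes is ordinary public-key encryption; a
minimal sketch using the open-source Python "cryptography" package (the key
handling and ballot contents here are purely illustrative, not part of any
FVAP procedure):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The election office publishes a public key; only its private key
    # can decrypt, so an intercepted ballot is unreadable in transit.
    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)
    public_key = private_key.public_key()

    ballot = b"precinct 12: candidate A"   # hypothetical ballot data
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(ballot, oaep)   # safe to email
    assert private_key.decrypt(ciphertext, oaep) == ballot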
The Trouble With Multi-Core Computers
Technology Review (11/01/06) Greene, Kate
MIT researchers are creating a computing framework that allows programmers
to write software without having to deal with some tedious
parallel-programming details, using transactional memory, which coordinates
software operations and allows several transactions to share the same
memory at the same time. With the coming of the age of personal
supercomputers, average programmers will need to be able to program with
multiple cores in mind. "That's a scary thing," says Krste Asanovic,
professor of electrical engineering and computer science at MIT, "because
most have never done that, and it's quite difficult to do." While "locks"
are currently used to prevent programs running simultaneously from trying
to access the same piece of memory, implementing these locks is
complicated, says Jim Larus, research area manager at Microsoft. After a
transaction completes, the system (still under construction) verifies that
no changes have been made to the memory it touched that would invalidate
its outcome; if such changes have occurred, the transaction is re-executed
as necessary. Transactional memory can fail, says Asanovic: transactions may
require more memory than the set amount available, causing the system to
crash; but by adding a small backup memory onto the cache, along with
software that recognizes when transactions are overflowing, capacity can be
increased and such failures prevented, Asanovic explains. Researchers are
working with a combination of hardware and software; "it's not clear yet
where the right line is," says Larus. Today's
dual-core systems are not as affected by the lack of truly parallel
programs as the quad-core systems, which will be released by AMD and Intel
next year, would be. Transactional memory, according to Asanovic, will not
wipe away all troubles in multi-processor programming, but it will become
an integral part of the future parallel-computing model.
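The verify-and-retry cycle can be made concrete; below is a minimal sketch
of the optimistic idea behind transactional memory, written in plain Python
for illustration (a toy version-checking loop, not MIT's actual
hardware/software design):

    import threading

    class Cell:
        def __init__(self, value):
            self.value, self.version = value, 0

    _commit_lock = threading.Lock()   # guards only the brief commit step

    def atomic(transaction, cells):
        while True:                   # retry until the transaction commits
            snapshot = {c: (c.value, c.version) for c in cells}
            writes = transaction({c: v for c, (v, _) in snapshot.items()})
            with _commit_lock:        # validate: did our reads change?
                if all(c.version == ver
                       for c, (_, ver) in snapshot.items()):
                    for c, v in writes.items():    # commit the new values
                        c.value, c.version = v, c.version + 1
                    return

    # Usage: move 10 units between two shared cells with no per-cell locks.
    a, b = Cell(100), Cell(0)
    atomic(lambda reads: {a: reads[a] - 10, b: reads[b] + 10}, [a, b])
    print(a.value, b.value)           # 90 10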
Feds Leapfrog RFID Privacy Study
Wired News (10/30/06) Singel, Ryan
A report warning the Department of Homeland Security about the security
risks involved with the use of RFID chips is stuck at the draft stage even
as RFID systems are being deployed. The report points to a
danger of vital information being "skimmed" off of the cards for malicious
use, such as tracking of individuals or recreating chips for payment,
security, and passports. Jim Harper, who served on the committee that
issued the report and recently published a book titled "Identity Crisis,"
claims that "there's such a strongly held consensus among industry and DHS
that RFID is the way to go that getting people off of that and getting them
to examine the technology is very hard to do." RFID chips are commonly
used today in highway toll payment systems or in tracking inventory, but
the State Department has announced new cards for visitors to Mexico,
Canada, and Bermuda, containing an RFID chip that is readable from 20 feet
away. New laws will accompany distribution of these "PASScards," which
will be required for reentry to the U.S. in 2008. Additional uses of RFID
chips are planned for passports, identity cards for transportation workers
and federal employees, and possibly for driver's licenses. A spokesman
from DHS claims that the report is still being considered. The Center for
Democracy and Technology called for deeper inquiry into identification
technologies. CDT believes the focus should be on how secure the cards
are, rather than on preventing their development, since in reality they
are already being used. Whether or not the new cards will have encryption
is being left up to the State Department.
Cybersecurity Expert Says Nationwide Use of Computerized
Voting Poses Risk
Purdue University News (10/31/06) Schenke, Jim
Purdue University cybersecurity expert Eugene Spafford has pointed out
several basic flaws in the electronic voting machines that local government
bodies implemented to combat the election difficulties of 2000.
"The problem with the 2000 elections that prompted the reforms was only
with one type of paper-based ballot in a few jurisdictions. That's hardly
a cause to hurriedly and somewhat recklessly replace all of the equipment
nationwide," says Spafford, executive director of the Center for Education
and Research in Information Assurance and Security (CERIAS). He says
vendors may have exaggerated claims when telling election boards that the
new direct-recording machines were tested exhaustively and were not
susceptible to failure. "No mention was made of the limitations of the
software testing or the obstacles to creating bug-free software," said
Spafford. "Furthermore, there are some unexpected bugs or failures that
cannot be resolved because there are no actual ballots to recount." Some
software has been found to not count votes over a certain number, or reset
to zero after a power failure. As for claims that the machines are easier
for handicapped voters to use, Spafford points out that blind voters will
not be provided with Braille, as they are with paper ballots, and that
voters with palsy will have trouble with the interface. He is perplexed by
the
fact that so many jurisdictions saw electronic voting machines as necessary
when banks, race tracks, lottery systems, and other businesses count
millions of paper documents every day. CERIAS and Purdue's computer
science department will host former ACM President Barbara Simons, an
Internet voting expert, on November 2. In 2005, Simons was the first woman
to receive the Distinguished Engineer Alumni Award from the University of
California, Berkeley. To view USACM's report "Statewide Databases of
Registered Voters" visit
http://www.acm.org/usacm/VRD
CMU, Intel See Fantasy as Future
Pittsburgh Post-Gazette (11/01/06) Templeton, David
The Intel Research Pittsburgh program, a joint effort between Intel and
Carnegie Mellon University researchers, is in the early stages of
developing technology that would allow computers to physically recreate
humans. Billions of speck-sized robots known as "catoms," short for
Claytronic atoms, which move by electrostatic forces, would be able to
form whatever shape they are programmed to take. However, finalization
of this technology is still decades away and more funding will be required,
according to those involved. Pearl-sized catom prototypes have been
developed, and the hope is to have them move independently and follow
simple algorithms dictating shapes, textures, and colors, eventually moving
identically to the humans they replicate. Seth Goldstein, a CMU
computer scientist and a co-creator of Claytronics, says that more
immediate goals include making 3D faxing a reality, computer-aided design
tools, and an antenna that can grow or shrink depending on the signal it is
receiving. The Intel Research Pittsburgh laboratory includes
about 20 CMU researchers, and is one of four such labs that Intel created
at computer-powerhouse universities over the past year, including
University of Washington in Seattle, the University of California,
Berkeley, and England's University of Cambridge. Other projects being
developed by the lab include Intel's Diamond, which reduces mammograms to
numbers and compares them with known images to help doctors diagnose
malignancy, and a tool that searches for specific motions in Internet
videos.
Team Strives to Optimize Vital Wireless Networks
Stanford Report (11/01/06) Orenstein, David
A DARPA-funded project will conduct research addressing new ideas and
fundamentals of wireless network design and performance as they pertain to
field communications for soldiers and first responders. "Mobile ad hoc
networks have been the basis of military communications for decades but
most of the work hasn't been based on anything fundamental," says Stanford
electrical engineering associate professor Andrea Goldsmith, the lead
principal investigator on the project. "We really don't know the
performance limits or the optimal methods to communicate over wireless
networks." The ad hoc networks presently used are easy to establish, but
not well optimized and have an unknown capacity, resulting in emergency
communications that are often lost in a queue of less important
communications, among other problems. The research grant "really energized
the [wireless network] community," Goldsmith says. The most basic concern
of the team is the capacity limit of wireless networks, and the "design
insights and guidance that come with knowing fundamental limits and the
techniques that achieve these limits," Goldsmith says. Another key area of
inquiry is developing a technique for prolonging the life of networks with
battery powered nodes that cannot be recharged, such as those deployed in
remote locations. Security will also need to be considered. Innovations
resulting from this study could include ways of routing information around
a network and of making transmitters cooperatively allocate resources such
as bandwidth. The resulting technology could be applied to "smart"
highways that guide automated cars, more efficient buildings with
intelligent security, and systems that help the elderly and disabled.
Researchers from Stanford, MIT, University of Illinois, Cal Tech, and the
University of Texas-Austin will collaborate on the project.
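As a toy illustration of one such problem, prolonging the life of
battery-powered nodes, the following sketch (Python; the topology, charge
levels, and cost function are all invented) routes traffic around nodes
that are nearly drained:

    import heapq

    links = {"A": ["B", "C"], "B": ["A", "D"],
             "C": ["A", "D"], "D": ["B", "C"]}
    battery = {"A": 0.9, "B": 0.2, "C": 0.8, "D": 0.7}  # remaining charge

    def energy_aware_path(src, dst):
        """Dijkstra search where relaying through a low-battery node
        costs more, so drained nodes are avoided when possible."""
        frontier, seen = [(0.0, src, [src])], set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == dst:
                return path, cost
            if node in seen:
                continue
            seen.add(node)
            for nxt in links[node]:
                if nxt not in seen:
                    heapq.heappush(frontier,
                                   (cost + 1.0 / battery[nxt],
                                    nxt, path + [nxt]))

    print(energy_aware_path("A", "D"))  # picks A-C-D; B is nearly drained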
World Discusses Internet Future
BBC News (10/30/06) Waters, Darren
Security, diversity, openness, and access will be the key agenda items for the
Internet Governance Forum (IGF), being held in Athens from October 30 to
November 3. "The forum will give voice to the citizens of the global net
and help identify emerging issues which need to be tackled in the formal
processes," says Nitin Desai, chair of the organizing body for IGF. Over
1,500 delegates will attend the meeting, including representatives of
governments, companies, organizations, and individuals who have the option
of participating in discussions via blogs or actually coming to Athens, a
distinct change from the past. Desai warns that a "potential culture
clash" is the most severe challenge to the forum's success. The debate
over internationalized domain names (IDNs), for countries that do not use
the Latin alphabet, overshadowed the World Summit on the Information
Society in Tunis, out of which the IGF was born. ICANN is overseeing the
move toward IDNs, and has recently begun testing them with its engineers.
The body has taken a "huge step forward," according to IDN program director
Tina Dam. Beyond this one concern, IGF is important because it will tackle
"issues around spam, cybersecurity, openness, what are the blocks to
freedom of speech?--they all speak to all Internet users directly," says
Emily Taylor, legal director of Nominet, the U.K. body in charge of the .uk
domain name. She points out that everyone has experienced viruses, but not
everyone might be aware of international differences in approaches to
freedom of speech, beyond the simple examples of government censorship. "I
know, from speaking to ordinary users, that these issues are much more on
their minds than discussions about who manages the Internet and exactly
what is the role of the U.S. government," says Desai.
Australian Women Dive Into IT
Sydney Morning Herald (Australia) (10/31/06) Moses, Asher
Google hopes the $24,000 in scholarships the company recently awarded to
females in Australia pursuing computing-related studies at the university
level will encourage more local women to follow suit. The Google 2006
Australia Anita Borg Scholarship program, which awarded two students with
$5,000 scholarships and 14 others with $1,000 scholarships, marks the first
time the initiative has been offered in Australia. The program has been
available in the United States for the past three years. "By supporting
the next generation of great technical minds, we pay tribute to Anita and
her vision of women in the computer sciences," says Lars Rasmussen, head of
engineering at Google Australia. The company's headquarters in Sydney
recently hosted the women for a networking function. Although more women
in Australia are now IT professionals, according to a new report by
Talent2, the firm's Ian James suggests the negative perception of IT being
for techies and geeks has led many women to pursue careers in other
industries. Nonetheless, James says opportunities abound in IT, and they
are not limited to programming and software development. For information
on ACM's Committee on Women in Computing,
visit
http://women.acm.org
'Big Brother' Call-Router May Stop Interruptions
New Scientist (10/26/06) Simonite, Tom
Researchers in Germany have developed the prototype of an intelligent
call-routing system that is able to determine whether an employee is too
busy to answer the telephone. The intelligent switchboard makes use of
video cameras positioned around the office to provide footage of what the
employee is doing, as well as computer-vision software that can determine
whether the employee is sitting at their desk, talking to a colleague, or
participating in a meeting. The software also monitors computer use. When
the "Connector" system decides that a phone call has come at a bad time for
the employee, it relays a message to the caller that the individual is
unavailable to talk. The system, developed by researchers at Karlsruhe
University, can also suggest that the caller talk to a colleague instead,
while notifying the co-worker of the reason for the transfer. The
technology is designed to address the problems employees face when calls
come at an inappropriate time or the connection is missed. European and U.S.
organizations involved in the Computers in the Human Interaction Loop
(CHIL) project are developing other technologies for the Connector system,
including facial-recognition software and a program that is able to analyze
conversations.
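A minimal sketch of the kind of decision rule such a switchboard might
apply (hypothetical hard-coded logic in Python; the actual Connector system
infers these states with computer vision and activity models rather than
taking them as inputs):

    def route_call(activity: str, typing: bool) -> str:
        """Decide what to do with an incoming call from observed context."""
        if activity in {"in_meeting", "talking_to_colleague"}:
            return "divert: tell the caller the employee cannot talk now"
        if activity == "at_desk" and typing:
            return "suggest: offer to transfer the call to a colleague"
        return "connect: put the call through"

    print(route_call("in_meeting", False))   # divert
    print(route_call("at_desk", False))      # connect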
Research: IT Generation Gap Overblown
eWeek (10/30/06) Rothberg, Deborah
Forrester Research analyst Phil Murphy says it is irresponsible for
bloggers and pundits to encourage the rift between older and younger IT
workers, adding that the old guard is unlikely to retire en masse as
workers reach 65 years of age. Murphy's new report, "CIOs: Avoid War
Between IT's Twentysomethings and More Mature Workers," is his response to
the view that Baby Boomers are on the verge of retiring, few young people
are interested in tech careers, and skilled workers are harder to find. He
believes older workers can no longer afford to retire at 65, which means
many will stay around for a few more years, and that their departure from
the industry will more likely resemble a trickle. Murphy says CIOs should
look at their IT workforce as having complementary skills, and focus on the
"middle third" of workers who will continue to work or try to learn new
skills. He adds that many of the old dogs probably have not had the
opportunity to learn any new tricks, as organizations are often more
"worried that they won't be around long enough to pay back their
investment." Older workers have a lot to teach younger colleagues, with
regard to their business knowledge and relationship with key users, and can
serve as mentors, says Murphy. He also believes legacy technology can be
modernized and that COBOL is not on its deathbed.
Is It Worth Arguing?
University of Southampton (ECS) (10/23/06) Karunatillake, Nishan C.;
Jennings, Nicholas R.
Conflicts in a multi-agent society can be effectively resolved by
argumentation-based negotiation (ABN), although the University of
Southampton's Nishan Karunatillake and Nicholas Jennings caution in
Proceedings of First International Workshop on Argumentation in Multi-Agent
Systems that a considerable amount of time and computational resources must
be devoted to the generation, selection, and assessment of arguments.
There are other ways to address conflicts besides argumentation, such as
evading conflicts altogether by finding an alternative technique to achieve
the same plan, and changing the original plan, also known as re-planning.
The authors advise that it would be advantageous for agents to recognize
such situations and evaluate the pluses and minuses of argumentation prior
to implementing it as a means for resolving conflicts, and they present a
empirical study to assess a simple ABN system's performance in a specific
task allocation scenario. Karunatillake and Jennings model a multi-agent
community and deploy a set of ABN, re-planning, and conflict evasion
methods as a conflict resolution toolkit. The experiment's results
demonstrate that ABN can effectively address conflicts in cases where
resources are restricted, while evasion and re-planning techniques are less
costly and more effective than ABN in scenarios where resources are more
plentiful. The authors also demonstrate that combining evasion and ABN in
a hybrid approach yields superior performance, and they detail a simple
multi-agent context in which conflicts naturally result from the
interaction of agents with differing motivations. Of the various
strategies Karunatillake and Jennings explore, the one with the most
favorable overall performance is usually an approach that uses evasion
first and then argumentation as a last resort.
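A minimal sketch of that policy (in Python; the scarcity scale and
thresholds are invented for illustration, and the paper's decision model is
considerably richer):

    def resolve_conflict(scarcity: float) -> str:
        """Pick a resolution strategy given resource scarcity in [0, 1]."""
        if scarcity < 0.4:    # alternatives likely exist: evade cheaply
            return "evade: achieve the goal by an alternative technique"
        if scarcity < 0.7:    # moderate contention: change the plan
            return "re-plan: modify the original plan"
        # Argumentation costs time and computation (generating, selecting,
        # and assessing arguments) but pays off when resources are scarce.
        return "argue: resolve the conflict through ABN"

    for s in (0.2, 0.5, 0.9):
        print(s, "->", resolve_conflict(s))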
Test Challenges Could Trump Future Chip Designs, Expert
Warns
EE Times (10/31/06) Maniwa, Tets
Portland State University electrical and computer engineering professor
Robert Daasch raised questions about the potential impact of variations in
silicon process generations and devices on future chip designs, during a
recent discussion at the International Test Conference. Daasch, head of the
IC Design and Test Laboratory at the university, noted that chip testing
will also be affected, and that designs will have to account for
variability and integrate test into the design. "New combinations of materials,
coupled with atomic-level granularity, will make the next generations of
semiconductors much more susceptible to device variations," he explained.
"The number and types of failure modes will increase to the point where we
will see failures with no easily discernable physical cause." As defects
increase, fault models will also rise, and impact test costs. Although
more statistical testing could emerge, such a development could shorten the
number and length of tests. What is more, statistical testing could lead
to adaptive formats that would allow for more dynamic testing and
eventually different tests for each die. Future designs will have to take
materials and test design into consideration at the same time.
Mapping Information Flow in Sensorimotor Networks
PLoS Computational Biology (10/06) Vol. 2, No. 10, P. 1301; Lungarella,
Max; Sporns, Olaf
Max Lungarella of the University of Tokyo and Olaf Sporns of Indiana
University demonstrate that sensorimotor interaction and body morphology
induce statistical regularities and information structure in sensory
inputs and within the neural control architecture. The information content
of inputs does not therefore exist separately from output, leading to the
authors' suggestion that neural coding must be considered in light of the
"embeddedness" of the organism within its ecological niche. The
researchers show how the stream of data between sensors, neural units, and
effectors is influenced by interaction with the environment. Analysis of
sensor and motor data gathered from simulated and actual robots illustrates
the presence of information structure and information flow instigated by
dynamically coupled sensorimotor activity, including how sensory inputs
affect motor outputs. Lungarella and Sporns determine that information
structure and information flow in sensorimotor networks are spatially and
temporally specific; they also find that these factors can be shaped by
changes in body morphology as well as by learning.
The results of Lungarella and Sporns' study point to a basic connection
between physical embeddedness and information, which emphasizes the effects
of embodied interactions on neural information processing. Insight into
the role that various system components play in behavior generation is also
revealed, marking a first step toward the creation of a qualitative
framework that ties neural and behavioral processes together; this
framework could offer a key design principle to guide the building of more
efficient artificial cognitive systems, Lungarella and Sporns conclude.
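A hypothetical sketch of this kind of measurement, using Python and NumPy
(Lungarella and Sporns use a family of information measures, including
transfer entropy; this toy version computes lagged mutual information on
synthetic data to show how a directed sensor-to-motor coupling is detected):

    import numpy as np

    def mutual_information(x, y, bins=8):
        """Histogram estimate of I(X;Y) in bits."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float((pxy[nz]
                      * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

    rng = np.random.default_rng(0)
    sensor = rng.normal(size=1000)
    # The motor channel echoes the sensor three steps later, plus noise.
    motor = np.roll(sensor, 3) + 0.3 * rng.normal(size=1000)

    for lag in (0, 3):   # mutual information peaks at the causal lag
        print(lag, round(mutual_information(sensor[:-3],
                                            motor[lag:lag + 997]), 3))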
Innovative Research in the Labs Part IV -- Carnegie
Mellon University
Speech Technology (10/06) Vol. 11, No. 5, P. 40; Jamison, Nancy
Researchers at Carnegie Mellon University have recently announced
simultaneous translation of talks and lectures, as well as recognition of
mouthed but unspoken speech through detection of facial-muscle movement.
While continuing previous projects such as the CMU
Sphinx speech recognition system, which is widely available in open source
form, the CMU Speech project is taking on a multitude of other tasks.
Robust speech recognition technology that can discern language in difficult
acoustic environments is in the works. Researchers are presently focusing
on multisensor processing and signal processing that is motivated by the
form and function of the human system. Further research includes spoken
dialog management architectures for solving complex problems, such as a bus
schedule information provider and a multimodal system for F18 maintenance
workers. Educational projects include FLUENCY, which helps with foreign
language pronunciation, and Project LISTEN, which improves literacy in
children. In addition, the department of electrical and computer
engineering is working on a silicon VLSI chip that implements Sphinx
decoding algorithms.
Cross-Platform Development: Software That Lasts
Computer (10/06) Vol. 39, No. 10, P. 26; Bishop, Judith; Horspool, Nigel
Designing easy-to-port or multi-platform software is not a well-understood
issue in the field of software engineering, but linking components and
toolkits via XML and reflection offers a potential solution, according to
the University of Pretoria's Judith Bishop and the University of Victoria's
Nigel Horspool. The differences between functional, nonfunctional, and
platform changes must be understood in order to facilitate change
management and software adaptation, and platform changes involving the
software's migration to new or additional languages, operating systems,
hardware, or devices constitute the authors' area of concentration. "In
our approach, the idea is to anticipate and build for platform changes from
the beginning" by using certain software development innovations
(cross-platform toolkits, application programming interfaces, virtual
machines, reflection, and XML), Bishop and Horspool explain. The API
supplies the interface to low-level functionality; cross-platform toolkits
are critical to the development of a graphical user interface (GUI) and
improve the chances of a successful transition to a new platform; a program
can monitor and perhaps modify its own structure through reflection, which
allows the programmer to defer deployment decisions to runtime and provides
a new type of program abstraction; virtual machines (VMs) implement both a
computer- and OS-independent machine architecture; and XML is an
outstanding format for data exchange by virtue of its platform independence
and the availability of standard tools on all platforms for manipulating
XML files. Bishop and Horspool write that a high level of platform and
language independence for a GUI library was achieved via middleware
constructed using reflection and controlled with XML-based specifications.
The authors conclude that "Reflection is a software mechanism that
transcends change. Coupling it with XML and toolkits gives a brighter
future for software developers."
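A minimal sketch of that combination in Python (the module and class names
are hypothetical placeholders, not the authors' actual GUI library): an XML
spec names the platform-specific component, and reflection defers the
choice to runtime:

    import importlib
    import xml.etree.ElementTree as ET

    SPEC = """<app>
      <widget platform="gtk" class="mytoolkit.gtk.Button"/>
      <widget platform="win" class="mytoolkit.win.Button"/>
    </app>"""

    def load_widget(platform):
        """Instantiate whatever class the XML names for this platform."""
        for w in ET.fromstring(SPEC).iter("widget"):
            if w.get("platform") == platform:
                mod, _, cls = w.get("class").rpartition(".")
                return getattr(importlib.import_module(mod), cls)()
        raise ValueError("no widget for platform " + platform)

    # load_widget("gtk") would import mytoolkit.gtk and build its Button.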