HPCC - Twenty Years and Counting
HPC Wire (03/03/06) Vol. 15, No. 9; Feldman, Michael
This month the High Performance Computing and Communication (HPCC)
Conference will mark its twentieth anniversary in Newport, R.I., as it
addresses the topic "Mainstream Supercomputing: The Convergence of HPC and
Grid Computing." The conference began when supercomputers emerged in the
1980s, attracting the interest of the Navy and other divisions of the
Department of Defense. While he was president of the Federal Information
Processing Council of New England, HPCC founding organizer John Miguel
launched the first supercomputing conference, attracting manufacturers and
government officials. While originally intended as a one-time-only event, the conference proved so popular that it became an annual occurrence and has spread into a variety of non-defense sectors. Miguel notes
that the conference is a forum for sophisticated discourse on the state and
future of supercomputing technology and networking among representatives of
the government and private sector. The focus of the conference has evolved
away from a vendor showcase in favor of a more general discussion of the
issues facing the community, having included sessions on information
security, information assurance, and homeland security in the last few
years. This year's conference will feature a discussion from Intel's
Stephen Pawlowski, entitled "HPC: Into the Mainstream," a presentation from
Procter and Gamble's Tom Lange on "The Aerodynamics of Pringles," and a
discussion of multicore technology from Louisiana State University computer
science professor Thomas Sterling. Another panel devoted to the real-world
issues facing supercomputing will take up the issue of the disconnect
between users and vendors, as well as assessing the most pressing needs of
government and industry.
Click Here to View Full Article
to the top
The Art of Building a Robot to Love
New York Times (03/05/06) P. 16; Fountain, Henry
Speaking at a conference on human-robot interaction last week in Salt Lake
City, University of Southern California computer science professor Maja
Mataric discussed the emotions that humans wish to see robots express.
Carnegie Mellon graduate student Rachel Gockley found that people are more likely to engage with a robotic receptionist when it appears unhappy.
Reporting the findings of his driving simulations, Stanford communication
professor Clifford Nass said that people interact most effectively with
robots when the devices act like humans, suggesting that researchers must
formalize human emotions to the point where they can be modeled. Nass
believes that the shortest path to optimizing human-robot interaction will
be for robot developers to pinpoint how humans fake sincerity and
incorporate that capability into their devices. Given that emotional cues
guide people in their interactions with each other, Notre Dame's Matthias Scheutz believes that emotions will be central to a human's ability to understand a robot's motivations and predict its actions.
Conversely, when robots take on human-like qualities there often arises a
host of expectations. "As soon as it shows the characteristics of anything
animate, then they start projecting or anthropomorphizing," said Scheutz.
The roboticists at the conference agreed that the technology is still far from maturity, and that people are best advised to view the robots they interact with more as pets than as machines expected to function with the cognitive and emotional abilities of a fellow human.
Click Here to View Full Article
- Web Link May Require Free Registration
to the top
BlackBerry Case Could Spur Patent-Revision Efforts
Wall Street Journal (03/06/06) P. B4; Heinzl, Mark; Guth, Robert
Although the recent $612.5 million settlement between BlackBerry maker
Research In Motion (RIM) and NTP appears to have ended the long-running
patent fight between the two, legal experts and technology executives say
the case could ultimately spur efforts to amend the U.S. patent system.
Defenders of the U.S. patent system say stiff measures are needed to
protect the rights of inventors and encourage innovation. However, many
technology executives argue that injunction provisions under U.S. law are
outdated and enable patent-holding firms to extract prohibitive payments
from others seeking to bring useful products to the market. "It won't be
too long before this brand of litigation triggers a backlash, in the form
of patent reform, proposals for which have languished in Congress for
years," says patent lawyer Matthew D'Amore. Though RIM still believes the U.S. Patent & Trademark Office will eventually reject all of the NTP patents at issue in the BlackBerry dispute, the legal battle's
negative effect on BlackBerry sales in the United States helped force the
company's hand, said RIM Chairman Jim Balsillie. "There is an urgent need
for patent reform," Balsillie said. D'Amore agreed: "Already the Oracles
and Microsofts--the formidable technology companies that use patents
defensively--are starting towards a serious push towards reform," he
noted.
Click Here to View Full Article
- Web Link May Require Paid Subscription
to the top
Security Tackles Smartcard Hackers
Electronics Weekly (UK) (03/06/06) Evans-Pughe, Christine
Hackers have been able to exploit computers with smart cards or other
types of accessible security chips by monitoring execution time, power
consumption, and other characteristics and extracting security keys through
statistical analysis. Research has demonstrated that microprocessors, DSP,
FPGA, and ASIC encryption systems can all be compromised within minutes.
The best-known attack method is differential power analysis (DPA), which exploits the fact that a circuit draws extra current whenever a bit makes a zero-to-one transition, giving an attacker a window into the machine's power consumption as it processes hundreds of cryptographic computations.
Statistical analysis can then retrieve the secret key based on the
fluctuation of power consumption. A new technique that uses small coils or
other magnetic sensors has made this type of spying harder to guard
against, and even smart cards that minimize the signals emitted by the
leaky parts of the circuit cannot prevent DPA attacks. Time-dependent
preventive measures could throw off the monitoring with dummy calculations
or variable clock periods, said Cryptography Research's Ken Warren.
Cryptography Research has also developed more sophisticated software and hardware to build randomness into the calculations, as well as the ability to change a key faster than the circuit leaks information about it. "If you calculate
the circuit is leaking half a bit per transaction and you change the key
every 10 transactions, you know the attacker can't get enough information
to find the key," said Warren. This type of diversification technique
forms the basis of the leak-proof algorithms that secure data at financial
institutions. Other research suggests that electromagnetic emission
simulation could improve security. STMicroelectronics, Philips, the
University of Nice, and others are also exploring a system-level
methodology for incorporating security into device designs.
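Warren's rotation arithmetic is easy to make concrete. A minimal Python sketch, using his example figures plus an assumed 128-bit key (the key size is an illustrative assumption, not an actual product parameter):

    def bits_leaked_per_key(leak_rate_bits, transactions_per_key):
        # Total side-channel information an attacker can gather before rotation.
        return leak_rate_bits * transactions_per_key

    def key_survives(key_bits, leak_rate_bits, transactions_per_key):
        # True if rotation outpaces leakage: the attacker never accumulates
        # enough information to reconstruct the full key.
        return bits_leaked_per_key(leak_rate_bits, transactions_per_key) < key_bits

    # Warren's example: half a bit leaks per transaction, key changes every 10.
    print(bits_leaked_per_key(0.5, 10))   # 5.0 bits observed per key
    print(key_survives(128, 0.5, 10))     # True: 5 bits is far short of 128

Under these assumptions, rotating after 10 transactions caps the leak at 5 bits per key, far short of what an attacker would need to recover a 128-bit key.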
Click Here to View Full Article
to the top
Women in IT: Yvonne Parle
Computerworld Australia (03/03/06) McConnachie, Dahna
In a recent interview, Yvonne Parle, the manager of information management
at Western Australia's Chamber of Commerce and Industry, described her
experience as a woman in the IT workforce. Parle's teachers encouraged her
to take science classes, and she first became interested in computers while
organizing the library of a pharmaceutical company with an index-card
system when a co-worker suggested she design a computer program. Parle's
most difficult times in her career came while she was working in London in
the 1970s as the only woman in the data center. Her favorite job was at
Chase Manhattan, when she first managed a technical team. While female
involvement in IT has increased over the years, Parle still believes that
people view it as an odd career path for women, particularly when they
gravitate toward the more technical vocations. She has encountered
frequent disappointment at seeing her own opinions on a technical matter
accepted only after being echoed by a male colleague. Because women
continue to be judged according to outdated career stereotypes, Parle
believes that women in IT must champion the career to female students,
particularly as the adversarial perception of "us versus them" between
technology departments and the rest of business disappears. Parle is also
a founding member of Western Australia's Women in IT (WIT), which promotes
IT as a career path to women, and will speak on the historic role of women
in technology at the upcoming Go Girl, Go for IT Careers Showcase.
Click Here to View Full Article
to the top
Artificial Intelligence Gains Momentum
Memphis Daily News (03/01/06) Vol. 121, No. 49; Guy, Rosalind
The FedEx Institute of Technology (FIT) at the University of Memphis is
launching a robotics research center later this month to develop a robotic
vehicle capable of integrating safely with a human workforce. So far the
lone corporate backer, FedEx, has given the center $120,000 in the form of
one-year seed money, though the FIT's Eric Mathews is confident that the
center's research will attract the attention of other sponsors. The idea
for the center came from FedEx's interest in yard management--creating a
vehicle that could navigate the tarmac and move cargo bins around while
being able to sense and avoid objects. Ultimately, the center wants to
create a robot that can safely operate in a warehouse alongside humans.
Mathews likened the focus of the center to last October's DARPA autonomous
vehicle competition, though instead of a predetermined course, the center
aims to create devices that can move objects around in a real-life setting
with an awareness of the other machines' locations. "We're talking about
having a global view of patterns of things being moved around and what
needs to go where," said Mathews. "So you have to have global
intelligence, but you also need local intelligence so that if a person
steps in front of it, that person doesn't get killed." The project is
planned over five years, with the early stages to be spent developing
smaller robots before moving on to a more ambitious application endowed
with the ability to navigate and operate the brakes, gas, and other
functions of a vehicle. Mathews does not envision robots ever replacing
humans as a source of labor, but rather working alongside them to increase
productivity and reduce errors.
Click Here to View Full Article
to the top
Hunt Intensifies for Botnet Command & Controls
eWeek (03/02/06) Naraine, Ryan
A team of security researchers is stepping up efforts to locate and
disable the command-and-control infrastructure that powers millions of
zombie drone machines, or bots, hijacked by malicious hackers. "If that
command-and-control is disabled, all the machines in that botnet become
useless to the botmaster," says Gadi Evron, a CERT manager in Israel's
Ministry of Finance. "It's an important part of dealing with this
problem." Evron and the team, which consists of representatives from
anti-virus vendors, ISPs, law enforcement, educational institutions, and
international DNS providers, have launched a public, open mailing list to
encourage the general public to help report botnet C&C servers. The
mailing list will be a place to discuss detection techniques, report
botnets, send information to the appropriate private groups, and
automatically notify the relevant ISPs of command-and-control sightings.
Experts say CIOs should put the threat of botnets high on their lists of priorities as these networks become more dangerous. Command-and-control shutdowns
are just a small part of fighting the problem. Evron predicts that experts
in the anti-virus, anti-phishing, anti-spyware, and anti-spam industries
will all work together on research and development to help stop the growth
of botnets.
Click Here to View Full Article
to the top
Computer Scientist Sorts Out Confusable Drug Names
ExpressNews (University of Alberta) (03/02/06) Smith, Ryan
In an effort to curb the spread of medication errors stemming from similar
drug names, the FDA contracted Project Performance Corporation (PPC) to
develop a software application to eliminate confusion among the more than
4,400 FDA-approved drugs in circulation. PPC contacted University of
Alberta computer science professor Greg Kondrak, who had developed the
ALINE linguistic and bioinformatics program to distinguish between
similar-sounding words in his doctoral work. Kondrak gave PPC the ALINE
program, and devised the BI-SIM program to compare and analyze the spelling
of words. PPC combined Kondrak's two programs to provide the FDA with a
framework for assigning new drug names based on how likely they are to be
confused with existing drugs either phonetically or orthographically. "The
FDA used to have dozens of people scouring the lists of names to check if
the proposed ones were too similar to any of them," said Kondrak. "But now
one person using PPC's system can identify sound-alike and look-alike drug
names with great accuracy in a matter of seconds." Computer scientists and
linguists have embraced the ALINE program for other applications, as well.
Kondrak is pleased with the warm reception that his program has enjoyed,
particularly in light of objections, raised while he was writing ALINE, that it would never have a practical use.
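By way of illustration only, a simple bigram Dice coefficient in Python gives the flavor of orthographic comparison; this stand-in is not Kondrak's actual BI-SIM algorithm, and the drug names below are arbitrary examples:

    def bigrams(name):
        # Set of adjacent letter pairs in a lowercased name.
        s = name.lower()
        return {s[i:i + 2] for i in range(len(s) - 1)}

    def spelling_similarity(a, b):
        # Dice coefficient over letter bigrams: 0.0 (disjoint) to 1.0 (identical).
        ba, bb = bigrams(a), bigrams(b)
        if not ba or not bb:
            return 0.0
        return 2 * len(ba & bb) / (len(ba) + len(bb))

    print(spelling_similarity("Zantac", "Xanax"))    # ~0.22: some shared spelling
    print(spelling_similarity("Zantac", "Aspirin"))  # 0.0: no shared bigrams

A screening system along these lines would score a proposed name against every approved name with an orthographic measure plus a phonetic counterpart, flagging high-scoring pairs for human review.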
Click Here to View Full Article
to the top
Digital Entertainment on Every Channel
Fraunhofer-Gesellschaft (03/06)
The future of digital media will be on display from March 9-15, 2006, at
CeBIT in Hanover, Germany. Researchers from the Fraunhofer Institute for
Integrated Circuits IIS in Erlangen envision in-car surround sound, and
have combined the MPEG Layer-2 coding used in Digital Audio Broadcasting
(DAB) with the new MPEG Surround technology to create DAB Surround, which
will produce surround sound using a bit rate 5 Kbps above stereo data
rates. Fraunhofer engineers are also bringing surround sound to handheld
devices such as mobile phones or PDAs through Ensonido, which uses special
filters that take features of the human head and ear into consideration to
adjust acoustic signals, so that sounds can be distinguished in headphones
on the left, right, front, and rear. Also at CeBIT, researchers from the
Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut (HHI)
in Berlin will introduce a system that will harmonize the emerging industry
standards for bringing television to mobile phones or PDAs. HHI is working
with Siemens, Sony, T-Systems, and Vodafone on an approach that makes use
of the Internet Protocol (IP). Meanwhile, the Fraunhofer stand will include a demonstration from researchers at IDMT on IAVAS (Interactive Audiovisual Application Systems), while HHI engineers turn their attention to the high compression of data needed to deliver razor-sharp TV images. A
final area of focus has been to develop an ultra-low-delay audio codec for
living-room surround sound, via wireless loudspeakers. "The leap in
quality is comparable to the transition from mono to stereo," says
Karl-Heinz Brandenburg, director of IDMT, which has developed the
loudspeakers.
Click Here to View Full Article
to the top
Push to Create Standards for Documents
New York Times (03/03/06) P. C2; Lohr, Steve
More and more government records and documents are being created and
stored in purely digital form, which has raised concerns that public
information might become locked into proprietary formats designed for
software that may someday be obsolete. In order to address this potential
problem, the OpenDocument Format Alliance has been created by 30 companies,
trade groups, academic institutions, and professional organizations in an
effort to promote government adoption of open technology standards. "The
goal is to ensure that the largest number of people possible are able to
find, retrieve, and meaningfully use government information," said the
American Library Association's Patrice McDermott, who says the problem is
bad and is getting worse. The National Archives and Records Administration
is currently in the midst of a costly project designed to ensure that electronic documents it saves from federal agencies can be opened and read.
The alliance backs the OpenDocument Format for typical word processing,
presentation, and spreadsheet documents, which today are overwhelmingly
stored in proprietary Microsoft Office formats. Though a number of the
alliance's members are Microsoft rivals such as IBM and Sun Microsystems,
Sun's Simon Phipps said that "This is not a partisan, anti-Microsoft
group." However, Microsoft backs another open document standard called the
OpenXML Document Format, which will be the default format for the
forthcoming Microsoft Office 2007. Other supporters of the OpenXML
format--which Microsoft submitted to the standards body Ecma International
in 2005--include Intel, Apple, Toshiba, BP, and the British Library, said
Microsoft's Alan Yates.
Click Here to View Full Article
- Web Link May Require Free Registration
to the top
Study: Cell Phones a Hazard on Flights
CNet (03/01/06) Termen, Amanda
Loose enforcement of the ban on cellular calls on airplanes could have
potentially disastrous consequences because the risk of radio emissions
from cell phones disrupting vital instruments such as GPS receivers is
higher than previously thought, according to a study conducted by Carnegie
Mellon University researchers. The research team carried broadband
antennas and spectrum analyzers with them onto random U.S. flights to pick
up cell phone signals, and discovered that one to four calls are made on
every U.S. commercial flight, on average. The study showed that signals
emitted by other electronic devices such as laptops and game consoles could
also be potentially harmful on flights. The in-flight mobile phone ban was
originally established to prevent such calls from interfering with
ground-based cell phone conversations and planes' radio communications, and
the FCC said this danger could be neutralized through technical
advancements. The FBI and the Department of Homeland Security responded negatively to the FCC's proposal to lift the ban out of concern that it would constitute noncompliance with wiretapping guidelines. The Carnegie
Mellon researchers are also opposed to the ban's elimination, and they
urged the design of special tools flight crews could use to track the use
of electronic devices during critical points in the flight.
Click Here to View Full Article
to the top
RIM: Web Services Will Trump Mobile Browsers
eWeek (03/01/06) Hines, Matt
Though litigation over its BlackBerry handhelds has clouded Research In Motion's (RIM) future, senior product manager David Heit believes the key to mobile application delivery for businesses lies in new Web services-based development tools, not in the mobile browsers many in the industry are pushing. "The assumption is that the mobile browser experience should be
the same as the desktop experience, but we believe that the usage patterns
are different than when you're sitting at your desktop, versus when you're
working with a mobile device," says Heit. "The mobile experience is much
more about immediacy and having information available when you need it.
Web services represent a third development model beyond browsers and
something like Java, and they will greatly increase our ability to extend
applications onto the handheld." In accordance with this belief, RIM has launched BlackBerry MDS Studio, a visual design and assembly tool that lets software developers build applications for mobile devices more quickly by dragging and dropping components. The technology has gained a
following: Real estate specialist JJ Barnicke, for example, has built a
field sales automation tool for its agents. But analysts say that if such technology is to gain a greater foothold, wireless carriers will have to embrace it first. "At the end of the day, the U.S. market is all about
control by the carriers, and from their perspective pushing Web services
through their portals gives them a lot more control, so there could be some
resistance," says Current Analysis analyst Brad Akyuz. "I don't think that
there's much question that someday the predominant way for delivering
applications to mobile handsets will be push-based services built on Web
services," he notes. "But the manner in which carriers embrace all of
this, which mostly remains to be seen, will have a significant impact on
where and when we see these types of applications showing up."
Click Here to View Full Article
to the top
The Emergence of Interactive Supercomputing
Applications
Always On (02/28/06) Wladawsky-Berger, Irving
The steady improvement in the performance quality of computing
environments has been driven by their accelerated responsiveness, cutting
run times from hours to seconds, writes IBM's Irving Wladawsky-Berger.
From their inception, graphical user interfaces have appealed to their
users' instinctive sense of sight and sound, and have become increasingly
interactive. Long-running, computation-intensive supercomputing applications are on the verge of embracing interactivity, as microprocessors, storage, and other technologies have improved dramatically while prices continue to fall and architectures advance. IBM's BlueGene attains a high
level of performance from low-power versions of its Power Architecture by
using multiple versions in parallel, netting the system five of the top 15
spots on the Top500 list of the world's most powerful supercomputers. IBM
has also unveiled a new high-performance version of its BladeCenter
architecture that, with an almost exponential increase in internal
bandwidth, is poised to take on a host of supercomputing applications. A
new blade has appeared based on the Cell processor, a Power
Architecture-based high-performance processor that Sony, Toshiba, and IBM
jointly developed for the PlayStation 3. Today's supercomputing
applications are used for modeling, simulation, and analyzing vast amounts
of data, and many are already moving toward interactivity. Interactive
supercomputing could have significant applications in medical diagnosis,
automotive engineering, or oil discovery. Doctors could train on surgical
simulators modeled after flight simulators, realistically demonstrating the
conditions that arise when things go wrong in a procedure.
Click Here to View Full Article
to the top
The Next Generation of In-House Software
Development
McKinsey Quarterly (02/06) Marwaha, Sam; Patil, Samir; Tinaikar, Ranjit
Some trailblazing companies have figured out how to build a
customized-applications environment that incorporates the advantages of
packaged software by adopting software vendors' "write-once, widely sell"
strategy for packaging and selling applications designed to fulfill group
rather than individual needs. Standardization of maintenance, support, and
software-management processes shared by groups of applications as products
is an approach some companies have followed to transform components of
custom-applications support into packaged activities. Reduced costs for
applications maintenance and accelerated deployment of new applications are
among the benefits of this tactic. Application "owners" and developers face less of a management burden, while per-seat, per-application pricing lowers costs and improves cost transparency. In
addition, companies that consolidate applications in shared services
realize full value, resource utilization increases substantially, service
levels become more manageable, and activities for applications of the same
archetype become standardized. Early adopters of the product-oriented
approach have demonstrated the wisdom of building the right products
"prospectively" with both present and future needs considered. Other
important steps to follow include organizing groups to deliver products
effectively against business needs as well as technology outcomes, and
keeping in mind organizational factors to guarantee proper governance and
yield business advantages.
Click Here to View Full Article
- Web Link May Require Free Registration
to the top
M2M: More Than a Modem
Portable Design (02/06) Vol. 12, No. 2, P. 22; Quinnell, Richard A.
Design teams are often not ready to meet a crucial challenge to developing
portable machine-to-machine (M2M) applications: Obtaining the
certifications that sanction implementation. This involves complying
concurrently with the requirements of government regulatory bodies,
wireless standards bodies, and the wireless service developer, each of
which has its own unique requirements and certification standards. To
acquire government certification, it must be proved that the application's
design meets the regulatory limits for the given radio equipment, which
requires rigorous testing of broadcast power levels as well as frequency
spectra; testing has to encompass the entire product, and any subsequent
design changes following certification may require recertification.
Developers must also appease different wireless standards entities,
depending on whether the product is based on CDMA and/or GSM: The CDMA
Development Group (CDG) handles certification for the former, while the PCS-1900 Type Certification Review Board (PTCRB) and the Global Certification Forum (GCF) certify for the latter in North America and
Europe, respectively. Products must then be certified by carrier service
providers to guarantee compliance with network standards, and this can be
an aggravatingly long and arduous stage that may easily entail six
additional weeks of work. Design modifications to satisfy the certifying
organization may be necessary at any stage, which could force a
certification process restart or at least a fast retest; newcomers to the
M2M market could get a leg up by jointly going through the certification
process with the radio component provider.
Click Here to View Full Article
to the top
IPod Obedience Training
Discover (02/06) Vol. 27, No. 2, P. 24; Johnson, Steven
As technology becomes increasingly attuned to the preferences of its users, it may be time to update the lexicon of verbs that describe its functions. Each successive innovation brings into circulation a new
batch of verbs, from the TV's "change the channel" to the CD player's
"track up" and "track down." The command of moving to the next chapter or
track assumes that the instrument has an understanding of the structure of
its own content. The iPod is a prime example of a digital device whose
content format has outpaced its menu of functions. The iPod is incapable
of expunging a song from its shuffle-mode rotation even if a user
consistently skips past it. Smart algorithms are enabling devices to make
more and more decisions for us, though even the most sophisticated
algorithms can be improved with training. Software that provides
recommendations could be improved with the addition of two new verbs:
Remove and ignore. Remove would tell the device never to make a particular
recommendation again, while ignore would instruct a device that learns by
observing behavior not to incorporate a user's action into its future
recommendations. With these additions, users could prevent their TiVos from repeatedly recording the same unwanted program on the assumption that it interests the viewer, and could flag aberrant purchase or content decisions so that applications such as Amazon's recommendation system do not learn from them. Some technologies have begun to include
variations of this feature, such as TiVo's "thumbs up" and "thumbs down"
buttons, and the "no follow" hypertext standard that the major search
engines have adopted. Apple's iTunes allows users to take a song out of
shuffle circulation, though the process is hardly intuitive. What these
functions are missing is universality--the simple iconography that defines
a forward arrow as "Play," and the square as "Stop"--which will only become
more important as more of our decisions are made by recommendation
algorithms.
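A minimal Python sketch of how the two proposed verbs might behave (Recommender and its methods are hypothetical names for illustration, not any real device's API):

    class Recommender:
        def __init__(self):
            self.history = []     # observed actions the model learns from
            self.blocked = set()  # items the user has permanently removed

        def observe(self, item, ignore=False):
            # "Ignore" verb: mark an action as aberrant so it is excluded
            # from future learning.
            if not ignore:
                self.history.append(item)

        def remove(self, item):
            # "Remove" verb: never recommend this item again.
            self.blocked.add(item)

        def recommend(self):
            # Suggest previously seen items, skipping anything removed.
            return [i for i in self.history if i not in self.blocked]

    r = Recommender()
    r.observe("jazz album")                  # normal behavior: learn from it
    r.observe("gift purchase", ignore=True)  # aberrant: do not learn from it
    r.remove("unwanted rerun")               # never suggest this again
    print(r.recommend())                     # ['jazz album']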
Click Here to View Full Article
to the top
Biometrics Becomes a Commodity
IT Architect (02/06) Vol. 21, No. 2, P. 46; Dornan, Andy
The limitations of password-based security are driving the growth of
biometric solutions, but IT departments considering biometrics must be
mindful of three facts: The only physical biometrics set for widespread
use in authentication for the foreseeable future are fingerprints;
biometrics' effectiveness depends on the technology being incorporated into
a multifactor authentication framework that includes passwords or hardware;
and physical biometrics is the optimal choice for local physical security
rather than direct access to networked resources. Authentication by
automated face recognition systems is currently beset with a high rate of
false positives, while DNA authentication is likely to remain in the realm
of science fiction because of cost, privacy, and feasibility issues.
Voiceprints have a better chance of mainstream acceptance, but the ease of
voiceprint analysis and counterfeiting is a major drawback, though blending
voiceprints with other techniques could be advantageous. The inability to
keep biometric identifiers private is a weakness common to all biometrics
solutions, which is why IT departments must establish privacy safeguards to
protect a user's fingerprints, as well as ensure the prints have not
already been exposed by some other system to which the user has
authenticated. Preventing the transmission of biometric templates across a
network or their storage in a central repository, usually through a
combination of the biometric factor and a hardware device, is the best
strategy for shielding user privacy. An attacker can be deterred from
accessing either private keys or the biometric template by storing them on
the PC's cryptographic coprocessor, or TPM. The TPM could provide a layer
of interoperability--a capability lacking in all fingerprint-based
authentication systems--because it is a standardized technology.
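A minimal sketch of the "match locally, release only a proof" design described above, with hypothetical names throughout; a real system would do the matching inside a TPM or secure element rather than a Python object, and would score noisy samples rather than test exact equality:

    import hashlib
    import hmac
    import os

    class SecureElement:
        # Stand-in for tamper-resistant hardware: holds the enrolled template
        # and a device-bound key, and releases only a signed proof on a match.
        def __init__(self, template):
            self._template = template  # never transmitted or stored centrally
            self._key = os.urandom(32)

        def authenticate(self, sample, challenge):
            # Exact equality is a toy stand-in for a real fingerprint matcher.
            if hmac.compare_digest(sample, self._template):
                return hmac.new(self._key, challenge, hashlib.sha256).digest()
            return None  # neither the template nor the key is ever exposed

    se = SecureElement(template=b"enrolled-fingerprint-features")
    proof = se.authenticate(b"enrolled-fingerprint-features", challenge=b"nonce42")
    print(proof is not None)  # True: the verifier sees only the challenge response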
Click Here to View Full Article
to the top
Next-Gen Libraries
Campus Technology (02/06) Vol. 19, No. 6, P. 43; Villano, Matt
Digital library efforts are in a constant state of flux, but several
projects are encouraging. One such initiative is the Plowshares Project, a
collaboration between Indiana's Goshen, Manchester, and Earlham colleges to
digitize local historical archives focusing on studies of peace and social
justice affiliated with the Quaker, Brethren, and Mennonite denominations;
the library is but one component of a larger institutional effort,
according to Earlham's Tom Kirk. Copyright issues and metadata quality and
uniformity have been challenges for the Plowshares Project, notes Goshen
College library director Lisa Guadea Carreno. The University of
Washington's DigitalWell, or D-Well, project seeks to archive large audio
and video files, and its file management architecture consists of a
standard central computer with clustered, scalable-on-demand servers.
D-Well can also interoperate with other digital library systems throughout
the academic sector thanks to the proprietary Storage Resource Broker and a
middleware system built by the University of California at San Diego's San
Diego Supercomputer Center. The University of Michigan is compiling
digital collections through the use of Digital Library eXtension Service,
standalone content scanning and management software that aids schools
without digital libraries in the rapid setup of such archives. The
University of Virginia, meanwhile, is digitizing collections of old
documents and images, while most of its library consists of scanned,
multilingual content or "e-texts" that are searchable by topic, word, and
character.
Click Here to View Full Article
to the top
Model-Driven Engineering
Computer (02/06) Vol. 39, No. 2, P. 25; Schmidt, Douglas C.
Software developers can use third-generation languages and reusable
platforms to better protect themselves from the complexities associated
with generating applications using earlier technologies. But the evolution
of platform complexity has overtaken the ability of general-purpose
languages to conceal it, leaving several major problems unaddressed and
increasing developers' difficulty in determining which application segments
are vulnerable to side effects stemming from changes in user requirements
and language/platform environments. Model-driven engineering (MDE)
technologies show promise as tools for handling platform complexity and
third-generation languages' inability to ease complexity as well as
effectively express domain concepts. This is accomplished through a
combination of domain-specific modeling languages (DSMLs) that can
formalize structure, behavior, and requirements within specific domains,
and transformation engines and generators that facilitate the analysis of
certain model types and the synthesis of artifacts. MDE tools are informed
by the results of prior initiatives to develop higher-level platform and
language abstractions, and can dictate domain-specific restrictions and
execute model checking to find and deter numerous errors early in the life
cycle. For MDE tools to successfully migrate from early adopters to
mainstream software developers, useful standards that enable effective and
portable tool/model interoperability must be defined. Standards must also
be complemented by a sturdy infrastructure for supporting the development
and evolution of MDE tools and applications. Complex systems cannot be
developed with just models, so MDE is designed to integrate, enhance, and
leverage other tools such as patterns, model checkers, third-generation and
aspect-oriented languages, application frameworks, component middleware
platforms, and product-line architectures.
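A toy Python sketch of the pipeline the article describes, with assumed names throughout: a domain-specific model, a constraint check that catches errors early in the life cycle, and a transformation engine that synthesizes an artifact:

    MODEL = {
        "service": "OrderService",
        "fields": [("order_id", "int"), ("amount", "float")],
    }

    ALLOWED_TYPES = {"int", "float", "str"}

    def check_model(model):
        # Domain-specific constraint check, run before any code is generated.
        for name, ftype in model["fields"]:
            if ftype not in ALLOWED_TYPES:
                raise ValueError(f"field {name!r} has unsupported type {ftype!r}")

    def generate(model):
        # Transformation engine: synthesize a Python class from the model.
        lines = [f"class {model['service']}:"]
        args = ", ".join(f"{n}: {t}" for n, t in model["fields"])
        lines.append(f"    def __init__(self, {args}):")
        lines += [f"        self.{n} = {n}" for n, _ in model["fields"]]
        return "\n".join(lines)

    check_model(MODEL)      # fail fast on domain errors
    print(generate(MODEL))  # emit the artifact from the validated model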
Click Here to View Full Article
to the top