Change Your Appearance, Not Your Shirt
New Scientist (04/07/06) Biever, Celeste
Researchers at the University of Connecticut in Storrs have used
electrospinning to make longer electrochromic polymer fibers for clothes
that can change color. Until now, researchers had dissolved
molds to shape electrochromic polymers into thin cylinders, but the
resulting fibers were a tenth of a millimeter long, which is too short for
woven or knitted fabric. "These are the first long fibers that have the
ability to change color," says Greg Sotzing, who is currently able to
change fibers from orange to blue and from red to blue. The
1-kilometer-long threads of electrochromic polymers that Sotzing and
colleagues have developed can change colors when a different voltage is
applied. The thread is washable, and a mixture of different colors can be
knitted or woven into a T-shirt or blanket, which would also include thin
metal wires connected to a battery pack and a microcontroller. The small
number of criss-crossing wires would divide the garment into pixels. The
wearer of a T-shirt would be able to change its color to fit his or her
mood or outfit. Also, the controller could be connected to a camera,
allowing the pixels to be switched so that the shirt's pattern matches the
wearer's surroundings.
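
As a rough illustration of that camera-matching idea (not the Connecticut
team's actual control scheme), the sketch below averages a camera frame over
a small, assumed grid of fabric pixels and snaps each cell to the nearest
color the fibers can display; grid size, palette values, and the frame are
all hypothetical:

    # Illustrative sketch: map a camera frame onto a small grid of
    # electrochromic fabric "pixels". Grid size and palette are assumptions.
    GRID_W, GRID_H = 4, 4                       # assumed fabric pixel grid
    PALETTE = {"orange": (250, 140, 20),        # colors the fibers can show
               "red": (220, 30, 30),
               "blue": (30, 60, 220)}

    def nearest_fabric_color(rgb):
        """Return the palette color closest to an average RGB value."""
        return min(PALETTE,
                   key=lambda name: sum((a - b) ** 2
                                        for a, b in zip(PALETTE[name], rgb)))

    def frame_to_fabric(frame):
        """frame: 2-D list of (r, g, b) camera pixels.
        Returns a GRID_H x GRID_W grid of palette color names."""
        fh, fw = len(frame), len(frame[0])
        cells = []
        for gy in range(GRID_H):
            row = []
            for gx in range(GRID_W):
                # average the camera pixels that fall inside this fabric cell
                ys = range(gy * fh // GRID_H, (gy + 1) * fh // GRID_H)
                xs = range(gx * fw // GRID_W, (gx + 1) * fw // GRID_W)
                pix = [frame[y][x] for y in ys for x in xs]
                avg = tuple(sum(p[i] for p in pix) // len(pix) for i in range(3))
                row.append(nearest_fabric_color(avg))
            cells.append(row)
        return cells

    frame = [[(250, 130, 20)] * 8 for _ in range(8)]   # stand-in camera frame
    print(frame_to_fabric(frame))                      # every cell maps to "orange"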
Is There a Robot in Your Future? Helen Greiner Thinks
So
Knowledge@Wharton (04/18/06)
Former MIT student and iRobot co-founder Helen Greiner notes in an
interview that consumers' resistance to domestic robots, which stems from
fears that robots will make humans obsolete, tends to wear off once they
see a device in action and understand what it can and cannot do. She says
her company's customer base, which is primarily
homemakers, will buy domestic bots such as the Roomba vacuum cleaner and
grow attached to them as if they were pets, even going so far as to give
names to the robots. Greiner says liability issues with practical consumer
robots are minimized by considering the dangers such machines could present
to the home, and designing the product around those risks. She observes
that an autonomous robot's mechanical systems and its intelligence should
complement each other, pointing out that the robot's sensory systems play
an important role in the device's intelligence capabilities, while the
mechanics, when done right, can greatly simplify the robot. Greiner finds
that programming tasks can be substantially streamlined via intelligent
mechanics, while cheaper sensors can yield more information through
excellent low-level software; "So designing the system as a whole is what
has pushed our robots ahead," she maintains. Many robots are originally
designed for military applications in order to reduce human
casualties--bomb-sniffing, for example--but Greiner reports that such
robots are often adapted for non-military and civilian use. She expects to
see robots moving into academia as tools for learning and getting students
interested in technology, especially at the high-school level. Among the
research areas iRobot is investigating is swarm intelligence, in which
large teams of robots collaborate to perform tasks collectively through
distributed algorithms.
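
The swarm work is described only in general terms; a minimal sketch of the
kind of distributed allocation such systems rely on (not iRobot's own
algorithm) might have each robot claim the nearest unclaimed task using
shared position data:

    # Minimal sketch of distributed task allocation in a robot swarm: each
    # robot greedily claims the nearest unclaimed task. Illustrative only;
    # not iRobot's method.
    import math

    def allocate(robots, tasks):
        """robots, tasks: dicts of name -> (x, y). Returns robot -> task."""
        unclaimed = dict(tasks)
        assignment = {}
        for name, pos in robots.items():        # each robot decides locally
            if not unclaimed:
                break
            best = min(unclaimed, key=lambda t: math.dist(pos, unclaimed[t]))
            assignment[name] = best
            del unclaimed[best]                 # claim is broadcast to peers
        return assignment

    print(allocate({"r1": (0, 0), "r2": (5, 5)},
                   {"sweep_hall": (1, 1), "sweep_lab": (6, 4)}))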
Thinking Beyond the Box
Michigan Daily (04/06/06) Bond, Eston
The flexibility and openness of the Internet have widened the range of
options available to science & technology undergraduates, and some of the
more entrepreneurial students are leaving the academic world to go after
lucrative corporate opportunities. One such student is University of
Michigan engineering senior Jeremy Linden, who considers current
undergraduate research to be too theoretical and not very practical. "I've
always wanted to start my own business, so I'm interested in ideas that
have real-world, business applications," Linden explains. College of
Engineering professor Farnam Jahanian, who co-founded the Arbor Networks
network security company, does not see the lure of online business
opportunities to students affecting current research in a significant way:
He says that while undergraduates can benefit from internship experience,
research can help students get ready for graduate-level programs. Jahanian
adds that a mass undergraduate defection is a non-issue, and notes that a
substantial portion of the university's research comes from people seeking
more advanced degrees. Despite the robust health of the school's
engineering research, Jahanian thinks the concern of undergraduates who
leave the university for entry-level corporate positions is a valid issue
that must be addressed, and that the College of Engineering and its funding
sources should carefully consider the school's competitiveness as other
institutions and nations make economic gains. Jahanian and others suggest
greater emphasis on independent study, the allocation of additional
fellowships and assistantships, more in-depth undergraduate classes, and
greater federal investment in basic research as strategies for shoring up
the university's research programs. "If you want the research quality to
be high, the university needs to keep doing what it does best: Hire
professors at the top of their field, who will in turn attract the best
graduate students," says Linden.
Mash-Ups and 9 Other Wacky Web Ideas
InformationWeek (04/03/06) No. 1083, P. 43; Claburn, Thomas; Ricadela,
Aaron; Malykhina, Elena
Among 10 startups that could potentially bring Web 2.0 to the enterprise
is Blinkx, an Internet search company whose free Pico desktop search
toolbar can infer the meaning of what users are viewing and retrieve
relevant information from across the Web, and whose blinkx.tv video search
engine can index spoken words via speech-to-text conversion. Recently
acquired Google subsidiary Upstartle has developed Writely, an online word
processor that eliminates the need for repeated trips to the Web server
when composing documents online, while SimpleFeed applies Really Simple
Syndication (RSS) feeds toward the cultivation of customer relationships
and revenue by enabling people to choose and subscribe to topics of
interest from companies they do business with. Six Apart's Movable Type
software for blogging and social networking is very popular thanks to its
simplicity, and Socialtext capitalizes on the use of firewall-guarded wikis
as tools for collaboration and information access. An example of effective
mash-ups, or combinations of open online information sources, is SkiBonk, a
clearinghouse for ski information that integrates the Google Maps interface
with live Webcams, local weather reports, slope conditions, trail maps, ski
area locations, and lodging, gear, and food listings. DreamFactory, from
the company of the same name, brings application customization to a new
level by allowing interactive features to be downloaded to an Internet
user's computer without overloading the machine with code. Small
businesses could coordinate events, conferences, and networking among
employees through 83degrees' free online "social calendar," while Jigsaw
Data aims to improve salespeople's effectiveness by building and offering
access to a business contacts database. Finally, Laszlo Systems enables
Web application authors to build user-interactive programs via the Laszlo
XML high-level language and the OpenLaszlo declarative language
platform.
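
At bottom, a mash-up is a join of independent data sources on a shared key
such as location; the sketch below merges two invented feeds the way a site
like SkiBonk layers data onto a map, with all field names and values
hypothetical:

    # Toy mash-up: join two independent "feeds" on a shared location key,
    # the basic pattern behind map mash-ups. Data and field names are made up.
    ski_areas = [
        {"name": "North Bowl", "town": "Stowe", "lifts_open": 6},
        {"name": "Cedar Ridge", "town": "Aspen", "lifts_open": 4},
    ]
    weather = {
        "Stowe": {"temp_f": 21, "new_snow_in": 8},
        "Aspen": {"temp_f": 28, "new_snow_in": 2},
    }

    def mash_up(areas, conditions):
        """Attach weather conditions to each ski-area record by town."""
        return [dict(area, **conditions.get(area["town"], {})) for area in areas]

    for record in mash_up(ski_areas, weather):
        print(record)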
Why Coders Must Mind Their Language
IT Week (04/03/06) Hatton, Les
Despite the lengthy standardization process that guides the formalization
of many of today's languages, their high level of complexity ensures that
the resulting programs will contain at least some errors and
vulnerabilities. The unexpected program failures are the inevitable result
of the inconsistencies and compromises inherent in the standards process.
At the beginning of the computing age, developers had to write programs
down to the level of ones and zeros. Though more expressive languages have
emerged to enable programmers to write code at a higher level, program
failures occur about as frequently today as they did 20 years ago, despite
the efforts of language committees and compiler implementers.
While serving on a vulnerability working group, author Les Hatton received
a set of rules for C++ distributed as a Word file. The file crashed some
versions of Word when opened by Hatton's colleagues, demonstrating how
fickle even the most frequently tested languages and programs can still
be.
Why VOIP Needs Crypto
Wired News (04/06/06) Schneier, Bruce
Voice over Internet Protocol (VoIP) phone calls must be encrypted because
the scope of the dangers VoIP is vulnerable to far exceeds that of threats
to traditional phone calls, writes Counterpane Internet Security CTO Bruce
Schneier. He notes that data packets can be intercepted at any point along
the route of transmission, and eavesdropped on by governments, corporate
competitors, hackers, and criminals. Schneier envisions a multitude of
crimes that can be committed through VoIP call eavesdropping, including the
hijacking of phone calls, the theft of account information, the
accumulation of sensitive material for blackmail or industrial espionage,
and insider stock trading. The author criticizes the U.S. government's
suggestion of permitting encryption by everyone, provided the government
holds a copy of the key; he calls this "an amazingly insecure idea for a
number of reasons,
mostly boiling down to the fact that when you provide a means of access
into a security system, you greatly weaken its security." Schneier reports
that there are many products that provide VoIP encryption, including
built-in encryption from Skype, and Phil Zimmermann's open-source ZFone.
However, he cautions that encryption is not a cure-all, in that it cannot
address the leading threat of endpoint surveillance. "No amount of IP
telephony encryption can prevent a Trojan or worm on your computer--or just
a hacker who managed to get access to your machine--from eavesdropping on
your phone calls, just as no amount of SSL or email encryption can prevent
a Trojan on your computer from eavesdropping--or even modifying--your
data," Schneier says.
Wireless Sensor Networks Offer High-Tech Assurance for a
World Wary of Earthquakes
EurekAlert (04/06/06)
City officials must make snap decisions about whether bridges damaged in an
earthquake or other disaster can support the load of emergency-rescue
traffic. To provide them with better information,
Lehigh University assistant professor of civil and environmental
engineering Yunfeng Zhang is developing wireless sensor networks that could
relay data about a bridge's performance and ability to support traffic.
Wired networks could relay information in real time, but the wires are
susceptible to electromagnetic signal interference and could themselves be
damaged in an earthquake. Zhang, working under a five-year, $400,000 NSF
grant, is developing sophisticated data-compression algorithms to overcome
the limited bandwidth available for wireless sensor networks, which can
dramatically slow data transmission rates. The algorithms include the
capability to filter out redundancies in the sensor data to maximize
compression rates. "Using the sensor-data-compression algorithm I'm
developing," Zhang said, "we can minimize data-downloading time and
ultimately download data in real time and evaluate it on a near real-time
basis." As part of the grant, Zhang will build a test network to monitor a
bridge in China that was damaged during construction in 2000, and conduct
extensive validation testing in 2009 to assess its ability to carry
traffic. Zhang says the data collected in the test could also benefit
American engineers, and he will also incorporate his research into the
classes he is teaching at Lehigh.
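
The article does not detail Zhang's algorithms, but the general idea of
filtering redundancy out of slowly varying sensor data can be sketched as
delta encoding followed by a general-purpose compressor:

    # Sketch of redundancy filtering for slowly varying sensor data: store
    # differences between successive samples, then compress the result.
    # Illustrative only; not Zhang's actual algorithm.
    import struct, zlib

    def compress(samples):
        deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
        return zlib.compress(struct.pack("<%di" % len(deltas), *deltas))

    def decompress(blob):
        raw = zlib.decompress(blob)
        deltas = struct.unpack("<%di" % (len(raw) // 4), raw)
        readings, total = [], 0
        for d in deltas:
            total += d
            readings.append(total)
        return readings

    strain = [1000 + (i % 3) for i in range(500)]   # near-constant readings
    blob = compress(strain)
    assert decompress(blob) == strain
    print(len(blob), "bytes instead of", 4 * len(strain))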
Industry, Academia Advance on Common Ground
EE Times (04/03/06) No. 1417, P. 42; Goering, Richard
Acknowledging that universities are critical but often underfunded
instruments of research and innovation, Texas A&M University computer
science professor Steve Liu will moderate a panel calling for a partnership
between the academic and business communities. "In the 1970s universities
led the industry, but these days technology is so complicated that it
becomes quite expensive for universities to acquire equipment," Liu said.
The partnership is billed as mutually beneficial, as universities struggle
to train students on the most current technologies and applications, while
companies complain of having to retrain recent graduates. The panel is a
joint endeavor of the Embedded Systems Conference Silicon Valley and the
IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS
2006). In his classes at Texas A&M, Liu has replaced textbooks with data
sheets, and encourages his students to undertake ambitious research
projects, rewarding the students who produce the best work from his own
research funding. Liu hopes that the panel will dispel the twin myths that
businesses are uninterested in research and academia is indifferent to
practical applications. Academic partnerships benefit companies by
advancing research in areas of interest to the corporate sponsor, said Sun
Microsystems' Greg Bollella, who will serve as a panelist. Bollella, who
has led partnerships with universities in the past, said companies cannot
expect academic departments to work on the same schedule as their corporate
counterparts, but that universities should be researching practical new
interfaces and implementations for application programming. Also sitting
on the panel is Boeing's Douglas Stuart, who identified the four goals of
this year's RTAS as development tools, co-design of software and hardware,
industrial applications, and real-time and embedded-systems theory.
10 Emerging Technologies
Technology Review (04/01/06) Vol. 109, No. 1, P. 55; Savage, Neil;
Talbot, David; Greene, Kate
Although the FCC estimates that 70 percent of allocated wireless radio
spectrum may not be in use at certain times of the day, the ever-growing
population of wireless-enabled devices must contend with a limited amount
of bandwidth, and computer science professor Heather Zheng at UC Santa
Barbara is focused on allowing wireless devices to tap idle spectrum
through cognitive radio. Such devices determine which frequencies are
unused and select one or more over which to transmit and receive data, and
Zheng has devised a scheme for doing so without inducing bottlenecks by
allowing devices that are not assigned FCC priority to split the spectrum up
among themselves through negotiation. She chose a series of game
theory-based rules for devices to follow via software, so that each radio
can observe its neighbors' activities and take its own course of action.
Rutgers University professor Dipankar Raychaudhuri is seeking a universal
protocol for connecting multiple wireless devices and networks on the
fly--a critical step toward pervasive computing--by trying out candidates
on the radio test grid. Ohio State University systems developer Scott
Cantor believes the Internet could be made more trustworthy by "Web
authentication" systems such as Shibboleth, which sets up a one-step login
that confirms identity while guaranteeing privacy. Over 500 educational
institutions use Shibboleth worldwide, and Cantor has established a
relationship with the Liberty Alliance to expand Shibboleth's presence even
further. University of Illinois at Urbana-Champaign materials scientist
John Rogers is pursuing the development of stretchable silicon-based
electronics, as organic semiconductors cannot support more intense
computing tasks because of speed limitations. Rogers uses single-crystal
silicon prepared as an ultrathin layer and affixed to narrow strips of a
rubbery polymer.
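
As a rough illustration of the rule-based spectrum sharing Zheng describes
(her actual game-theoretic rules are not given in the article), each
simulated radio below observes only its neighbors' channels and moves to
the least-crowded one:

    # Toy model of rule-based spectrum sharing: each radio observes its
    # neighbors' channels and switches to the least-crowded one.
    # Illustrative only; not Zheng's actual game-theoretic rules.
    import random

    CHANNELS = range(5)

    def step(assignments, neighbors):
        """assignments: radio -> channel; neighbors: radio -> list of radios."""
        updated = dict(assignments)
        for radio, nbrs in neighbors.items():
            seen = [assignments[n] for n in nbrs]        # local observation only
            updated[radio] = min(CHANNELS, key=seen.count)
        return updated

    neighbors = {"a": ["b", "c"], "b": ["a", "c", "d"],
                 "c": ["a", "b"], "d": ["b"]}
    assignments = {r: random.choice(list(CHANNELS)) for r in neighbors}
    for _ in range(3):                                   # a few negotiation rounds
        assignments = step(assignments, neighbors)
    print(assignments)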
Can Computers Help to Explain Biology?
Nature (03/23/06) Vol. 440, No. 7083, P. 416; Brent, Roger; Bruck,
Jehoshua
Developments in computational formalisms could help advance scientists'
understanding of biological systems beyond its current level of
natural-language descriptions in textbooks and journals to a point where
algorithms can be used to predict quantitative behavior. Biological
systems are distinct from other natural processes in that they are
controlled by a central stored program--the genome. Though the genome
controls some features of biological systems directly, such as protein
sequences, most
other functions originate from the genome through a much more complex
process. With their assorted internal and external inputs, outputs (executed
tasks), and processing systems, living systems already share many features
of the von Neumann-based computer. Just as the activities of Egyptian
surveyors formed the basis of mathematics, workaday language-based
activities in computer programming could produce formalisms in the form of
new mathematics. Though it is tempting to view DNA as an executable code,
close examination reveals important differences, such as the absence of
modularity, boundaries, and a defined execution sequence in the genome.
Most importantly, the lack of a defined boundary between processing
mechanism and output undermines the likelihood that a theory of
stored-program machines will ever offer a comprehensive explanation of
biological systems. Important intermediate steps can be taken, however,
such as formalizing the cause-and-effect connections between proteins and
regulatory sites. Though most differential equations arising from
biological narratives are too complex to be adequately analyzed, numerical
simulations could enable scientists to estimate probabilities for specific
sequences of reactions. Another route to understanding biological systems
involves a deeper grasp of biological semantics, one that draws more
heavily on the concepts of meaning and purpose.
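
A minimal sketch of what such a numerical simulation might look like, with
species, reactions, and rate constants invented for illustration: a
Gillespie-style stochastic walk over a toy reaction network, repeated many
times to estimate how often a particular sequence of reactions occurs:

    # Gillespie-style stochastic simulation of a toy reaction network, used
    # to estimate the probability of a particular sequence of reactions.
    # Species, reactions, and rate constants are invented for illustration.
    import random

    # name: (rate constant, species consumed, species produced)
    REACTIONS = {
        "bind":    (1.0, {"A": 1, "B": 1}, {"AB": 1}),
        "unbind":  (0.5, {"AB": 1},        {"A": 1, "B": 1}),
        "convert": (0.2, {"AB": 1},        {"C": 1}),
    }

    def propensity(rate, consumed, state):
        p = rate
        for species, n in consumed.items():
            p *= state.get(species, 0) ** n
        return p

    def run(initial, steps=10):
        state, trace = dict(initial), []
        for _ in range(steps):
            props = {name: propensity(rate, used, state)
                     for name, (rate, used, _) in REACTIONS.items()}
            props = {name: p for name, p in props.items() if p > 0}
            if not props:
                break
            # choose a reaction with probability proportional to its propensity
            pick = random.uniform(0, sum(props.values()))
            for name, p in props.items():
                pick -= p
                if pick <= 0:
                    break
            _, used, made = REACTIONS[name]
            for species, n in used.items():
                state[species] -= n
            for species, n in made.items():
                state[species] = state.get(species, 0) + n
            trace.append(name)
        return trace

    # estimate how often a "bind" is immediately followed by a "convert"
    traces = [run({"A": 10, "B": 10}) for _ in range(2000)]
    hits = sum(any(a == "bind" and b == "convert" for a, b in zip(t, t[1:]))
               for t in traces)
    print("estimated probability:", hits / len(traces))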
Wireless Sensing Spawns the Connected World
Electronic Design (03/30/06) Vol. 54, No. 7, P. 49; Allan, Roger
Ubiquitous wireless sensor networks and pervasive computing are expected
to emerge from Internet and telecom innovations, leading to
super-intelligent environments and significant lifestyle enhancement.
Professor Dipankar Raychaudhuri at Rutgers University's Wireless
Information Network Laboratory (WINLAB) foresees five major developments:
The improvement of assisted living through smart homes; a system of
frictionless capitalism where consumers can find goods and services and
perform monetary transactions without human assistance through personal
devices; low-stress airline boarding, lost luggage recovery, and passenger
screening by airport logistics and security systems; smart offices that
support the fast and accurate search and retrieval of various items and the
maintenance of "lifelogs" by workers; and intelligent transportation
systems that can guide people to parking spaces, provide
collision-avoidance feedback with augmented reality displays, and route
vehicles around congestion in real time, among other things. The cell
phone will be a key tool in many of these applications, and various labs
are busy developing the technologies needed for low-power multimodal
sensors and wireless transceiver nodes. Standard software platforms are
also a necessary ingredient for ubiquitous sensing, computing, and
communications. The trend toward ubiquitous wireless sensor networks and
pervasive computing must be helped along by new approaches to software
development and implementation, and the industry must work out the proper
networking protocols, the organization of network-based services, and
methods for the self-organization, self-configuration, and self-maintenance
of large distributed systems. Additionally, new business models must
be developed to guarantee that devices and services are interoperable,
while privacy and security issues also have to be addressed.
How to Save IT Jobs
CIO (04/01/06) Vol. 19, No. 12, P. 24; Sayer, Peter
High-level research from the United States and Europe is beginning to
follow lower-skilled jobs to India and China, according to a new report
from the Job Migration Task Force of the Association for Computing
Machinery. The development comes as India and China improve graduate
education and their numbers of qualified researchers grow, says the report
titled "Globalization and Offshoring of Software." As offshoring of IT
jobs continues, governments in the United States and other developed
countries can improve the employment outlook for tech workers by supporting
research and development, boosting education, making it easier for foreign
professionals to work in their countries, and encouraging fair trade. IT
workers know they must have a solid education, an understanding of the
technologies used in the global software industry, and the latest skills to
compete in the marketplace. And they can improve their job prospects if
they have teamwork and communication skills, management experience, and
knowledge of other cultures. The ACM report also says IT workers would do
well to select industries and jobs that are not as likely to be automated
or outsourced as low-wage tasks. Such positions would include those that
require discretionary judgment or knowledge of trade secrets. To view
the complete report from the ACM Job Migration Task Force, visit
http://www.acm.org/globalizationreport
Cradle of Liberty Lags on E-Voting
IEEE Distributed Systems Online (04/06) Vol. 7, No. 4; Goth, Greg
The advancement of e-voting technology in England, continental Europe, and
Australia is overtaking the U.S. effort because of those regions'
wholehearted movement to endorse standards such as Election Markup Language
(EML) and make the e-voting process transparent, in contrast to America's
laissez-faire attitude and policies. The latest version of EML, which was
ratified by the Organization for the Advancement of Structured Information
Standards (OASIS) Election and Voter Services technical committee in
February, offers "a very generic set of XML schemas that handle data
exchanges that will support--as far as we know--all the known voting
regimes around the globe," according to U.K. Local e-Government Standards
Body Chairman John Borras. Numerous British e-voting technology pilots
have received generally good marks from the U.K. Electoral Commission,
while the Australian Capital Territory (ACT) used e-voting in its 2001 and
2004 general assembly elections to satisfactory reviews from its own
commission. ACT's e-voting system was designed to include open-source
software, built-in security, the independent audit of software code, and a
paper audit trail of electronic votes, and to fulfill such commission
requirements as the casting of all votes in a public polling place over an
isolated local network; the locking away and constant monitoring of polling
place servers; the storage of votes on a pair of identical hard disks as a
protection from hardware failure; and the encryption of vote data. U.S.
elections officials, on the other hand, have drawn fire from voters' rights
activists for using iffy, ill-defined guidelines and contradictory opinions
to sow uncertainty among both vendors and government officials over which
voting equipment is reliable. U.S. e-voting experts endorse paper
ballot backups as a short-term solution to the reliability problem, while
Borras maintains that "What we've tried to build into EML is sufficient
checks and balances so that your security regime, whatever that might
be...can operate and see what's going on." To view a recent report on
e-voting from ACM's U.S. Public Policy Committee entitled "Statewide
Databases of Registered Voters," visit
http://www.acm.org/usacm/VRD
Designing Wireless Sensor Network Applications
Portable Design (03/06) Vol. 12, No. 3, P. 20; Bertholdt, Joerg
There is no single, universal network topology for wireless sensor network
applications, which is why choosing one of three network topologies--star,
mesh, and hybrid star-mesh--requires the careful consideration of the
application's power requirements, network extensibility, communication
reliability, and environmental conditions. A star topology organizes all
sensor nodes as endpoints around a gateway that transmits data and commands
to those endpoints as well as to a higher-level control or monitoring
system. The star topology offers the lowest overall power consumption for
endpoints by permitting them to enter sleep mode independently and activate
only for the brief period needed to take a measurement and send it to the
gateway, which aligns well with networks that need to encompass a limited,
well-defined range and offer low-power endpoints. Star topologies lack
fault tolerance because there are no alternate endpoint-to-gateway routes
in the event a path is obstructed. Mesh topologies, however, can recover
from node breakdowns because each node serves as a router, allowing the
network to automatically reconfigure itself around the broken node. Mesh
networks can theoretically have limitless extensibility, making them
suitable for low-power, battery-operated networks implemented in
environments with changing RF conditions, where extensibility is critical.
The hybrid star-mesh topology combines the star configuration's low power
and simplicity with the mesh architecture's extensibility and self-recovery
by arranging sensor endpoint nodes in a star formation around always-on
line-powered router nodes, which subsequently organize themselves into a
mesh network. Star-mesh hybrid topologies can be used in networks
requiring local processing at the routing device, a high-bandwidth,
minimum-latency backbone, or readily available main power for the routing
nodes.
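
A small sketch of the hybrid star-mesh idea, with an assumed topology and
node names: battery-powered endpoints each hang off a line-powered router,
the routers form a mesh, and traffic to the gateway is rerouted around a
failed router:

    # Sketch of a hybrid star-mesh sensor network: endpoints attach to
    # line-powered routers (star), the routers form a mesh, and the mesh
    # reroutes around failures. Topology and names are hypothetical.
    from collections import deque

    mesh_links = {                      # router-to-router mesh ("gw" = gateway)
        "gw": ["r1", "r2"],
        "r1": ["gw", "r2", "r3"],
        "r2": ["gw", "r1", "r3"],
        "r3": ["r1", "r2"],
    }
    endpoint_parent = {"e1": "r1", "e2": "r3", "e3": "r3"}   # star attachments

    def route_to_gateway(router, failed=()):
        """Shortest router-hop path to the gateway, skipping failed nodes."""
        frontier, seen = deque([[router]]), {router}
        while frontier:
            path = frontier.popleft()
            if path[-1] == "gw":
                return path
            for nxt in mesh_links[path[-1]]:
                if nxt not in seen and nxt not in failed:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    # endpoint e2 reports through its router; the mesh self-heals if r1 fails
    print(route_to_gateway(endpoint_parent["e2"]))                 # ['r3', 'r1', 'gw']
    print(route_to_gateway(endpoint_parent["e2"], failed={"r1"}))  # ['r3', 'r2', 'gw']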
Got Metadata?
CLIR Issues (04/06) No. 50; Howard, Barrie
The deep Web is a vast repository of scholarly material inaccessible to
researchers combing the surface Web with Google or other traditional Web
crawlers. Home to innumerable text, audio, and video records stored in
various digital archives, databases, and institutional collections, the
deep Web is difficult to navigate with the full-text search indexes
designed for the surface Web. To address this challenge, the Open Archives
Initiative (OAI) Protocol for Metadata Harvesting provides access to the
deep Web
through metadata, resource description, and accepted standards for handling
information concerning digital resources. OAI divides the world between
data providers, who create metadata records from repositories, and service
providers, who pull those records from the repositories and create user
services centered on aggregations of harvested metadata. Working under an
Institute of Museum and Library Services (IMLS) National Leadership Grant,
a project team from the Digital Library Federation (DLF) has created the
OAI Portal Prototype, a scheme for mining harvested metadata describing
digital resources in DLF member libraries. The prototype enables users to
search by single terms or phrases with Boolean operators, to set
limitations according to the resource type, and to sort by title, author,
and date. The DLF project team has also partnered with a group of
researchers from the University of Illinois, who are also working under an
IMLS grant, to optimize the reporting features of the Experimental OAI
Registry
at the university's Urbana-Champaign campus. The IMLS-grant project is
also supporting the development of OAI best practices and implementation
training modules, which should be finalized this spring.
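
A minimal service-provider-side sketch of OAI-PMH harvesting using only the
Python standard library; the repository URL is a placeholder, and error
handling is omitted:

    # Minimal OAI-PMH harvesting sketch (service-provider side): request
    # Dublin Core records with ListRecords and follow resumption tokens.
    # The repository URL below is a placeholder.
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"
    BASE = "https://repository.example.edu/oai"     # placeholder endpoint

    def harvest(base_url):
        url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
        while url:
            with urllib.request.urlopen(url) as response:
                root = ET.parse(response).getroot()
            for record in root.iter(OAI + "record"):
                yield [t.text for t in record.iter(DC + "title")]
            token = root.find(".//" + OAI + "resumptionToken")
            url = (base_url + "?verb=ListRecords&resumptionToken=" + token.text
                   if token is not None and token.text else None)

    # for titles in harvest(BASE):
    #     print(titles)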
Institutional Repositories: An Opportunity for CIO Campus
Impact
Educause Review (04/06) Vol. 41, No. 2, P. 10; Goodyear, Marilu; Fyffe,
Richard
In the aftermath of Hurricane Katrina, higher education institutions are
placing renewed emphasis on business continuity to ensure that the work of
their scholars and researchers is not disrupted. This necessitates an
examination of the management and preservation of the organization's
digital resources. Many campus systems that contain unique digital
resources are poorly organized, with minimal assurance of integrity, little
support for shifting formats, and sparse metadata. Without these features,
the
accessibility and usability of digital resources are uncertain. An
institution's CIO is responsible for providing the technical support for
the storage and preservation of digital assets, though attempts to do so
can be seen as running counter to the traditional academic values of
decentralized management and departmental independence. Institutional
repositories, the services and infrastructure that organize and make
available the intellectual output of an organization, can help CIOs
approach the challenge of preservation in an academic setting.
Institutional repositories are often used to share resources with
researchers and faculty outside the institution, as well as to promote
the significance of the institution's research activities. Institutional
repositories can also help with preservation through the policies for
placing and organizing the resources that they describe. Repository
programs will typically include librarians, archivists, records managers,
and administrators, and benefit faculty in every discipline, particularly
those in the humanities and social sciences, who may be less likely to
embrace central information systems than their counterparts in the
sciences. A successful program will also provide a glimpse into the
research activities of the campus, showcasing the richness of its
scholarship in a central medium, unlike the highly diffuse system of
publishing in scholarly journals that is practiced today.
What Do You Do with a Million Books?
D-Lib Magazine (03/06) Vol. 12, No. 3; Crane, Gregory
Digital libraries could carry dramatic implications for print collections
as well as the intellectual process of writing, notes Tufts University's
Gregory Crane. Vast digital libraries proposed by Google, Microsoft, and
others are, at minimum, one or more orders of magnitude greater than their
forebears in terms of scale, content heterogeneity, object granularity,
noise, and readership, while the number of collections and/or distributors
may shrink by an order of magnitude. The increase in these dimensions, and
their interactions with one another, raises the likelihood that the use,
support, and planning for digital collections will undergo a phase shift.
Academic digital libraries face three key issues: Analog-to-text
conversion, machine translation, and information extraction. The
analog-to-text challenge
involves the development of techniques to analyze page layouts in order to
parse out footnotes, tables, headers, tables of contents, indices,
marginalia, and other structural paradigms that partition and assign
meaning to the characters on the printed page. Machine translation, it is
suggested, could be supported by large digital libraries that supply
parallel texts and similar language resources, while the information
extraction challenge requires identifying individual elements (references
to personages, places, organizations, dates, etc.) and extracting
citations to secondary as well as primary sources, embedded quotations,
footnotes, and other textual links, along with assessing techniques for
producing higher-level inferences. Notable efforts that complement the
development of very large digital collections include Dan Cohen's
projection of the kinds of work scholars can execute when they build upon
existing massive but uneven collections, and Wolfgang Schnibel's
development of a digital library collection on early modern culture that
mines documents about persons, organizations, places, and various semantic
fields related to that period.
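
A crude sketch of the kind of entity spotting the information-extraction
challenge begins with, using nothing but regular expressions; real systems
rely on trained named-entity recognizers and gazetteers rather than
patterns this naive:

    # Crude illustration of entity spotting for information extraction:
    # regular expressions for years and capitalized name sequences. Real
    # systems use trained named-entity recognizers, not patterns this naive.
    import re

    YEAR = re.compile(r"\b(?:1[0-9]{3}|20[0-9]{2})\b")            # 1000-2099
    NAME = re.compile(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)+\b")      # Title Case runs

    text = ("Edward Gibbon published the first volume of his history "
            "of Rome in 1776, drawing on earlier work by Tillemont.")

    print("years:", YEAR.findall(text))
    print("possible names:", NAME.findall(text))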