Justices Will Hear Patent Case Against eBay
New York Times (03/27/06) P. C4; Hafner, Katie
The Supreme Court's consideration of MercExchange's suit against eBay has
attracted widespread attention as a referendum on the threat posed by
patent challenges to large companies. The technology in dispute is eBay's
"Buy It Now" feature, which a federal court ruled to be subject to an
injunction under a patent infringement precedent dating to 1908. Members
of the pharmaceutical industry, General Electric, the University of
California, and others have all filed briefs on behalf of MercExchange,
while Microsoft, Oracle, and Intel have rallied around eBay, alleging that
the threat of injunction quashes innovation and invites frivolous
litigation. MercExchange first filed suit against eBay in 2001, alleging
that it had infringed on three patents registered to Thomas Woolston, the
MercExchange founder who had patented an online auction system with an
automatic payment feature. In 2003, a federal court in Virginia found that
eBay had infringed on two of the three patents, and ordered it to pay
MercExchange $25 million in damages, but did not issue an injunction to
force eBay to stop using the patented technology. A federal appeals court
that specializes in patent cases overturned that decision, ruling that
injunctions were the general rule by which patent infringements are
handled, save for rare cases when an injunction would compromise public
health. Large companies have rallied behind eBay as part of their broader
attempt to reform patent law and contain the threat of patent trolls.
eBay spokesman Hani Durzy says that, in contrast to the recent case NTP
brought against BlackBerry maker RIM, even if the court rules in favor of
MercExchange he does not anticipate any immediate effect on the company's
operations, because of changes that eBay made to the "Buy It Now" feature
after the 2003 ruling.
Java Facing Pressures From Dynamic Languages
InfoWorld (03/25/06) Krill, Paul
Panelists at TheServerSide Java Symposium agreed that dynamic languages
such as Ruby are mounting a threat to Java, but that the language itself
can be improved and the Java Virtual Machine could be extended to support
dynamic languages. Admitting that Java falls short on the low end, the
panelists agreed that Java's future lies with enhanced development of its
Web applications. "Ruby on Rails is quick and clean and that's the reason
it's taking off," said independent consultant Bruce Tate, who is closely
watching the JRuby project, which claims to be developing a Java-based Ruby
interpreter. While Java is faring well at the enterprise level, Tate
argues that the Java Virtual Machine should extend further into low-end
applications. Bruce Snyder, a founding member of the Apache Geronimo
project, said he is surprised by Ruby's popularity, given its lack of
enterprise capabilities. Tate counters that Java also began in a simple
form, arguing that the two languages should be able to co-exist. As the
Internet moves from a publishing environment to an application environment,
Web 2.0 becomes increasingly important. "Hopefully, we'll see a new breed
come along for developing lighter-weight applications and [using] Web 2.0,"
said Snyder, though others claimed that the entire Web tier is simply not
working, inviting the possibility of open-source systems undermining
commercial revenues, whittling down companies' research and development
budgets. Others contended that free and open source are not the same
thing, and that open-source software can provide services that add value
and produce revenue.
Plotting the Road Ahead for Wireless Sensor
Networks
IST Results (03/27/06)
The IST project Embedded WiseNts is developing new cooperation techniques
for integrating wireless sensor networks with the objects from which they
draw data. In laying out their vision of the wireless sensor network of
the future, the scientists analyzed existing systems in the areas of common
application scenarios, algorithms, vertical system functions, and
middleware. "By looking at these four areas, we identify the gaps in our
knowledge, what is missing right now," said Pedro Marron of the University
of Stuttgart. "With this starting point, we can begin to work out what
people will be looking at in the next 10 years." Designers are currently
working to develop energy-efficient hardware to match the advances in the
energy efficiencies of software. The project participants also developed a
contest, inviting researchers to submit their work in cooperating objects
technology. A team of researchers from the Catholic University of Rio de
Janeiro won first prize for an animal-monitoring application that cattle
ranchers could use to track the health of their livestock and prevent
infection. Students at the University of Zurich won second place with
their proposal for an intelligent waste system that would embed RFID tags
in disposable consumer goods and place tag readers in waste bins to
monitor the type of refuse being disposed of and track the recycling
efforts of waste producers. A doctoral student
from Lancaster University took third prize for his proposal of a traffic
system in which vehicles would communicate with each other to negotiate
space on the road in accordance with 'virtual vehicle slots.' "The promise
of cooperating objects in robotics is very big," Marron said, adding that a
new IST project, AWARE, "will be looking at how to develop a sensor system
for the robots being introduced for fire fighting, as well as for the
support of tiny autonomous flying objects."
The Evolution of IM
Technology Review (03/24/06) Greene, Kate
Open source is beginning to reshape the instant messaging landscape, as
mainstay provider AOL has recently made available a development kit that
programmers can use to modify its AIM software, so long as they obtain a
license if they are developing content for a mobile device or distributing
software to large companies, and do not use the kit to build on the
platforms of AOL's competitors. Meanwhile, the open-source service Jabber
has emerged to offer greater innovation and fewer regulations. The Jabber
Software Foundation's Peter Saint-Andre likens Jabber's relationship with
proprietary instant messaging services to Linux's competition with Windows.
VoIP is also driving open-source IM innovation, according to David Reed of
MIT's Media Lab. Using Jabber, the Gizmo Project enables users to
communicate with each other through text or audio across different IM
networks, functioning as an open-source alternative to Skype. Gizmo offers
audioconferencing for as many as 99 people (Skype's audioconferencing
capability can handle just five people), and offers a publication feature
that enables users to publish a conversation as a blog. Gizmo runs on the
open-source Jabber platform and the open Internet voice server SIPhone,
enabling any user to create software to connect with the network, unlike
AOL, MSN, and Yahoo!, which do not share details about their server or
client software. Software such as Meebo that has reverse engineered
proprietary networks only creates a unified interface, but falls short of
genuine interoperability, according to Gizmo Project founder Michael
Robertson. "The world I'm trying to create is one in which you have one
screen name that works everywhere, very similar to email," Robertson
said.
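The openness Saint-Andre describes is concrete at the wire level: Jabber traffic consists of plain XML stanzas that any client can construct and parse. As a minimal sketch (the addresses below are hypothetical), a chat message on a Jabber network looks like this:

```python
import xml.etree.ElementTree as ET

def make_message_stanza(sender: str, recipient: str, text: str) -> str:
    """Build a minimal XMPP <message/> stanza of the kind exchanged
    on Jabber networks."""
    msg = ET.Element("message", attrib={
        "from": sender,
        "to": recipient,
        "type": "chat",
    })
    ET.SubElement(msg, "body").text = text
    return ET.tostring(msg, encoding="unicode")

stanza = make_message_stanza("alice@example.org", "bob@example.net",
                             "Hello across networks!")
print(stanza)
```

Because the format is an open standard, gateways such as Gizmo's can translate stanzas like this to and from proprietary networks, which is the interoperability Robertson is after.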
The Automat Understands Me
Fraunhofer-Gesellschaft (03/06)
The Fraunhofer Institute for Computer Graphics Research (IGD) and
Fraunhofer Institute for Media Communication (IMK) are among a number of
research organizations that are attempting to improve interaction between
humans and machines. Researchers are trying to develop virtual humans that
are able to interact with people similar to the way in which people respond
to each other. "The idea behind the virtual character is to design the
human-computer interface as naturally as possible," according to Christian
Knopfle, head of Virtual Reality at the IGD. Researchers face the daunting
task of creating a virtual being that would have a human-like appearance,
speak and carry on a credible dialogue, communicate non-verbally through
gestures and facial expressions, interact socially, and respond to the
needs of users in real time. Building modules for dialog generation,
speech understanding, and graphic output, interfaced via the Web, has
become a focus for researchers involved in developing virtual humans. Such
virtual beings could eventually serve as ticket sellers for railway
businesses, tutors for students taking e-learning courses, or as tools for
training employees on how to interact with customers.
We're Flying Without Wing Flaps and Without a
Pilot
Innovations Report (03/22/06)
Researchers at the University of Leicester will play a key role in the
development of a flapless air vehicle. Engineering professor Ian
Postlethwaite and Dr. Da-Wei Gu are heading research into areas of
coordinated control, integrated control, and condition monitoring, in an
effort to improve the autonomy and performance of uninhabited air vehicles.
For the past year, the Leicester group has developed software for flight
path planning that makes use of several planning strategies, runs in real
time, and takes unexpected events into consideration. They are also
focusing on distributing sensors across an airframe to deliver virtual air
data, which could be used for health monitoring and improving the control
of future UAVs. Leicester's efforts are part of the larger flapless air
vehicle integrated industrial research (FLAVIIR) program, a five-year, 6.2
million-pound initiative that brings together researchers in aerodynamics,
control systems, electromagnetics, manufacturing, materials and structures,
and numerical simulation from all over the United Kingdom. "The concept of
a flapless vehicle, using fluidic thrust vectoring [where direction is
changed with a secondary air flow] and air jets, is one important area of
investigation," says Postlethwaite. "Another is the replacement of the
pilot by sophisticated software that can autonomously fly the vehicle
without collisions in what might be dangerous or remote environments."
FLAVIIR, funded jointly by BAE Systems and the Engineering and Physical
Sciences Research Council, plans to produce a single flying demonstrator
model by 2009.
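The article does not detail the Leicester group's planning strategies, but the simplest relative of such flight-path planners is grid-based search around obstacles. A breadth-first sketch, purely for illustration:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first path planner on an occupancy grid (0 = free, 1 = blocked).
    Returns a shortest list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}   # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []           # walk the parent links back to the start
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                 # no route around the obstacles

grid = [[0, 0, 0],
        [1, 1, 0],              # a wall forces a detour
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
```

A real UAV planner layers on continuous dynamics, replanning for unexpected events, and real-time deadlines, but the graph-search core is the same idea.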
Drop in CS Bachelor's Degree Production
Computing Research News (03/06) Vol. 18, No. 2, P. 5; Vegso, Jay
The Computing Research Association's (CRA) Taulbee Survey has found a
declining number of students pursuing bachelor's degrees in computer
science at institutions with doctorate programs since the late 1990s.
Another survey reported a 70 percent decline in the number of entering
freshmen who intended to major in computer science at all degree-granting
schools from 2000 to 2005. CRA reports 7,952 new computer science majors
at the Ph.D.-granting institutions that it surveyed in fall 2005, compared
to 15,958 in fall 2000. CRA reports a 17 percent decline in the number of
computer science degrees awarded in academic year 2004/2005 compared to
2003/2004. The NSF suggests that degree production can be cyclical, as the
number of computer science degrees awarded almost quadrupled from 1980 to
1986 before dropping rapidly and eventually leveling off in the 1990s. It
is not surprising, then, that the number of computer science degrees
awarded has again fallen off after peaking five years ago.
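As a quick check of the figures above, the fall from 15,958 new majors in fall 2000 to 7,952 in fall 2005 works out to roughly half:

```python
# Figures from the CRA Taulbee Survey cited above.
majors_fall_2000 = 15_958
majors_fall_2005 = 7_952

decline = (majors_fall_2000 - majors_fall_2005) / majors_fall_2000
print(f"Decline in new CS majors, 2000-2005: {decline:.1%}")
```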
Concerns About Wireless Tracking Devices Discussed
National Journal's Technology Daily (03/22/06) Casey, Winter
Representatives from both the government and business are still confident
in the use of radio-frequency identification (RFID), despite research that
indicates it may contain security vulnerabilities. Governments and
businesses worldwide use RFID applications for tasks such as tracking
groceries and verifying identification. A recent study by Amsterdam's
Vrije University, called "Is Your Cat Infected with a Computer Virus?,"
showed that computer viruses could move from RFID tags to exploit some
software systems. The United States plans to launch new passports with
RFID technology this summer, according to a State Department official. The
number of passports issued over the past few years has risen from 7.3
million in fiscal 2003 to more than 13 million expected to be issued this
year. The use of RFID has some privacy advocates concerned. "There is
absolutely no need to use an RFID technology," says privacy advocate Bill
Scannell, who adds that it is a bad idea to depend on RFID for security.
Evan Scott at Evan Scott Group International disagrees and says he has
confidence in the system and works with RFID companies every day. "There
are risks and concerns with all technology," says Scott. "RFID issues will
be resolved and fixed through good technology. We are in the information
age now. Everything is on the Internet or through the airwaves."
Survey Offers a 'Sneak Peek' Into Net Surfers'
Brains
USA Today (03/27/06) P. 4B; Baig, Edward C.
The difference between the Web content surfers claim to look at and what
they actually view was measured by Nielsen Norman Group using eye-tracking
technology, and the results were reported today. The firm asked more than
230 participants to research specific tasks and companies on the Internet,
and the survey's outcome demonstrates that companies still have a
considerable amount to learn in order to be able to present a Web site or
online image in a way that attracts the most attention. Randolph Bias of
the University of Texas at Austin's School of Information says companies
would do well to subject their sites to more thorough testing before
rolling them out. The Nielsen study reveals that individuals read Web
pages in an "F" pattern, in which they tend to read longer sentences at the
top of a page and less and less as they scroll down; this makes a
sentence's first two words of prime importance. "People are extremely good
at screening out things and focusing in on a small number of salient page
elements," says Nielsen's Jakob Nielsen. In addition, surfers establish a
good connection to images of people that appear to make eye contact with
them, although pictures of models and other excessively attractive people
can turn surfers off. Also, pictures in the middle of a page can impede a
surfer's progress; people respond better to pictures that are informative
instead of just ornamental; and consumers will glimpse ads in search
engines as a "secondary thing" because they are usually targeting specific
products, according to Nielsen. The study shows that poorly designed Web
sites are rampant, and cases where the site was so confusing that its
status as an official company site could not be determined were also
reported.
Work May Speed Interplanetary Communications
MIT Tech Talk (03/22/06) Vol. 50, No. 21, P. 1
A team of MIT researchers has developed a miniature light detector that
could enable ultra-fast broadband connections on an interplanetary scale.
Currently, wireless radio frequency applications can take hours to relay
scientific information from Mars, but optical links could do the job
thousands of times faster, according to Karl Berggren, assistant professor
of electrical engineering and computer science. The new detector boasts a
57 percent detection efficiency at the 1,550 nm wavelength, a significant
improvement over the current detection efficiency of 20 percent. The
almost threefold increase in detection efficiency will enable real-time transmission of
large volumes of data from space, and could eventually lead to full-color
video transmission between astronauts and scientists on Earth. Using
nanowires and superconductor technology, the detector senses laser signals
or very low light at the single-photon level in the infrared portion of the
optical spectrum. Though it is designed for interplanetary communication,
the detector could also be used for quantum cryptography and biomedical
imaging. Current optical systems consume too much power to be practical
for use in spacecraft, but the new detector is sensitive enough to receive
signals from smaller, more efficient lasers. The addition of a photon trap
raised the detector's efficiency, addressing the low efficiency that had
hindered the utility of previous single-photon detectors. As the trap
captures more photons, the
detector becomes more efficient, though Berggren and his colleagues are
still working to improve it further.
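The reported figures bear out the "almost threefold" characterization:

```python
# Detection efficiencies at 1,550 nm cited above.
previous_efficiency = 0.20   # typical prior single-photon detectors
new_efficiency = 0.57        # the MIT nanowire detector

improvement = new_efficiency / previous_efficiency
print(f"Improvement factor: {improvement:.2f}x")
```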
Delving Into the Meaning of Artificial Life
EE Times (03/20/06) No. 1415, P. 38; Brown, Chappell
Advances in synthetic biology could eventually lead to the development of
artificial organisms with levels of complexity almost equal to biological
systems, blurring the definition of what it means to be alive. A recent
study, written mainly by Hubert Bernauer of ATG:Biosynthetics, found that
scientists in the United States lead the world in research papers on
synthetic biology, a field so new that the report's authors offered a
working definition: "Synthetic biology is the engineering of biological
components and systems that do not exist in nature and the re-engineering
of existing biological elements; it is determined on the intentional design
of artificial biological systems, rather than on the understanding of
natural biology." MIT's BioBrick project has amassed a database of common
DNA sequences that synthetic biologists can use to reliably synthesize a
strand of DNA. The study argues that the gap between synthetic life and
actual life is not merely a function of complexity, but also a question of
the relationship between the physical system and the information that it
represents. According to biologists, living organisms must be
self-creating, self-organizing, and self-sustaining. The report finds it
unlikely that systems based on silicon will ever attain the information
processing and physical replication capabilities of living systems, due
mainly to the natural differences between silicon and carbon. Without the
molecular flexibility of their carbon-based counterparts, artificial-life
systems running on silicon circuits will never be considered fully alive.
To bridge the gap, researchers are trying variously to engineer the
evolutionary principles that guide living systems, and to assign the basic
DNA functions to inorganic building blocks in a shift from molecular
biology to what the authors call modular biology. The idea of using past
experiments and research as a ready-made basis for future work is similar
to the prevailing ethos of the EDA industry.
At the Five-Year Mark, Agile Manifesto Still
Stands
SD Times (03/15/06) No. 146, P. 1; DeJong, Jennifer
Adoption of agile software development methodologies is just now starting
to take off, five years after the drafting of the Agile Manifesto.
Manifesto contributor and software consultant Martin Fowler says the agile
concept that effective customer interaction is crucial to the production of
good software has taken root in the mainstream, while Forrester analyst
Carey Schwaber notes that even development teams that do not practice agile
methodology on a conscious level are making the transition to more
incremental software delivery and earlier testing. A November 2005
Forrester report found that 14 percent of European and North American
enterprises are using agile software development processes, while an
additional 19 percent are planning to go agile or are weighing the
possibility; the study also pointed to a second wave of agile adoption led
by enterprise IT shops that want to reduce time-to-market, improve software
quality, and bolster relationships with business stakeholders. There are
six agile methodologies--Extreme Programming (XP, the best known),
Adaptive, Dynamic Systems Development Method (DSDM), Scrum, Crystal, and
Feature-Driven Development (FDD)--with Fowler noting more similarities than
disparities between these frameworks. Exampler.com's Brian Marick, another
contributor to the Agile Manifesto, acknowledged in a Jan. 29 blog entry
that semi-flexible languages such as Java and the fast machines that run
them play an essential role in agile's adoption, whereas five years ago he
thought tools were less important. "Moreover, each new tool--JUnit, Cruise
Control, refactoring IDEs, FIT--makes it easier for more people to go the
Agile route," he wrote. Marick has also reevaluated the importance of the
customer to agile projects since the Manifesto's creation.
Code Warriors
Network World (03/20/06) Vol. 23, No. 11, P. 46; Hope, Michelle
New tools to aid developers in the creation of secure code are being
employed by adroit security executives. Depository Trust and Clearing CISO
James Routh cites SecureSoftware's CodeAssure, a tool for automating
vulnerability scans, as a solution that helps developers become more adept
at writing secure code. "Our experience with CodeAssure has taught us that
the better the contextual help is at explaining the vulnerability, the more
valuable it becomes as an education tool that developers will understand
and incorporate going forward," he explains. Gartner analyst Neil
MacDonald notes that static and dynamic software scanning and assessment
tools can analyze the state of uncompiled code or a compiled application
and generate reports that identify the types of vulnerabilities found in
the application while also suggesting preventive or corrective measures.
Forrester analyst Michael Gavin adds that introducing such methods into the
development process earlier makes sense in terms of cost-effectiveness.
MacDonald and Gavin admit that the early application of security tools is
both a complicated and expensive proposition. "If you adopt more-secure
coding practices directly in the code cycle, it's going to add about
one-third more time to the process," notes MacDonald, who says most of that
additional time is spent educating and retraining developers in security
vulnerability recognition and prevention practices. He expects security
professionals, internal auditors, and compliance professionals to be the
most enthusiastic users of most dynamic black-box scanning tools, at least
initially.
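The article does not describe CodeAssure's internals, and real scanners perform deep data-flow analysis; still, the basic idea of flagging known-risky constructs in source code can be illustrated with a toy pattern-matcher:

```python
import re

# Toy static scan: flag calls to C functions with well-known overflow risks.
# Purely illustrative; commercial tools do far deeper analysis than this.
RISKY_CALLS = {
    "gets":    "unbounded read; use fgets",
    "strcpy":  "no length check; use strncpy or strlcpy",
    "sprintf": "no length check; use snprintf",
}

def scan_source(source: str):
    """Return (line number, function, advice) for each risky call found."""
    findings = []
    pattern = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in pattern.finditer(line):
            name = match.group(1)
            findings.append((lineno, name, RISKY_CALLS[name]))
    return findings

sample = 'char buf[8];\nstrcpy(buf, input);\nprintf("%s", buf);\n'
for lineno, name, advice in scan_source(sample):
    print(f"line {lineno}: {name} -- {advice}")
```

The contextual advice attached to each finding mirrors the point Routh makes: the explanation, not just the flag, is what teaches developers to write more secure code.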
Wireless-Sensor Networks Find a Fit in the Unlicensed
Band
EDN (03/16/06) P. 46; Conner, Margery
A host of recently introduced standards, protocols, and enablement
equipment for unlicensed radio frequency bands is ushering in new
applications for low-power, short-range, low-data-rate wireless-sensor
networks. Wireless-sensor network application designers must select
between 900 MHz and 2.4 GHz in the unlicensed industrial/scientific/medical
(ISM) band, and the choice of frequency band depends on which nation a
vendor wishes to sell to, the vendor's power limitations, the desired
broadcast range, and the vendor's data-transmission rates. Analog Devices'
David Boylan explains that vendors frequently opt for the 2.4 GHz band
because they or their clients prefer the security of standards, while
other companies favor devising their own proprietary networks because of
such networks' value-added and security aspects. The 900 MHz band can be
technically advantageous for building automation applications because of
the band's longer range and improved indoor penetration, which permits less
power consumption, according to Dust Networks CEO Rob Conant. The 2.4 GHz
band "is the only way to go for companies that want to bring out global
products," he adds. "Although 2.4 GHz products suffer a penalty in power
consumption to get the same range [as 900 MHz], 2.4 GHz is a standard, and
customers know it's going to be around for years." Both Boylan and
Chipcon's Karl Torvmark expect customers who prefer custom protocols to
choose 900 MHz. Chipcon's customers, for example, desire customized
networks, which offer benefits and shortcomings in terms of power
consumption, according to Torvmark.
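The longer range Conant attributes to 900 MHz follows in part from free-space path loss, which grows with frequency. The standard formula (a simplification that ignores the indoor-penetration effects he also mentions) puts 2.4 GHz roughly 8.5 dB behind 900 MHz over the same distance:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss_900  = fspl_db(0.1, 900.0)    # 100 m link at 900 MHz
loss_2400 = fspl_db(0.1, 2400.0)   # same link at 2.4 GHz
print(f"900 MHz:  {loss_900:.1f} dB")
print(f"2.4 GHz:  {loss_2400:.1f} dB")
print(f"2.4 GHz penalty: {loss_2400 - loss_900:.1f} dB")
```

The penalty depends only on the frequency ratio, which is why the same gap appears at any distance and why 2.4 GHz products need more power to match 900 MHz range.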
E-Paper Enters Practical Use
NE Asia Online (03/06) Otani, Takua
The growing use of hardware that employs electronic paper (E-paper) is
increasingly evident with new market entries and the development of a wide
spectrum of E-paper products, many of which are currently available.
Recent developments include Hitachi's practical E-paper display for general
use, Citizen Watch's E-paper-equipped clock, a practical public
display terminal from Asahi Glass, and a timepiece from Seiko Watch that
uses E-paper for the face display. E Ink and other manufacturers are
developing electronic inks, some of which are practical and being used by
certain equipment makers. Fueling the adoption of E-paper by these
companies is their desire to develop gear for which traditional displays
would be too complex or impractical. The chief distinction
between conventional displays and E-paper displays is the latter's lower
power consumption. There are many less obvious instances where E-paper
could potentially be used, including smart cards and e-money, advertising
above windows, toys and home appliances, rear mobile phone displays, and,
looking further ahead, notebook PCs. E-paper products expected to reach
practical application in the next year or so fall into two general
categories: Particle-based E-paper that uses an electric field to
manipulate black and white particles to change the image, and cholesteric
liquid crystal E-paper that exhibits stability in both transparent and
reflective states by exploiting the unique characteristics of liquid
crystal. Fuji Xerox has embarked on an initiative to replace paper with
E-paper through an "optical rewritable" technique in which new data is
rear-projected onto the paper to change the displayed image. There are
still technical issues that must be addressed before E-paper can be widely
used: For example, flexible varieties of E-paper cannot accommodate dot
displays, while dot-display models lack flexibility.
Software Insecurity
Scientific American (03/06) Vol. 294, No. 3, P. 26; Dupont, Daniel G.
Acting on the warnings of a team of Pentagon advisors that the reliance on
foreign-manufactured microelectronics compromises national security, the
Defense Science Board has recommended the creation of "trusted foundries"
to produce hardware for sensitive applications. The science board
cautions, however, that the effectiveness of these foundries would be
undermined without a parallel focus on producing secure software. The
Defense Department writes only the code for its most sensitive applications
in-house, importing the rest, including that used for fighter planes and
missile defense systems, from overseas. The Government Accountability
Office has found that in addition to the Defense Department's growing
reliance directly on foreign software, an increasing number of prime
contractors are farming out their software development to subcontractors
who frequently use foreign companies. Foreign software can contain
vulnerabilities that will inflict damage subsequent to installation and
back-door entry points into the systems that they power, as well as carry
the risk of being copied and distributed to opponents of the United States,
according to Nancy Mead of the Carnegie Mellon Software Engineering
Institute. Mead argues that because it is easiest to catch and correct
errors at the software development phase, that stage needs to be monitored
most closely. The Defense Department has begun conducting threat
assessments and high-level discussions about foreign software. Robert
Lucky, head of the science board study, notes that a higher level of
security comes at a premium, and the crucial question is, "How much
security can you get for how much money?"
Isn't It Semantic?
ITNOW (03/06) Runciman, Brian
In a recent interview, Web inventor Tim Berners-Lee reflected on the
development and the future of the system that he created. Berners-Lee says
that in retrospect, he would have dispensed with the double slash and
reversed the order of the domain name. He identified the Google algorithm
as one of the Web's standout innovations, but says that he is most
impressed with the Web's diversity, noting the gratification that he feels
when he hears from users who have benefited from medical or dating sites.
Security is one of the major deficiencies of today's Web, Berners-Lee says,
noting that the padlock icon says nothing about the owner of the
certificate. In addition to security, Berners-Lee believes that the
greatest issues facing the Web are compatibility with the growing number of
mobile devices and the continually changing enterprise software of Web
services. He described the Domain Name Server as "the Achilles heel of the
Web," calling for the United States to demonstrate a willingness to share
control of the Web. Berners-Lee warns of superfluous patent claims, noting
that the exorbitant costs of patent litigation often prevent companies or
individuals from defending their intellectual property. He is also
conscious of accessibility issues, such as casting the content of a Web
site in a reasonable size font. In order for the Semantic Web to become
useful, it must tap into existing databases, model their content, and
develop a schema through an application such as the Web ontology language.
The Semantic Web will also be able to conduct much more thorough analysis
of machine data than humans can with HTML. Explaining why a Web year is
2.6 months, Berners-Lee describes the Web as nearing the end of its
adolescence, noting that phishing and spam have been an important part of
its education.
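The Semantic Web premise of data published as machine-queryable statements rather than display-oriented HTML can be illustrated with a toy triple store; the subjects and predicates below are hypothetical stand-ins for what languages such as the Web ontology language express formally:

```python
# Hypothetical miniature triple store: the Semantic Web's data model reduces
# to (subject, predicate, object) statements that software can query directly.
triples = {
    ("alice", "worksFor", "example.org"),
    ("alice", "knows", "bob"),
    ("bob", "worksFor", "example.net"),
}

def query(subject=None, predicate=None, obj=None):
    """Return the triples matching the given pattern; None is a wildcard."""
    return {
        (s, p, o) for (s, p, o) in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    }

print(query(predicate="worksFor"))
```

Pattern-matching over statements like these, rather than scraping rendered pages, is what lets machines analyze Semantic Web data more thoroughly than humans can with HTML.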
Knowledge Discovery From Sensor Data
Sensors (03/06) Vol. 23, No. 3, P. 14; Tan, Pang-Ning
There has been a focus in recent years on applying data mining techniques
to the extraction of useful knowledge from raw sensor data, and achieving
greater data processing efficiency through these methods requires the
resolution of various technical issues, writes Pang-Ning Tan, PhD, at
Michigan State University's Department of Computer Science and Engineering.
Prior to the application of data mining techniques, the raw sensor data
must be converted into a suitable format for processing via feature
extraction, data cleaning, and data and dimension reduction. There are
four distinct data mining task categories: Predictive modeling (the
construction of a model that can be used to anticipate future values of a
target attribute based on known samples), cluster analysis (the division of
a data set into groups so that data points belonging to the same group
closely resemble one another), association analysis (the
discovery of strong co-occurrence relationships between events extracted
from data, which are encoded into logical rules), and anomaly detection
(the identification of unusual activity occurrences within the data).
Applying data mining methods to sensor data requires first and foremost the
determination of the most appropriate computational model, which is either
centralized or distributed. The centralized model consumes a lot of energy
and bandwidth, and cannot scale up to very large numbers of sensors. The
distributed model's drawback is the need to equip each sensor with an
onboard processor that features a reasonable volume of memory storage and
computing power. The noisiness and measurement uncertainty of sensor data
could be more robustly accommodated via probability-based algorithms, while
the challenge of missing data due to sensor malfunctions can be tackled in
either the preprocessing or mining phase. Concept drift--the effect of
attributes of the monitored process changing over time--must also be
accounted for.
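Of Tan's four task categories, anomaly detection lends itself to the shortest sketch. This toy example (not from the article) uses a robust median-based rule, in the spirit of the noise-tolerant algorithms the author calls for, since a mean-based z-score is easily skewed by the very outliers it is meant to find:

```python
import statistics

def detect_anomalies(readings, threshold=3.5):
    """Flag readings whose modified z-score (median/MAD-based) exceeds the
    threshold; robust because the median ignores the outliers themselves."""
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    if mad == 0:
        return []   # all readings essentially identical
    return [x for x in readings if 0.6745 * abs(x - med) / mad > threshold]

temps = [20.1, 19.8, 20.3, 20.0, 19.9, 45.0, 20.2]  # hypothetical sensor feed
print(detect_anomalies(temps))
```

In a distributed deployment a filter this small could run on each node's onboard processor, reporting only anomalies upstream and saving the energy and bandwidth the centralized model consumes.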