Companies Step Up to Fund Basic Research
Inside Bay Area (CA) (04/03/06) Grady, Barbara
With government funding on the wane, more universities are partnering with
companies to conduct high-risk, basic research. Corporate labs are too
tied to profit concerns to pursue research in applications that might never
turn into a commercially viable product, said Intel's Kurt Brown,
co-director of Intel Research at the University of California, Berkeley,
adding that academic partnerships enable companies to stay abreast of new
research. At Stanford University, situated in close proximity to Silicon
Valley, the revolving door connecting industry with academia is as active
as ever: at any given time, a majority of Stanford's computer science
faculty report some involvement with a commercial venture.
Meanwhile, Intel, Yahoo!, Google, Sun, and Microsoft have all partnered
with Berkeley to form joint research ventures. Entrepreneurs and faculty
agree that the decline in DARPA funding for computer science and
engineering is the main impetus for closer ties between companies and
universities. At Berkeley, ACM President David Patterson had to find
corporate funding for 80 percent of the Reliable, Adaptive, and Distributed
Systems laboratory when DARPA denied his grant. The remaining 20 percent
of Patterson's funding comes from government and the university, the
inverse of the historical ratio, when industry typically supplied only 10
percent to 20 percent of such funding. While DARPA funding has
actually increased over the past several years, funding for basic research
has declined, particularly in computer science, where funding for research
at universities dropped from $207 million in 2002 to $123 million in 2004.
When pursuing corporate funding, universities must be careful not to
subordinate academic inquiry to commercial interests. Intel's willingness
to make the research open and non-proprietary at its facility at Berkeley
has been critical to its success, said Eric Brewer, the other co-director
of Intel Research at Berkeley.
Software Out There
New York Times (04/05/06) P. E1; Markoff, John
The proliferation of chunks of mix-and-match code available on the Web is
offering developers unprecedented flexibility to create an unlimited
variety of applications, in stark contrast to the traditional programming
model of inflexible code designed to run on individual machines. The
resulting decentralization has opened the door to smaller companies that
are delivering innovative programs and services directly to PCs or cell
phones with lightning speed. The genesis of modular software created from
standard, compatible components came from Europe in the 1960s, and the idea
reached Silicon Valley by the 1970s, though corporate proprietary interests
have historically bound programmers to a single vendor's products.
The open-source movement has changed all that, however, with the computing
industry steadily moving toward the ethos that information should be shared
and free. While the open-source movement has sparked the most energetic
startup frenzy in Silicon Valley since the dot-com bust, much of which is
proceeding without venture capital funding, modular software is also
forcing industry leaders to re-evaluate their positions in the changing
climate. The advent of modular software is leveling traditional entry
barriers, as many startups are powered simply by a home PC and a broadband
connection. Early examples of virtual companies include Flickr and
Del.icio.us, both of which were acquired by Yahoo! last year. Community
development could also disrupt the economic motivation for outsourcing
programming jobs to foreign countries. With many of its standard
applications appearing on the Web for free, Microsoft is changing its own
stance toward open source, and CTO Ray Ozzie recently touted the potential
of RSS feeds, the free technology that competes with Microsoft's .Net.
MIT Researchers Attack Wireless Shortcomings,
Phishing
Network World (04/04/06) Brown, Bob
MIT faculty members are pitching their latest research to university
partners in the business community at this week's MIT Information
Technology Conference. Assistant professor Dina Katabi, of the school's
electrical engineering and computer science department, presented her
research in opportunistic coding, or COPE, to enhance the performance of
wireless networks. Katabi says that with demand for wireless throughput
increasing steadily, a major breakthrough is needed, one that would go well
beyond the next 802.11 iteration. "We need a severalfold increase" in
throughput, she said. To accomplish this, Katabi says that systems must
take advantage of the shared nature of wireless networks, rather than
forcing them into a point-to-point mode. In her system, routers would
handle the mixing or coding of packets, and then relay them to senders and
receivers that can determine whether the traffic is directed toward them.
Katabi reports throughput increases of up to fourfold using this technique
in a three-floor MIT building containing 34 nodes. Assistant professor Rob
Miller described his research on anti-phishing techniques. Miller wants to
give browsers the ability to understand their users' intentions, so they
could confirm that a URL is the user's intended destination and legitimate.
Miller outlined his vision for the Web Wallet, a suite of security
features that presents the user with a list of suggested legitimate sites
with URLs similar to the one entered, along with a separate form for
entering personal information.
Miller found in experiments that the wallet dramatically reduced the
percentage of users who fell for phishing scams.
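Katabi's packet-mixing idea can be illustrated with a toy sketch of
XOR-based coding at a relay (a sketch of the general network-coding
principle with invented packet contents, not MIT's implementation):

```python
# Toy sketch of XOR packet coding at a relay, in the spirit of COPE
# (illustration only, not the MIT implementation; packet contents invented).

def xor_packets(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length packets byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# Classic two-way relay: Alice and Bob exchange packets through a router.
pkt_from_alice = b"hello bob "   # both padded to the same length
pkt_from_bob   = b"hi alice!!"

# Instead of forwarding each packet separately, the relay broadcasts one
# coded packet that mixes the two flows.
coded = xor_packets(pkt_from_alice, pkt_from_bob)

# Each endpoint already knows the packet it sent, so one broadcast lets
# both sides recover the packet destined for them.
assert xor_packets(coded, pkt_from_bob) == pkt_from_alice    # Bob decodes
assert xor_packets(coded, pkt_from_alice) == pkt_from_bob    # Alice decodes
print("relay sent 1 coded packet where plain forwarding needs 2")
```

In this two-flow case the relay transmits once where plain forwarding would
transmit twice; coding across many overheard flows at once is what yields
the larger gains Katabi reports.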
The Lessons of the $100 Laptop
eWeek (04/04/06) Spooner, John G.
Speaking at the LinuxWorld convention, One Laptop Per Child Chairman
Nicholas Negroponte said the organization is poised to ship between 5 million
and 10 million devices by the end of the year or the beginning of next.
The computer, equipped with a seven-inch screen, a 500 MHz AMD processor,
and a Linux operating system, but shed of its hand crank, will be primarily
used as an educational tool, teaching children in developing countries to
write computer programs and enabling them to connect to the Internet. In
outlining the progress of the laptop, Negroponte was sharply critical of
the computing industry's cycle of software updates that add features but
not value, arguing that the industry needs to re-evaluate its approach to
development. The laptop sheds the costs of a proprietary operating system,
a large display, and sales and marketing support, while still being
readable and capable of connecting to the Internet, as well as serving as a
router for other machines. Energy consumption was a major concern in
developing the laptop, and Negroponte boasted that the device will consume
less than 2 watts of power. "That's very important because 35 percent of
the world doesn't have electricity," he said, adding that companies will
routinely boast of the efficiency of their products in the near future.
"That is the currency of tomorrow." The laptops will also contain Wi-Fi
mesh networking capabilities that work even when the machines are powered
down, enabling multiple machines to use the same Internet connection. The
hand crank will move to the device's power supply.
A Case of Mind Over Matter
Boston Globe (04/02/06) Heuser, Stephen
After decades of promising results in the lab and millions of dollars in
research funding, the field of brain-computer interaction still has yet to
live up to its promise and bring a product to market. At the Upstate New
York public health laboratory, neuroscientist Jonathan Wolpaw has been
developing an electrode-studded mesh cap that can relay brain signals to
external devices as instructions, offering greater independence for the
severely disabled. Other systems in development surgically implant
electrodes to glean instructions directly from a person's neural cells.
Wolpaw's cap detects electrical waves outside the brain, similar to the
type that electroencephalograms have read for decades, though it interprets
them with sophisticated software that Wolpaw and his team developed.
"We're not talking here about mind reading in the science fiction sense of
the word," said Emanuel Donchin, a brain researcher who developed the
spelling application used in Wolpaw's device. "You can't listen in on the
conversations of the brain. You just make inferences about the state of
the brain." Sophisticated computers and scientists' growing experience are
bringing the technology closer to the market. Wolpaw expects to have his
devices in use by four or five patients by June, and is investigating
commercial avenues. The National Institutes of Health are stepping up
funding for brain-computer interface research, and Wolpaw, who had been
working largely under government grants, won an international prize from
the Altran Foundation for Engineering after he and a colleague published a
paper detailing how his device enabled a patient to move a cursor in two
dimensions. With the prize came more than $1 million worth of help from
engineers, who have worked with Wolpaw to improve and simplify the design
of his cap and bring the cost down, though limited demand could still be an
obstacle to commercialization.
Boost for UK's Superfast Computer
BBC News (04/02/06) Fildes, Jonathan
The British government will invest 52 million pounds in the High-End
Computing Terascale Resource (Hector) supercomputer, which will be built in
2007. In announcing the investment, Science Minister Lord Sainsbury said,
"The computational limits of the existing facilities are now being
reached." British scientists currently use the CSAR computer at the
University of Manchester, which is scheduled to be decommissioned in June,
and the HPCx, which a University of Edinburgh-led consortium will continue
to operate until the end of 2008. Hector, which will be owned by the
Research Councils of the United Kingdom, will run at speeds of up to 100
teraflops (100 trillion calculations per second), making it six times as
powerful as the fastest supercomputer currently in the United
Kingdom. However, Jennifer Houghton, project manager of Hector, downplays
the upgrade in power because software still has to be designed to take
advantage of the supercomputer. "The technical barrier is getting the code
to scale up," says Houghton. Hector would pale in comparison to the
world's fastest supercomputer, IBM's Blue Gene/L at Lawrence Livermore
National Laboratory in California, which can exceed a speed of 367
teraflops and perform 280.6 trillion calculations per second.
Software Agents Link Isolated Islands of Water
Data
IST Results (04/04/06)
Researchers in Europe have achieved most of their aims in developing tools
for analyzing inland water data at a local, national, and Pan-European
level, but current technology prevented them from reaching their overall
vision. The Environmental Data Exchange Network for Inland Water (EDEN-IW)
project sought to provide tools for accessing inland water data across
European Union countries, regardless of the different databases, software,
languages, data formats, and concepts of specific terms used by governments
down to individual researchers. The EDEN-IW project, funded by IST,
developed special software agents to "translate" a query so that all
databases are accessed simultaneously, and used open standards. In addition
to XML, EDEN-IW used OWL, the Web Ontology Language for developing ontologies.
"In the laboratory we got the software working across a variety of
different platforms, using different software in different languages, so we
have a working prototype," says Dr. Palle Haastrup, coordinator of the
project and a researcher at the Joint Research Center's Institute for
Environment and Sustainability. The project also sought to provide tools
for analyzing data sets, inferring missing data, and modeling different
scenarios, as well as to apply their work to other areas of environmental
research. Although security and other issues prevent the tools from being
released, the project has informed the European Water Framework Directive, which
seeks to align data across the EU to create a common method for comparing
information.
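Conceptually, the agents' translation step resembles the following sketch,
in which a query phrased against a shared ontology is rewritten into each
database's local vocabulary; the concept names, database identifiers, and
column names are hypothetical, for illustration only:

```python
# Hypothetical sketch of ontology-mediated query translation in the spirit
# of the EDEN-IW agents; the concept names, database identifiers, and
# column names below are invented for illustration only.

# Shared ontology concept -> local column name in each national database.
ONTOLOGY_MAPPING = {
    "dissolved_oxygen": {"dk_rivers": "ilt_mg_l", "fr_rivers": "o2_dissous"},
    "nitrate":          {"dk_rivers": "nitrat",   "fr_rivers": "no3"},
}

def translate(concept: str, station: str) -> list[str]:
    """Turn one ontology-level query into a query per registered database."""
    return [
        f"SELECT {column} FROM measurements WHERE station = '{station}'"
        f"  -- dispatched to {db}"
        for db, column in ONTOLOGY_MAPPING[concept].items()
    ]

# One user query fans out to every database simultaneously, each phrased
# in that database's own vocabulary.
for query in translate("dissolved_oxygen", "station-42"):
    print(query)
```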
Binghamton University and STOC Launch Groundbreaking
Linux Collaboration
Binghamton University (03/30/06)
The Binghamton, N.Y., area could become a leader in open-source research
as a result of the opening of the Linux Technology Center on the campus of
Binghamton University. The LTC is a collaboration among the university,
the Southern Tier Opportunity Coalition, IBM, and Mainline Information
Systems dedicated to furthering basic and applied research in Linux-based
and open-source applications. In addition to helping to improve Linux and
open-source research and capabilities, LTC has the potential to create jobs
and improve and stimulate the economy locally and around the state. IBM
computer scientist Merwyn Jones will direct LTC, which will be used by
faculty and students from the Thomas J. Watson School of Engineering and
Applied Science, and the School of Management. "Building upon IBM's strong
commitment to open computing and Binghamton University's strong research
capabilities, the LTC will accelerate innovation in the information
technology arena and put the University in a leading role," says Jones.
IBM will provide equipment such as servers, storage products, software,
personnel, and other services. Mainline will aid the effort by offering
support for Linux applications, such as digital video, as well as by
targeting small and medium-sized businesses. "The LTC will bring together a
diverse team of people to learn, share ideas, tackle problems, pioneer new
approaches, and deliver innovation that matters to the local community,"
says IBM's Kyle VanKleeck.
US Takes Interest in DDoS Attacks
Computer Business Review (04/03/06) Murphy, Kevin
Recent distributed denial-of-service (DDoS) attacks targeting the
Internet's domain name system (DNS) have attracted the attention of
high-level officials in the U.S. government, who fear that a new technique
enabling attack authors to direct far more traffic at their victims could
suggest the work of a new breed of cyber criminal motivated by the desire
to bring down the Internet altogether. The alarming series of DNS
amplification attacks began in December and rose appreciably in February,
using spoofed IP addresses and recursion to broaden the scope of attacks.
Traditional DDoS attacks use botnets either recruited through spammed
Trojans or worms or purchased on the black market, often sufficient to
overwhelm smaller sites, but the amplification attacks use a much larger
network to target large companies or critical elements of the DNS
infrastructure, such as the .com registry. "We're seeing some very
deliberate attacks against some high profile targets right now, to showcase
the talent of the attacker, so they can get work for the Russian mafia or
whoever it may be," said Internet Systems Consortium President Paul Vixie.
The ease with which a home PC can spoof its IP address when sending out a
packet enables these attacks, provided the author obtains control of a DNS
record. The attacker then instructs the bots to send spoofed queries for
that record to open recursive name servers, whose large responses flood
the victim. About
50,000 recursive name servers were used in the recent attacks, estimates
UltraDNS CTO Rodney Joffe, who was recently called away from a
presentation at an ICANN meeting to brief top U.S. officials. UltraDNS and
VeriSign were both targeted in recent attacks. Experts are debating
whether the attacks originate from hackers looking for recruitment or
terrorists more concerned with the wholesale disruption of economies.
Vixie and ICANN agree that the most effective prevention against such
attacks would be for ISPs to routinely validate source IPs.
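The appeal of amplification to an attacker comes down to simple arithmetic,
sketched below with illustrative packet sizes and bot counts (none of these
figures come from the article):

```python
# Back-of-the-envelope arithmetic behind DNS amplification; every figure
# below is illustrative, not taken from the article.

query_size    = 60       # bytes: small DNS query a bot sends with a forged source IP
response_size = 4000     # bytes: large record an open recursive server returns

amplification = response_size / query_size
print(f"each byte a bot sends becomes roughly {amplification:.0f} bytes at the victim")

# Because the query's source IP is spoofed to the victim's address, every
# open resolver "replies" to the victim, which is why routine source-IP
# validation (ingress filtering) at ISPs blunts the technique.
bots, queries_per_second = 10_000, 10
victim_bits_per_second = bots * queries_per_second * response_size * 8
print(f"illustrative flood at the victim: {victim_bits_per_second / 1e9:.1f} Gbit/s")
```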
Rational Inequality
Los Angeles Times (03/30/06) Wertheim, Margaret
Despite the appearance in the 1970s that equal participation of men and
women in math and science was inevitable, that hope has not materialized:
today women remain under-represented in the sciences, and in some areas,
such as computer science, their involvement has actually declined. The NSF
reports that women comprise one-quarter of the country's
science and engineering workers, a percentage that has held steady over the
last decade. Research suggests that despite an equal interest in science
and math among boys and girls in fourth grade, by eighth grade, twice as
many boys are still interested. The notion that women lack men's innate
ability to reason about problems and think quantitatively runs deep in
modern society, but it dates back to the ancient Greek philosopher and
mathematician Pythagoras. Voicing that sentiment last
year, Harvard President Lawrence Summers set off a firestorm of national
protest that ultimately contributed to his resignation. In the Pythagorean
scheme, which reached its modern apex in the 16th and 17th centuries, the
world is partitioned between the physical and the mental, the male and the
female. Math falls clearly on the male side, with women regarded as too
devoted to their earthly bodies to possess the capacity for manipulating
numbers. The
historical view that women are unsuited to math was held unquestioningly by
most Renaissance thinkers, which carried through to the founding of the
first scientific societies. It was not until 1945 that the Royal Society
admitted its first woman. The male-dominated climate carried through to
the 20th century, leaving an indelible imprint on a culture that still
regards men as more innately capable than women, a legacy that argues all
the more forcefully for encouraging girls to participate in math and
science from a young age.
For information about ACM's Committee on Women in Computing, visit
http://www.acm.org/women
Technology Companies Bring Outsourcing Home
Chronicle of Higher Education (04/07/06) Vol. 52, No. 31, P. A43;
Carnevale, Dan
To curb the trend of exporting technology jobs overseas, Rural Sourcing is
partnering with five colleges to create office parks in three states that
will tap the ready-made and inexpensive labor supply of college towns.
Rural environments offer low costs and help keep jobs in the United States,
while also providing students with practical work experience. While the
low cost of living overseas has kept the price of labor down, many
companies have found their offshoring initiatives stymied by communication
barriers and time-zone differences. Frederick Niswaner, dean of the
College of Business at East Carolina University, claims that partnering
with Rural Sourcing is a win-win proposition, with the company benefiting
from inexpensive student labor, while computer science students gain
valuable work experience, often a scarce commodity in smaller towns.
Despite the growth in rural computer work, Rice University computer science
professor Moshe Vardi believes that many companies are still unconvinced
that small towns can provide an adequate supply of workers. "You're not
likely to go to a rural area and find a critical mass of skills in
technology," Vardi said. "Where you find a concentration of talent, it
tends to be more expensive." Vardi also notes that media reports heralding
the exportation of technology jobs have sapped student interest in computer
science, even though plenty of jobs remain in the United States. That fear
could become self-fulfilling as companies facing a shortage of skilled
domestic workers are forced to look overseas to meet their staffing requirements.
Rural computing centers could handle security-sensitive projects that
cannot be sent overseas, notes Gartner analyst Helen A. Huntley.
ACM's Job Migration Task Force recently released an exhaustive study of the
"Globalization and Offshoring of Software. To review this report, visit
http://www.acm.org/globalizationreport
Your Secrets Are Safe with Quasar Encryption
New Scientist (03/29/06) Knight, Will
Japanese scientists have encrypted messages using quasars, which emit
powerful radio waves and are believed to be produced by black holes. Ken
Umeno and colleagues at the National Institute of Information and
Communications Technology in Tokyo believe the intergalactic radio signals
of quasars have the potential to serve as a cryptographic tool because the
random fluctuations in their strength and frequency are impossible to predict.
"Quasar-based cryptography is based on a physical fact that such a space
signal is random and has a very broad frequency spectrum," says Umeno. The
researchers view quasar radio signals as a way to create genuine randomness
when encrypting information at high speed, and to make it easier for two
communicating parties to securely share a source of randomness. Users of
the method only need to know which quasar to target and when to start in
order to encrypt and decrypt a message. A large radio antenna is not
required, and the parties can be located in different hemispheres.
International financial institutions, governments, and embassies would
benefit from quasar encryption, says Umeno. However, some observers have
concerns about the practicality of the method, which is untested and may
be vulnerable to an attacker who is able to mimic the radio signal.
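The underlying idea, two parties tapping the same publicly observable
random signal as a keystream, can be sketched as follows; the quasar name,
start time, and the pseudorandom stand-in for recorded radio noise are
purely illustrative and not part of NICT's actual scheme:

```python
# Minimal sketch of the shared-keystream idea behind quasar cryptography:
# both parties record the same random signal starting at an agreed time and
# XOR it with the message. A seeded pseudorandom generator stands in for
# the digitized quasar noise here; this illustrates the concept only, not
# the NICT scheme.
import random

def keystream(quasar_id: str, start_time: str, length: int) -> bytes:
    # Stand-in for recorded radio noise; a real system would digitize the
    # actual signal rather than derive bytes from a software generator.
    rng = random.Random(quasar_id + start_time)
    return bytes(rng.randrange(256) for _ in range(length))

def xor_bytes(data: bytes, stream: bytes) -> bytes:
    return bytes(d ^ s for d, s in zip(data, stream))

message = b"wire transfer approved"
stream = keystream("3C 273", "2006-04-01T12:00:00Z", len(message))

ciphertext = xor_bytes(message, stream)     # sender encrypts
recovered  = xor_bytes(ciphertext, stream)  # receiver, observing the same
assert recovered == message                 # quasar at the same time, decrypts
```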
Machine-to-Machine Communications Still Mired in
Hype
TechNewsWorld (04/01/06) Koprowski, Gene
Industry experts say there is still a lot of hype surrounding
machine-to-machine (M2M) communications, which has been touted as a
potential facilitator of e-commerce and Internet services for machines for
several years now. A recent report by Strategy Analytics says cellular
networks are the most likely M2M enabler since they have become more
reliable, widespread, and secure than ever before. Strategy Analytics'
Cliff Raskind predicts that by 2011 as many as 110 million machines will
directly or indirectly use a cellular connection for M2M. Raskind says,
"Despite the staggering theoretical potential of M2M across many verticals,
cellular M2M will continue to demonstrate the best payback in utilities,
retail, transport/logistics, property management, and health care in the
short-to-medium term." One possible obstacle for M2M to overcome is the
lack of broad inter-industry cooperation among manufacturers of sensors, RF
modules, and the machines they will reside in. The lack of standards will
cause hardware integration and development for M2M to be very costly,
according to Strategy Analytics. One example of M2M is linking a home or
condo's electric meter to the utility so that bills are generated automatically.
Experts are still waiting to see if M2M can live up to its hype. In 2004,
FocalPointGroup predicted M2M communications would generate $180 billion by
2008, but today the industry is worth around $40 billion.
Building Better Applications: Beyond Secure Coding
Enterprise Systems (03/28/06) Schwartz, Mathew
In the face of mounting security breaches, regulatory requirements, and
audits, more companies are working to educate their developers about secure
coding, with the goal of creating software with as few vulnerabilities as
possible. The premise is that improved training will lead to applications
with secure data encryption, strong passwords, and complete input
validation. Bad code accounts for as much as 80 percent of the security
problems in existence today, wrote security consultant Bar Biszick-Lockwood
in an IEEE report. As part of an IEEE group commissioned to study secure
computing, however, Biszick-Lockwood found that most security problems
emerge from constrained budgets, unreasonable deadlines, and a lack of
support from executives, rather than inadequate training. Bad code is more
often indicative of business problems than a flawed development team. The
data breach notification emails that customers receive with alarming
frequency speak more to a basic misunderstanding of the business value of
security at a decision-maker level than to an error in a specific
application. Executive education is the first place to start when trying
to develop a culture of secure computing, says Herbert Thompson of Security
Innovation. Since selling executives on the value of an education program
can be tough, developers can use a calculus that identifies potential flaws
at each stage of development, weighing the cost of fixing bad code before
release against the cost of fixing it after release. With senior
management on board, development teams must then adjust their thinking to
account for what constraints need to be built into the application from the
outset, rather than simply focusing on the application's core
functionality. Once a project is completed, companies must subject their
code to rigorous security testing just as they test for functionality,
attacking it as a hacker would.
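One way to frame that calculus for executives is a simple expected-cost
comparison, sketched below with placeholder figures that are assumptions
rather than numbers from the article or the IEEE report:

```python
# Rough expected-cost comparison a development team might show executives;
# every figure here is a placeholder assumption, not data from the article
# or the IEEE report.

flaws_per_release       = 25        # security flaws likely to be introduced
cost_fix_in_development = 500       # dollars to fix one flaw before release
cost_fix_after_release  = 15_000    # dollars to patch, notify, and support later
incident_probability    = 0.05      # chance an unfixed flaw leads to a breach
incident_cost           = 250_000   # dollars per breach

fix_early = flaws_per_release * cost_fix_in_development
fix_late  = flaws_per_release * (cost_fix_after_release
                                 + incident_probability * incident_cost)

print(f"fix during development: ${fix_early:>10,.0f}")
print(f"fix after release:      ${fix_late:>10,.0f}")
```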
Bits to Atoms (and Atoms to Bits)
Computerworld (04/03/06) P. 34; Anthes, Gary
In a recent interview, Neil Gershenfeld, director of MIT's Center for Bits
and Atoms, which runs the Fab Lab, discussed his view that the world is on the
brink of a third digital revolution. The first two revolutions were in
communications and computing, Gershenfeld asserts, and the third will come
in the form of fabrication, where technology begins to imitate the
molecular processes of living organisms. Gershenfeld argues
that computer science as a term for the discipline is limiting, as it
remains wedded to traditional forms of computing, while ignoring the
superiority of natural forces as agents of calculation, such as quantum
computing. Gershenfeld describes the Internet 0 project, which enables
anyone to create a Web server based on all the original principles of the
Internet for $1. "It will let you do IP to everything, at essentially the
cost of an RFID tag. It's the first step in breaking computation out of
the boxes you see today and integrating it with the physical world,"
Gershenfeld said. Another project at the center is fungible computation,
or raw computing power as a material that can be sprayed, poured, or
unrolled in the desired quantity and location. Self-organizing displays
and servers could accept piecemeal upgrades of processing power that would
greatly increase the flexibility of today's devices. At the Fab Lab,
student projects have included a Web browser for parrots and an alarm clock
that the user must wrestle with to convince it that he is awake. Though
largely overlooked by commercial enterprises, the Fab Lab projects are no
more at the bleeding edge than was the PC when companies running mainframes
still considered it a toy, notes Gershenfeld. "Conventional companies
don't recognize the extent to which these aren't just toys but
fundamentally threaten their business models."
Engineers Urged to Find Their Voice
EE Times (04/03/06) No. 1417, P. 36; Merritt, Rick
In an interview, Segway inventor Dean Kamen discussed the role of the
engineer and his efforts to further education and improve the discipline's
image. Kamen founded FIRST (For Inspiration and Recognition of
Science and Technology) in 1989; its robotics competition now matches as
many as 70,000 students with working engineers each year at events in more
than 33 cities. Kamen believes that engineers must take a more active role
in policy and education, noting that by the nature of their job--keeping
infrastructure running smoothly--they are often kept in the shadows.
Because policy makers often lack the technical expertise to make informed
decisions about issues such as energy use and renewable resources, and
sometimes fail to see the long-term consequences of those decisions, Kamen
says they should seek the advice of engineers. "The environment gets a lot
of political heat when people make bold statements, but ultimately, if the
facts are wrong the laws will be wrong," Kamen said. "Bad facts make bad
laws." Turning to health care, Kamen said that much of the current crisis
stems from inefficient technology, and that he welcomes the government's
apparent shift in favor of increased spending for research and development.
Kamen insists that the patent system is vital to preserving the integrity
of intellectual property, and that the few "bad actors" who bend the rules
should be sanctioned individually. He believes the government should
support the patent office by ensuring that it has the resources to maintain
a ready supply of trained examiners to speed the process and ensure that
only quality patents are awarded, which would also curb the trend of patent
litigation.
2020 Computing: Champing at the Bits
Nature (03/23/06) Vol. 440, No. 7083, P. 398; Ball, Philip
Andrew Steane of the University of Oxford's quantum-computing group
believes a practical quantum computer could be realized by 2020, though
University of Michigan physicist Chris Monroe reports that advances in the
field are proceeding at a slow pace. While quantum computers are likely to
remain niche tools, their ability to simulate other quantum systems is
expected to revolutionize research in such fields as materials science,
chemistry, and perhaps molecular biology by facilitating super-fast
calculations. Just one quantum computer can basically simulate an entire
stable of classical computers by exploiting the superposition or dual-state
nature of quantum bits (qubits), while a quantum processor can also compute
with multiple qubits concurrently through the property known as
entanglement. The disadvantage is the tendency for qubits' superposition
to destabilize when they interact with the environment. Preventing this
phenomenon, known as decoherence, is difficult, and a practical quantum
computer must isolate qubits from the environment yet enable them to
interact with each other to execute computations. There are various
approaches to building quantum computers, including trapping ions or
neutral atoms, using superconducting circuits as qubits, or optical-based
methods such as encoding qubits into the quantum states of photons or using
quantum dots as qubits. There are also software issues, such as a profound
lack of algorithms that can scale up with quantum-level computational
problems. A major obstacle to the generation of new algorithms is the
difficulty of recognizing what problems stand to benefit the most from
quantum-computing techniques.
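The scale-up that superposition and entanglement provide can be seen in a
tiny state-vector calculation, a sketch in ordinary linear algebra rather
than anything resembling a practical quantum computer:

```python
# Tiny state-vector illustration of superposition and entanglement: n qubits
# require 2**n complex amplitudes to describe classically, which is why
# simulation blows up while a quantum device holds the state natively.
# A sketch in ordinary linear algebra, not a practical quantum computer.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT on two qubits
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, np.eye(2)) @ state          # superpose the first qubit
state = CNOT @ state                           # entangle the pair

# Prints [0.707 0 0 0.707]: the Bell state (|00> + |11>)/sqrt(2).
print(np.round(state.real, 3))
print("amplitudes needed for n qubits:", {n: 2**n for n in (2, 10, 50)})
```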
An Image of the Future: Graphical Passwords
Information Today (03/06) Vol. 23, No. 3, P. 39; Poulson, Deborah
Computer users frustrated with having to remember a multitude of
alphanumeric passwords will welcome the development of graphical passwords,
writes Deborah Poulson. First patented by physicist and entrepreneur Greg
Blonder in 1996, graphical passwords work by displaying an image on a
touch-screen or pen-based computer, and prompting the user to select the
areas in the image, called click points, that form a password. To work,
the image must be sufficiently complex, such as a city skyline, and users
must be on the lookout for password thieves trying to shoulder surf, or
steal a password by observing the click points, just as thieves observe
keystrokes to steal conventional passwords. But researchers at Rutgers
University are developing a graphical password scheme that is
invulnerable to shoulder surfing. In their tests, users chose 10 icons
from a pre-selected list, which were then mixed up on the screen with 200
other icons. Rather than clicking on the icons themselves, the subjects
clicked inside the geometric shape that would be formed by lines drawn to
connect the icons. Correctly identifying 10 shapes validates the user.
Shoulder surfing becomes impossible when a user never clicks on the actual
icons, said Rutgers computer science professor Jean-Camille Birget. The
problem with the icon-based password is that it takes too long, due to the
multiple rounds of selecting icons. Though Birget believes icon-based
passwords may only be used in environments where shoulder surfing is a
serious problem, he said test subjects in his experiments did not notice
the extra time required to select the icons.
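The Rutgers-style check, accepting a click that lands inside the shape
formed by the user's secret icons rather than on the icons themselves,
might look like the following sketch; the geometry test and screen
coordinates are illustrative, not the researchers' code:

```python
# Sketch of the Rutgers-style idea: accept a click that falls inside the
# shape formed by the user's secret icons rather than on the icons
# themselves. The geometry test and coordinates are illustrative, not the
# researchers' implementation.

def inside_convex_polygon(point, vertices):
    """True if point lies inside the convex polygon given by ordered vertices."""
    px, py = point
    sign = 0
    for i in range(len(vertices)):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % len(vertices)]
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False     # the click is outside one of the edges
    return True

# Screen positions of three of the user's secret icons in one round,
# ordered around the triangle they form, plus two candidate clicks.
secret_icons = [(120, 80), (400, 90), (260, 300)]
print(inside_convex_polygon((260, 150), secret_icons))  # True: round passes
print(inside_convex_polygon((50, 50), secret_icons))    # False: round fails
```

An observer who sees the click learns only a point inside a broad region,
not which of the roughly 200 icons on screen are the secret ones, which is
why the scheme resists shoulder surfing.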
The Rise of the Smart Phone
IEEE Distributed Systems Online (03/06) Vol. 7, No. 3; Zheng, Pei; Ni,
Lionel M.
Microsoft software engineer Pei Zheng and Hong Kong University of Science
and Technology professor Lionel Ni envision advanced mobile wireless
applications and services that facilitate anytime/anywhere communications
and computing delivered over next-generation multifunctional and
multiwireless cell phones, also known as smart phones. Smart phones are
expected to boast such features as a backlit color LCD screen; a large
memory; persistent storage; augmented wireless capability such as
Bluetooth, Wi-Fi, and infrared; and a sophisticated operating system with
such applications as games, calendar, scheduler, address book, media
player, recorder, book reader, notation, and calculator functionalities.
The hardware of a smart phone typically features a microprocessor, a
mainboard, an antenna, ROM, RAM, a battery (usually NiMH, Li-ion, or
Li-polymer), additional storage such as flash memory or a secure digital
card, a keyboard or keypad, network interfaces, a thin-film transistor or
LCD screen, and a hard disk in some models. Smart-phone software platforms
such as Symbian OS, Windows Mobile and the .Net Compact Framework, Java and
Binary Runtime Environment (BREW), Palm OS, and Embedded Linux are
supplanting cell phone makers' proprietary systems. Zheng and Ni focus on
three varieties of emerging services and applications for smart phones:
Personalized location-based services such as navigation assistance,
location-enhanced asset management, mobile social networking, and mobile
local search, which are facilitated by positioning methods; m-commerce that
must resolve technological, security, and stability issues in order to
realize its full potential; and mobile enterprise applications such as
customer relationship management, supply chain management, and enterprise
resource planning, whereby a smart phone functions as an always-on end
point to provide real-time data access and transaction support.