An Ominous Milestone: 100 Million Data Leaks
New York Times (12/18/06) P. C3; Zeller, Tom Jr.
Wired News senior editor Kevin Poulsen reported on his blog last Thursday
that with disclosures from UCLA (800,000 records stolen), Aetna (130,000
records stolen), and Boeing (320,000 records stolen), over 100 million
records had been stolen since the ChoicePoint breach almost two years ago.
While perpetrators of the Aetna and Boeing laptop thefts were probably not
after personal records, the same cannot be said for the UCLA data theft,
where a hacker had been accessing the university's database of personal
information for over a year before being discovered. A Public Policy
Institute study, using data from the Identity Theft Resource Center, showed
that of the 90 million records stolen between Jan. 1, 2005, and March 26,
2006, 43 percent were at educational institutions. "College and university
databases are the ideal target for cyber criminals and unscrupulous
insiders," says Guardium chief technology officer Ron Ben-Natan. "They
store large volumes of high-value data on students and parents, including
financial aid, alumni and credit card records. At the same time, these
organizations need open networks to effectively support their faculty,
students and corporate partners." While some claim that 100 million is a
modest estimate, Indiana University Center for Applied Cybersecurity
Research director Fred H. Cate says the threat posed by loss of personal
data is exaggerated because people are too quick to equate the loss of data
with its illegal use. However, others argue that once a Social Security
number or birthday is stolen, it can be used indefinitely since these never
change. Criminals have not yet devised ways to make use of the massive
amounts of information they have obtained, but this inability will not last
forever. While Congress has failed to pass data security legislation, 18
states now allow citizens to freeze their credit lines, and seven more
allow victims of identity theft to do so.
What If Your Laptop Knew How You Felt?
Christian Science Monitor (12/18/06) P. 12; Lupsa, Cristian
Using "affective computing" techniques, experts from a range of
disciplines are working to develop software that can detect human emotion
by analyzing the slightest details of facial expressions. Current
prototypes analyze faces in videos and photographs to determine the
person's emotions. The software isolates a
face and extracts both rigid features (head and neck movements) and
nonrigid features (facial expressions); this information is then placed
into categories using codes for the various features. Finally, a database
of images displaying various patterns of expression is consulted, which the
program uses to identify the basic emotion shown by the face in the image,
or the program describes the movements it has seen and draws a conclusion as
to their meaning. MIT's Affective Computing Group has developed a system
called "Mind Reader" that they hope can one day help those with autism to
pick up on the emotions displayed by others, something that the condition
makes very difficult. Mind Reader uses a camera to conduct analysis of
facial expressions in real time, and uses color-coded graphics to indicate
someone's response to stimuli. University of Pittsburgh psychologist
Jeffrey Cohn, one of the few specialists certified to use the Facial Action
Coding System that classifies over 40 Action Units (AUs) of the face, can
use subtle, precise movements, such as those of the corners of the lips or
eyebrows, to identify the emotions of a subject. He is working with
computer scientists to teach machines to read AUs and describe the exact
muscle movement witnessed. The security industry is very interested in
emotion recognition technology for use in lie detection, identification,
and expression reading, but for now, controlled lighting is required,
meaning surveillance cameras could not utilize the technology. The
potential for confusion also exists, where one pattern of expressions could
be understood as multiple emotions that are quite different.
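To make the pipeline concrete, here is a minimal Python sketch of the final
matching step, with hypothetical feature codes and a toy pattern database
standing in for the real trained system:

# Toy database mapping coded feature patterns to basic emotions
# (codes and entries are illustrative, not from the research).
PATTERN_DB = {
    frozenset({"brow_raise", "lip_corner_pull"}): "happiness",
    frozenset({"brow_lower", "lip_press"}): "anger",
    frozenset({"brow_raise", "jaw_drop"}): "surprise",
}

def classify_expression(coded_features):
    """Return a basic emotion if the coded pattern is known; otherwise
    describe the movements seen, as the article says the program does."""
    emotion = PATTERN_DB.get(frozenset(coded_features))
    if emotion is not None:
        return emotion
    return "movements observed: " + ", ".join(sorted(coded_features))

print(classify_expression(["brow_raise", "jaw_drop"]))  # -> surprise
print(classify_expression(["lip_press"]))               # -> described only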
Researchers Demonstrate Direct Brain Control of Humanoid
Robot
UW News (12/14/06) Hickey, Hannah
University of Washington researchers have developed a system whereby a
humanoid robot can be instructed to pick up objects and move to specific
locations by detecting signals from a human brain. UW associate professor
of computer science and engineering Rajesh Rao said, "This is really a
proof-of-concept demonstration. It suggests that one day we might be able
to use semi-autonomous robots for such jobs as helping disabled people or
performing routine tasks in a person's home." The "master," who wears a
skull cap with 32 electrodes attached to it that sense brain activity using
a technique known as electroencephalography, looks at a computer screen
that shows displays from two cameras mounted on and above the robot, upon
which objects and locations randomly light up. When the object the master
wants the robot to pick up, or the location the master wants it to move to,
lights up, the resulting burst of "surprise" brain activity signals the
robot to execute that command. "One of the important things about this
demonstration is that we're using a 'noisy' brain signal to control the
robot," Rajesh says. "The technique for picking up brain signals is
non-invasive, but that means we can only obtain brain signals indirectly
from sensors on the surface of the head, and not where they are generated
deep in the brain. As a result, the user can only generate high-level
commands such as indicating which object to pick up or which location to go
to, and the robot needs to be autonomous enough to be able to execute such
commands." Further tasks, such as the robot avoiding obstacles through
awareness of its surroundings, will require giving it greater learning
ability. The system allows robot and master to be anywhere in the world,
so long as there is an Internet connection between them. Rao calls it a
"primitive" step in the direction of having robots aid disabled people or
perform household chores.
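A toy Python sketch of the selection protocol described above: options are
highlighted one at a time, and a "surprise" response when the desired option
lights up identifies the command. The detector here is simulated with
assumed hit and false-alarm rates; a real system classifies noisy EEG
signals.

import random

OPTIONS = ["pick up block A", "pick up block B", "go to table", "go to door"]

def surprise_detected(highlighted, intended):
    """Stand-in for an EEG classifier that fires, noisily, on the
    option the user actually wants."""
    if highlighted == intended:
        return random.random() < 0.8   # assumed true-positive rate
    return random.random() < 0.05      # assumed false-positive rate

def select_command(intended, rounds=10):
    """Tally detections over repeated highlighting rounds and pick the
    option with the most votes, which averages out the noise."""
    votes = {opt: 0 for opt in OPTIONS}
    for _ in range(rounds):
        for opt in OPTIONS:            # highlight each option in turn
            if surprise_detected(opt, intended):
                votes[opt] += 1
    return max(votes, key=votes.get)

print(select_command("go to table"))   # usually prints "go to table"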
Researchers Create DNA Logic Circuits That Work in Test
Tubes
Caltech (12/07/06) Tindol, Robert
Researchers at the California Institute of Technology have built DNA logic
circuits that can operate in saltwater, technology that one day could lead
to embedding intelligence in chemical systems that could be used for
bionanotechnology. While water and digital logic wouldn't normally mix,
these circuits are based on chemistry rather than electronics, explains
Caltech computer scientist and group leader Erik Winfree. Circuits are
encoded in high and low concentrations of DNA molecules, taking the place
of high and low voltage signals. Information processing is executed by
chemical logic gates, intricate bundles of short DNA strands that release
their output molecule when they encounter the right input molecule.
Caltech postdoctoral scholar and lead author of the paper Georg Seelig
explains, "We were able to construct gates to perform all the fundamental
binary logic operations--AND, OR, and NOT. These are the building blocks
for constructing arbitrarily complex logic circuits." The series of
circuits created, while small by normal computing standards, could prove
very helpful in scaling up biochemical circuits. "Biochemical circuits
have been built previously, both in test tubes and in cells," says Winfree.
"But the novel thing about these circuits is that their function relies
solely on the properties of DNA base-pairing. No biological enzymes are
necessary for their operation. This allows us to use a systematic and
modular approach to design the logic circuits, incorporating many of the
features of digital electronics."
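As a rough illustration of how concentrations can stand in for voltages,
here is a minimal Python abstraction of the AND, OR, and NOT gates; the
threshold and concentration values are illustrative, and the real gates
work by DNA strand interactions rather than arithmetic:

HIGH, LOW = 1.0, 0.0    # high/low strand concentrations play the role
THRESHOLD = 0.5         # of high/low voltages in electronic logic

def as_bit(concentration):
    return concentration > THRESHOLD

def AND(a, b):   # releases output only when both input strands are present
    return HIGH if as_bit(a) and as_bit(b) else LOW

def OR(a, b):
    return HIGH if as_bit(a) or as_bit(b) else LOW

def NOT(a):
    return HIGH if not as_bit(a) else LOW

# Gates compose into arbitrarily complex circuits, e.g. (a AND b) OR (NOT c):
print(as_bit(OR(AND(HIGH, HIGH), NOT(HIGH))))  # -> True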
Your Buddy in the Sky
Engineering & Physical Sciences Research Council (12/15/06)
Researchers at the University of Bath have designed a new system for
computerized cockpits that would enable the autopilot to handle more
explicit details, such as the next course of action and the objective of a
maneuver.
Professor Peter Johnson and Rachid Hourizi believe that allowing the
flight computer to perform such calculations would ultimately improve the
way pilots and autopilots work together. Pilots usually oversee the more
explicit details of a flight. Communication problems between pilots and the
autopilot rarely occur, but when they do, the result is usually a moment of
confusion, though such problems can also lead to an accident. The limited
interaction and communication capabilities of the autopilot, not the pilot,
are usually at fault. "The interface is based on the communication procedures used in a
number of safety critical domains from fire fighting to military operations
where the current situation, action to be taken, and objectives are
explicitly stated," says Hourizi. "Our new system brings the interaction
between autopilot and pilot onto a more robust level." It should take 10
years or less to integrate the technology into active autopilots, the
researchers believe.
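A minimal sketch of what such an explicit exchange might look like as a
data structure; the field names and example values are hypothetical, not
from the Bath interface:

from dataclasses import dataclass

@dataclass
class AutopilotStatus:
    situation: str   # what the autopilot believes is happening now
    action: str      # the next course of action
    objective: str   # the goal of the maneuver

status = AutopilotStatus(
    situation="on approach, glideslope captured",
    action="reducing thrust and extending flaps",
    objective="stabilize at 140 knots by 1,000 feet",
)
print(status)  # all three elements are stated explicitly, never implied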
P2P: From Internet Scourge to Savior
Technology Review (12/15/06) Roush, Wade
Long blamed for enabling a large portion of digital piracy, P2P
networks are now proving their worth in helping the Internet deal with the
huge bandwidth demands brought about by digital video. Experts have
predicted the end of the Internet in the past, and each time they have been
proven wrong; now the threat posed by Internet video seems to have been
overcome as well. P2P programs, which allow users to download content from
other users' hard drives, could be very beneficial to service providers and
content distributors who are struggling to meet the bandwidth demands that
Web video imposes. Carnegie Mellon computer scientist Hui Zhang says,
"2006 will be remembered as the year of Internet video. Consumers have
shown that they basically want unlimited access to the content owners'
video. But what if the entire Internet gets swamped in video traffic?" P2P
downloads may comprise 60 percent of network traffic, and 60 percent of
that traffic is video, according to CacheLogic. Researchers and others are
working on various P2P programs that take advantage of both the downlink
and uplink capacity in the Internet infrastructure. Several P2P programs
are being released that allow users to purchase movies and TV shows
legally and download them to a shared folder. Zhang points out that
although it would bring about increased traffic, P2P traffic could also be
labeled, allowing service providers to keep track of it and decide exactly
how much can pass through their network; "Otherwise, as applications like
video downloading really take off, we will see a congested network, which
will in turn impede the development of video-sharing technology."
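A quick check of the CacheLogic figures quoted above: if P2P is roughly 60
percent of network traffic and roughly 60 percent of that is video, then
video moved over P2P already accounts for about a third of all traffic.

p2p_share = 0.60            # P2P share of all network traffic
video_share_of_p2p = 0.60   # video share of P2P traffic
print(f"{p2p_share * video_share_of_p2p:.0%} of all traffic")  # -> 36%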
Just the Stats: A Closer Look at STEM Majors
Diverse: Issues in Higher Education (12/15/06) Majesky-Pullmann, Olivia
In response to an inquiry as to whether a department's predominant ethnic
composition shapes the educational accomplishment of international science,
technology, engineering, and mathematics (STEM) students of that race or
ethnicity, Olivia Majesky-Pullmann attempts to put the issue in
perspective. She notes that international students earned close to 28
percent of science and engineering doctorates awarded at minority-oriented
institutions and 43 percent at all schools during the 2003-2004 academic
year. Majesky-Pullmann also cites a study by the National Science
Foundation, NASA, the U.S. Departments of Education and Agriculture, NIH,
and the National Endowment for the Humanities demonstrating that the
percentage of international students in doctoral STEM programs is high;
between 1974 and 2005, the share of international doctorate recipients
rose from 11 percent to 33 percent, while temporary visa holders tended to
gravitate the most to engineering and physical sciences last year. Over 58
percent of all engineering doctorates, 44.5 percent of physical science
doctorates, and 27.4 percent of life science doctorates were awarded to
non-U.S. citizens. The population of foreign professors on U.S. campuses
increased 8.2 percent between the 2004-2005 and 2005-2006 academic years to
total 96,981, according to the Institute of International Education (IIE);
most international professors teaching in the United States in 2005-2006
specialized in STEM. Majesky-Pullmann referred to IIE data to determine
the 10 leading U.S. institutions with the highest international student
populace, but five of the schools could not or would not release data on
which fields their foreign scholars teach. The percentages of foreign STEM
students and foreign STEM professors at Columbia University are almost
mirror images, while the ratio of international STEM faculty to
international STEM students is four to one at Ohio State and Indiana
University-Purdue University Indianapolis and two to one at Bucknell
University and the University of Texas at Austin.
Not YouTube, HugeTube: Purdue Researchers Stream Massive
Internet Video
AScribe Newswire (12/15/06) Talley, Steve
Purdue University researchers say their new approach to streaming video
over the Internet could offer real-time collaboration opportunities for
more than just scientists. In the entertainment industry, for example,
movie studios could have employees work together in real time as they
proceed with films that are in production, and they could also use the
technology to simultaneously stream a new release into theaters across the
country. The new method was on display at last month's SC06 conference in
Tampa, Fla., where the high-speed National LambdaRail research network was
used to stream an animated video, the equivalent of about 12 movie DVDs, in
two minutes. Researchers at Purdue's Envision Center for Data
Perceptualization transmitted video measuring 4096 pixels by 3072 pixels
(about 12 17-inch computer monitors arranged in a grid of three monitors
high and four monitors wide) at 7.5 gigabits per second, reaching a peak
rate of 8.4 gigabits per second. They were able to stop, replay, and zoom
in the video in real time. Laura Arns, associate director and research
scientist at the Envision Center, says the equipment used could be
purchased off the shelf for less than $100,000. "The video was not
compressed and it wasn't done using expensive, highly specialized
equipment," Arns says.
NCSA Increasing Nontraditional Users
News-Gazette (12/18/06) Kline, Greg
The National Center for Supercomputing Applications at the University of
Illinois has become renowned for creating tools for scientific uses that
require large-scale number-crunching abilities, but the center has recently
found its services in demand by many nontraditional users, such as the
manufacturers of Mars candy, who wanted to use supercomputing technology to
improve their business. In response, NCSA has opened the new Institute for
Advanced Computing Applications and Technologies, which is designed to
integrate supercomputing experts into research groups throughout the
campus, specifically in disciplines the center does not regularly deal
with, although science and engineering users will be included. "It is an
incredibly powerful mixture that will profoundly affect the future of both
research and education," UI Chancellor Richard Herman said at the program's
introduction. NCSA director Thom Dunning predicts that this initiative
will help prepare the center for the upcoming challenge presented by
peta-scale computing. The institute will be organized by themes, and is
accepting theme proposals for one or two projects that will start next
year. Two new supercomputing clusters, one of which is capable of 45
trillion calculations per second, will be dedicated to the institute.
Configuration: The Forgotten Side of Security
Linux.com (12/12/06) Byfield, Bruce
Configuration-centered security, also known as security architecture or
proactive security, is often overlooked in favor of reactive measures such
as anti-virus programs or security patches, even though it is more
efficient. The configuration security approach involves making the
computer system's design and installation a security component. "The right
time to apply best practices is during system design," says MIT professor
emeritus Jerry Saltzer. "That way, installation, configuration, and daily
use will automatically tend to be more secure." Saltzer says the stress on
reactive rather than proactive security is partly driven by vendors who
roll out flawed systems, and partly by organizations who erroneously
consider security to be an IT-only issue. A major reason why
configuration-centered security is ignored is the tendency to balance
security against user convenience, with convenience typically having
priority. A system's design and configuration should proceed with five
objectives in mind, according to Keith Watson with Purdue University's
Center for Education and Research in Information Assurance and Security
(CERIAS): building for a particular purpose and including only the bare
minimum needed to fulfill that purpose; protecting the availability and
integrity of data at rest; safeguarding the confidentiality and integrity
of data in transit; disabling all redundant resources; and restricting and
recording access to required resources. Watson notes that an emphasis
on constructing secure and resilient systems at the outset makes reactive
security less necessary later on. Among the suggestions experts offer for
improving security awareness are enforcing a clear security policy, the
removal of "a culture of blame," and inclusion of "a clear line of
escalation."
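One way to read Watson's five objectives is as a build-time checklist that
a system must pass before deployment. The sketch below is hypothetical; the
point is that each objective becomes something the build verifies rather
than an afterthought:

system_config = {
    "purpose": "public web server",
    "installed_packages": ["nginx", "openssl"],  # bare minimum for purpose
    "data_at_rest_protected": True,       # availability and integrity
    "data_in_transit_encrypted": True,    # confidentiality and integrity
    "unused_services_disabled": True,     # redundant resources disabled
    "access_logging_enabled": True,       # access restricted and recorded
}

REQUIRED = [
    "data_at_rest_protected",
    "data_in_transit_encrypted",
    "unused_services_disabled",
    "access_logging_enabled",
]

failures = [key for key in REQUIRED if not system_config.get(key)]
print("PASS" if not failures else f"FAIL: {failures}")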
Flexible Electronics Advance Boosts Performance,
Manufacturing
EurekAlert (12/13/06) Orenstein, David
Researchers at Stanford University and UCLA have found a way to
manufacture high-performance organic transistors.
"Until now, the possibility of fabricating hundreds of [organic
single-crystal] devices on a single platform [had] been unheard of and
essentially impossible from previous methods," says Alejandro Briseno, the
lead author of the study who is no longer at UCLA. Their approach to
manufacturing large arrays of single-crystal transistors involves placing
electrodes on silicon wafers and flexible plastic; using the polymer
polydimethylsiloxane to make a stamp for the desired pattern, coating the
stamp with octadecyltriethoxysilane (OTS), and pressing it to the surface;
and then adding a vapor of the organic crystal material onto the surfaces.
Where the OTS is placed, semiconducting organic crystals will grow after
the vapor condenses, forming transistors as the crystal bridges the
electrodes. "The work demonstrates for the first time that organic single
crystals can be patterned over a large area without the need to laboriously
handpick and fabricate transistors one at a time," adds Zhenan Bao, a
chemical engineering professor at Stanford. The breakthrough may clear the
way for placing low-cost sensors on product packaging and making thin and
floppy e-paper displays.
White Goods Become Smart Goods
Electronic Design (12/15/06) Allan, Roger
Household appliances are being transformed into smart machines thanks to
the incorporation of semiconductor ICs, which is becoming more practical
as the chips become more cost-effective. Design
engineers face the challenge of balancing a call to boost white goods'
intelligence, reliability, and ease of use with consistent market pressures
not to exceed consumers' cost expectations. Texas Instruments systems
applications engineer Arafee Mohammed says integration of both hardware and
software at the design level is essential to addressing this challenge.
Among the factors impacting the use of electronics in white goods are
energy-efficiency legislation, water conservation issues, the Restrictions
of Hazardous Substances directive, and radio-frequency
interference/electromagnetic interference mandates. Energy efficiency can
be greatly enhanced by advanced motors and motor controllers, which can be
driven by many commercially available microcontroller chips. Customer
satisfaction is impacted by the effectiveness of the appliance's
human-machine interface, and a well-designed graphical user interface can
marry ease of use, stylishness, and intimacy. Among the challenging design
factors for white goods is the fact that refrigerators and other devices are
in constant operation. It is anticipated that in the future, home
appliances will be organized into a network that is capable of
appliance-to-appliance communication through the Internet or some other
medium.
Time to Cool It
Economist (12/13/06) Vol. 381, No. 8508, P. 82
As the processing power of computers increases, so does the need for a way
to keep them from overheating, so many experts have devoted themselves to
developing the next big advancement in cooling technology. One option is
paraelectric materials, which cool down when a current is applied to them.
Cambridge University researcher Alex Mischenko has used paraelectric
materials to achieve temperature drops five times bigger than any
previously recorded. Advancements in heat sink and fan technology have reached their
limits, as has the technique of dividing processing power among two or
even four processors. A lot of work is being done with the thermoelectric
effect, which generates electricity using heat and creates a cooling effect
from an electrical source. To maximize this effect, a material's crystal
structure must allow electrons to flow freely while blocking the lattice
vibrations that carry heat. Nextreme Thermal Solutions
researcher Rama Venkatasubramanian claims to have developed thermoelectric
refrigerators that can be placed on computer chips and cool them by 10
degrees Celsius, and UC Santa Cruz's Ali Shakouri claims to have made even
smaller refrigerators. However, a new system launched by Apple that cools
a PC by pumping liquid through channels in the processor, and then to a
radiator where heat is given up to the atmosphere, may be the most practical
solution. IBM is working with tiny jets that can agitate this liquid so
all of it touches the outside of the channel, where the heat exchange
occurs. A combination of all of these technologies may eventually be used
to cool down processors.
Four Key Trends in Building Better Datacenters
Business Communications Review (12/06) Vol. 36, No. 12, P. 45; Robb, Drew
The growth, cost containment, and efficiency improvement of datacenters
are rooted in four key operational trends--consolidation, virtualization,
network upgrades, and better power and cooling infrastructure--that are
inextricably connected. Infrastructure consolidation is seen as a route to
streamlining datacenter management; this trend is represented by increasing
server consolidation and processor density, as well as growing use of blade
servers and switches. "Many companies are consolidating their IT equipment
into fewer locations to improve management and reduce costs," observes
Kevin McCalla with Emerson Network Power subsidiary Liebert.
Virtualization, which is chiefly employed to consolidate servers, removes
lock-in to particular applications or operating systems, facilitating easy
sharing and automatic repurposing of servers according to service level
agreements and business priorities, notes Egenera executive Susan Davis.
Many organizations must upgrade to 10 Gbps Ethernet to handle the increased
network load that results from consolidation and virtualization, says
TheInfoPro's Bill Trousell. "In order not to have the network contribute
to any latency issues, companies have to make sure the backbone has the
bandwidth to handle the aggregate server total, which is much larger than
it used to be," he explains. Consolidation and virtualization not only
raise datacenter density, but also complicate sufficient power and cooling
provision. Increased cooling and heat management is only an interim
measure; what is really needed is a reduction in power consumption and
heat generation.
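A toy version of Trousell's arithmetic: after consolidation, the backbone
must carry the aggregate load of many virtualized servers. The numbers
below are illustrative, not from the article:

hosts = 40               # physical hosts after consolidation
vms_per_host = 10        # virtual servers per host
avg_mbps_per_vm = 25     # assumed sustained load per virtual server

aggregate_gbps = hosts * vms_per_host * avg_mbps_per_vm / 1000
print(f"aggregate load: {aggregate_gbps:.0f} Gbps")  # -> 10 Gbps
# A 1 Gbps backbone would saturate, which is why 10 Gbps Ethernet
# upgrades tend to accompany consolidation and virtualization.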
Bits on the Big Screen
IEEE Spectrum (12/06) Vol. 43, No. 12, P. 42; Wintner, Russell
Digital cinema standards are finally emerging after almost 10 years of
discussion and inaction. For moviegoers, the transition means better film
image quality, more diverse entertainment at local theaters, and more 3D
offerings; for theater owners, digital cinema will streamline and lower the
cost of handling, shipping, storing, and discarding films, as well as allow
on-site movie replication; and studios will save a lot in terms of film
processing and distribution. In the first stage of digital movie
distribution, the film is digitized if it is 35-mm, or converted to cinema
format if it is a digital movie file, and then compressed using JPEG2000,
which combines the best possible image quality with the least-burdened
intellectual property. The compressed files are encrypted to thwart
piracy, and from there the film can take one of several directions to get
to the cinema: The files can be copied onto a hard drive by a digital
cinema replicator and shipped to the theater; or the files can
be sent to a theater management system via satellite. The files are
transferred to a media player, which is mated to the digital projector.
The media players must be networked with a central management server so
that films can be shown throughout a multiplex. The digital projector
usually employs a Digital Light Processing (DLP) micromirror system. The
digital cinema industry group opted for standard uncompressed CD-quality
audio, specifically the Wave digital encoding format. Obstacles to mass
adoption of digital cinema include the upfront costs to studios and the
risk to theater owners, and one solution is to implement a "virtual print
fee" that the studio pays to the company that supplies and sets up their
digital cinema equipment and software for each screening of a movie on the
digital cinema system.
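A schematic Python sketch of the distribution pipeline described above; the
function names and strings are placeholders rather than the industry's
actual tooling, which follows the DCI specifications:

def prepare(film, is_35mm):
    """Digitize or convert, compress with JPEG2000, then encrypt."""
    digital = "digitized scan" if is_35mm else "converted cinema format"
    return f"encrypted(JPEG2000({digital} of {film}))"

def distribute(package, via_satellite):
    """Ship a replicated hard drive, or transmit by satellite to the
    theater management system."""
    route = ("satellite -> theater management system" if via_satellite
             else "replicated hard drive -> shipped to theater")
    return f"{package} | {route} | media player -> digital projector"

package = prepare("feature film", is_35mm=True)
print(distribute(package, via_satellite=False))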
The Science of Software
Redmond Developer News (12/06) Barney, Doug
The Microsoft Research European Science Program brings together Microsoft
scientists with leading researchers to tackle some of the most challenging
problems in the world by developing new software tools, and the initiative
has the potential to revolutionize software and even corporate development.
Hardware innovations will underlie the creation of new and distinctive
computing paradigms, but software will supply the muscle; Microsoft is
touting robust, peer-to-peer networks and replicated databases for common
access so that programs and data can be effectively shared by researchers.
Service-oriented architectures will play a vital role in next-generation
distributed environments by facilitating recognition, interaction, and
data-exchange between distributed programs and components. Microsoft
defines software components as source code, executable code, and the
"expertise" of the people who build them, and new and streamlined
programming paradigms
must be crafted in order to make it easier for researchers to write code.
Microsoft Research's "Towards 2020 Science" report says it is Microsoft's
aspiration to provide researchers with "a front-end that is something akin
to an 'Office for Science': an extensible, integrated suite of user-facing
applications that remain integrated with the development environment that
help address the many human-computer interaction issues of the sciences."
The codification of knowledge, which for Microsoft involves transforming
knowledge into a computer-manipulable discrete program or data for the
purpose of uncovering hidden meaning within large data sets, can be
achieved through emerging computing technology. The convergence of
computers and science raises the possibility of synthetic biology and
molecular computers, which could serve as platforms for smart drug systems.
Microsoft is also leading an inquiry into systems biology, which could
lead to a computational model constructed from bioinformatics and help
create a rich and executable programming language that describes
sophisticated biological systems and behaviors, according to Microsoft
Research European Science Program director Stephen Emmott.