Planned Worker ID Called Vulnerable
San Francisco Chronicle (06/25/07) P. A4; Lochhead, Carolyn
Various proposals to control illegal immigration rely on an electronic
employer verification system that a Department of Homeland Security study
criticizes as susceptible to identity theft, employer abuse, data
inaccuracies, and privacy breaches, which could only be addressed through
heavy enforcement. The detection of identity theft is not designed into
the Web Basic Pilot system, and the size of such a system seriously
complicates practical application, according to experts such as SRI
International Computer Science Laboratory scientist Peter Neumann, who
cited the approach's many shortcomings in recent testimony before Congress
on behalf of ACM. He said lawmakers often have unrealistic hopes for
technological solutions to social problems, and referred to a series of
government software development blunders that include "many highly visible
projects that have been late, over budget, or indeed abandoned after many
years and large expenditures." Others mentioned that it was possible to
build such a system, acknowledging that it would likely cost billions of
dollars and require an immense technical effort. The DHS study concluded
that the system is vulnerable to anyone posing as an employer in order to
gain access, and Neumann said there is no doubt that criminals would start
creating "phishing" emails claiming to be from the DHS requesting worker
data from unwitting employers. Mike Aitken with the Society for Human
Resource Management predicted that the increasing security of immigration
documents will raise the likelihood "that U.S. citizens' identities are
going to be stolen and fraudulently used for employment by those who don't
want to come out of the shadows," to the degree that the situation "will be
worse than what we have now." ACLU legislative counsel Tim Sparapani
warned that the system would empower the government to refuse people the
right to work on an unprecedented scale, while being ultimately
ineffective. For more on Peter Neumann's testimony, visit
http://www.acm.org/usacm.
High-Tech Titans Strike Out on Immigration Bill
New York Times (06/25/07) P. A1; Pear, Robert
Representatives from leading technology companies have been pressuring
lawmakers to increase the number of foreign workers allowed into the United
States under the H-1B visa program, but have had limited success so far.
An immigration bill currently before the Senate would expand the number of
visas available to skilled foreign professionals, but high-tech companies say
the expansion is not enough. Those opposed to expanding the visa program
argue that H-1B visas allow large technology companies to hire immigrant
workers because they can pay them less than similarly skilled American workers.
Originally, the bill created a point system that would reward applicants
based on their degree level and job skills, but tech companies opposed the
system because it would prevent them from sponsoring specific applicants
for specific positions. An upcoming amendment would set aside 20,000 green
cards for immigrants with extraordinary abilities, such as outstanding
professors and researchers or managers and executives of multinational
organizations. The amendment would also give employers five years to
adjust their hiring practices to the point system. The number of green
cards for employer-sponsored immigrants would gradually decline from
115,000 during the first two years to 44,000 in the fifth year, with no
employer-sponsored green cards after that. The Senate bill would, however,
raise the limit of H-1B visas from the current 65,000 to 115,000 in 2008
with the possibility of a 180,000 limit in later years if labor demands
require another increase. The number of foreign students educated in the
United States and allowed to stay after graduation would also be raised.
The amendment would double the number of exempt graduate degree recipients,
as well as add 20,000 H-1B visas for people who hold an advanced degree
from a university outside the United States.
K-12 Alliance Launched to Reverse Declining Participation
of Girls in Computing Careers
Business Wire (06/25/07)
The National Center for Women & Information Technology (NCWIT) announced
the creation of a new coalition intended to boost the number of girls
interested in pursuing careers in computing fields. The NCWIT K-12 Alliance,
composed of 19 organizations including ACM, will work to improve the
visibility of girls' involvement in computing and information technology,
remove obstacles preventing female participation in the field, improve
computer education at the K-12 level, and raise awareness that strong
computer skills create success in many other careers. The U.S. Department
of Labor says that only 26 percent of IT workers in the U.S. are women and
predicts that more than 1 million computing jobs will be added to the
workforce by 2014. Surveys by the Higher Education Research Institute show
an 80 percent decline between 1996 and 2005 in the number of incoming
undergraduate women interested in computer science. "In the next seven
years, women will account for more than half of the nation's workforce,"
says NCWIT CEO and co-founder Lucy Sanders. "If U.S. companies wish to
maintain their competitive advantage in IT-related fields, they cannot
afford to miss out on the input of half the population. Women can, and
must, play a more significant role in building an innovative and
technically trained workforce." The K-12 Alliance's first project will be
to release a resource kit called "Gotta Have IT" that will contain posters,
career information, digital media, and more for teachers to use in the
classroom. The K-12 Alliance is also creating a permanent networking
system to help K-12 Alliance members distribute information to educators,
parents, and other K-12 influencers.
Linux Coders Tackle Power Efficiency
CNet (06/25/07) Shankland, Stephen
Linux programmers are making Linux computers more power efficient by
creating a "tickless" kernel that abandons traditional time-keeping methods
for a more power-friendly technique. Intel is also developing a power-saving
tool, called PowerTop, that makes it easier for users to see what
software is needlessly keeping a computer's processor running. "It makes a
lot of sense," says Illuminata analyst Gordon Haff. "Raw, flat-out
horsepower is less and less what the game's about--especially on laptops,
which are becoming more common." The tickless kernel is already making its
way into the Linux mainstream. Intel programmer and longtime kernel
developer Arjan van de Ven says a laptop running Linux now consumes 15 to
25 percent less power when idling than it did three months ago. Processors
consume a significant amount of power, frequently more than a 100-watt
light bulb, and heat reduction techniques such as air conditioning in data
centers and fans consume even more power. For some time now, processors
have been able to enter power-saving modes when a user sets the computer to
standby mode, but more can be done. Gigahertz-frequency processor cycles
last less than a billionth of a second, which would allow a processor to
enter and leave low-power states multiple times between two keystrokes of a
fast typist, and software often needlessly keeps the kernel active. The
tickless kernel, instead of frequently checking for work to be done,
schedules the hardware to interrupt when it knows a future job will require
attention. The tickless kernel also improves support for virtualization,
which lets multiple operating systems run simultaneously on the same
computer and can thus reduce the total number of machines in operation,
saving even more power.
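The power win comes from replacing a fixed-rate timer interrupt with one-shot
wakeups programmed only when work is actually due. As a rough illustration
only (a conceptual Python sketch, not kernel code; the real change lives in
the kernel's C timer subsystem), the two loops below count how often the
processor must leave its low-power state under each scheme:

    import heapq

    # Conceptual sketch: "timers" is a heapified list of (deadline_seconds, job_name).

    def wakeups_with_periodic_tick(timers, tick=0.001):
        """Classic approach: wake on every tick and poll for expired timers."""
        now, wakeups = 0.0, 0
        while timers:
            now += tick
            wakeups += 1                  # the CPU leaves its low-power state every tick
            while timers and timers[0][0] <= now:
                heapq.heappop(timers)     # remove (and, in a real kernel, run) expired jobs
        return wakeups

    def wakeups_tickless(timers):
        """Tickless approach: one wakeup programmed for each upcoming deadline."""
        wakeups = 0
        while timers:
            heapq.heappop(timers)         # next deadline; a real kernel sleeps until it
            wakeups += 1                  # exactly one wakeup per real event
        return wakeups

    jobs = [(0.5, "flush disk cache"), (2.0, "network keepalive")]
    print(wakeups_with_periodic_tick(list(jobs)))   # roughly 2,000 wakeups
    print(wakeups_tickless(list(jobs)))             # 2 wakeups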
Hopkins Center to Help War Effort
Baltimore Sun (06/25/07) P. 1A; Emery, Chris; Connolly, Allison; Gorman,
Siobhan
Johns Hopkins University will receive at least $48 million from the
Department of Defense to develop computer systems that will help military
and spy agencies process the massive amounts of intelligence information
they collect. The grant is for new research focused on improving
technology that can automatically translate and analyze speech and text
from multiple languages, university officials announced. The technology
would help overwhelmed intelligence analysts manage the huge amounts of
information, often in Arabic, being gathered in Iraq and the war on terror.
The Human Language Technology Center of Excellence is being established
near Hopkins' Homewood campus and will be staffed by engineers, computer
scientists, mathematicians, cognitive scientists, and linguists. Experts
from the University of Maryland, College Park, and BBN Technologies will
also participate in the research project. "It's really supposed to be a
fresh look at this problem," says Gary W. Strong, the center's executive
director. "This technology has hit a wall at this point." Mark M.
Lowenthal, a former senior intelligence officer who oversaw language
training across the intelligence agencies, says advances made in machine
translation of conversations would be a major boost for intelligence
agencies. "Everyone is busting their heads on machine translation,"
Lowenthal says. "That's sort of the Holy Grail." Carnegie Mellon's Alan
Black, a researcher at the school's Language Technologies Institute, says
current translation systems are slow and expensive and have difficulty
translating words when not spoken clearly.
New York Legislators Keep E-Voting Software in Public
Hands
Computerworld (06/25/07) Songini, Marc L.
New York state voting activists are pleased that this year's New York
senate and assembly session ended without changing the state's strict
e-voting software escrow law. Activists were worried that pressure from
the e-voting industry would force changes in the law that requires voting
system vendors to place all source code and other related software in
escrow for the New York State Board of Elections so it can be examined as
necessary. The law also forces a voting system vendor to waive all
intellectual property and trade secret rights if the software needs to be
reviewed in court. Microsoft, whose Windows software is used in some
e-voting devices, sought to amend the law to avoid the strict escrow
provisions. New Yorkers for Verified Voting executive director Bo Lipari
says concerned citizens created a swell of support in the legislature to
ensure the law remained unaltered and that about 3,000 constituent calls
served as a forceful reminder to lawmakers of their commitment to strong
voting laws. "The voting machine vendors have known for two years what our
laws said," says New York state assemblywoman Barbara Lifton. "We're
holding firm on our current state law which calls for open source code."
Lipari says Microsoft's proposed changes would ruin the source code escrow
and review procedures in the current law. Microsoft says it does not make
its source code available for escrow under election law because of concerns
that the code could be disclosed to third parties without adequate
protections for intellectual property rights.
When Computers Attack
New York Times (06/24/07) P. 4-1; Schwartz, John
Doomsayers have long been forecasting the digital equivalent of Pearl
Harbor, when America's enemies attack U.S. computer networks in the hopes
of crippling vital infrastructure, but experts claim the reality of a
cyberwar scenario is considerably less extreme. Andrew MacPherson of the
University of New Hampshire reports that, unlike physical attacks, recovery
from cyberattacks requires less of an effort, given the resilience of the
Web. Although the U.S. government has been preparing for a major digital
assault, experts believe the United States gears up for cyberattacks every
day through exposure to malware, meltdowns, glitches, and crashes, while
there are very few points in the network where a single computer
malfunction can cause a systemwide crash. Furthermore, human beings are
also resilient, and a loss of services through one kind of medium can be
offset through improvisation. Still, Danny McPherson with Arbor Networks
thinks a full-scale cyberattack could have enormous ramifications,
although he contends that the effects of cyberwarfare on the Internet will
be much subtler than anticipated, in that "certain parts of the system
won't work, or it will be that we can't trust information we're looking
at." There is general consensus among experts that cyberwarfare is
unlikely to resemble the recent blockage of online access to Estonian banks
and government offices via distributed denial of service attacks, which was
eventually attributed to tech-savvy activists protesting the relocation of
Soviet-era war memorials, rather than the Russian government.
A Sunny Hiring Season for Job Seekers
CNet (06/25/07) Olsson, Miriam
The job market is ripe for recent college graduates in the computer
industry. The overall unemployment rate for the computer industry at the
end of the last quarter was 2.1 percent, even lower than it was during the
peak of the dot-com boom. The unemployment rate for software engineers is
particularly low, down to 0.9 percent, according to the U.S. Labor
Department, while the unemployment rate for recent technology grads is only
2 percent. In the battle to attract new computing graduates, companies
such as Google, Microsoft, and IBM are spending significant amounts of time
and money to recruit students. It is estimated that IBM spends more than
$100 million on student activities annually. IBM has even gone so far as
to create its own academic discipline, known as Services, Sciences,
Management and Engineering (SSME), to ensure a flow of graduates with
desired skills. Many companies are having trouble filling positions.
Microsoft technical staffing manager Jeremy Brigg says "the pool of
qualified folks in tech as a whole has shrunken in the U.S." Microsoft
expects to hire 2,500 students this year, both full-time and intern
candidates, and would like to hire even more, but there are just not enough
students with technical skills, Brigg says. Company recruiters often look
for different skills. IBM, for example, looks for candidates with "preset
skills," many of which can be found in SSME students, while Google prefers
to hire early career candidates and shape them as the company sees fit.
Despite the currently strong job market, there are some signs of
challenging times to come. Challenger, Gray & Christmas CEO John A.
Challenger cites the slowing economy, and Dell and Motorola both recently
announced job cuts. However, Challenger notes that other technology
sectors are still strong and growing.
Human-Aided Computing
Technology Review (06/22/07) Greene, Kate
Microsoft researchers are trying to utilize some of the specialized, and
often subconscious, computing power in the human brain to solve problems
that have been difficult for computers to solve. Microsoft researcher
Desney Tan and University of Washington graduate student Pradeep Shenoy
have developed a system that uses electro-encephalograph (EEG) caps to
monitor the brain activity of people looking at pictures of faces and
objects. The researchers found that even when the subjects were not trying
to distinguish faces from other pictures, there was still a subtle
difference in brain activity. The researchers wrote software that examines
EEG data and classifies faces and non-faces based on the subjects'
responses. When one subject viewed an image once, the system was able to
identify faces with up to 72.5 percent accuracy. The accuracy increased to
98 percent when eight people viewed a particular image twice. "Given that
the brain is constantly processing external information," Tan says, "we can
start to use the brain as a processor." Although the current research is
mainly just proof of concept, eventually such face-recognition techniques
could be used to search surveillance videos to quickly find images with
faces and those without. Subconscious brain power could also be used to
improve automated image searches by preclassifying objects to help a
computer find the requested pictures more accurately.
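As a rough illustration of the approach described above (not Microsoft's
actual software), the Python sketch below trains an off-the-shelf classifier
on EEG epochs labeled face or non-face and then averages its predictions over
repeated viewings, which is the basic reason accuracy climbs when several
people see the same image; the feature arrays here are random stand-ins for
real EEG recordings:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Stand-in data: 400 EEG epochs, 64 features each; 1 = face shown, 0 = non-face.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(400, 64))
    y_train = rng.integers(0, 2, size=400)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    def image_contains_face(epochs):
        """Average the face probability over every viewing of the same image
        (multiple viewers, multiple presentations) before deciding."""
        probs = clf.predict_proba(np.asarray(epochs))[:, 1]
        return probs.mean() > 0.5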
The Search for 'God's Number' in a Rubik's Cube
Boston Globe (06/25/07) Baker, Billy
Speedcubers are competing to solve a randomly scrambled Rubik's Cube the
fastest, while computer science researchers at Northeastern University say
the best-selling toy of all time can now be solved in 26 moves or fewer. Dan
Kunkle, a doctoral student in computer science, and Gene Cooperman, a
professor of computer science, used 128 CPUs running for more than 63 hours to
do most of the calculations for determining the minimum number of moves to
solve the cube from more than 43 quintillion possible arrangements. The
researchers spent more than a year and a half on the project at
Northeastern's High Performance Computing Lab, supported in part by a
$200,000 grant from the National Science Foundation. "It has wide
applications on mathematical group theory, enumeration, and search," says
Kunkle. The research has practical applications such as scheduling for
factories and air traffic control, he adds. Erich Kaltofen, a professor of
mathematics and computer science at North Carolina State University who was
not involved in the project, says the research is important because of the
computing power and mathematical ingenuity that was used. "The problem is
not finding the minimum number of moves to solve a Rubik's Cube, but to
demonstrate that you can carry out such a gigantic combinatorial search,"
says Kaltofen. Kunkle and Cooperman will present their research in July at
the International Symposium on Symbolic and Algebraic Computation in
Waterloo, Ontario.
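The flavor of the "gigantic combinatorial search" Kaltofen describes can be
shown on a toy puzzle. The Python sketch below is only a conceptual
illustration of exhaustive breadth-first search for a minimum move count; the
Northeastern result relied on group theory and 128 parallel processors rather
than anything this naive, since 43 quintillion states cannot be enumerated
directly:

    from collections import deque

    def min_moves(start, goal, moves):
        """Fewest moves from start to goal, by breadth-first search."""
        frontier, seen = deque([(start, 0)]), {start}
        while frontier:
            state, depth = frontier.popleft()
            if state == goal:
                return depth
            for move in moves:
                nxt = move(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
        return None

    # Toy example: a 3-tile "puzzle" whose only moves swap adjacent tiles.
    swap01 = lambda s: (s[1], s[0], s[2])
    swap12 = lambda s: (s[0], s[2], s[1])
    print(min_moves((2, 1, 0), (0, 1, 2), [swap01, swap12]))   # prints 3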
Imaging System Brings Skeletons to Life in 3-D
Popular Science (06/20/07) Mika, Eric
Researchers at Brown University have developed a new imaging system for
peering into living things at a desirable speed, resolution, and depth.
The imaging system makes use of computed-tomography (CT) scanners to obtain
detailed 3D views, and the fluoroscopy technique for turning a rapid
succession of x-rays into video. Called CTX imaging, the new process can
produce 3D animations of bones in motion--walking, running, and jumping. After
the traditional CT scan creates a 3D model of the bone structure of a
subject, high-speed fluoroscopy records motion from two different angles,
and then the two data sets are fed into image-processing software that
combines them to generate animation in action from any angle. Resolution
reaches a tenth of a millimeter and motion is captured at 1,000 frames per
second. Researchers are using the room-size system to study how flight
evolved in birds, and they believe it will be useful to orthopedic
surgeons, who could look for better treatments for bone-, ligament- and
joint-related injuries. The technology is unlikely to be incorporated into
a pair of glasses in the near future, but a commercial version of the
system for producing real-time video could hit the market by 2010,
according to Elizabeth Brainerd, a biomechanics professor who heads the CTX
program.
Welcome to the World of Haptics for Industrial
Applications
Basque Research (06/22/07) Perez, Rosa Iglesias
Vibrating cell phones, gaming controllers, and force-feedback control
knobs in cars are just the beginning of haptic technology, which applies
force, vibrations, and motion to connect users to computerized systems
through the sense of touch. Interacting with or manipulating a screen is
no longer limited to vision and sound, and with the emergence of PHANToM
haptic and hand exoskeleton devices "you can feel or touch what you see."
Current use of the devices can be seen in virtual modeling, medicine,
education, assistive technology for the vision impaired, industrial design,
and maintenance. In the industrial design field, haptic technology can be
incorporated into computer-aided design (CAD) systems to allow designers to
also feel forces and local stimuli similar to real situations during the
assembly process. The Collaborative Haptic Assembly Simulator (CHAS) was
developed to allow designers in different locations to grasp virtual parts
and assemble them into a digital engine, for example. Operators at Labein
in Spain and Queen's University Belfast in Northern Ireland used CHAS to
collaborate in real time, enabling the operator in Belfast to grasp and
feel collisions as the other operator assembled a part. Such collaboration
over the Internet and interaction over distance could allow doctors to
diagnose and operate on patients remotely, or an individual to shake
another person's hand virtually.
Ahead of the Tape
Economist (06/21/07)
Trading algorithms are now being developed to take news reports into
consideration as they make their trading decisions. The computer programs
that generate buy and sell orders can now process trades in thousandths of
a second, and the new capability is viewed as a competitive advantage when
lightning-quick decisions on the best electronic prices for trades need to
be made. Morgan Stanley's Andrew Silverman says the news-perusing function
has room to develop, and there is some concern about whether the software will
understand the context of a headline. A third of all share trades in the
United States involve algorithmic trading, which the consultancy Aite
Group expects to account for more than half of all share volumes and a
fifth of options trades in three years. And the research firm TowerGroup
believes $480 million will be spent this year on developing technology for
algorithmic trading. On Monday, the London Stock Exchange introduced an
electronic system to accommodate the faster trades of algorithmic trading
technology, and processing rose from 600 orders a second to 1,500.
Britain's Financial Services Authority is also considering using the
technology to search trading data for clues of suspicious trading
activity.
The New Metrics of Scholarly Authority
Chronicle of Higher Education (06/15/07) Vol. 53, No. 41, P. B6; Jensen,
Michael
The digital movement of information assets is leading to a time when
information is in great supply, and the way we establish authority,
relevance, and scholarly significance is changing dramatically as a result,
according to National Academies director of strategic Web communications
Michael Jensen. It is critical that sense and structure be imposed on the
massive flood of Web content on the horizon, and many technology-oriented
thinkers believe artificial intelligence will fuel the "Web 3.0" trend.
Jensen notes that algorithmic filtration models are starting to emerge, and
cites the National Academies Press book-specific Search Builder and its
Reference Finder as examples of Web projects that could provide insightful
hints into the characteristics defining the trend. With the Search
Builder, a researcher can choose terms from a chapter and generate
term-pairs for submitting a search query to the National Academies Press
(NAP), Google, and other services, yielding a precise answer. The
Reference Finder is a prototype that allows researchers to paste in the
text of a rough draft and retrieve related NAP books, based on
algorithmically mined and ranked key terms. Jensen speculates that the Web
3.0 era will feature the emergence of intensely computed
reputation-and-authority metrics, and probable elements of Authority 3.0
include publisher, peer reviewer, and commenter prestige; the portion of a
document quoted in other documents; raw links to the document; valued
links; nature of the language in comments; obvious attention; duration of a
document's existence; types of tags assigned to the document, the terms
used, and the taggers' and the tagging system's authority; percentage of
phrases valued by a disciplinary community; context quality; relevance of
author's other work; quality of the author's institutional affiliations;
the reference network; and inclusion of the document in "best of" lists,
syllabi, indexes, etc.
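One way to picture such a metric is as a weighted combination of signals like
those just listed. The Python sketch below is entirely hypothetical; the
signal names and weights are invented for illustration and come from neither
Jensen nor the National Academies:

    # Hypothetical weights over a handful of normalized (0..1) authority signals.
    AUTHORITY_WEIGHTS = {
        "reviewer_prestige":   0.25,
        "inbound_links":       0.20,
        "quoted_fraction":     0.15,
        "tagger_authority":    0.15,
        "document_longevity":  0.10,
        "syllabus_inclusions": 0.15,
    }

    def authority_score(signals):
        """Weighted sum of whichever signals are available for a document."""
        return sum(weight * signals.get(name, 0.0)
                   for name, weight in AUTHORITY_WEIGHTS.items())

    print(authority_score({"reviewer_prestige": 0.9, "inbound_links": 0.4,
                           "quoted_fraction": 0.2, "syllabus_inclusions": 1.0}))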
Silicon Smackdown
Scientific American (06/07) Vol. 296, No. 6, P. 32; Frenkel, Karen A.
Creating a computer program that beats humans at the Asian board game "Go"
is an enormous challenge, given the immense number of maneuvers open to
players. The UCT (upper confidence bounds applied to trees) algorithm
developed by Levente Kocsis of the Hungarian Academy of Sciences' Computer
and Automation Research Institute and Csaba Szepesvari of the University of
Alberta in Edmonton can reportedly outdo the win rates of the best Go
computer programs by 5 percent and compete with professional Go players on
small boards, via an extension of the Monte Carlo strategy. The Monte
Carlo approach assesses and ranks candidate Go moves by playing a large
sample of random games, while the UCT algorithm concentrates on the
maneuvers with the most potential. Kocsis explains that UCT must find a
balance by testing alternatives that seem optimal at the time to find
possible vulnerabilities and probing "less optimal-looking alternatives, to
ensure that no good alternatives are missed because of early estimation
errors." Kocsis projects that programs such as UCT could best professional
human Go players within a decade. UCT has applications beyond games,
because it can be applied to any problem that involves selecting the best
option, provided the alternatives possess an internal tree-like
organization and their values can be recursively computed. Targeted Web
advertising, the optimization of channel allocation in cellular systems,
and determining the best sites for industrial plants are just some of the
potential applications for UCT.
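At the heart of UCT is a selection rule that balances exploiting moves with
good average results against exploring moves that have rarely been tried. The
Python sketch below shows that rule in compressed, illustrative form; the
node structure and helper names are assumptions for this example, not Kocsis
and Szepesvari's implementation:

    import math, random

    def uct_select(children):
        """Pick the child move maximizing mean value plus an exploration bonus.
        Each child is a dict with 'wins' and 'visits' counts."""
        total = sum(c["visits"] for c in children)
        def score(c):
            if c["visits"] == 0:
                return float("inf")       # always try unvisited moves first
            exploit = c["wins"] / c["visits"]
            explore = math.sqrt(2 * math.log(total) / c["visits"])
            return exploit + explore
        return max(children, key=score)

    def random_playout(state, legal_moves, apply_move, winner):
        """Monte Carlo rollout: play random moves to the end, report who won."""
        while legal_moves(state):
            state = apply_move(state, random.choice(legal_moves(state)))
        return winner(state)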
Embedded Web Services: Making Sense of Diverse
Sensors
Sensors (06/07) Culler, David E.; Tolle, Gilman
Web services promise to facilitate easy integration of diverse and
distributed sensor networks, by employing the service-oriented architecture
(SOA) model that is used to merge numerous data sources from physically
scattered dynamic processes. The integration of diverse information
sources occurs at three levels: the bottommost level is the
communication medium that permits the exchange of data and control actions;
the middle level consists of object and data representation; and the
topmost level is service discovery, through which participants can squeeze
value out of information exchanges by learning what actions other devices
can carry out, as well as how specific actions can be facilitated. The
majority of modern-day sensors are minuscule chips or circuit elements that
are directly linked to powerful yet cheap microcontrollers with complex
communication connections, and integrating these sensors into rich networks
should be a simple prospect, because intelligence and communication are
linked to each device. The integration of diverse information sources and
processes is becoming commonplace, even in an online shopping site. This
integration breakthrough was fueled by two developments. The first is a
dramatic simplification at all three integration tiers, represented by the
conversion of communication into the transference of sequences of
characters between named endpoints on hosts, the streamlining of
information representation to the recognition of nested XML-tagged
sections, and reduction of the set of behaviors to GET and POST a sequence
from, or to, a named endpoint. The second development is the consistent
application of SOA, in which applications are designed as compositions of
services independent of protocol and deployment.
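Reduced to code, that simplification looks something like the Python sketch
below: GET a nested XML document from a named endpoint and read its tagged
values. The gateway URL and tag names here are invented for illustration, not
taken from the article:

    import urllib.request
    import xml.etree.ElementTree as ET

    # Hypothetical endpoint; a real deployment would name its own sensor nodes.
    SENSOR_URL = "http://sensor-gateway.example/node/17/readings"

    def read_temperatures():
        """GET an XML document from the sensor endpoint and pull out the values
        found in nested <temperature> tags."""
        with urllib.request.urlopen(SENSOR_URL, timeout=5) as resp:
            doc = ET.fromstring(resp.read())
        return [float(t.text) for t in doc.iter("temperature")]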