ACM/CSTA Says AP CS Is Not Being Eliminated
ACM (04/07/08) Gold, Virginia
ACM and the Computer Science Teachers Association are clarifying that AP
Computer Science will remain as a choice for students. While a Washington
Post article on Friday, April 4, reported that several AP courses--
including the computer science AB course--were being eliminated by the
College Board, a second and more popular AP Computer Science course will
not be affected. The College Board is eliminating only the less popular
course known as AP Computer Science AB. The College Board also announced
that it will continue to work closely with the CS community to redefine AP
Computer Science. An email message sent to teachers from the College Board
stated, "Appropriate College Board committees will focus their efforts on
improving and supporting the AP Computer Science A program, which will be
enhanced during the next five years to better represent a full-year,
entry-level college computer science sequence. Our intensified commitment
to AP Computer Science A will ensure that the course provides the best
possible college-level academic experience and is supported by an increased
array of curricular resources and professional development opportunities
that will benefit AP Computer Science teachers." For more information, see
http://usacm.acm.org/usacm/weblog/index.php?p=593
Chip Industry Confronts 'Software Gap' Between Multicore,
Programming
EE Times (04/03/08) Merritt, Rick
The chip industry is only beginning to confront a perceived software gap:
multicore processors are proliferating faster than the parallel
programming tools and methods needed to exploit them. At the recent
Multicore Expo, a number of chipmakers discussed their multicore product
plans while others cautioned that software designed to exploit these new
chips has a long way to go. A survey of embedded system developers by
Venture Development Corp.
finds that about 55 percent are using or plan to use multicore chips in the
next year, while Intel's Doug Davis says the share of shipped Intel
processors that use multiple cores will rise from around 40 percent in
2007 to 95 percent in 2011. On the other hand, vendors polled in the VDC
survey said only about 6 percent of their tools were enabled for parallel
chips last year, a figure expected to climb to just 40 percent within
three years.
RapidMind chief scientist Michael McCool expressed a need for a new
programming model to help developers achieve a better understanding of how
to optimize their applications for parallel chips, and this model would
have to offer a maximum degree of automation while delivering override
options and drill-down mechanisms to users. At the Expo, the Multicore
Association revealed that it has concluded work on an applications
programming interface for inter-core communications, and is currently
focused on the definition of an embedded virtualization standard. However,
University of Illinois at Urbana-Champaign professor Wen-mei Hwu warned
against the creation of a new computer language to better serve parallel
chips, arguing that "if you really want to have a million people do
something, don't ask them to speak Latin," and stating his preference for
an evolutionary approach.
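
McCool's prescription--heavy automation plus user override--is the
pattern most parallel programming frameworks follow. As a minimal sketch
of the idea (the render_pixel task and parallel_map helper are invented
for illustration, not drawn from RapidMind or the article), a
data-parallel map in Python can manage workers automatically by default
while still letting the caller override the degree of parallelism:

    # Minimal sketch of a data-parallel map: worker management is
    # automatic, but the caller may override the degree of parallelism.
    # Illustrative only; not code from the article.
    from multiprocessing import Pool, cpu_count

    def render_pixel(i):
        # Stand-in for an independent, compute-heavy task.
        return (i * i) % 255

    def parallel_map(fn, data, workers=None):
        """Apply fn to data in parallel; workers=None lets the library decide."""
        with Pool(processes=workers or cpu_count()) as pool:
            return pool.map(fn, data)

    if __name__ == "__main__":
        print(parallel_map(render_pixel, range(16)))     # automatic
        print(parallel_map(render_pixel, range(16), 2))  # manual override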
US Reveals Plans to Hit Back at Cyber Threats
ZDNet (04/02/08)
The U.S. Air Force Cyber Command (AFCYBER) is just as focused on being
able to attack through the Internet as it is on defending U.S. cyber
infrastructure. A senior U.S. general says AFCYBER is developing
capabilities to inflict denial of service, confidential data loss, data
manipulation, and system integrity loss on its enemies. These cyberattacks
could be combined with physical attacks. U.S. Eighth Air Force Lieutenant
General Robert J. Elder Jr. says offensive cyberattacks in network warfare
make kinetic attacks more effective. "Cyber gives us a huge advantage but
adversaries look at our capabilities and see areas they can undermine," he
says. "We need to protect our asymmetric advantage--on the one hand by
having people further exploit cyber, and on the other by having mission
assurance." The problem is made more important by the military's reliance
on the public Internet. The U.S. military infrastructure runs through the
public Internet system to both launch and defend against attacks, and
military networks such as the Global Information Grid are linked to U.S.
government and critical national infrastructure systems, which are linked
to the public Internet. The U.S. military subverts adversary systems
through those public channels, but the same channels leave the military
open to attack, Elder says. Other concerns for the military include
supply-chain vulnerabilities, in which holes that an adversary could
later exploit are introduced into chipsets and other electronics during
manufacturing. Elder says AFCYBER
also needs to develop the ability to quickly pinpoint where an attack is
coming from and be able to retaliate, and to deter potential attackers.
Security Pros Launch Open-Source CERT
eWeek (04/03/08) Naraine, Ryan
With backing from Google, security consulting firm Inverse Path, and the
Open Source Lab at Oregon State University, a group of computer security
professionals created the Open Source Computer Emergency Response Team
(oCERT), a new organization designed to be the go-to place for security
incident response when an open-source project has been affected. oCERT
will include Tavis Ormandy and Will Drewry from the Google Security Team,
Andrea Barisani and Rob Holland from Inverse Path, and Marcel Holtmann from
Intel. In addition, active open-source distributions and projects with a
good record of responding to security-related problems will be asked to
join and actively participate in the oCERT effort. The
team and its backers will work to manage advance vulnerability warnings,
coordinate the patch release notification process, and punish vendors that
delay offering security fixes. In addition, oCERT will provide security
vulnerability mediation for the security community, and maintain reliable
security contacts between registered projects and vulnerability researchers
that need to get in touch with a certain project about infrastructure
security issues. Barisani says oCERT hopes to reduce the impact of a
security incident on smaller projects that have little or no infrastructure
security.
Usability or User Experience--What's the
Difference?
E-Consultancy (04/02/08) Stewart, Tom
User experience is often contrasted with usability, with the latter
frequently being defined as a system's ease of use while the former is
considered a blanket term for the relationship between people and
technology, writes Tom Stewart, chair of the ISO subcommittee responsible
for the International Standard for Human Centered Design. He says ISO's
definition of usability is much closer to the concept of user experience as
encompassing issues that include usefulness, desirability, credibility, and
accessibility, and the new version of ISO 13407 will employ the term user
experience. "In the revised standard we define [user experience] as 'all
aspects of the user's experience when interacting with the product,
service, environment or facility' and we point out that 'it is a
consequence of the presentation, functionality, system performance,
interactive behavior, and assistive capabilities of the interactive
system," Stewart says. He hopes that incorporating the user experience
within the human-centered design process will avoid marginalization and
turn user experience into a primary business motivator for a wide array of
systems. "Whatever we call it, getting the relationship between people and
technology right is critical to a project's success and the intelligent
application of a structured, people-centered approach to design can only be
a step in the right direction," Stewart says.
Computer System Consistently Makes Most Accurate NCAA
Picks
Georgia Institute of Technology (04/03/08)
Georgia Institute of Technology professors have developed Logistic
Regression Markov Chain (LRMC), a computer ranking system that consistently
predicts NCAA basketball rankings more accurately than other available
methods. LRMC correctly picked all four of this year's finalists, and has
correctly identified 30 of the last 36 Final Four participants. Over the
same nine-year stretch, the NCAA seedings and various polls correctly
identified only 23 teams, and the Ratings Percentage Index identified 21
teams. LRMC was originally designed by Joel Sokol and Paul Kvam, and has
been maintained and improved by Sokol and George Nemhauser. The system
uses only basic scoreboard data (teams played, which team had home-court
advantage, and the margin of victory), from which it derives measures
such as the quality of each team's results and the strength of each
team's schedule. In addition to choosing the Final Four, LRMC also
correctly identified several overrated and underrated teams as potential
upset candidates.
When determining the value of home court advantage, LRMC considers how much
playing at home helps a team win, instead of how many points home court is
worth. The researchers also have been able to show that very close games
are often "toss-ups," in which the better team barely wins more than half
of the time, so winning a close game should not be worth as much as winning
easily, and losing a close game should not hurt a team's rankings as much
as losing in a blowout.
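
The article gives no formulas, but the Markov-chain half of LRMC's name
can be sketched: each game yields a logistic-regression estimate of the
probability that the winner is truly the better team (close games sit
near 0.5), those estimates define a transition matrix over teams, and the
chain's steady-state distribution supplies the ranking. In the toy Python
version below, the sample games, home-edge constant, and logistic
coefficient are all invented for illustration; they are not the published
LRMC parameters.

    import math

    GAMES = [  # (home, away, home_margin); negative margin = away team won
        ("A", "B", 7), ("B", "C", -2), ("C", "A", 12), ("A", "C", 3),
    ]
    TEAMS = sorted({t for g in GAMES for t in g[:2]})
    HOME_EDGE = 3.0  # assumed worth of home court, in points

    def p_home_better(adjusted_margin):
        """Logistic estimate that the home team is truly better, given the
        margin with home-court advantage stripped out (toss-up near 0)."""
        return 1.0 / (1.0 + math.exp(-0.1 * adjusted_margin))

    idx = {t: i for i, t in enumerate(TEAMS)}
    n = len(TEAMS)
    T = [[0.0] * n for _ in range(n)]
    for home, away, margin in GAMES:
        p = p_home_better(margin - HOME_EDGE)
        T[idx[away]][idx[home]] += p        # evidence home team is better
        T[idx[home]][idx[away]] += 1.0 - p  # evidence away team is better
    for i in range(n):                      # normalize rows to sum to 1
        s = sum(T[i])
        T[i] = [v / s for v in T[i]]

    # Power iteration: the steady-state distribution is the rating vector.
    r = [1.0 / n] * n
    for _ in range(200):
        r = [sum(r[i] * T[i][j] for i in range(n)) for j in range(n)]
    print(sorted(zip(TEAMS, r), key=lambda x: -x[1]))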
IBM Develops Natural Disaster 'Magic Potion'
VNUNet (04/02/08) Dixon, Guy
IBM says its algorithms for modeling and managing natural disasters are a
"magic potion" for allocating resources effectively during wildfires,
floods, famines, or diseases. Mathematicians at IBM developed the
algorithms, which are capable of determining the quickest way to provide
relief, uncovering fraud in health insurance claims, automating complex
risk decisions for international financial institutions, and detecting
patterns in medical data for new insights and breakthroughs. "The
challenge lies in matching high-end mathematical programming technologies
with high-impact business and societal problems, while using open platforms
and standards," says Dr. Daniel Dias, director of the IBM India research
laboratory. IBM Global Business Services teamed up with scientists in
India and the United States to develop the algorithms. "We are creating a
set of intellectual properties and software assets that can be employed to
gauge and improve levels of preparedness to tackle unforeseen natural
disasters," says Dr. Gyana Parija, senior researcher and optimization
expert at IBM's India Research Laboratory in New Delhi.
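
The article stays at press-release altitude, but "mathematical
programming" for relief allocation usually means optimization models such
as the classic transportation problem: ship supplies from depots to
affected sites at minimum total cost, subject to supply and demand
limits. Below is a minimal sketch using SciPy's linear-programming
solver; every number is invented for illustration, and none of this is
IBM's actual model.

    # Transportation-style LP: two depots supply three affected sites.
    # Variables x = [d0->s0, d0->s1, d0->s2, d1->s0, d1->s1, d1->s2].
    from scipy.optimize import linprog

    cost = [4, 6, 9, 5, 3, 7]        # travel hours per truckload per route

    A_ub = [[1, 1, 1, 0, 0, 0],      # shipments out of depot 0
            [0, 0, 0, 1, 1, 1]]      # shipments out of depot 1
    b_ub = [40, 30]                  # truckloads available at each depot

    A_eq = [[1, 0, 0, 1, 0, 0],      # arrivals at site 0
            [0, 1, 0, 0, 1, 0],      # arrivals at site 1
            [0, 0, 1, 0, 0, 1]]      # arrivals at site 2
    b_eq = [20, 25, 15]              # truckloads each site needs

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 6)
    print(res.x, res.fun)            # optimal shipments and total hours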
A Lab With Big Ideas
Diamondback (04/04/08) Yu, Chris
The University of Maryland's Human-Computer Interaction Lab (HCIL) is
responsible for many technologies consumers use every day, including a
touch-screen selection technique found on iPods and embedded hyperlinks.
Professor Ben Shneiderman says the goal of the
lab is to make technology easier to learn and more accessible to people.
One of the major contributions HCIL has made is the lift-off touch screen
found on the iPod Touch and iPhone, which lets people make selections on
the device only after their finger lifts off the screen, enabling them to
make adjustments if they touch the wrong button. HCIL was also the first
to develop embedded hyperlinks, which highlight words within the text of a
Web page so users know those words are links leading to other pages.
A major focus of the lab is to develop technology that will help children
learn. To accomplish this, HCIL created the International Children's
Digital Library, the largest online library for children's books in the
world. The library's interface is based on what kids say they want, and
many of the technologies used in the library are designed by children
through brainstorming sessions. The lab also is working with physicians at
Washington Hospital Center to develop a tool that will allow doctors to
search patient records more easily.
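
The lift-off technique is simple to state in code: track whatever target
is under the finger while it is down, and commit a selection only when
the finger lifts. The sketch below is illustrative only; the
LiftOffSelector class and its event methods are invented, not HCIL's or
Apple's code.

    class LiftOffSelector:
        """Commit a selection on finger lift-off, not on first touch."""
        def __init__(self, buttons):
            self.buttons = buttons   # list of (label, (x0, y0, x1, y1))
            self.candidate = None

        def _hit(self, x, y):
            for label, (x0, y0, x1, y1) in self.buttons:
                if x0 <= x <= x1 and y0 <= y <= y1:
                    return label
            return None

        def touch_down(self, x, y):
            self.candidate = self._hit(x, y)  # highlight, do not commit

        def touch_move(self, x, y):
            self.candidate = self._hit(x, y)  # user may slide to correct

        def touch_up(self, x, y):
            selected, self.candidate = self._hit(x, y), None
            return selected                   # commit only on lift-off

    ui = LiftOffSelector([("play", (0, 0, 50, 20)),
                          ("stop", (60, 0, 110, 20))])
    ui.touch_down(55, 10)       # finger lands between the buttons
    ui.touch_move(70, 10)       # slides onto "stop"
    print(ui.touch_up(70, 10))  # -> "stop"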
Linux Kernel Community Grows, But Elite Group
Remains
IDG News Service (04/01/08) Kanaracus, Chris
A new report from the Linux Foundation shows that roughly 3,700 developers
from more than 200 companies and organizations have contributed to the
kernel since 2005. During the past three years, the top 10 individual
developers have contributed almost 15 percent of the changes to the kernel,
and the top 30 developers have submitted 30 percent of the changes,
according to the report. The top individual developer, Al Viro, has
contributed 1,571 changes to the kernel over that period. The
report also shows the widespread penetration of Linux in enterprise
computing. Linux Foundation executive director Jim Zemlin says the report
is necessary given the lingering public perceptions of Linux. "I do think
there continues to be groups of people out there who perceive open source
and Linux as some kind of random hobbyist movement," Zemlin says. "It's
amazing that after Linux is running the New York Stock Exchange, that
people would still doubt it's ready for prime time." Kernel contributors
are divided into three categories--developers working on their own time
with no financial backing from a company, developers for
whom a corporate affiliation could not be found, and developers tied to
companies and foundations. Zemlin believes corporate contributions will
increase as new products come to market. Nevertheless, he says the kernel
is not in danger of being taken over by corporate interests.
The AI Chasers
Futurist (04/08) Vol. 42, No. 2, P. 14; Tucker, Patrick
Artificial general intelligence (AGI) remains an elusive goal after over
50 years of research, since the semantic and philosophical problems that
have impeded progress toward AGI are as challenging as ever despite the
resolution of many technical problems, writes the World Future Society's
Patrick Tucker. Adaptive A.I. founder Peter Voss outlines two paths for
AGI creation--continued tinkering with mundane computer programs to
improve their sophistication, or the deliberate engineering of an AGI
system--that form the core of the philosophical schism in the AI research
community. Many researchers say the first step on the road toward a
thinking machine is the development of a learning machine, while Google's
Peter Norvig says compelling humanistic thought from a learning system
involves teaching that system to understand language. Powerset CEO Barney
Pell projects that within the next five years people will be able to
interact with search engines using straight questions rather than keywords,
while AI will eventually become so common that people will regard it as
practically a household utility. Meanwhile, Novamente founder Ben Goertzel
is convinced that the development of AGI will be sparked by the advancement
of AI in virtual worlds and online games, and many AI watchers believe that
the growing volume of knowledge posted on the Web and the development of AI
systems to handle that knowledge will combine to yield systems capable of
humanistic behavior and information processing. Many researchers dismiss
the notion of AI systems eventually running amok and waging war against
humanity, although Self-Aware Systems founder Stephen Omohundro warns of
the possibility of "an AI that takes off on is own momentum, on some very
narrow task, and, in the process, squeezes out much of what we care most
about as humans." Another speculative scenario is that AI advances to such
a degree that a great deal of mankind's accumulated skills and knowledge
becomes superfluous to daily life.
Engineers Make First 'Active Matrix' Display Using
Nanowires
Purdue University News (03/31/08)
Purdue University engineers say they have developed an active-matrix
display using transparent transistors and circuits that could lead to
electronic paper, flexible color monitors, and "heads-up" displays in car
windshields. The transparent transistors are made of nanowires as small as
20 nanometers, which are used to create an organic light-emitting-diode
(OLED) display. The nanowires formed the basis of a proof-of-concept
active-matrix display, which is able to precisely direct the flow of
electricity to produce video because each pixel has its own control
circuitry. OLEDs are currently used in cell phones, MP3 players, and
prototype TVs, but it is difficult to make them small enough for use in
high-resolution displays. Northwestern professor Tobin J. Marks says
nanowire-transistor electronics could solve that problem because the
fabrication method used in the active-matrix display is scalable. Unlike
CMOS chips, nanowire thin-film transistors could be produced inexpensively
at low temperatures, making them suitable for use with flexible plastics
that would melt at high temperatures. Liquid-crystal displays use a
backlight to light the screen, with the pixels acting as filters to create
colors and images. However, OLEDs emit light directly, which eliminates
the need to backlight the screen and could lead to more vivid, flexible,
and thinner screens. The displays also are transparent, and until the
pixels are activated the display area looks like lightly tinted glass.
Conversations: Jon Bentley
Dr. Dobb's Journal (03/31/08) Blake, Deirdre
Avaya Labs research scientist Jon Bentley is currently working on a
mathematical theory of authentication that can quantify the assurance of
security secrets. Bentley says that his 1976 Ph.D. thesis included a
section in which he attempted to describe the process of designing an
algorithm, and that he believes his approach has stood the test of time.
"The principles include generalizing, using high-level and abstract
description of algorithms, examining degenerate cases, and employing
standard speed-up tricks," says Bentley. Before getting immersed in the
details of designing an algorithm, Bentley says, the most important step is
to find out what the real problem is. Bentley says that sometimes
algorithm design and algorithm analysis proceed hand-in-hand, such as when
students design an algorithm so that it can be analyzed. The purpose of
algorithm design is to develop a good algorithm, while the purpose of
algorithm analysis is to understand how good an algorithm is. "Sometimes,
though, people design algorithms and report that they are fast without
analyzing their runtime," he says. "What a delightful challenge for an
algorithm analyst! I've walked both sides of that street. My most
frequently cited paper was for the 1975 ACM Undergraduate Student Paper
competition; it introduced multidimensional binary search trees, which Don
Knuth called 'k-d trees.' I described an algorithm for nearest-neighbor
searching, but I couldn't even begin to analyze it. Many folks have made
great progress on the analysis since then." He says programming is subtle,
that we must learn to be "humble programmers," and that there are a lot of
tools available, including precise specifications, formal methods, and
extensive tests. Bentley adds, however, that one of the best tools is the
eyes of really smart friends.
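
The k-d trees Bentley mentions are compact enough to sketch: points are
split on alternating coordinates, and a nearest-neighbor query descends
toward the query point, backtracking into the other half-space only when
it could still hold something closer. The minimal Python rendering below
is an illustrative sketch after the 1975 idea, not Bentley's original
code.

    def build(points, depth=0):
        """Recursively split points on alternating coordinates."""
        if not points:
            return None
        axis = depth % len(points[0])
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2
        return {"point": points[mid], "axis": axis,
                "left": build(points[:mid], depth + 1),
                "right": build(points[mid + 1:], depth + 1)}

    def nearest(node, query, best=None):
        """Return (squared_distance, point) of the closest stored point."""
        if node is None:
            return best
        d2 = sum((a - b) ** 2 for a, b in zip(node["point"], query))
        if best is None or d2 < best[0]:
            best = (d2, node["point"])
        axis = node["axis"]
        diff = query[axis] - node["point"][axis]
        near, far = ((node["left"], node["right"]) if diff < 0
                     else (node["right"], node["left"]))
        best = nearest(near, query, best)
        if diff * diff < best[0]:  # other side may still hold a closer point
            best = nearest(far, query, best)
        return best

    tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
    print(nearest(tree, (6, 5)))   # -> (2, (5, 4))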
IT Job Security Plummets Five Times Faster Than
Nationwide Average
Network World (04/02/08) Brodkin, Jon
Job security for IT professionals dropped more than 10 percent from
January to February of this year, greatly surpassing the average job
security declines seen nationwide, according to the ScoreLogix Job Security
Index. IT job security fell 10.2 percent in February, the eighth decline
in 13 months and the largest drop in over a year, while job security
nationwide for all industries fell only 1.9 percent. "This reduced demand
for IT jobs, which has lowered job security level in the IT sector, can be
attributed to outsourcing, offshoring, and relocation of production to
cheaper, foreign locations," say ScoreLogix analysts in a report. "In
addition, companies have reduced their investment in IT infrastructure
because of lack of compelling, technologically superior upgrades--since
existing infrastructure works just fine. Besides, the economy is weak and
offers every incentive to cut costs and scale back non-essential, avoidable
investments in technology related products and services." ScoreLogix
founder and CEO Suresh Annappindi says the chances of an IT professional
losing his or her job seem to have flatlined and probably will not increase
or decrease significantly anytime soon, though the IT sector is performing
much worse than the overall national economy. Cisco security expert Jamey
Heary says that IT job cuts could have impacts well beyond the personal
suffering of the employees, and that economic recession and related budget
cuts in IT security could leave companies vulnerable to cyberattacks.
Merging Man and Machine to Reach the Stars
Space.com (03/28/08) Hsu, Jeremy
Robotic space missions have emerged as a cheaper and less risky
alternative to manned missions, but former NASA historian Roger Launius
says that "the lack of a compelling story associated with robotic
spaceflight means that side of the equation has not been developed as well
as the human side." Launius has co-authored a book with American
University professor Howard McCurdy that argues humans and robots are
mutually dependent in the effort to succeed in space
exploration. They say proponents of manned spaceflight have a legitimate
reason to get humans off the Earth--to ensure the species' survival through
interplanetary colonization--but they must make this motivation plain.
Using survival as the primary rationale for funding space missions is a
tough sell, while Dittmar Associates CEO Mary Lynne Dittmar notes that
"young people seem to be able to relate much more easily to robotic
missions, and therefore get more excited about them." Interstellar voyages
are beyond the current physical capabilities of both robots and humans, and
Launius thinks merging man and machine into cyborgs is one possible
solution.
Soccer Robots Compete for the Title
Fraunhofer-Gesellschaft (03/28/08)
The "RoboCup German Open" will take place April 21-25, 2008, in Hall 25 at
the Hannover Messe. Organized by the Fraunhofer Institute for Intelligent
Analysis and Information Systems (IAIS) in Sankt Augustin, RoboCup will
bring together more than 80 teams of researchers from more than 15
countries to pit their robots against each other. RoboCup will offer a
soccer tournament broken down into nine leagues, such as for robots on
wheels, on four mechanical paws, or on two legs. The fully autonomous
robots will feature cameras and sensors for scanning the ball, other
robots, and the pitch; internal processors for converting data into game
tactics such as defense strategies; and innovative engines for powering
them across the field and faking out opponents. "Just like real players,
they fall down and get up again, go after the ball autonomously and score
goals," says Dr. Ansgar Bredenfeld, who is in charge of the RoboCup at
IAIS. In addition to the soccer games, RoboCup will offer a
"RoboCup(at)Home" category in which service robots compete on performing
domestic tasks. Also offered will be a "RoboCup-Rescue" category in which
rescue robots complete an obstacle course; a RoboDance in which robots
participate in a dancing competition; and a competition for people under 20
years of age. "Many components that were originally designed for robot
soccer have since made their way into other applications, for instance in
localization technology for inspection robots," says professor Stefan
Wrobel, executive director of IAIS.
A Library Visit in 3D
ETH Life (03/20/08) Cosby, Renata
A group of students at ETH Zurich's Information Systems Lab used the
Second Life virtual environment to study and tackle a number of problems
typical of library book-lending by visualizing an automated library that
employs radio frequency identification (RFID) technology. The virtual
library featured a checkout counter, a help desk, detection gates, a self
check-in area, and a female avatar, while books were equipped with RFID
tags and RFID readers were placed at various locations. The Smart RFLib
System boasts a three-tiered architecture consisting of data acquisition,
query processing, and visualization. In the acquisition tier, RFID readers
and antennas capture data about tagged books and people with tagged library
cards, and this data is cleaned and compressed according to a time period
or an event; the query processing tier analyzes the collected data, spots
key events such as theft, and prompts the proper action or alert, updating
the database; the visualization layer triggered by the database update
delivers a 3D model of the results to the viewer. To compress the
immense volumes of data acquired by RFID tag readers, the project group
devised a pair of techniques: having book tags send responses at regular
intervals, or harvesting data only when a tag registers new information.
The standard client-server architecture for Second Life spots
changes in the status or position of objects from within, but Second Life
cannot monitor all the information about library books, people, or policies
that one would wish to keep tabs on. The ETH Zurich students had to
develop an additional Web interface as a complement to their Second Life
visualizer that allowed virtual visitors to make their own inquiries about
books, check their current status as borrowers, and monitor the system in
real time.
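
The students' two data-reduction strategies map onto a familiar pattern:
report on a fixed schedule versus report only on change. The sketch below
is illustrative; the class names, event format, and shelf/gate labels are
invented, not taken from the actual ETH Zurich system.

    import time

    class ChangeOnlyReader:
        """Emit an event only when a tag's observed location changes."""
        def __init__(self):
            self.last_seen = {}              # tag_id -> last location

        def observe(self, tag_id, location):
            if self.last_seen.get(tag_id) != location:
                self.last_seen[tag_id] = location
                return {"tag": tag_id, "loc": location, "ts": time.time()}
            return None                      # suppress duplicate readings

    class PeriodicReader:
        """Emit a snapshot of all tags every `interval` seconds."""
        def __init__(self, interval=5.0):
            self.interval, self.next_report = interval, 0.0
            self.snapshot = {}

        def observe(self, tag_id, location):
            self.snapshot[tag_id] = location
            now = time.time()
            if now >= self.next_report:
                self.next_report = now + self.interval
                return dict(self.snapshot)   # one compressed batch update
            return None

    r = ChangeOnlyReader()
    print(r.observe("book-42", "shelf-3"))   # event emitted
    print(r.observe("book-42", "shelf-3"))   # None: nothing changed
    print(r.observe("book-42", "gate-1"))    # event: possible checkout/theft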