Undifferentiated Networks Would Require Significant Extra
Capacity
Rensselaer News (06/29/07) Gorss, Jason
Researchers from Rensselaer Polytechnic Institute, AT&T Labs, and the
University of Nevada, Reno say a new study indicates that an Internet that
treats all traffic equally would require significantly more capacity than
one with differentiated services. The study focused on whether application
traffic that requires performance assurance, like VoIP, could be serviced
differently, and what the impact would be if all traffic were treated in an
identical manner. "We wanted to take one piece of the overall debate and
approach it quantitatively," says Shivkumar Kalyanaraman, principal
investigator and professor of electrical, computer, and systems engineering
at Rensselaer. "The study makes clear that there are substantial
additional costs for the extra capacity required to operate networks in
which all traffic is treated alike, and carrying traffic that needs to
still be assured performance as specified in service level agreements."
The researchers used computer models to compare the current "best-effort"
approach to a model that separates information into two classes, one for
regular information and one for applications that require service-level
assurance for high-bandwidth content such as video games, telemedicine, and
VoIP. The study estimates that the "required extra capacity," the
additional capacity needed for an undifferentiated network, could be 60
percent for even modestly utilized networks. Networks under heavy demand
could have a required extra capacity of 100 percent or more of the total
capacity required when differentiation is used. "Clearly an
undifferentiated network in this context is less efficient and more
expensive," says co-author K.K. Ramakrishnan of AT&T Labs. The researchers
presented their findings at the 15th IEEE International Workshop on Quality
of Service.
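The intuition behind the numbers can be seen with a toy provisioning model:
an undifferentiated network must carry all traffic at the low utilization
the most demanding class requires, while a differentiated network holds only
the premium class to that level. The Python sketch below is a deliberately
simple illustration with invented loads and utilization bounds, not the
researchers' model.

```python
# Toy illustration (not the study's model): capacity is provisioned so that
# link utilization stays below a bound that meets each class's delay needs.
# Premium (VoIP-like) traffic needs a lightly loaded link; best-effort
# traffic tolerates running the link much hotter.

def capacity_needed(load, max_utilization):
    """Smallest capacity keeping load/capacity <= max_utilization."""
    return load / max_utilization

premium_load, besteffort_load = 200.0, 800.0  # Mb/s (invented numbers)
rho_strict, rho_loose = 0.5, 0.9              # assumed utilization bounds

# Undifferentiated: one class, so ALL traffic is held to the strict bound.
undiff = capacity_needed(premium_load + besteffort_load, rho_strict)

# Differentiated: each class is provisioned only as tightly as it needs.
diff = (capacity_needed(premium_load, rho_strict)
        + capacity_needed(besteffort_load, rho_loose))

extra = 100.0 * (undiff - diff) / diff
print(f"undifferentiated {undiff:.0f} Mb/s vs differentiated {diff:.0f} Mb/s")
print(f"required extra capacity: {extra:.0f}%")
```

With these made-up figures the undifferentiated link needs roughly 55
percent more capacity, in the same range as the study's estimate for
modestly utilized networks.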
Wall Street Battles Silicon Valley for Top Tech
Grads
Bloomberg (06/30/07) Kassenaar, Lisa
Students with top grades in finance, math, and computer science,
particularly those with bilingual ability, are being recruited by Wall
Street and Silicon Valley more aggressively than any students since the
dot-com boom of the 1990s. Due to an increase in mergers, acquisitions,
leveraged buyouts, and hedge fund investing, U.S. securities firms are
having difficulty filling open positions in investment banking, trading
rooms, and quantitative finance. "It's ferocious," says Merrill
Lynch's director of campus recruiting Connie Thanasoulis. "You have to get
the technology part right because that's become the guts of the
organization." The most intense competition for new talent is over
graduates who can write algorithms for computer-based trading and search
engines. Despite the high demand for computer science graduates,
enrollment in computer science programs dropped 39 percent in the five
academic years ending in 2006, according to the Computing Research
Association. Marketing consultant William Strauss says that many graduates
are smarter than their bosses were at the same age, and after seeing their
parents struggle to balance work and family, many are not willing to work
80-hour weeks. Many financial institutions are also having trouble
competing with the flexibility offered by technology companies such as
Google. Flexible hours are a standard practice at Google, as well as other
amenities that make the workplace feel like a college dorm room, including
pool tables, scooters to navigate the office, three free meals a day,
laundry and gym services, and free massages for employees on their
birthday. Such lavish perks are making it difficult for other
industries to attract technology graduates. "Technology is the hardest to
hire for," says Goldman Sachs' head of U.S. campus recruiting Janet Raiffa.
"We really have to compete."
The Vanishing American Computer Programmer
Christian Science Monitor (07/02/07) P. 15; Francis, David R.
A video posted on YouTube that shows an immigration lawyer talking to a
group of business people about the process of hiring foreign workers over
Americans has sparked outrage. "Our goal is clearly not to find a
qualified U.S. worker," says the immigration lawyer for the firm in the
video. University of California, Davis computer science professor Norm
Matloff says such efforts to hire cheap labor through loopholes in
immigration laws are "absolutely outrageous." Technology industry
executives have lobbied lawmakers to increase the number of H-1B visas and
other temporary visas for highly educated foreign workers, arguing the
visas are necessary because of a shortage of Americans educated in computer
technology and other sciences. Matloff says the H-1B debate is an example
of how some companies have become ruthless in their efforts to get what
they want from Congress. Proponents of higher H-1B limits say there
is a shortage of computer professionals in the U.S., which is reflected by
an unemployment rate of 2.4 percent. However, John Miano, who runs his own
programming firm, argues that wages in the computer industry have been
stagnant after inflation for 10 years, which does not indicate a labor
shortage, and that the low unemployment rate is the result of programmers
and others leaving the industry because they are unable to find work.
Matloff notes that while the United States debates H-1B reform, the country
is losing considerable computer capabilities, as enrollment in university
computing programs continues to drop.
SC07 to Feature 'Doctoral Showcase' Activity
HPC Wire (06/29/07)
The SC07 Technical Program will include the Doctoral Showcase, which will
give attendees of the supercomputing conference an opportunity to learn
about the HPC research pursuits of Ph.D. students, and help organizations
identify up-and-coming supercomputing talent. The inaugural forum will be
open to Ph.D. students who will be graduating within a year after the
conference, which is scheduled for Nov. 10-16, 2007, in Reno, Nev. The
Doctoral Showcase will be open to as many students as possible, and they
will have 15 minutes to present their latest results in HPC. Students have
until July 31, 2007, to register at the SC07 submissions Web site. "These
students represent the future of HPC and they are doing some incredible
work," says Jeffrey Hollingsworth, SC07 Doctoral Showcase chair and
professor of computer science at the University of Maryland. "We felt it
was important to create a forum to hear about their ideas." For more
information or to register, visit
http://sc07.supercomputing.org/
Aldrich Receives Dahl-Nygaard Prize for Outstanding Work
in Object-Oriented Programming
Carnegie Mellon News (06/25/07) Watzman, Anne; Spice, Byron
Carnegie Mellon University's Jonathan Aldrich, an assistant professor in
the Institute for Software Research (ISR) at the university's School of
Computer Science, will receive the 2007 AITO Dahl-Nygaard Junior Prize for
his work in object-oriented programming. Aldrich's research addresses one
of the most important challenges in industrial software
development--correctly structuring large-scale programs. Software programs
can exceed 1 million pages of code, and software companies may have
hundreds of thousands of programmers on multiple continents. If one
programmer enters a single line of incorrect code, the entire system could
fail. Aldrich's solution is ArchJava, an extension of the Java programming
language that builds the high-level structure of a system inside the code
and automatically verifies that the code is consistent with the overall
structure. The objective is to summarize the architectural design of
large-scale systems on a single page and automatically ensure that every
page of code is consistent with the summary. "Jonathan's pioneering work
on ArchJava was the first to mathematically link a blueprint of the overall
architecture of an object-oriented system with the actual execution of the
object-oriented code," says professor of computer science and head of ISR
William L. Scherlis. "His work not only has theoretical interests, but it
can also be scaled to real-world software systems."
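ArchJava itself expresses the blueprint with language constructs (component
classes, ports, and connect declarations) and enforces conformance inside
Java's type system. As a language-neutral illustration of the underlying
idea, and not Aldrich's code, the Python sketch below declares a one-page
architecture as a set of allowed component connections and flags any
observed call the blueprint does not permit.

```python
# Hypothetical sketch of architectural conformance checking, the idea
# behind ArchJava (which performs such checks statically, inside Java).

# The one-page "blueprint": which component may talk to which.
ALLOWED = {
    ("scanner", "parser"),
    ("parser", "codegen"),
}

# Calls actually observed in the code base, e.g. from a call graph.
observed_calls = [
    ("scanner", "parser"),
    ("parser", "codegen"),
    ("scanner", "codegen"),   # a programmer bypassed the architecture
]

def violations(calls, allowed):
    """Return every call that the declared architecture does not permit."""
    return [c for c in calls if c not in allowed]

for src, dst in violations(observed_calls, ALLOWED):
    print(f"violation: {src} -> {dst} is not in the declared architecture")
```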
Robot Works on Navigation Like a Human
Stuff (NZ) (07/02/07)
Auckland University of Technology researchers are exploring ways to
improve robotic navigation by studying how animals find their way around.
Most robots use precise measurements to create detailed maps of their
surroundings, which they then use to navigate. Animals and humans,
however, use spatial information that is often full of errors but are still
able to orient themselves and find their way home, says professor
Albert Yeap, director of the university's Centre for Artificial
Intelligence Research. Yeap and his team wrote software for their robot,
"Albot," that simulates how humans and animals navigate. Albot was released
in a corridor and instructed to move around using its sonar sensors instead
of more precise lasers to map its location. Then it was instructed to find
its way back to its starting position. Albot scored top marks every time
it was released, using just a rough estimate of distances. Yeap believes
this is how humans create mental maps, and he says understanding that
process is the main objective of his research, though more adaptable robots
could be a byproduct. Yeap's next project will
involve programming robots to use symbolic reasoning with concepts such as
"home" to create more complex maps.
Star Trek Tech Will Let People Meet Virtually,
Researchers Say
CBC News (CAN) (06/28/07)
Business people will one day use videoconferencing technology that makes
it appear as if someone who is halfway across the world is present at their
meeting, if Edmonton researchers successfully develop their
three-dimensional virtual reality technology. Inspired by Star Trek's
holodeck, which produced holographic computer-simulated environments that
seemed real to crew members, the virtual reality technology will allow
users to meet in a virtual space filled with virtual products, and give
them the impression that they are meeting face-to-face in a room. Computer
scientists at the University of Alberta have been working with
Hewlett-Packard on the project for the past year. Their goal is to bring
such a level of realism to the technology that users will be able to
distinguish non-verbal cues of others, such as the twitch of an eye and
beads of sweat. Non-verbal cues often influence business decisions, but
current teleconferencing and videoconferencing technologies are not able to
render such detail for users. "It doesn't convey the sense of presence,"
says lead researcher Pierre Boulanger. "So in some ways, we are in the
first phase of this telepresence revolution."
Computer Program Makes Night Sky Searchable
Exduco (06/28/07) Franca, Sara
Computer scientists at the University of Toronto and astronomers at New
York University have developed a system that examines a portion of the
night sky and determines which stars are in the picture. The purpose of
the project is to use high-powered computing and machine learning to help
manage huge astronomical data sets. "We call it a blind astrometry
solver," computer science Ph.D. candidate Dustin Lang says. "It's a bit
like going outside on a dark night and trying to find the constellations,
except we're trying to recognize images that come from all kinds of
cameras, amateur telescopes, large ground-based telescopes, and space
telescopes such as the Hubble Space Telescope." Lang says that because the
project is processing information on about a billion stars, everything
needs to be done efficiently. The project is part of astrometry.net,
and will have a significant impact on both professional and amateur
astronomers. "Amateur astronomers can take great pictures but they rarely
record where their telescopes are pointing," Lang says. "We can figure out
exactly where the image came from and combine images into a high-resolution
picture of the sky that is always being updated." The system can also help
correct possible telescope errors by checking the information telescopes
record. The next step for the project is to make the system faster, more
flexible, and more robust.
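The matching works by indexing small star patterns under a signature that
is invariant to a camera's pointing, rotation, and scale, then looking each
new image up in that index. Astrometry.net hashes four-star "quads"; the
Python sketch below simplifies to triangles and invented coordinates, so it
illustrates the idea rather than reproducing the project's code.

```python
import itertools, math

# Simplified illustration of blind astrometric matching: index star
# triangles by a translation-, rotation-, and scale-invariant signature.

def signature(p1, p2, p3, digits=2):
    """Sorted side-length ratios, invariant to pose and scale."""
    sides = sorted(math.dist(a, b) for a, b in [(p1, p2), (p2, p3), (p1, p3)])
    return (round(sides[0] / sides[2], digits),
            round(sides[1] / sides[2], digits))

catalog = {"Alpha": (0.0, 0.0), "Beta": (4.0, 0.0), "Gamma": (1.0, 3.0)}

# Build the index over all catalog triangles.
index = {}
for names in itertools.combinations(catalog, 3):
    index[signature(*(catalog[n] for n in names))] = names

# The same stars photographed translated, rotated 90 degrees, scaled 2x.
image_stars = [(10.0, 5.0), (10.0, 13.0), (4.0, 7.0)]
print("matched catalog stars:", index.get(signature(*image_stars)))
```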
Grid Computing Misses the Point, Says Academic
Computerworld UK (06/28/07) Knights, Miya
Grid computing may not be the facilitator of e-science that it has been
touted to be, according to David De Roure, a professor of computer science
at the University of Southampton in the United Kingdom. De Roure will
present a paper on the issue at the eResearch Australasia conference in
Brisbane on Friday. Grid computing focuses on the raw power of a new
infrastructure for bringing grid services to users. However, De Roure says
there is more to new science than infrastructure, adding that the grid
community should take a step back and consider the evolution of the Web.
"If we want to enable new science then we need to empower the scientist,"
he says. "It remains a point of debate as to whether the functionality of
the Grid can be delivered through the far simpler programming interfaces of
the Web--I believe it can." De Roure is behind the Semantic Grid
Initiative, which gives information and services well-defined meaning
through descriptions that maximize opportunities for sharing and reuse.
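As a sketch of what those simpler Web interfaces might look like for a
compute job, the Python snippet below posts a job description to a
hypothetical REST endpoint; the URL and payload are invented for
illustration and do not describe any real grid or Web service.

```python
import json
import urllib.request

JOB_SERVICE = "http://example.org/jobs"   # hypothetical endpoint

def submit_job(executable, args):
    """POST a job description and return the id the service assigns."""
    body = json.dumps({"executable": executable, "args": args}).encode()
    req = urllib.request.Request(
        JOB_SERVICE, data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]   # assumed response shape

if __name__ == "__main__":
    # Would run only against a live service implementing this interface.
    print("submitted:", submit_job("blast", ["--query", "genome.fa"]))
```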
See a City Change in Four Dimensions
New Scientist (06/27/07) Marks, Paul
Frank Dellaert and Grant Schindler at the Georgia Institute of Technology
have teamed up with Sing Bing Kang of Microsoft's research lab in Redmond,
Wash., to create software that offers a virtual historical tour of Atlanta
by showing how the city has changed over time. The software, called 4D
Cities, can automatically sort snapshots from Atlanta's past into date
order, then construct an animated 3D model that shows changes, such as when
buildings were constructed or demolished. 4D Cities is designed to work
with scanned historical photos that were snapped from similar vantage
points, making it easier for the system to identify 3D structures within
the images and break them down into a series of points, and then compare
views to determine why some points are visible and others are not. A
building may not have been in a shot, or it could have been blocked out by
another building. "If we can rule out those two possibilities, then we
know that the reason we don't see a building is because it didn't exist
when the image was taken," says Schindler. "Either it was not yet built or
it had already been demolished." The researchers want to develop models
for other cities, and improve the software's recognition of photos, which
would allow for the use of larger sets of time-sequenced images.
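Once the out-of-frame and occlusion cases are ruled out, each image yields
a three-valued observation per building (seen, confidently absent, or
unknown), and a candidate chronological ordering is consistent only if
every building's sightings form one contiguous run. The Python sketch below
is a toy version of that check, not the 4D Cities software.

```python
# Images observe each building as seen (1), confidently absent (0), or
# unknown (None).  An ordering is chronologically consistent if no
# building shows the pattern seen ... absent ... seen.

observations = {                 # image name -> per-building observation
    "photo_a": {"mill": 1, "tower": 0},
    "photo_b": {"mill": 1, "tower": 1},
    "photo_c": {"mill": 0, "tower": 1},
}

def consistent(order, obs):
    buildings = {b for per_image in obs.values() for b in per_image}
    for b in buildings:
        column = [obs[img].get(b) for img in order]
        seen = [i for i, v in enumerate(column) if v == 1]
        if seen and any(column[i] == 0 for i in range(seen[0], seen[-1])):
            return False         # building vanished, then reappeared
    return True

print(consistent(["photo_a", "photo_b", "photo_c"], observations))  # True
print(consistent(["photo_b", "photo_a", "photo_c"], observations))  # False
```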
An Agile Hypertext Design Methodology
University of Southampton (ECS) (07/01/07) Wills, Gary B.; Abbas, Noura;
Chandrasekharan, Rakhi
Lead times for software are being reduced to a matter of months as a
result of mounting pressure from customers, and organizations generally
prefer an iterative, incremental software engineering strategy to cope with
short lead times while maintaining quality. Existing hypertext design
models fail to address the requirements and analysis process that usually
feeds design, and the authors offer an agile, more holistic
approach to hypertext application development to address this process.
Elements of this method include personas (detailed descriptions of end
users) and scenarios (textual descriptions of a persona's mode of
interaction with the system and other personas); multimedia resources and
ontologies, which require the knowledge of end users and stakeholders;
storyboards that represent the user interface design; UML use cases built
from the scenarios; and Web service design via the Service Responsibility
and Interaction Design Method, which distinguishes abstract service
profiles from their implementation. Agile software development methods
share certain principles, including the frequent delivery of functional
software within a short period of time, intimate communications within the
developer team and with clients, a greater focus on programming than
documentation, and simplicity. The authors say their approach favors
limited documentation that still guarantees effective communication within
the team and with the customer. The process features a feedback loop
through which developers continuously improve scenarios as clients refine
them.
"In addition the use of Web service provides the rapid and flexible
response to change, in that the complexity of the functionality can be
delivered incrementally and at different iterations," the authors
conclude.
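To make the artifact trail concrete, the Python sketch below links
personas, scenarios, and derived use cases so that every use case traces
back to an end user; the names and fields are invented for illustration
rather than taken from the authors' notation.

```python
from dataclasses import dataclass

# Invented illustration of the method's artifact trail: personas ground
# scenarios, and UML use cases are built from scenarios.

@dataclass
class Persona:
    name: str
    description: str        # detailed description of an end user

@dataclass
class Scenario:
    persona: Persona
    narrative: str          # how the persona interacts with the system

@dataclass
class UseCase:
    title: str
    derived_from: Scenario  # use cases are derived from scenarios

ana = Persona("Ana", "amateur historian who browses the archive weekly")
browse = Scenario(ana, "Ana searches the archive and annotates a document")
annotate = UseCase("Annotate document", derived_from=browse)

print(f"'{annotate.title}' traces to persona "
      f"{annotate.derived_from.persona.name}")
```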
The Tech Lab: Bradley Horowitz
BBC News (06/29/07) Horowitz, Bradley
Yahoo's Bradley Horowitz envisions an "Internet of things" made possible
by a universal resolver that covers any entity, real or digital, physical
or conceptual. The challenge of transitioning to a world where everything
boasts a digital identifier is classified by Horowitz's colleague Marc
Davis as the "W4" problem, with who, when, what, and where representing the
four "W's." "When" and "where" are already handled very well with
innovations such as GMT and GPS, but resolving "who" (identity) and
especially "what" is proving more difficult, according to Horowitz. An
important question he asks is who or what should act as the authority for
assigning a digital ID to real-world objects, and he wonders whether a
standards body such as ICANN should be used to decide or arbitrate on the
universal resolvers. Horowitz sees a lot of promise in an effort that
involves crowdsourcing and tagging. "Where we find people codifying big
blocks of entities--whether in a movie database or books or restaurants, or
business entities--I am comfortable taking a pragmatic approach so long as
the companies contributing their respective intellectual property are
committed to open standards and strategies," he says. Horowitz notes that
the addition of microformats--coded, machine-readable bits of
structure--allows machine-to-machine communication and eliminates ambiguity
over the entities being discussed, and is a significant advancement toward
the Semantic Web.
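The hCard microformat is a concrete case: agreed-upon class names mark up a
contact so that software can extract "who" without guesswork. The Python
sketch below pulls two fields from a small snippet; the regex extraction is
deliberately simplified and the URL is a placeholder, so treat it as an
illustration rather than a production microformat consumer.

```python
import re

# An hCard embeds machine-readable structure in ordinary HTML, letting
# another program extract the contact fields unambiguously.

html = """
<div class="vcard">
  <span class="fn">Bradley Horowitz</span>
  <span class="org">Yahoo</span>
  <a class="url" href="http://example.org">blog</a>
</div>
"""

def hcard_field(doc, cls):
    """Grab the text of the first element with the given hCard class."""
    match = re.search(r'class="%s"[^>]*>([^<]*)<' % cls, doc)
    return match.group(1) if match else None

print("fn: ", hcard_field(html, "fn"))
print("org:", hcard_field(html, "org"))
```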
HP Lab University 2007: Future Promises 'Insanely Simple'
Technology
ITPro (06/28/07) Holland, Maggie
HP chief technology officer of personal systems Phil McKinney predicts
that social dynamics will change markedly as virtual worlds such as Second
Life become increasingly central to business and consumer activities by
2020 and eventually gain legal status by 2025. Personal
entertainment and smart devices will continue to become more popular, as
will intelligent networks and seamless connectivity. However, since not
everyone buys technology from the same supplier, it needs to be easier for
consumers to buy devices that work well together. McKinney says consumers do
not care what wireless technology a device uses as long as they are always
connected, so the industry needs to work with standards organizations. "We
believe that in the next two to three years consumers will have that
always-connected experience," McKinney says. McKinney believes the answer
to privacy and information overload problems is to create devices that are
"insanely simple to use." He says today's devices force users to go
through technology to get to the benefits, and the challenge will be to
take technology and move it to the background.
Beating Congestion With Mobiles
BBC News (06/29/07) Reid, David
Massachusetts Institute of Technology researchers are using data from
mobile-phone networks to create real-time maps of people moving around
Rome, a system that could help ease traffic congestion. Mobile networks
track users to ensure signals stay strong, and because so many people have
mobile phones, particularly in Rome, the network information can give an
accurate picture of where people are in a city. "This is really the first
time that you can take an urban system, like a big city, and try to see in
real time how it lives, how people move, and what's happening in the city,"
says MIT's Carlo Ratti. "In the city, for example, you've got taxis with
GPS, you've got buses with GPS, and also you've got mobile phones. If you
take that information and you apply artificial intelligence and algorithms
to it, then you can understand very interesting things about the urban
system." The project, called Real Time Rome, could be used to help ease
traffic congestion, help drivers find alternate routes, and help Italy's
transport agency more efficiently allocate transportation resources by
tracking where people are, which would allow for flexible and more
efficient bus routes.
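The aggregation at the heart of such a map is straightforward: bin
anonymized location pings into grid cells and keep the counts fresh. The
Python sketch below is a toy version with invented pings, not the Real Time
Rome system.

```python
from collections import Counter

CELL = 0.01   # grid resolution in degrees, roughly 1 km

def cell_of(lat, lon):
    """Map a coordinate to the grid cell that contains it."""
    return (round(lat / CELL), round(lon / CELL))

# Anonymized pings from the mobile network: (latitude, longitude).
pings = [(41.8902, 12.4922), (41.8905, 12.4918),   # near the Colosseum
         (41.9023, 12.4539)]                       # near the Vatican

density = Counter(cell_of(lat, lon) for lat, lon in pings)
hotspot, count = density.most_common(1)[0]
print(f"busiest cell {hotspot} holds {count} phones")
```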
'I Want to Reduce GUI as Much as Possible'--Toshiyuki
Masui, Apple
Tech-On! (06/26/07) Nozawa, Tetsuo
Real-world interfaces have become the research focus for
Toshiyuki Masui, a well-known software researcher in Japan who joined Apple
in Silicon Valley last year. Masui says the goal is to limit the number of
GUIs so that users can move more toward the direct operation of an item
such as a home appliance. An alarm clock can be set by adjusting the
hands, which means a complex GUI for setting the time to record a video
would not be needed, according to Masui, who adds that the key is to
actually touch the equipment, and not operate it indirectly. Turning off a
ceiling light with a switch is an example of indirect operation. Masui
believes different kinds of sensors and real-world interfaces can be used
to reduce GUIs as much as possible. He joined Fujitsu in 1984 and was
involved in the development of semiconductors, but his overwhelming
interest in software led him to move to Sharp, where he participated in the
development of proprietary GUIs for word processors. Masui is also known
for his pioneering work at Sony Computer Science Laboratories in developing
"predictive entry" technology, which is used by most mobile phones in Japan
today.
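Predictive entry ranks candidate words by how likely the user is to want
them given the characters typed so far. The Python sketch below is a
bare-bones illustration of that idea, not Masui's implementation.

```python
# Rank dictionary words by frequency of use and offer the best
# completions for the prefix typed so far (frequencies are invented).

dictionary = {"tokyo": 120, "tomorrow": 95, "tomato": 12, "total": 40}

def predict(prefix, lexicon, k=3):
    """Top-k most frequent words starting with the typed prefix."""
    hits = [w for w in lexicon if w.startswith(prefix)]
    return sorted(hits, key=lexicon.get, reverse=True)[:k]

print(predict("to", dictionary))   # ['tokyo', 'tomorrow', 'total']
```

A real system would also adapt the frequencies as the user accepts or
rejects candidates.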
Getting Data Centers to Chill
HPC Wire (06/29/07) Vol. 16, No. 26, McCann, Tim
The heat generated by increasingly dense rack systems employed in the
quest for high-performance computing raises the risk of system failures and
reduced equipment lifespans, and dissipating heat through water is becoming
popular as a thermal dissipation solution, writes SGI chief engineer Tim
McCann. One available solution is a closed-loop rack airflow system that
employs water-chilled coils affixed within the rack to remove the heat
after air flows through the computer's electronics, and then recirculates
the cooled air back into the rack. Meanwhile, the open-loop rack airflow
solution also cools the system's electronics with fan-circulated air, which
is cooled via water-chilled coils and then exhausted at the rear of the
rack. This solution tries to keep the exhausted air only slightly warmer
than the data center's ambient temperature, lowering the likelihood of hot
spots. The growing deployment of water-chilled
coil solutions is encouraging vendors to enhance the solutions' efficiency
and convenience. One notable advancement is the containment of the water
cooling mechanism in a hinged rear door that can be opened at any time to
allow easy access to air-movers, cables, and the door itself.
The implementation, maintenance, and servicing of racks can be greatly
streamlined with this method.
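The sizing arithmetic behind such doors follows from Q = mdot * c_p * dT:
the chilled-water flow must carry away the rack's full heat load. The
Python sketch below uses assumed numbers, not figures from the article.

```python
# Back-of-envelope coil sizing:  Q = mdot * c_p * dT
#   =>  mdot = Q / (c_p * dT)

rack_heat_w = 30_000.0   # assumed 30 kW dense rack
c_p = 4186.0             # specific heat of water, J/(kg*K)
delta_t = 10.0           # assumed water temperature rise, K

mdot = rack_heat_w / (c_p * delta_t)   # kg/s
liters_per_min = mdot * 60.0           # 1 kg of water is about 1 liter
print(f"required flow: {mdot:.2f} kg/s (~{liters_per_min:.0f} L/min)")
```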
Open Code and Culture at Merced's New
Collaboratory
CITRIS Newsletter (06/07) Slack, Gordy
The concentration of UC Merced's School of Engineering on open source as
software infrastructure and as an engineering education tool is part of an
effort to erase the intimidation students feel at the prospect of working
with hardware or software, explains UC Merced's CITRIS director and Dean of
Engineering Jeffrey Wright. The school's CITRIS-supported Collaboratory is
an open-source computer teaching facility that can be run for just about 15
percent of the budget and less than 15 percent of the power consumption of
a typical lab of its size, according to Wright. The Collaboratory is an
entirely user-designed lab with absolutely no reliance on proprietary
software; in addition to open-source software, it is supported by commodity
hardware, requires only a modicum of administration, and boasts
student-to-student and student-to-instructor interactivity as well as
facile remote access and interaction. The lab is safely upgradeable thanks
to its open-source nature, and Wright notes that "there is also an
educational pedagogy here that drives our classes. We are trying to get
our students to think differently about information technology by placing
an extremely strong emphasis on information and its management rather than
on the technology." The functionality of the Collaboratory is
faculty-designed, removing a reliance on vendors. The lab is highly
adaptive, which helps promote the school's CITRIS-sponsored agenda to craft
joint ventures with local community colleges and high schools so students
are better prepared for courses comparable to those offered at Merced.