Stemming Spam: Internet Routing and Spam Data Reveal
Trends to Help Researchers Build Better E-mail Filters
Georgia Institute of Technology (09/12/06)
Researchers at the Georgia Institute of Technology have found that
addressing spam at the network level could be a more effective solution for
Internet service providers than today's message content filters. They have
also developed algorithms that can detect when a computer is a member of a
botnet, as well as a technique for bolstering the security of the
Internet's routing structure. "Content filters are fighting a losing
battle because it's easier for spammers to simply change their content than
for us to build spam filters," said Nick Feamster, an assistant professor
of computing. "We need another set of properties, not based on content.
So what about network-level properties? It's harder for spammers to change
network-level properties." The research will be presented at the ACM
SIGCOMM conference on September 11-15 in Pisa, Italy. The researchers
spent 18 months collecting Internet routing and spam data from one domain.
They found that they can identify which Internet service providers are
transmitting spam, as well as the numerous narrow ranges of IP address
space that are only producing spam. Spammers exploit vulnerabilities in
Internet routing protocols by broadcasting a route for that space to the
routers on the Internet, enabling them to assign their machines any IP
address within that space. They then send spam from those machines and
promptly withdraw the route once the messages are sent. By the time a
recipient can file a complaint, the IP address is no longer reachable
and the route has disappeared. "Even if you're watching the hijack take
place, it's difficult
to tell where it's coming from," Feamster said. "We can make some good
guesses. But Internet routing protocols are insecure, so it's relatively
easy for spammers to steal them and hard for us to identify the
perpetrators." Feamster hopes that his research will lead to more secure
Internet routing protocols and improved spam filtering.
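The hijack pattern Feamster describes (announce a route, send spam from
it, withdraw it) leaves a timing footprint in the global routing tables.
As a hedged illustration, not the researchers' actual method, the Python
sketch below flags prefixes whose announcements are withdrawn unusually
quickly; the threshold, update format, and sample data are all
assumptions made for the example.

```python
# Hypothetical monitor: flag prefixes whose BGP announcement is withdrawn
# shortly after it appears, the short-lived hijack pattern described above.
from collections import defaultdict

SHORT_LIVED_SECS = 600  # assumed threshold: under ten minutes is suspicious

def flag_short_lived_routes(updates):
    """updates: iterable of (timestamp_secs, prefix, event) tuples,
    where event is 'announce' or 'withdraw'."""
    announced_at = {}
    suspicious = defaultdict(list)
    for ts, prefix, event in updates:
        if event == "announce":
            announced_at[prefix] = ts
        elif event == "withdraw" and prefix in announced_at:
            lifetime = ts - announced_at.pop(prefix)
            if lifetime < SHORT_LIVED_SECS:
                suspicious[prefix].append(lifetime)
    return dict(suspicious)

updates = [
    (0, "192.0.2.0/24", "announce"),
    (120, "192.0.2.0/24", "withdraw"),   # gone two minutes later
    (0, "198.51.100.0/24", "announce"),  # stays up, never flagged
]
print(flag_short_lived_routes(updates))  # {'192.0.2.0/24': [120]}
```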
Sandia Fingerprinting Technique Demonstrates Wireless
Device Driver Vulnerabilities
Sandia National Laboratories (09/12/06)
Researchers at Sandia National Laboratories have demonstrated a
wireless-networking vulnerability that could enable a hacker to identify an
802.11 wireless driver without modifying the device. By making the unique
"fingerprinting" technique publicly known, the researchers hope to improve
the security of wireless communications. Device drivers have become a
principal vulnerability in today's operating systems, Sandia's Jamie Van
Randwyk says. Video and keyboard drivers are unlikely targets because it
is difficult to gain physical access to them, but some types of drivers,
such as wireless cards, Ethernet cards, and modems, can be compromised
without physical access, Van Randwyk notes. "Wireless network drivers, in
particular, are easy to interact with and potentially exploit if the
attacker is within transmission range of the wireless device," he said.
The research demonstrates that an attacker can monitor a victim's wireless
traffic so long as he is within transmission range. Since the attacker is
not sending data, he essentially operates invisibly, making the attack
difficult to detect. Wireless devices periodically send out probe
request frames to scan for access points, but the 802.11 specifications
do not standardize how those requests are made. The fingerprinting
technique
highlights the vulnerabilities that arise from different wireless device
drivers performing the probe request function differently. The
fingerprinting technique achieved accuracy rates between 77 percent and
96 percent in testing, depending on the network setting.
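To make the timing idea concrete, here is a toy classifier in the spirit
of the Sandia technique, not their actual tool: it summarizes the gaps
between a device's captured probe request frames and picks the known
driver whose timing signature is closest. The signatures and timestamps
are invented for illustration.

```python
# Toy classifier, not Sandia's tool: match the timing of captured probe
# request frames against per-driver signatures (mean and stdev of the
# gaps between probes, in seconds). Signatures and data are invented.
from statistics import mean, stdev

KNOWN_DRIVERS = {
    "driver_a": (1.0, 0.05),
    "driver_b": (0.3, 0.10),
}

def classify(probe_times):
    gaps = [b - a for a, b in zip(probe_times, probe_times[1:])]
    m, s = mean(gaps), stdev(gaps)
    # pick the driver whose signature is closest to the observed stats
    return min(KNOWN_DRIVERS,
               key=lambda d: abs(KNOWN_DRIVERS[d][0] - m)
                             + abs(KNOWN_DRIVERS[d][1] - s))

observed = [0.0, 1.02, 2.01, 3.05, 4.03]  # passively captured timestamps
print(classify(observed))  # -> driver_a
```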
Will Airport of the Future Fly?
CNet (09/13/06) Olsen, Stefanie
At the opening session of the FAA/NASA/Industry Airport Planning Workshop,
Cisco Systems' Dave Evans articulated a bold vision of technological
transformation for airports, where virtual intelligence agents could check
in bags, new sensor networks could improve security, and pilots could even
fly a plane from home using a remote brain-machine interface. Evans
described RFID readers that could enable airlines to identify passengers by
their cell phones and check them in remotely, while new display
technologies could change the way that flight information is presented
inside the airport terminal. Evans told the audience that he has developed
software that could enable virtual intelligence agents to learn from their
interactions with human airport workers. Executives in attendance from the
airport industry reacted to Evans' predictions with a mixture of excitement
and fear, as well as a healthy dose of skepticism, given that airports
still lack some of the most basic technological needs, including devices to
scan passengers and luggage for dangerous items such as bombs.
Government regulations also stall the adoption of new technologies. "I
think it's a real challenge for government to react to technology changes
whether it's security or flying," said Steve Martin, CFO of policy and
planning for Airports Council International, North America. "I don't see
government agencies being able to keep up with technology's exponential
growth." Nevertheless, the participants expressed measured optimism that
policymakers might cut through some of the red tape if they were shown
simulations of how new technologies could improve the airport industry.
DePauw to Host Midwest Celebration of Women in Computing
Conference
DePauw University (09/12/06)
DePauw University is expected to draw approximately 100 women from 25
schools across the Midwest to its campus in Greencastle, Ind., for the
Midwest Celebration of Women in Computing (MidWIC) conference, scheduled
for Sept. 29-30, 2006. DePauw computer science professor Gloria Childress
Townsend, co-organizer of MidWIC, says the event will build on the success
of the Indiana Women in Computing annual conference in February, which she
chaired. Sheila Castaneda, associate professor and chair of computer
science at Clark College, will deliver the keynote address. The conference
will have a papers program and publish conference proceedings that include
copies of papers, with hopes of preparing female computer science students
for the writing, reviewing, and publishing process. ACM's committee on
Women in Computing (ACM-W) is a sponsor of MidWIC, and will join Microsoft
and Google in providing scholarship grants. "MidWIC will connect women in
computing from these diverse settings, emphasizing the importance of
graduate school and remaining in the computer science field to affect the
technological future where women's influence and unique perspective will be
essential," says Townsend. For more on ACM-W, visit
http://women.acm.org
New Global Grid Computing Technology Demonstrated by
Researchers in US and Japan
Carolina Newswire (09/11/06)
U.S. and Japanese researchers have demonstrated automated
interoperability between national grid computing testbeds in their two
countries.
Linking Japan's G-lambda project and the United States' Enlightened
Computing project is the first demonstration of interoperability between
grid initiatives in two countries at this scale. Many believe that the
seamless connection of geographically distant grid environments will be the
key to the next generation of Internet services. Grid computing is of
particular value for large-scale scientific research projects, enabling
scientists to share equipment and work on high-speed optical networks
operating at 10,000 times the speed of the broadband connections at
individual users' homes. The U.S. and Japanese researchers demonstrated
how network connections to the grid, which typically take weeks to
establish, can be created "on demand" using new software applications,
providing access to the grid resources only for the time that is needed,
whereas connections previously have been tied up for months or even years.
"This has been a wonderful collaboration to demonstrate the
interoperability of resource management middleware," said Tomohiro Kudoh of
Japan's National Institute of Advanced Industrial Science and Technology.
"In addition, we together obtained a lot of knowledge about the middleware
integration in terms of the architecture as well as the implementation.
This collaboration will become a model of future service infrastructure
provided by multiple organizations in multiple network domains."
Researchers developing next-generation optical networks hope to use what is
called the optical control plane to deliver control of the network and
resources to software programs. The optical control plane regulates the
creation, management, and release of optical-network connections, as well
as the algorithms that determine the most direct path between resources.
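The path-selection piece of that control plane is, at heart, a
shortest-path computation. A minimal sketch follows, assuming an
invented topology and link costs; a real optical control plane would
also handle wavelength assignment and advance reservations.

```python
# Shortest-path sketch over an invented lambda-switched topology;
# a real control plane adds wavelength assignment and reservations.
import heapq

def shortest_path(graph, src, dst):
    """graph: {node: [(neighbor, cost), ...]}; returns (cost, path)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return float("inf"), []

links = {
    "Tokyo": [("Chicago", 5), ("Seattle", 4)],
    "Seattle": [("Chicago", 2)],
    "Chicago": [("RaleighNC", 1)],
}
print(shortest_path(links, "Tokyo", "RaleighNC"))
# (6, ['Tokyo', 'Chicago', 'RaleighNC'])
```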
Simulating September 11
The Engineer Online (09/12/06)
A team of researchers at Purdue University has developed a detailed
simulation based on scientific principles and mathematical models to
examine what likely transpired when the World Trade Center's North Tower
was struck by a hijacked commercial airliner on Sept. 11, 2001. The plane
and its mass are represented as hundreds of thousands of "finite elements,"
or tiny squares with specific physical properties. The simulation could
help determine which parts of the building's structural core were affected
and how the tower eventually collapsed from the fire that was fueled by
some 10,000 gallons of jet fuel. The first simulation, which depicts how
the plane ripped through several stories of the building in a half-second,
was the product of 80 hours of work by a 16-processor high-performance
computer, said Purdue computer science professor Christoph Hoffmann. "This
required a tremendous amount of detailed work," Hoffmann said. "We have
finished the first part of the simulation showing what happened to the
structure during the initial impact. In the coming months, we will explore
how the structure reacted to the extreme heat from the blaze that led to
the building's collapse, and we will refine the visual presentations of the
simulation." Hoffmann and his colleagues are trying to determine how many
columns in the building's core of 47 heavy steel I-beams were initially
destroyed. It now appears that 11 columns were destroyed on the 94th
floor, 10 on the 95th floor, and nine on the 96th floor, said Purdue
engineering professor Mete Sozen. "This is a major insight. When you lose
close to 25 percent of your columns at a given level, the building is
significantly weakened and vulnerable to collapse," Sozen said. The
researchers, drawing partially on the findings of a similar study on the
September 11 attack on the Pentagon conducted in 2002, have concluded that
most of the structural damage in such a collision is the result of the
impact of the mass of fluid on board.
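Sozen's "close to 25 percent" figure can be checked directly against the
reported numbers, assuming the 47 core columns are counted per level:

```python
# Worked check of the quoted figure: 47 core columns per level,
# losses per floor as reported by the Purdue team.
CORE_COLUMNS = 47
destroyed = {"94th": 11, "95th": 10, "96th": 9}
for floor, count in destroyed.items():
    print(f"{floor} floor: {count}/{CORE_COLUMNS} = {count / CORE_COLUMNS:.1%}")
# 94th floor: 11/47 = 23.4%, consistent with "close to 25 percent"
```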
Australia's Jobs Boom for Robots
Age (Australia) (09/12/06) Timson, Lia
Scientists and computing experts will gather to discuss the future of
robotics and other technologies at the Computing the Future Symposium at
the University of Sydney tomorrow. Australia's vast expanse of land,
hostile environment, and sparsely populated remote areas make it an ideal
environment for the development of autonomous systems. "Australia may end
up being one of the biggest users of robotics in the world," said leading
Australian roboticist Hugh Durrant-Whyte, director of the Australian
Research Council Center of Excellence for Autonomous Systems at the
University of Sydney. At cargo terminals, field robotics applications have
already demonstrated their potential for commercial settings, and in the
coming decade, automated systems will be deployed in mining, agriculture,
and defense, Durrant-Whyte says. Despite the technological advances,
consumer applications remain farther off, as robots still have difficulty
negotiating stairs or executing tasks such as pouring a glass of water.
Also speaking at the conference will be Howard Charney of Cisco Systems,
who will challenge the audience to advance innovation by using the Internet
as an educational tool, and to continue investing in broadband
infrastructure. "The network of tomorrow will be evolutionary and it will
build on the (Internet) of today. It will be much faster, much more secure
with (clear) video and audio, but all dreams will come to a halt if
broadband rollout can't be done faster," Charney said. Albert Zomaya, head
of the School of Information Technologies at the University of Sydney, will
discuss Australia's role in bioinformatics and biomedical imaging. Zomaya
likens the role that computer science plays in biology to that historically
played by math in physics, though he warns that there needs to be more
funding in Australia if it is to compete with the United States and
Europe.
Net Neutrality Bill May Die This Year
CNet (09/12/06) Broache, Anne
Before Congress left Washington for its August recess, Senate Commerce
Committee Chairman Sen. Ted Stevens (R-Alaska) suggested he was confident
that he would be able to round up the 60 votes needed to end a filibuster
on a sweeping communications bill that includes everything from changes to
the way the government subsidizes rural telecommunications to a revival of
the controversial "broadcast flag" copy protection. But at a committee
event in Washington on Tuesday, Stevens said the debate over Net neutrality
is holding up the bill and could cause it to be derailed this year. Top
Senate committee aides say it is impossible to predict whether their bosses
will be able to pass the communications legislation this year, particularly
since Congress is set to recess again in a few weeks. Even if the
legislation stalls this year, the debate over Net neutrality will not
likely end anytime soon, some aides say. "That issue is not going to go
away until we have a whole lot more (broadband) competition than we do
today, at least in my view," said James Assey, senior counsel to Democrats
on the Senate Commerce Committee. Others say the current bill offers
sufficient protection in the "Internet consumer bill of rights" section of
the legislation, and note that failure to pass any new laws is worse. Lisa
Sutherland, staff director for the committee's Republican side, says, "If
we don't get a bill up at all, we basically have the status quo, and there
are zero protections on Net Neutrality."
How Hartmut Pilch, Avid Computer Geek, Bested
Microsoft
Wall Street Journal (09/12/06) P. A1; Jacoby, Mary
A German linguist and self-styled computer geek, Hartmut Pilch is leading
the charge against a new patents court in Brussels that is supported by
Western technology giants including Microsoft and Siemens. "Patents on
software mean any programmer can be sued at any time," Pilch said. Last
July, some 200 programmers followed Pilch's call to protest at the European
Parliament, demanding the right to a free exchange of computer code. In
response, Parliament scuttled a law that technology companies had spent
years and millions of euros trying to get passed. The battle is now over
the creation of a special patents court that would hear appeals cases from
across Europe. The court is supported by the likes of Microsoft because
national courts often reject software patent claims. Pilch's group, the
Foundation for a Free Information Infrastructure (FFII), believes that
computer language should be no more a marketable commodity than human
speech. Claiming that copyright law already offers sufficient protection
from piracy, Pilch wants to protect Europe from the software patents that
he says have already undermined innovation in the United States. The
current debate dates back to 2002, when the European Commission proposed a
law that would subordinate the national courts to the pro-software patent
European Patent Office, arguing that a streamlined patent system would
improve European competitiveness. Having won over pro-industry members of
Parliament, who voted in 2005 to abandon the proposed software law, the
FFII has now redirected its energies toward defeating the proposal for a
patent court.
Personal Data Protection Vital to Future Civil
Liberties
IST Results (09/13/06)
Researchers working under the SWAMI project set out to determine the
privacy implications of the ongoing miniaturization of intelligent devices
that can be embedded throughout the environment to capture and relay
personal information. With microelectromechanical sensors the size of a
grain of sand capable of detecting a whole spectrum of environmental
conditions, from light to vibrations, the environment is becoming much more
intelligent, but the era of continuous communication could have troubling
implications for security, privacy, and civil liberties. Observers believe
that ambient intelligence could be a major boon to Europe's economy, and
the field has already received considerable research funding. But in order
to deliver customized information and services to individual users, an
inordinate amount of personal data must be stored, where it could be
vulnerable to abuse. "Most people would be shocked to find out just how
much information they consider private is already in the public domain,"
said David Wright, the project's information coordinator. The SWAMI
researchers explored several everyday scenarios that demonstrated how
information could be misused in a world of intelligent environments, such
as a hacker accessing the control system of a traffic grid powered by
ambient intelligence, or the theft of a large volume of personal data from
a data-aggregation company whose main system is powered by ambient
intelligence. Wright and his colleagues compiled a list of proposed
measures for safeguarding personal data, including the privacy-enhancing
technology that can be incorporated into fourth-generation mobile devices.
They also call for legislation at both the national and European levels to
meet the challenges of increasingly intrusive technologies.
Google Seeks Help With Recognition
Business Week (09/07/06) Holahan, Catherine
No stranger to massive projects, Google is now attempting to digitize the
entirety of the world's printed material with its Books, Scholar, and News
Archive initiatives. But as it tries to build the world's largest online
library, Google is seeking help, having released its character-recognition
scanning software to the open-source development community. Google's first
step in improving optical character recognition (OCR) technology was to
debug Hewlett-Packard's old OCR engine, which had recently been released to
university researchers in Nevada after sitting idle since 1995. Google
opened the debugged version to the development community in the hopes of
greatly expanding its capabilities to surpass existing OCR search engines.
OCR technology makes documents readable to search engines; without it, a
scanned page simply appears as an image, and the search engine is unable to
locate keywords or phrases within the text. The HP program, Tesseract,
lags well behind the standards of current commercial OCR applications,
however. Google hopes that by enlisting the development community, the HP
engine will not only overcome its deficiencies in areas such as reading
grayscale and text with background color, but also vault ahead of existing
OCR search engines, which often have difficulty reading foreign languages,
handwriting, and irregular fonts and layouts. In the past, Google has also
had trouble with blurry or off-center scans that are sometimes unreadable
to the OCR engines. "If you look at OCR over the past 10 years, not much
has happened. There are some programs out there that are pretty good, but
we wanted to see if by putting OCR out there we could improve it," said
Google's Chris DiBona.
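For readers who want to try the OCR step the article describes, the
open-sourced Tesseract engine can be driven from Python through the
pytesseract wrapper; a minimal sketch, with the image file name as a
placeholder:

```python
# Minimal OCR sketch: turn a scanned page image into searchable text.
# Requires the external tesseract binary plus the pytesseract and
# Pillow packages; "scanned_page.png" is a placeholder file name.
from PIL import Image
import pytesseract

page = Image.open("scanned_page.png")     # an image-only page is invisible to search
text = pytesseract.image_to_string(page)  # extracted text a search engine can index
print(text[:200])
```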
Supercomputing: The Next Industrial Revolution
Industry Week (09/13/06) Ahalt, Stanley
As small and midsize manufacturing companies throughout the country
struggle to compete with foreign rivals, they must embrace new technologies
that can reduce production costs, streamline work processes, and improve
product quality, writes Stanley Ahalt of the Ohio Supercomputer Center.
The high-performance computing power that facilitates computer modeling and
simulations is no longer beyond the reach of smaller enterprises. Just as
PCs have become smaller and more affordable for consumers, high-performance
computing technology today is scalable for companies of all sizes and for
any purpose, and Japanese and Chinese companies have already begun to
invest heavily in the technology. Supercomputing technology can generate
better product models and reduce time to market, while simulations can
clarify the choice between alternative processing techniques. Pringles,
for example, is borrowing the airline industry's aerodynamic analysis
methods to prevent potato chips from flying off the assembly line. Using
simulations, Goodyear has dramatically cut its spending on physical tire
prototypes. The supercomputing charge has been taken up by Microsoft
Chairman Bill Gates, who said, "What we see as a key trend here is that we'll
have supercomputers of all sizes, including one that will cost less than
$10,000 and be able to sit at your desk or in your department and be very,
very accessible...we need an approach here that scales from the smallest
supercomputer that will be inexpensive up to the very largest." In that
vein, the Ohio Supercomputer Center has developed the Blue Collar Computing
program, which aims to provide smaller companies with access to
supercomputing resources. The program has drawn the support of legislators
and President Bush.
Beer Project Is a Lot of Froth and Bubble
Age (Australia) (09/11/06) Hearn, Louisa
Researchers at the Australian research agency CSIRO plan to spend the next
four years developing software tools that will allow animators to more
realistically recreate the movement of water for movies and games.
Creating the animation of liquids is difficult because the researchers also
have to factor in the behavior of motion, foam, bubbles, splashes, waves,
eddies, and whirlpools. The team of scientists recently completed the
first stage of the project by using simulated scenes from the sinking of
the Titanic as an example of animated motion of water. The researchers had
to use complex mathematical algorithms to calculate the behavior of
flowing water, which was then rendered in software for simulating the
motion of different objects. "That was quite challenging to do using
proper physics and is something we believe is quite unique in our
simulations," says Mahesh Prakash, a research scientist at CSIRO.
Resolution was an enormous concern for the researchers: without enough
of it, the animated fluid would resemble flowing ice. The
researchers solved this problem by using Computational Fluid Dynamics,
which gave animators control of every individual particle in a fluid.
CSIRO scientists are now turning their attention to other liquid behaviors,
and hope to simulate the formation of bubbles, foam, and spray, by
animating actions such as the pouring of beer, by the end of the year.
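The per-particle control the researchers describe can be pictured with a
drastically simplified update loop. The toy sketch below is not CSIRO's
solver; it only applies gravity and a damped bounce to each particle,
but it shows the idea that every particle carries its own state, which
is why higher particle counts yield smoother fluid.

```python
# Toy per-particle update, not CSIRO's solver: gravity plus a damped
# bounce off the ground plane, applied independently to each particle.
GRAVITY = -9.8      # m/s^2
DT = 1.0 / 30.0     # one animation frame at 30 fps

def step(particles):
    for p in particles:         # p = {"y": height in m, "vy": velocity in m/s}
        p["vy"] += GRAVITY * DT
        p["y"] += p["vy"] * DT
        if p["y"] < 0.0:        # hit the ground: clamp position and damp
            p["y"] = 0.0        # the rebound so the "fluid" settles
            p["vy"] *= -0.3     # instead of bouncing forever

particles = [{"y": 1.0, "vy": 0.0}, {"y": 2.0, "vy": -1.0}]
for _ in range(3):
    step(particles)
print(particles)
```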
Breakthroughs in Open Source
InfoWorld (09/04/06) Vol. 28, No. 36, P. 20; Binstock, Andrew;
McAllister, Neil; Venezia, Paul
Community-driven or open-source software development is highly valued
because it cultivates software products organically, and the resulting
innovations often accommodate functional areas not covered by proprietary
software. The basic building blocks of enterprise Java, which include the
Eclipse and NetBeans development environments, the PMD source code
validator, Ant, Hibernate, Maven, and JUnit, are all open source, while the
JavaServer Faces, Struts, and Spring open-source frameworks have also
proven valuable, along with containers such as Apache Tomcat, Geronimo,
Jonas, Jetty, and Resin. There is an assortment of open-source multimedia
projects underway, among them the Ogg Vorbis "lossy" audio compression
technology, which is royalty free and can deliver better sound quality than
MP3 at a similar compression grade via advanced psychoacoustic modeling;
meanwhile, the BBC Research-sponsored Dirac project seeks to improve video
delivery through the use of wavelet compression. To address the content
problem that is hindering the adoption of open-source multimedia products,
Sun Microsystems is supporting the Open Media Commons initiative to develop
DRM technology in a community-driven manner. The open-source Linux
operating system can deliver advantages to embedded system devices through
its low cost, openness, and flexibility, easing the construction of complex
embedded applications, accelerating the production of prototypes and
time-to-market, and nurturing a climate of "competitive collaboration,"
among other things. Open source has started to penetrate the security
industry because the products are developed by scores of quality assurance
teams, which entails the faster detection of bugs and far more scrutiny
paid to fixes than in the commercial sector. Nearly all scripting
languages are open source, and this property is critical to their success
because it is directly responsible for building developer communities.
Finally, open-source enterprise messaging is gaining credibility as an
alternative communications medium for small to medium-sized organizations;
the advantage over proprietary messaging is a long-term guarantee to
customers that their data will be accessible when they need it.
Are You Being Served?
Public CIO (09/06) Vol. 4, No. 4, P. 52; Douglas, Merrill
As the U.S. economy increasingly shifts toward services and away from
manufacturing, business researchers are starting to pay more attention to
services as well, and this holds some promise for the public sector.
Arizona State University has long been paying attention to the service
economy, and its W.P. Carey School of Business has spent 20 years helping
companies achieve excellence and innovation in services through its Center
for Services Leadership. Mary Jo Bitner, the center's academic director,
says the center is working with the university's computer science and
engineering programs to seek National Science Foundation (NSF) funding for
a cross-disciplinary doctoral program in services science. Other
universities that have entered this field are Stanford, North Carolina
State, Rensselaer Polytechnic Institute, Georgia Institute of Technology,
and Penn State, urged on by organizations such as IBM and the NSF. Among
the goals of the NSF's Service Enterprise Engineering program is to help
boost the service sector's productivity growth--which has been just 0.4
percent a year, compared to 4 percent in manufacturing. Bob Glushko,
adjunct professor at UC Berkeley's Services Science, Management, and
Engineering program, says academic attention to services stands to benefit
the public sector and its CIOs as well. He says services science brings
together various disciplines involved in creating effective e-government
applications, and employees with training in services science could allow
for the creation of more complex systems than are viable now. Meanwhile,
Bitner says the topic of "customer co-production" or "customer
co-involvement" is another area of services-science research that could be
of particular interest to government executives: "Government is doing a
lot more with self-service through technology, and the research we've been
doing in that area suggests there are a lot of ways you could do that more
effectively."
5 Paths to the Walking, Talking, Pie-Baking Humanoid
Robot
Popular Science (09/06) Vol. 269, No. 3, P. 58; Mone, Gregory
Roboticists are working toward a vision of a robotic, humanoid,
multi-tasking servant, but challenges in the areas of interaction,
locomotion, navigation, manipulation, and intelligence must be met in order
to realize this vision. Interactive challenges include making robots
capable of understanding people's orders, memorizing names and faces, and
assigning significance to words through experiential data. In terms of
locomotion, the ideal design for a bipedal robot optimizes efficiency and
stability, marrying low-power consumption with reliable balance; better
actuators are viewed by some as a short-term solution, while artificial
muscles are considered to be a longer-term measure. The advantages of an
ambulatory robot as opposed to a wheeled robot include the ability to climb
over obstacles, which requires significant advances in navigation. One
school of thought advocates equipping robots with sensors to enable a
360-degree perspective, while another prefers binocular vision because it
is closer to human perception and vital to understanding how machines can
be designed to replicate human abilities. Giving robots the capacity for
precise manipulation requires breakthroughs beyond versatile hands and fast
reaction time; it requires advances in tactile sensitivity, examples of
which include MIT roboticist Eduardo Torres-Jara's experiments with
artificial skin. The ultimate decision of what kind of artificial
intelligence robotic servants will have--top-down or bottom-up--will be
left to consumers. Top-down AI is a system in which robots are guided
through their chores by dedicated algorithms, while bottom-up AI uses an
artificial brain that can learn and mature on its own.
Congestion Avoidance Based on Lightweight Buffer
Management in Sensor Networks
IEEE Transactions on Parallel and Distributed Systems (09/06) Vol. 17, No.
9, P. 934; Chen, Shigang; Yang, Na
Shigang Chen and Na Yang with the University of Florida's Department of
Computer and Information Science and Engineering propose a lightweight
buffer management scheme designed to avoid congestion in wireless sensor
networks. Preventing data packets from inundating the intermediate
sensors' buffer space is the goal of Chen and Yang's approaches, which
automatically adjust the sensors' throughput to close to optimal levels
without inducing congestion. Buffer-based congestion can easily occur in a
network where the packets converge toward a sink as the result of a surge
of sensor input triggered by a critical event. Chen and Yang's
buffer-based scheme, in their words, "eliminates the complicated rate-based
signaling that is required by many existing congestion control approaches,
yet it can produce much larger network throughput and, unlike the
rate-based approaches, it does not drop packets." According to the
researchers' simulations, a sensor can achieve high throughput and deter
congestion simply by allocating a small buffer. Chen and Yang explain
that their congestion-avoidance scheme saves energy and reduces radio
collisions with nearby transmitting sensors by eliminating unhelpful
transmissions and keeping upstream sensors silent. Comparison through
simulation demonstrates
that Chen and Yang's buffer-based scheme, unlike other congestion
control/avoidance schemes (global rate control, backpressure, etc.), rarely
drops packets because of buffer overflow, and can automatically adapt the
sensors' data rates to network conditions and yield congestion-free rates
that are a marked improvement over other schemes as well.
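As a hedged sketch of the buffer-based idea as summarized above (names
are invented for illustration, not taken from the paper), a sensor
transmits only when its downstream hop has free buffer space, so packets
wait upstream instead of being dropped:

```python
# Invented names, not the authors': each sensor forwards a packet only
# when its downstream neighbor has free buffer space, so packets are
# delayed upstream rather than dropped.
from collections import deque

class Sensor:
    def __init__(self, name, buf_size=4):
        self.name = name
        self.buf_size = buf_size
        self.buffer = deque()
        self.downstream = None

    def has_room(self):
        return len(self.buffer) < self.buf_size

    def accept(self, pkt):
        self.buffer.append(pkt)  # sender verified has_room() first

    def try_forward(self):
        # Transmit only when the next hop can buffer the packet;
        # otherwise stay silent, saving energy and avoiding collisions.
        if self.buffer and self.downstream and self.downstream.has_room():
            self.downstream.accept(self.buffer.popleft())

a, b = Sensor("a"), Sensor("b", buf_size=1)
a.downstream = b
for i in range(3):
    a.accept(f"pkt{i}")
a.try_forward()  # pkt0 moves to b, filling b's one-slot buffer
a.try_forward()  # b is full, so a stays silent; nothing is dropped
print(len(a.buffer), len(b.buffer))  # 2 1
```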