Standards to Stimulate E-Voting?
CNet (10/06/06) Lombardi, Candace
The Caltech/MIT Voting Technology Project at the Massachusetts Institute
of Technology last week convened a panel of election and data specialists
to discuss the challenges of incorporating technology into the voter
registration process in order to assure accuracy and maintenance. A major
problem identified by the panel is a lack of standardization in the way
that voter information is stored. The way in which first and last names
are broken down in some registries while being lumped into a single name in
others was cited as one obstacle. Suffixes such as Jr. and Sr. only add to
the confusion. The Help America Vote Act of 2002 included no such rules or
recommendations for standardization. A system known as Texas Election
Administration Management (TEAM), scheduled to be operational by 2007, is
accessible via the Internet and allows local changes to state ballots. The
service allows a voter's information to be transferred within the state
should they move, but before such a system can be implemented, the old data
must be cleaned up. Current possibilities for exchanging data between
states include Election Markup Language (EML) and Extensible Markup
Language (XML). TEAM uses an XML-based format called EDX; Texas's 254
counties have chosen to implement TEAM, although 27 of them have chosen to
remain offline. Meanwhile, an unwillingness to join the e-voting trend is causing
problems for the standardization efforts as some districts prefer their
own, traditional, systems. Panelist Thad Hall, a professor at the
University of Utah and co-author of "Point, Click, and Vote: The Future of
Internet Voting," compared this reluctance to the VHS vs. Betamax debate
where consumers sat back and waited to see which format would gain
prominence. "I am confident that three or four years from now, everyone
will come online," said panelist Ann McGeehan, director of elections for
Texas. The Health Insurance Portability and Accountability Act is cited as
a standardization success story: 450 formats were condensed into a single
standard within six years. For information about ACM's many
e-voting activities, visit
http://www.acm.org/usacm
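As an illustration of the name-standardization problem the panel
describes, the sketch below (entirely hypothetical; the field names and
suffix list are assumptions, not part of TEAM, EDX, or any state registry)
shows how a record that lumps a voter's name into a single "Last, First
Suffix" string might be split into the separate fields other registries
expect:

# Hypothetical sketch of voter-name normalization; not TEAM/EDX code.
from dataclasses import dataclass

SUFFIXES = {"JR", "SR", "II", "III", "IV"}   # assumed suffix list

@dataclass
class VoterName:
    first: str
    last: str
    suffix: str = ""

def normalize(raw):
    """Split a lumped "Last, First Suffix" string into separate fields."""
    last, _, rest = raw.partition(",")
    parts = [p.strip(". ").upper() for p in rest.split()]
    suffix = parts.pop() if parts and parts[-1] in SUFFIXES else ""
    return VoterName(first=" ".join(p.title() for p in parts),
                     last=last.strip().title(),
                     suffix=suffix.title())

print(normalize("SMITH, JOHN Q JR"))
# -> VoterName(first='John Q', last='Smith', suffix='Jr')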
Conference Gives Pep Talk to Encourage Women in
Technology
San Diego Business Journal (10/09/06) Yarnall, Amy
Last week's Sixth Annual Grace Hopper Celebration of Women in Computing
Conference, sponsored by ACM and the Anita Borg Institute, provided a place
for women to discuss both the technology industry and the state of women in
it. Attendance exceeded expectations by a third, totaling 1,200 men and
women. Anita Borg Institute President Telle Whitney said "there simply
aren't enough people to do the technology jobs" on a global level, which
makes women a valuable, yet often neglected, resource. "This conference
is a wonderful place where women in various technologies can come together.
These women have conversations not around the issues of their companies
but of the commonalities they share with other women," says Fran Berman,
director of the San Diego Supercomputer Center at UC San Diego. At the Technology Leaders
Workshop held at the conference, Berman observed that mentoring programs
are clearly needed, "to connect the mentees of the company with the
mentors." This collaboration is what the conference is beginning to
embody, as many women go back to work with a list of suggested
modifications. One attendee, Rivi Sterling, said when she returned to
Microsoft following the 2000 conference, she expressed the need to "get
involved with this conference and start doing something to get women into
the pipeline," a suggestion Microsoft took. The company is now a supporter
of the annual event.
Flaw Found in European Voting Machines
IDG News Service (10/06/06) McMillan, Robert
Electronic voting machines used by 90 percent of Dutch voters can be
easily tampered with, say Dutch e-voting researchers in a report published
Friday. The researchers say, "We don't trust voting computers. Anyone,
when given brief access to the devices at any time before the election, can
gain complete and virtually undetectable control over the election
results." Radio emanations can be studied to find out what votes were
being cast, according to the researchers, who also claim that all that is
needed to break into the Nedap/Groenendaal ES3B voting machine, the same
type used in France and Germany, is a key that can be purchased on the
Internet. The same type of key is also available for the Diebold voting
machines used in the U.S., according to Edward Felten, the director of
Princeton University's Center for Information Technology Policy. Felten
and his colleagues conducted a test in which they claim to have been able
to install vote-altering software on Diebold's AccuVote-TS machine in less
than a minute. While Diebold disputes these claims, Felten calls the
security problems facing e-voting "very difficult or even infeasible to
address." The manufacturer of the voting machine used in the Netherlands,
Nedap, claims that it is significantly more difficult to tamper with the
results of an e-voting system than a paper ballot system. When asked if
manipulation of their machines was possible, the company responded,
"everything can be manipulated." Edward Felten is a member of the
Executive Committee of ACM's U.S. Public Policy Committee;
http://www.acm.org/usacm
Bottlenecks in Parallel Programming Hurt
Productivity
Electronic News (10/10/06) Davis, Jessica
Difficulty programming the code on which supercomputers run is the major
impediment to productivity in engineering and scientific discovery,
according to a survey of 500 users of parallel high-performance computers
(HPCs) by the Simon Management Group. The survey found that writing
parallel code, programming efficiency, translation, debugging, and the
limits of software are the most common bottlenecks throughout all
industries using supercomputers. Even though C and Fortran are often used
for prototyping, those surveyed strongly indicated that an interactive
desktop tool would be preferable, if only it could be easily bridged to
work with HPCs. The dilemma stems from the fact that desktop machines
cannot handle the processing and memory requirements of the large data
sets produced by scientific and engineering research. The survey found
that the median data set used in a technical computing application today
ranges from 10 GB to 45 GB, and is expected to rise to 200 GB to 600 GB in
only three years. "The study demonstrates that programming tools have not
kept pace with the advances in the computing hardware and affordability of
HPCs," says Simon Management Group President Peter Simon.
WWW2007
Mark Little's Weblog (10/08/06) Little, Mark
The Web Services Track of the Sixteenth Annual International World Wide
Web Conference is accepting original papers describing work in all areas of
Web services. Possible topics include, but are not limited to: service
contract and metadata; orchestration; choreography and composition of
services; large scale XML data integration; dependability; security and
privacy; tools and technologies for Web Services development, deployment,
and management; software methodologies for service-oriented systems; the
impact of Web Services on enterprise systems; Web Services performance;
architectural styles for Web Services computing; application of Web
Services technologies in areas including e-commerce, e-science, and grid
computing; and impact of formal methods on Web Services. Papers are due
Monday, November 20, 2006, and acceptance will be announced on January 29,
2007. The conference will be held Tuesday-Saturday, May 8-12, 2007.
Papers will undergo three rounds of peer review by members of an
International Program Committee, and those accepted will be printed in the
conference proceedings published by ACM. For additional information go to
http://www2007.org.
EU and Industry Shun MIT-style Research Project
Financial Times (10/11/06) P. 3; Bounds, Andrew
Both national governments and industry have decided not to contribute any
funds toward the project of creating a European Institute of Technology
(EIT), intended to compete with MIT and the cutting-edge research it
contributes to the U.S. Jose Manuel Barroso, president of the European
Commission, still supports the plan, which he sees as a way to join
business and academic resources to benefit the EU economy, and will use
existing funds, including regional research budgets, to meet the 2.4
billion euro budget planned for 2008-2013. Approximately 1.3 billion
euros will be taken from structural funds intended to aid development in
poorer regions and train workers. There is a possibility of the Institute
bringing in 200 million euros from the products it develops, but money will
still be taken from the yet-to-be approved general research fund that
supports universities. "This will not gain the EIT any support in the
research community. They would prefer to have funding for concrete
projects than something that provides a promise for the future," says a
senior Commission official. The EIT is expected to be approved by the
Commission next week, and its board will be composed of business
professionals. Many opponents are critical of the central structure
proposed by Barroso. "There is already top-notch research going on across
Europe. How can we help the institutions that are there to co-operate
better and know what the others are doing?" asks one European diplomat.
The EIT is planned to have a staff to manage "knowledge communities,"
which some worry will interfere with the European Research Council, a
year-old body consisting of leading university scientists. Barroso
maintains that the EU is falling behind
because its "institutions are too small."
Spooky Steps to a Quantum Network
New Scientist (10/04/06) Merali, Zeeya
A technique known as quantum entanglement is being used in studies which
are beginning to overcome the two major problems facing quantum computer
technology. First, in order to be transmitted over long distances, quantum
bits (qubits) must be made into photons. Second, errors occurring during
transmission must be identified and corrected. Quantum entanglement links
particles together no matter how far apart they are; measuring a quantum
property of one particle immediately affects the other, and by doing so,
information can be "teleported" between pairs of entangled particles.
UCLA's Todd Brun explains that for a quantum network to exist, qubits must
be contained in atoms or ions, processed, and then turned into qubits of
light to be transmitted between computers. He claims that this can be done
by teleporting the state between a photon and an atom. "Potentially, the
only limit is how far light can travel without the signal becoming
degraded," says the University of Copenhagen's Eugene Polzik. Polzik notes
that quantum states can be easily distorted during transmission, and the
problem often lies in identifying which of the two possible errors is
present. Heisenberg's uncertainty principle helps explain why trying to
measure one type of error can create the other type of error, which Brun
says is a problem. But Polzik is impressed by what he calls "such
efficient quantum error correction codes" developed by Brun. The
breakthrough was a result of sharing entangled pairs of particles between
transmitter and receiver before the transmission of data. In theory, when
the receiver combines these particles with its entangled twins, it can
distinguish between both types of quantum error.
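For readers who want the two error types in standard notation, the
following is a minimal sketch in textbook quantum-information terms; it is
an illustration only, not material quoted from the article or from Brun's
published codes:

% Bit-flip (X) and phase-flip (Z) errors on a single qubit:
\[ X\lvert 0\rangle = \lvert 1\rangle, \qquad
   Z\lvert 1\rangle = -\lvert 1\rangle, \qquad
   XZ = -ZX. \]
% Because X and Z do not commute, both error types cannot be measured at
% once on the same qubit; this is the uncertainty-principle obstacle
% described above. Pre-sharing an entangled pair
\[ \lvert \Phi^{+}\rangle =
   \tfrac{1}{\sqrt{2}} \bigl( \lvert 00\rangle + \lvert 11\rangle \bigr) \]
% between transmitter and receiver gives the receiver undisturbed reference
% qubits against which both bit-flip and phase-flip errors can be diagnosed.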
$100 Laptop May Be at Security Forefront
Associated Press (10/09/06) Bergstein, Brian
In creating the $100 laptops planned to be distributed to 7 million
children around the world, software developers have instituted
groundbreaking security measures. The developers working on the One Laptop
Per Child project envision a computer that does not need virus protection,
because applications are run in a "walled garden," meaning that an
application does not have access to all files on the computer, unlike
conventional systems that are vulnerable to exploitation, theft or erasure
of information. "It's essentially unbelievably difficult to do anything to
the machine that would cause permanent hardware failure," says Ivan Krstic,
a software architect at One Laptop Per Child who focuses on security. The
specialized encryption technology serves as a security backup, preventing
the BIOS software, which runs when the computer is first turned on, from
being overwritten, so the computer cannot be rendered unbootable. When
the machine enters the child's school's wireless range, all data will be
backed up on a server. While these measures are believed to be effective,
children can tweak the computers and learn how they operate, meaning they
could potentially turn off the security measures. One potential concern
for developers is that the machines will be able to interact in a "mesh"
network, sharing data and programming code, but Krstic says
that this element would be "really scary if we were not paying attention to
it...But we think we have solutions to all of these problems." The
bright-colored, hand-cranked, wireless-enabled laptops will be distributed
in Thailand, Nigeria, Brazil, and Argentina.
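The "walled garden" idea, in which each application sees only its own
slice of the file system, can be sketched in a few lines. The code below
is a generic illustration and is not the laptop's actual security
implementation; the directory layout and the policy check are assumptions:

# Hypothetical sketch of a "walled garden" file policy; not OLPC's code.
import os

SANDBOX_ROOT = "/home/olpc/sandboxes"        # assumed layout

def open_for_app(app_name, path, mode="r"):
    """Allow an application to open files only inside its own sandbox."""
    sandbox = os.path.realpath(os.path.join(SANDBOX_ROOT, app_name))
    target = os.path.realpath(os.path.join(sandbox, path))
    if not target.startswith(sandbox + os.sep):
        raise PermissionError(f"{app_name} may not access {path}")
    return open(target, mode)

# open_for_app("journal", "notes.txt") stays inside the journal sandbox;
# open_for_app("journal", "../game/scores.txt") raises PermissionError.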
Video Searching by Sight and Script
Technology Review (10/11/06) Borrell, Brendan
University of Leeds computer scientist Mark Everingham has created a
system that can search for videos on the Web by actually knowing what
appears on screen. The system, which was developed by searching episodes
of "Buffy the Vampire Slayer" on YouTube.com, uses face recognition,
closed-captioning information, and original scripts to name the faces that
appear on screen. "We basically see this as one of the first
steps in getting automated descriptions of what's happening in a video,"
says Everingham, who presented his research at the British Machine Vision
Conference in September. Current searches are limited to metadata or text
descriptions written by the users who submit each video clip. By combining
the script, which reveals "what is said," with the subtitles, which reveal
"what time something is said," the program can identify speakers, says
Everingham. The program is able to recognize faces using distinct
features, as well as detect whether or not the person shown is the one
talking. The result is a detailed shot-by-shot annotation of the clip.
Oxford computer scientist Josef Sivic, who helped develop the system, says
the research could pave the way for more advanced search programs that
would be able to give descriptions of everything going on in a scene, such
as "Buffy and Spike walking toward the camera hand-in-hand." Alex Berg of
the University of California, Berkeley's Computer Vision Group says, "The
general idea is that you want to get more information without people having
to capture it." However, AOL Video's Timothy Tuttle warns that legal
barriers, similar to those that have dogged efforts to index print
material, could slow searchable video initiatives.
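The core alignment trick, using the subtitles for timing and the script
for speaker names, is easy to sketch. The snippet below is a generic
illustration with invented data; it is not Everingham's system and it
omits the face-recognition step entirely:

# Generic sketch: label subtitle timings with script speakers by matching
# the spoken text. Toy data; not Everingham's implementation.
import difflib

subtitles = [  # (start_sec, end_sec, text)
    (12.0, 14.5, "We have to stop the Hellmouth from opening."),
    (15.0, 16.2, "Again?"),
]
script = [     # (speaker, line)
    ("BUFFY", "We have to stop the Hellmouth from opening."),
    ("XANDER", "Again?"),
]

def best_speaker(sub_text):
    """Return the script speaker whose line best matches the subtitle."""
    best, best_ratio = None, 0.0
    for speaker, line in script:
        m = difflib.SequenceMatcher(None, sub_text.lower(), line.lower())
        if m.ratio() > best_ratio:
            best, best_ratio = speaker, m.ratio()
    return best

for start, end, text in subtitles:
    print(f"{start:6.1f}-{end:5.1f}s  {best_speaker(text)}: {text}")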
Leapfrogging the Petaflop Race
HPC Wire (10/06/06) Vol. 15, No. 40, Wenk, Herbert
The high performance computing race is heating up, following the September
announcement that Japan's RIKEN Institute plans to build a 10 petaflop
system within the next six years. The United States is considered to be
the favorite to develop the first petaflop computer, but Japan views
supercomputers as "Key Technology of National Importance" and is not
focused on watching out for nuclear stockpiling activity. The first
petaflop system could be built within the next couple of years, and it
could very well have a mixed hardware environment that is similar to the
special purpose hardware that RIKEN uses in its MD-GRAPE3 machine, which is
used for molecular dynamics and multi-body calculations. Earlier in the
year, a system based on the chip setup performed at a level that surpassed
1 petaflop. Dr. Mitsuyasu Hanamura, head of the applications software
group at the RIKEN Next-Generation Supercomputer R&D Center, says the
system's architecture could consist of scalar nodes, vector computers, and
special purpose computers. Japan says the Next-Generation Supercomputer
Project will lead to new research discoveries, advances in science and
engineering, and helpful predictive models, while strengthening the
economy and industry, improving medical care, and making the nation safer.
The government is contributing a grant of approximately 750 million euros
to the project.
Novel Workflow Language Tackles Climate Change Computing
Challenge
Innovations Report (10/06/06) Goode, Matt
The BBC Climate Change Experiment has taken a unique approach to handling
the complex distribution of large datasets for analysis. The experiment is
using a workflow language that is able to adjust to the specific needs of
the data at runtime and dynamically accommodate any changes in the location
or subdivision of data. Oxford University researcher Daniel Goodman
developed the workflow language, Martlet, in response to the experiment's
use of climateprediction.net, the major e-Science project in the United
Kingdom that relies on the spare computer capacity of more than 200,000
people around the world to model the Earth's climate. Distributing data
across climateprediction.net servers in different parts of the globe was
problematic because the dataset is too large to return to a single
location for analysis and because it is split into a varying number of
pieces. "Existing workflow languages are not up to the
task because they implement a style of programming where the number of data
inputs and the paths of data flow through the workflow are set when the
workflow is submitted," explains Goodman. "This makes them unable to cope
with subsequent changes to the dataset." Martlet, which is based on an
alternative programming style not typically used in workflow languages, could
influence other researchers to take a similar approach to developing
sophisticated algorithms.
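The contrast Goodman draws, workflows whose number of inputs is fixed at
submission versus workflows that adapt at runtime, can be illustrated
without Martlet's own syntax. The Python sketch below is a conceptual
stand-in with invented server names and a trivial analysis; it simply
discovers however many partitions exist when it runs, maps over them, and
folds the partial results:

# Conceptual sketch in Python (not Martlet syntax): the number of data
# partitions is discovered at runtime, so the same workflow works however
# the dataset happens to be split across servers.
def discover_partitions():
    # Stand-in for querying the data servers; these names are invented.
    return ["server-a/part0", "server-a/part1", "server-b/part2"]

def analyse(partition):
    # Placeholder per-partition analysis returning (sum, count).
    data = [1.0, 2.0, 3.0]        # pretend this was fetched from `partition`
    return sum(data), len(data)

def combine(a, b):
    return (a[0] + b[0], a[1] + b[1])

partials = [analyse(p) for p in discover_partitions()]   # map step
total, count = partials[0]
for p in partials[1:]:                                    # fold step
    total, count = combine((total, count), p)
print("global mean:", total / count)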
Tactile Passwords Could Stop ATM 'Shoulder
Surfing'
New Scientist (10/06/06) Simonite, Tom
A new system for entering a PIN at an ATM is being developed that is based
on feel rather than sight. The practice of
"shoulder-surfing," where someone watches numbers typed by an ATM user, is
the target of new technology being devised by computer engineers at Queen's
University in Belfast. Users would move a pointer over a grid of nine
blank spaces displayed on the ATM screen using their fingertips. When they
pass over a different box, the tactons (tactile patterns) beneath their
fingertips change. When the user feels each specific pattern of their
code, they click. "The tactile displays are under your fingertips so
there's less chance of an observer 'shoulder-surfing,'" says Ravi Kuber,
who created the system
with colleague Wai Yu. Rather than remembering a number, ATM users would
have to remember the feel of four distinct patterns, each formed by an
array of nine pins that can produce many unique configurations. "Even if
someone tried to share
their information, there's no guarantee another person could replicate it,"
says Kuber. The feasibility of this technology is being tested. In one
study, 16 subjects used the tactile system to log into their computers
every day for two weeks; after two weeks of not using it, they were still
able to remember their code and sign in by the second attempt, but the
average sign-in time was 38 seconds. "Finding patterns that aren't too hard to
identify is the biggest problem...an array of nine pins is crude compared
to our sense of touch, there's no reason the hardware couldn't be
improved," Kuber says. The system was presented last month at the British
Computing Society's Human Computer Interaction Group conference at Queen
Mary of London.
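As a rough model of the scheme (the assumptions here are illustrative, not
the researchers': a tacton is treated as nine pins that are each raised or
lowered, and a code is an ordered sequence of four tactons), the
arithmetic and a toy verifier look like this:

# Toy model of a tactile PIN; assumptions are illustrative only.
from itertools import product

ALL_TACTONS = list(product([0, 1], repeat=9))   # 2**9 = 512 pin patterns
print(len(ALL_TACTONS), "patterns,",
      len(ALL_TACTONS) ** 4, "possible four-pattern codes")

secret_code = [ALL_TACTONS[5], ALL_TACTONS[77],
               ALL_TACTONS[300], ALL_TACTONS[9]]

def verify(entered):
    """True only if the right patterns were clicked in the right order."""
    return entered == secret_code

print(verify(secret_code))                   # True
print(verify(list(reversed(secret_code))))   # False; order matters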
Scientists Build Better Navigation Aids
Associated Press (10/08/06) Bluestein, Greg
Portable devices are being developed that will help people find their way
around on foot, accurate to a much higher degree than current GPS
navigation devices that cannot tell the difference between details such as
walls and paths. Scientists at the Georgia Institute of Technology are
working on a product that could help blind users find things as specific as
doors or a bathroom indoors and outdoors. The System for Wearable Audio
Navigation (SWAN) would include a headband holding sensors and a wearable
computer. Light meters and thermometers would distinguish between indoors
and outdoors, cameras would determine distance, a compass would determine
direction, and an inertia tracker would track the user's orientation.
The headset would create audible blips that would quicken as a
predetermined object or destination is approached, much like sonar.
Bone-conducting headphones could be worn right behind the ear to keep
the user's ears unobstructed. "It's going to take time...But getting floor
plans for buildings is possible. We're trying to show that given a map, we
can show the blind how to get places," says Bruce Walker, an assistant
psychology professor who contributed to the development of SWAN. The
project will not be completely finished until 2010 and could still face the
limitations of GPS, including limited range indoors. "We all know that GPS
is a marvelous addition to our array of options...But it does have
limitations as far as accuracy goes. If they could come up with some way
to make the system more accurate, it would be appealing to a lot of
people," says Melanie Brunson, director of the American Council for the
Blind. Other uses for this technology include guiding emergency response
teams and soldiers on unfamiliar ground, Walker says.
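The sonar-like audio cue, blips that arrive faster as the target gets
closer, is simple to model. The mapping below is a generic illustration
and not SWAN's actual implementation; the 20-meter range and linear
scaling are assumptions:

# Illustrative distance-to-blip mapping; not SWAN code.
def beep_interval_seconds(distance_m, max_interval=2.0, min_interval=0.1):
    """Linear mapping: 2 s between blips at 20 m or more, 0.1 s at 0 m."""
    frac = min(max(distance_m / 20.0, 0.0), 1.0)
    return min_interval + frac * (max_interval - min_interval)

for d in (25, 20, 10, 5, 1, 0):
    print(f"{d:>3} m -> blip every {beep_interval_seconds(d):.2f} s")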
Faster Development Through Modeling
Dr. Dobb's Journal (10/05/06) Cahoon, Jeff
Development can be accelerated through a modeling method that employs free
tools and OMG's Model-Driven Architecture (MDA), writes CubeModel founder
Jeff Cahoon. The method, which can be used for any application with a set
of repeated steps, has five components: Creation of the application model;
writing of a miniapplication that deploys the first instance of the
repeated set of steps; deconstruction of the miniapplication into template
files that supplant the named parts in the repeated set of steps with
recognizable strings; writing of code that can reassemble the
miniapplication from the templates and model; and generation of all the
code for the entire application. Cahoon says the applications most likely
to benefit from the modeling-with-code-generation technique are those with
many repetitive parts, of which a data warehouse is a prime example. "An
enticing aspect of the technique is that the method is not
theoretical," notes the author. "There is no need to wait for tools or
guess at the details of how it works--all of the components and artifacts
for a working data warehouse application are available for review and a
test drive, and all the tools are either open source or available free for
the purpose of making a prototype." Cahoon also points out that the use of
modeling with code generation means fewer typos and bugs, faster
implementation of requirements changes, improved modeling with objects in
the proper context, actual construction of what is designed by developers,
and a better possibility of exploiting future tools by following
standards.
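The template step, replacing the named parts of the hand-written
miniapplication with recognizable strings and then re-expanding them for
each model element, is the heart of the method. The fragment below is an
invented data-warehouse example, not one of Cahoon's actual artifacts, but
it shows the mechanic:

# Invented illustration of template-driven code generation.
TEMPLATE = """\
CREATE TABLE %%TABLE%% (
    %%TABLE%%_id  INTEGER PRIMARY KEY,
    %%COLUMNS%%
);"""

model = {                                   # toy "application model"
    "customer_dim": ["name TEXT", "region TEXT"],
    "sales_fact":   ["amount REAL", "sold_on DATE"],
}

def generate(table, columns):
    cols = ",\n    ".join(columns)
    return TEMPLATE.replace("%%TABLE%%", table).replace("%%COLUMNS%%", cols)

for table, columns in model.items():
    print(generate(table, columns))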
Raytheon Engineer Wins USC Software Award
USC Viterbi School of Engineering (10/03/06)
The University of Southern California Center for Systems and Software
Engineering (CSSE) is awarding its Lifetime Achievement Award to Gary D.
Thomas for his work in developing widely used tools for calculating the
cost and time required for the development of software. Thomas, an
Engineering Fellow at Raytheon Intelligence & Information Systems in
Garland, Texas, was awarded for his "seminal contributions to software cost
models," according to a commendation to be presented as a CSSE forum in
late 2006. The system he began working on in the 80s, and still does,
Constructive Cost Modeling (COCOMO), made him "a role model for many cost
estimation researchers and practitioners," says CSSE co-director and USV
Viterbi School of Engineering department of computer sciences director
Barry Boehm. "Much of the clarity, consistency, and relevance of the model
relationships and data definitions in the COCOMO family can be traced to
Gary Thomas' contributions, creativity, and experience in applying the
models in wide varieties of applications and situations," adds Boehm.
Thomas developed a customized version of COCOMO, called SECOST, for
Raytheon, which has "become the industry standard for estimating systems
engineering cost and has provided a competitive advantage by providing a
framework for establishing an estimate in much shorter time," says a
colleague, John E. Reiff. Professor Stan Settles, co-director of CSSE,
says "Thomas' talent has supported CSSE's mission of Evolving and Unifying
Theories and Practices of Systems and Software Engineering; and whose work
has stood the test of time."
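SECOST itself is Raytheon's own calibration, but the COCOMO family it
builds on is public. As a rough illustration of the model family only (the
coefficients below are the classic basic-COCOMO values from Boehm's
published work, not SECOST's), effort in person-months is a power law of
code size:

# Basic COCOMO effort estimate (classic published coefficients); an
# illustration of the model family, not Raytheon's SECOST calibration.
COEFFS = {                    # (a, b) in  effort = a * (KLOC ** b)
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def effort_person_months(kloc, mode="organic"):
    a, b = COEFFS[mode]
    return a * kloc ** b

for mode in COEFFS:
    pm = effort_person_months(50, mode)
    print(f"{mode:13s}: {pm:7.1f} person-months for 50 KLOC")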
Adapting Multimedia Quality to the User Device
IST Results (10/09/06)
Partners involved in the DANAE project in Europe are helping to develop a
new file format that would adjust video quality on the fly for cell phones,
PDAs, media players, televisions, and game consoles that are used to access
the Web and download from the Internet. The new file format is designed to
optimize video for a multimedia device, and adapt media content for the
device in real time. "It means that, for example, the quality of the
content improves or reduces as coverage improves or degrades," says Renaud
Cazoulat, coordinator of the DANAE project. "In the home, it means that
bandwidth is shared optimally between all the devices accessing content."
The solution makes use of a chain approach in which a master file on a
server holds the content, and a media gateway performs the heavy video
encoding and adjusting of content, before delivering it to multimedia
devices. The DANAE approach uses the scalable video coding (SVC) media
standard for quick streaming that would be the equivalent to broadcasting
online. Although new microchips for the solution still need to be
developed, the technology could begin to appear by the end of 2007.
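The adaptation Cazoulat describes maps naturally onto SVC's layered
bitstreams: a base layer plus enhancement layers, with layers dropped as
bandwidth shrinks. The sketch below is a generic illustration; the layer
names and bitrates are invented and are not DANAE's gateway logic:

# Generic SVC-style layer selection; invented bitrates, not DANAE code.
LAYERS = [                # (layer, cumulative kbit/s needed to include it)
    ("base (QCIF)",         128),
    ("+ CIF enhancement",   384),
    ("+ SD enhancement",   1500),
    ("+ HD enhancement",   4000),
]

def select_layers(bw_kbps):
    chosen = [name for name, needed in LAYERS if needed <= bw_kbps]
    return chosen or [LAYERS[0][0]]    # always send at least the base layer

for bw in (5000, 1600, 400, 100):
    print(f"{bw:5d} kbit/s -> {select_layers(bw)}")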
Googling for Code
Technology Review (10/09/06) Greene, Katie
Google says its new source-code search engine, Google Code Search, will
allow developers to solve problems more easily and thus produce better
products faster. The free service will make billions of lines of code
available, a good deal of which has not been searchable, including code in
.zip and .tar formats. "The first thing someone does when writing a new
piece of software is to search for existing things that are related," says
Google's Tom Stocky. Searches can be limited to any of 33 programming
languages and 18 different licenses. "Programmers can create really
advanced queries that can search for obscure function definitions," he
says. Stocky says Google makes an effort to detect licenses for each piece
of code, but at times none can be detected. "For anyone who didn't want
their code to be posted publicly, we have methods for them to remove it,"
he adds. By allowing programmers to check if anyone has already written
the same code they are developing, Google Code Search should actually help
prevent plagiarism. Two other companies, Koders and Krugle, also offer
code search engines. Krugle founder Ken Krugler says programmers spend 20
percent to 27 percent of their time looking for reusable code. He says,
"Everyone talks of code reuse as being the silver bullet to the problem of
improving the software creation process...to me search is a key part of
that." Google Code Search has already been available to Google engineers,
and Stocky expects it will invigorate open-source development by providing
programmers with "one place where they can do [comprehensive code searches]
quickly." At this point, users can add code that has been missed.
Cobol: Not Dead Yet
Computerworld (10/04/06) Mitchell, Robert
Although Cobol is widely considered an outdated programming language, its
use is still widespread, according to a recent Computerworld survey.
"Nobody wants Cobol, but realistically they can't get rid of it," says
Gartner's Dale Vecchio. The survey found that 62 percent of 352 responding
IT managers use Cobol, although 36 percent say they plan to gradually move
away from it, while 25 percent say rewriting all the code is too expensive.
Cobol has been around since 1960, but its procedural approach is not well
suited to writing interactive programs and Web-based front ends. However,
rewriting mainframe-based Cobol programs is a large and risky undertaking
that most organizations are carrying out with great caution. "What are you
getting for the expense?" says Mike Dooley, a software engineering manager.
"You have to have a valid business reason to do that." Vecchio says the
combination of transferring and rewriting Cobol applications, which require
as much as five times as many lines of code as Java or C#, in a single step
is a "recipe for disaster." New applications are being written in more
recent languages, unless they require batch processing, for which Cobol is
still utilized. The developers who wrote the original Cobol applications
are mostly retired, leaving the transition to those who are unfamiliar
with the rules under which the code was written, a gap often discovered
only once rewriting is underway. Instead, some have chosen to insulate themselves
from the back-end through links to Web applications. "If we get into
Cobol.Net, then the Visual Basic .Net application can call the same
routines...without having to jump through hoops," says Dooley. After
moving Cobol applications off of mainframes with as few changes as
possible, many companies are taking the opportunity to reevaluate and
restructure applications.
Content-Sharing Apps Complicate Code Debug
EE Times (10/02/06) No. 1443, P. 1; Lammers, David; Mannion, Patrick
Among the technical challenges of peer-to-peer content sharing on mobile
devices is keeping the ever-growing lines of software code secure and
bug-free. "The problem is that [debug] is not considered 'productive'
work: If you write good code, then you shouldn't have to debug," notes
Silicon Insider analyst and editor Jim Turley. "But designers spend more
time debugging than coding." Complicating matters is a relative paucity of
coders, the advent of multicore processing, and the growing vulnerability
to hacking and other security issues that are a natural consequence of
increasing connectivity. The proliferation of social networks requires the
distribution of data to disparate platforms via middleware, as well as
novel development tools and service libraries, says Encirq's Jan Liband.
"As more data presents itself to be managed, the challenge is to build a
process to handle all of these different data formats, coming in from
sensors, from Bluetooth and from the Internet," he explains. Derek
Ledbetter of ADI says debug and data-logging methods far more advanced
than current techniques must be provided. This trend is spawning a strong
market for sellers of software development and code analysis products.