
ACM TechNews sponsored by AutoChoice Advisor. Looking for a NEW vehicle? Discover which ones are right for you from over 250 different makes and models. Your unbiased list of vehicles is based on your preferences and years of consumer input.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to [email protected].
Volume 7, Issue 789:  Monday, May 9, 2005

  • "Push to Replace Voting Machines Spurs Confusion" USA Today (05/09/05) P. 6A; Drinkard, Jim

    Local election officials are at a loss over what voting machine technology to purchase with federal grants as the deadline for using the money approaches. Governments have until Jan. 1 to purchase new equipment that will improve accuracy, but academic experts and state officials continue to offer different--sometimes changing--opinions about which voting technology is most reliable. "The people who are trying to get this done at the local level are just running blind," warns Ohio Association of Election Officials President Keith Cunningham. Without federal guidance and clear leadership at the state level, the $2.3 billion allocated in 2002 for overhauling voting infrastructure could be wasted, he says. Backup methods such as paper receipts are particularly controversial. While officials and experts agree that computerized voting systems need some form of backup, there is no clearly preferred method. A new Massachusetts Institute of Technology (MIT) study found that long-favored paper receipts may not be the best way of ensuring voters' ballots were read correctly. The study provided ballot feedback to 36 test voters through paper receipts and voice recordings. Test subjects identified just 8% of the introduced errors with the paper receipts, but caught 85% of the errors when the results were provided through voice recordings. MIT researcher Ted Selker, who led the study, says he observed similar reactions in recent Chicago and Nevada elections that used paper receipts for computerized voting systems. "I have lost confidence in paper trails," he says. In addition, the federal Election Assistance Commission will not offer recommendations on paper receipts in an upcoming technical standards report.
    Click Here to View Full Article

    For information detailing ACM's e-voting activities, visit http://www.acm.org/usacm.

  • "U.S. Dominates Supercomputing, But for How Much Longer?"
    Federal Computer Week (05/09/05); Sternstein, Aliya

    Congressional legislation currently under review would boost government funding for high-performance computing, even as some departments are reducing their research investment in that area. The House recently sent the High-Performance Computing Revitalization Act of 2005 to the Senate, amending the High-Performance Computing Act of 1991. The bill would require the National Science Foundation, the Energy Department, and other federal agencies to guarantee that U.S. researchers have access to the most advanced supercomputers. The White House Office of Science and Technology Policy director would also be placed in charge of federal supercomputing efforts. Academic programs that support high-performance computing would also be expanded, including software engineering, computer and network security, and other computer science fields. House Science Committee Chairman Rep. Sherwood Boehlert (R-N.Y.) said the U.S. risks losing its dominance in the field of high-performance computing unless federal backing for that discipline is secured. Energy's fiscal 2006 budget cuts the National Nuclear Security Administration's Advanced Simulation and Computing Program funding by 5 percent, while the Office of Science's Advanced Scientific Computing Research program is slated for an 11 percent funding reduction in 2006. Funding cuts could endanger the federal high-performance computing ecosystem, warns University of North Carolina at Chapel Hill CIO Dan Reed.
    Click Here to View Full Article

    To read the High-Performance Computing Revitalization Act in full, visit http://www.acm.org/usacm.

  • "Court Tosses FCC Rules On Copying, Sharing TV"
    Washington Post (05/07/05) P. E1; Krim, Jonathan

    The Federal Communications Commission (FCC) overstepped its authority with the "broadcast flag" rule that would have imposed device design requirements for digital television, according to a federal appeals court ruling. The D.C. Circuit U.S. Court of Appeals said the FCC is meant to regulate transmissions, not the design of devices that receive them. The broadcast flag rule would have imposed requirements on new equipment that records or plays back digital programs, though existing televisions would still be able to receive incoming transmissions, and video cassette recorders would not have been affected. Hollywood studios said the rule was needed to protect digital broadcasts from being pirated over free broadcast networks; such digital programming is already protected on cable and satellite television because providers scramble the signal. Critics of the FCC's decision said the rule would not prevent piracy and would unnecessarily require consumers to buy new equipment, but the court ruled primarily on the principle of FCC authority. The Motion Picture Association of America is likely to renew lobbying efforts in Congress, which to date has declined to legislate technical standards that would secure digital television over free broadcast. The Information Technology Association of America (ITAA) supported the federal appeals court decision, even though consumer electronics makers would have benefited from the FCC rule in the short term. An ITAA statement said higher-quality digital broadcasts would spur new market growth and opportunities for content producers in the same way video recorders and DVD players have.
    Click Here to View Full Article

  • "Professor Believes Software Can Determine Quality Work"
    Associated Press (05/08/05); Sedensky, Matt

    More educators are using automated analysis programs to help grade student essays. University of Missouri-Columbia sociology professor Ed Brent created a program called SAGrader with support from the National Science Foundation. He uses the software to identify basic, necessary components in student essay drafts. Students are encouraged to submit their draft versions to SAGrader to improve their chances with the final version, which Brent and two assistants grade in person. The idea is to eliminate common errors and enable reviewers to focus on more important aspects, such as whether the student makes a good argument, demonstrates understanding, and shows creativity. Such software is becoming widespread in middle schools, high schools, and universities, and is even used to score essays submitted with the GMAT business-school admission test. Automated essay-testing software could be seen as a technological counterweight to the Internet, which allows students to quickly research topics. Indiana higher education commissioner Stan Jones admits the software is not as good as a teacher, but says it cuts costs, saves time, and allows teachers more leeway in assigning writing work. But machine scoring introduces new problems as well, such as students writing to satisfy the software's criteria instead of making the assigned points. University of California at Davis lecturer Andy Jones tested a commercial product by inputting a letter of recommendation but replacing the name with a keyword--his letter scored a five out of six. With the word "chimpanzee" inserted at random, the score improved to six, probably because unusual words were equated with better writing (a simplified sketch of this kind of keyword-and-vocabulary scoring follows this item). All automated analysis programs currently available still require significant preparation on the part of teachers.
    Click Here to View Full Article
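
    The "chimpanzee" anecdote above shows how naive feature-based scoring can be gamed. As a purely illustrative sketch--not SAGrader's actual method, whose internals the article does not describe--the C fragment below scores an essay by checking for an assumed list of expected concept keywords and adding a crude "sophisticated vocabulary" bonus based on average word length, exactly the kind of heuristic a rare word like "chimpanzee" can inflate.

      /* Hypothetical keyword-based essay scorer: an illustration of the
       * heuristics described above, not the actual SAGrader algorithm. */
      #include <stdio.h>
      #include <string.h>
      #include <ctype.h>

      /* Concepts the grader expects the essay to mention (assumed list;
       * matching in this toy is case-sensitive).                        */
      static const char *expected[] = { "socialization", "norms", "status", "role" };

      static int contains(const char *text, const char *word)
      {
          return strstr(text, word) != NULL;
      }

      /* Score = one point per expected concept found, plus a bonus for
       * average word length (a crude vocabulary proxy, and the loophole
       * the "chimpanzee" experiment exposed).                           */
      static double score_essay(const char *text)
      {
          double score = 0.0;
          for (size_t i = 0; i < sizeof expected / sizeof expected[0]; i++)
              if (contains(text, expected[i]))
                  score += 1.0;

          long letters = 0, words = 1;
          for (const char *p = text; *p; p++) {
              if (isalpha((unsigned char)*p)) letters++;
              else if (*p == ' ') words++;
          }
          score += (double)letters / (double)words / 10.0;
          return score;
      }

      int main(void)
      {
          printf("%.2f\n", score_essay("socialization teaches norms and status."));
          printf("%.2f\n", score_essay("socialization chimpanzee norms "
                                       "antidisestablishmentarianism status role."));
          return 0;
      }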

  • "H-1B: Patriotic or Treasonous?"
    InfoWorld (05/06/05); Schwartz, Ephraim; Gross, Grant

    Politicians, academics, labor representatives, and industry leaders are engaged in a fierce debate about whether the H-1B visa program harms U.S. workers or empowers U.S. businesses. At a recent panel discussion in Washington, D.C., Microsoft Chairman Bill Gates said, "The whole idea behind the H-1B thing is, 'Don't let too many smart people come into the country.'" Gates suggested eliminating H-1B visa caps--currently set at 65,000 per year--would increase the competitiveness of the U.S. technology industry. But University of California, Davis, computer science professor Norman Matloff says the H-1B program boils down to regulating "cheap labor." Even though H-1B rules stipulate that workers must possess needed skills and be paid on equivalent pay scales, U.S. companies are often able to hire experts at less-than-premium cost, says Matloff. In addition, arguments that China, India, and Russia produce more engineers per capita than the United States ignore those engineers' quality and the positions they take, he says. Some foreign nationals seeking to live in the United States claim companies take advantage of the green-card application process, since applicants are dependent on the company for green-card sponsorship. Programmers Guild President Kim Berry also claims companies hire young foreign workers instead of more experienced Americans, and favor H-1B hires who are less likely to invest in matching 401(k) plans. High-tech employers say they continue to face a shortage of qualified workers; TechSource executive Randy Williams says the quality of U.S. college graduates, especially in computer engineering, has fallen over the past 10 years. Government officials seem to support limits on H-1B visas, citing continued high unemployment rates among U.S. IT workers.
    Click Here to View Full Article

  • "A Virtual World With Peer-to-Peer Style"
    CNet (05/09/05); Borland, John

    A number of developer groups are merging online "massively multiplayer" gaming with the peer-to-peer computing model, hoping to create a radically new mode of online interaction similar to that depicted in Neal Stephenson's "Snow Crash" science fiction novel. That book described a "Metaverse" as rich and complicated as the real world, where people could visit one another's virtual spaces instead of simply exchanging emails, for example. France Telecom researchers are working on a project called Solipsis that aims to enable this type of scenario by tapping participants' computers to support the virtual world, as opposed to centralized systems run by companies. Centralized virtual worlds are bounded by the creativity of company employees, but a peer-to-peer approach would allow anyone to contribute their own ideas. The difference between the centralized and peer-to-peer virtual worlds would be similar to the difference between a Web page and the entire Web, said one France Telecom researcher. The Open Source Metaverse Project is working with similar concepts, and is further along in terms of providing the developer tools to create virtual spaces. Linden Labs has already signed up 28,000 users for its Second Life virtual world, which is run on a distributed model and enables participants to create their own mini-worlds and even host their own games. Linden Labs uses a distributed network of servers to run Second Life, with each server governing what happens on a 16-acre plot of land (a simplified sketch of this kind of spatial partitioning follows this item). In theory, those servers could be administered by anyone, but Linden Labs maintains control in order to provide reliability. Virtual world developer Crosbie Fitch says there is an anarchic edge surrounding the peer-to-peer virtual world concept, where copyright infringement and dangerous activities are likely to take place.
    Click Here to View Full Article
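
    As referenced above, a minimal sketch of the kind of spatial partitioning Second Life's distributed model implies--one server per plot of land--is shown below. The 16-acre plot size is the only figure taken from the article; the square-grid layout, world dimensions, and server numbering are assumptions made for illustration.

      /* Illustrative land-based partitioning: map a world position to the
       * index of the server that "owns" that plot.  Only the 16-acre plot
       * size comes from the article; everything else is assumed.         */
      #include <stdio.h>
      #include <math.h>

      #define ACRE_SQ_METERS 4046.86
      #define PLOT_ACRES     16.0

      /* Side length of one square 16-acre plot, in meters (about 254 m). */
      static double plot_side(void)
      {
          return sqrt(PLOT_ACRES * ACRE_SQ_METERS);
      }

      /* Given a position and the number of plots per row in a square
       * world, return the index of the server responsible for it.     */
      static int server_for(double x, double y, int plots_per_row)
      {
          int col = (int)(x / plot_side());
          int row = (int)(y / plot_side());
          return row * plots_per_row + col;
      }

      int main(void)
      {
          /* A 10-by-10-plot world handled by 100 servers (assumed numbers);
           * in a peer-to-peer variant each index could map to a volunteer
           * machine rather than a company-run server.                      */
          printf("position (300, 900) is handled by server %d\n",
                 server_for(300.0, 900.0, 10));
          return 0;
      }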

  • "Technology for Social Inclusion: An Interview With Mark Warschauer"
    Digital Divide Network (05/04/05); Raven, Francis

    The term "digital divide" wrongly suggests a purely digital solution, says educational IT expert Mark Warschauer. Instead of focusing solely on access to technology, solutions should consider the social, political, and economic context. Warschauer is an assistant professor of education and of information and computer science at the University of California, Irvine, and recently wrote a book titled "Technology and Social Inclusion: Rethinking the Digital Divide." Technology can effectively address social problems, but it requires social supports such as language literacy, computer skills, and a plan to maintain the technology. Warschauer suggests a "technology for social inclusion" approach that focuses on allowing individuals, families, and communities to better control their lives through the use of technology. Unlike the idea of bridging divides, social inclusion addresses barriers that even well-to-do people might face, such as political persecution or discrimination. The digital divide concept also ignores the gradated degrees of access people have to information. Just because someone can go to an Internet cafe does not mean they have complete access to information, for example. Not understanding at least one of the world's major languages is another factor that could render a computer useless. Many people think of educational technology according to a "fire model," in which the mere presence of computers will bring benefit, just as a fire brings warmth. However, many schools currently have computer equipment that is unused or used poorly because support is lacking. Educators should also consider pedagogy, curriculum, training, software, and maintenance in order to make computers effective.
    Click Here to View Full Article

  • "Virtual Music Box Makes Sound Visual"
    Discovery Channel (05/05/05); Staedter, Tracy

    Researchers from Sao Paulo, Brazil, are using human-computer interface technology to enable people to interact with a virtual environment that renders their movements and sounds as visible vibrations and corresponding sound. The work of artist Daniela Kutschat Hanns, coordinator of post-graduate studies at SENAC Communication and Art College, and Rejane Cantoni, an associate professor in technology and digital media at the Catholic University, is on display at the Beall Center for Art and Technology in Irvine, Calif. The art exhibit, Op_era: Sonic Dimension, acts as a virtual music box in which computer software picks up whispers, giggles, footfalls, and other activity, and produces a corresponding vibration and sound. The exhibit consists of a 10-foot-by-13-foot blackened room, projectors casting images of 300 white strings onto three black screens, an overhead microphone picking up sound, and 80 motion sensors above the screens capturing movement. "It's very different from sitting passively in a darkened room looking at a screen, rapidly pushing buttons," says Simon Penny, professor of art and engineering at the University of California, Irvine.
    Click Here to View Full Article

  • "Wireless Developers Use Diverse Toolset"
    Application Development Trends (05/04/05); Gates, Lana

    Applications that wireless developers are working with continue to evolve, and signs indicate that there will be a dynamic market in the years to come, according to a survey of nearly 500 wireless developers by Evans Data. The survey reveals that the chance to mobilize applications more easily would prompt 56 percent of wireless developers to consider changing programming languages or tools. Also, 78 percent of wireless developers are writing new applications from the ground up, while 58 percent are extending legacy applications. Although the survey found that wireless developers on average are developing for six different platforms, the data also suggests that wireless developers may not be pleased with the tools available to them, particularly mobile device emulators. The survey also reveals that 75 percent of wireless developers are building solutions that extend mission-critical applications, and more than half of these are extended to mobile devices. Meanwhile, Evans Data found that 25 percent favor XML/SOAP/XML-RPC over ADO.NET or messaging for connecting back-end applications to wireless applications, and that over the past six months VoiceXML use has doubled to 23 percent while VoIP use has risen 53 percent.
    Click Here to View Full Article

  • "Speech Technology for Persons with Disabilities: Are We Breaking Down Barriers or Creating New Ones?"
    Speech Technology (05/03/05); Thompson, Terry

    University of Washington senior computer specialist Terry Thompson writes that speech technologies' mainstream penetration can make life easier for persons with disabilities, but it can also complicate matters because some disabilities may limit the technologies' usefulness. For example, a person with impaired speech may not be able to use devices or systems that operate by speech input, while a hearing-impaired user could have little use for devices or systems that supply speech output. A universal design approach is needed to address such limitations, and Thompson recalls that three of the nine speech applications developed by teams at SpeechTEK 2004 were successful in this regard by providing the user with multimodal functionality, enough time to respond to questions, and the means to pause and resume the application when necessary. Universal design also has a basis in law, through legislation such as Section 508 of the Rehabilitation Act and Section 255 of the Telecommunications Act of 1996. Technologies that include software applications and Web-based intranet and Internet information and applications are covered by amendments to Section 508 requiring accessibility of electronic and information technology purchased, maintained, or employed by the federal government. Section 255, which encompasses voice mail and interactive voice response (IVR) systems, requires telecom equipment manufacturers to ensure the accessibility of their products to people with disabilities. Accessibility issues, especially those pertaining to IVR systems and services, are the focus of a forum organized by the Alliance for Telecommunications Industry Solutions. Thompson says the outlook for universal accessibility is bright as the mainstreaming of speech technology gains momentum.
    Click Here to View Full Article

  • "Grassroots Supercomputing"
    Science (05/06/05) Vol. 308, No. 5723, P. 810; Bohannon, John

    Public distributed computing efforts have grown significantly since SETI@home was launched in 1999, enabling scientists from a number of fields to tap tremendous computing power. "There simply would not be any other way to perform these calculations, even if we were given all of the National Science Foundation's supercomputer centers combined," says Stanford University chemical biologist Vijay Pande, whose Folding@home program helped simulate how the BBA5 protein folds. The idea of harnessing unused PC cycles for distributed computing has been around since the early 1980s, when hundreds of machines were put to work at universities. The Great Internet Mersenne Prime Search (GIMPS) program was an early distributed computing effort popular among hackers and other computer enthusiasts. But software designer David Gedye was the first to conceive of much larger efforts that called on public volunteers, and worked with colleagues to create SETI@home for the University of California, Berkeley's SERENDIP SETI project, which analyzes deep space radio signals from the Arecibo Observatory in Puerto Rico. By 2001, SETI@home was installed on 1 million desktops. Scientists have launched a number of other efforts, such as the LHC@home program that is helping design the CERN Large Hadron Collider. Scientists in that project encountered an unusual problem in which different PC types returned different results for the same model; the glitch was caused by the lack of a standard method for handling mathematical rounding errors across different machines (an effect illustrated in the sketch following this item). Estimates put the share of Internet-connected PCs with distributed computing programs installed at just 1 percent. The Berkeley Open Infrastructure for Network Computing (BOINC) template is helping scientists create new programs at much lower cost.
    Click Here to View Full Article
    (Access to this article is available to paid subscribers only.)
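
    The LHC@home glitch mentioned above--identical work units producing different answers on different PCs--is typical of what happens when floating-point arithmetic is evaluated slightly differently by different processors, compilers, or math libraries. The toy C program below (not LHC@home code) shows one facet of the underlying problem: floating-point addition is not associative, so any platform-dependent change in the order or precision of operations can change the result.

      /* Toy demonstration of why bit-identical results across heterogeneous
       * PCs require care: floating-point addition is not associative, so a
       * platform- or compiler-dependent change in evaluation order changes
       * the answer.  Not LHC@home code.                                    */
      #include <stdio.h>

      int main(void)
      {
          double a = 1e17, b = -1e17, c = 1.0;

          double left_to_right = (a + b) + c;  /* (1e17 - 1e17) + 1 = 1.0 */
          double right_to_left = a + (b + c);  /* the 1 is lost to rounding: 0.0 */

          printf("(a + b) + c = %.1f\n", left_to_right);
          printf("a + (b + c) = %.1f\n", right_to_left);
          /* BOINC-style projects commonly cope with such discrepancies by
           * validating redundant copies of each work unit against one
           * another or by pinning down a single arithmetic mode.          */
          return 0;
      }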

  • "Europe Annexes Caribbean Islands"
    Register (UK) (05/06/05); McCarthy, Kieren

    ICANN has come under fire for its failure to garner support for its country-code Names Supporting Organization (ccNSO). Most recently, the chairman of Centr, an organization representing domain name registries worldwide, sent a letter to ICANN accusing it of trying to regulate sovereign nations and calling for changes to the ccNSO bylaws. So far, ICANN has persuaded only 15 percent of the world's country-code top-level domain (ccTLD) administrators to join the ccNSO. According to ICANN rules, the ccNSO requires "a minimum of 30 members with at least four from each of the five ICANN geographic regions," but only three European registries--those of the Netherlands, the Czech Republic, and Gibraltar--have signed on to the organization. However, ICANN has justified its launch of the ccNSO as a legitimate body by counting the Cayman Islands, a territory of the United Kingdom that has joined the ccNSO, as part of Europe. Critics of the ccNSO are expressing outrage at this move and demanding that ICANN make changes to the ccNSO. Centr is calling for six specific changes to the ccNSO bylaws that would make it "a forum for information exchange and discussion of best practices, not for developing policies binding on participants." For example, Centr would like to see ccNSO members deciding their own procedures, fees, and policies, which ICANN could reject only under extreme circumstances.
    Click Here to View Full Article

  • "Glacial Environment Monitoring Using Sensor Networks"
    University of Southampton (ECS) (05/04/05); Martinez, Kirk; Padhy, Paritosh; Riddoch, Alistair

    GlacWeb is a wireless sensor network deployed to study how climate change affects the dynamics of the Briksdalsbreen glacier in Norway, and it is one of the few environmental sensor networks to have demonstrated its value in the field. Eight sensor nodes, or probes, in the ice and sub-glacial sediment take various readings at four-hour intervals; the readings are collected by a base station on the surface, which passes the information on to a sensor network server in Southampton, England, by long-range radio modem (a rough sketch of this data path follows this item). Each probe consists of four modules--a power module, a sensing module, a processing module, and a transceiver module--divided into digital, analog, and radio subsystems and enclosed in a polyester capsule; a helical antenna is also attached to the radio module. The probes were deployed in a non-arbitrary manner, with the site scanned for sub-glacial geophysical anomalies by Ground Penetrating Radar before deployment. The base station is built on a pyramid framework that tolerates weather and glacier movement, with separate housings for batteries and electronics. A GPS antenna records the station's position, while solar panels supplement the batteries. Communication with five of the eight probes was lost in the first few months of GlacWeb's operation, and the researchers see three factors, individually or in combination, as responsible for this failure: a power disruption at the base station, probe malfunctions caused by water or the stress of the ice, or probes carried out of transmission range by sub-glacial movement. The experiment shows the sensor network approach is robust enough to function in glacial environments, while communication failures could be avoided in the future through a multiple-hop, self-configuring ad-hoc network that is also more scalable and power-efficient, bearing in mind that later networks would employ additional probes and multiple base stations.
    Click Here to View Full Article
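
    The rough sketch referenced above models the GlacWeb data path in miniature: probes sample on a four-hour cycle and a base station relays whatever readings it can collect over the long-range radio link. The probe count and sampling interval come from the article; all field names, units, and I/O stubs are assumptions for illustration, not the project's actual code.

      /* Illustrative model of the GlacWeb data path: probes sample on a
       * fixed interval and a base station relays the readings.  Field
       * names, units, and I/O stubs are assumed, not taken from GlacWeb. */
      #include <stdio.h>
      #include <stdint.h>

      #define NUM_PROBES        8
      #define SAMPLE_INTERVAL_S (4 * 60 * 60)   /* four-hour sampling cycle */

      typedef struct {
          uint8_t  probe_id;
          uint32_t timestamp;       /* seconds since deployment (assumed)  */
          int16_t  temperature_c10; /* temperature in tenths of a degree C */
          uint16_t pressure_raw;    /* raw pressure sensor value           */
          uint16_t tilt_raw;        /* raw tilt/orientation sensor value   */
      } ProbeReading;

      /* Stub: in the real system this would be a radio exchange with a
       * probe embedded in the ice; here it just fabricates a reading.  */
      static int poll_probe(uint8_t id, uint32_t now, ProbeReading *out)
      {
          out->probe_id = id;
          out->timestamp = now;
          out->temperature_c10 = -5;    /* -0.5 C, placeholder value */
          out->pressure_raw = 512;
          out->tilt_raw = 100;
          return id < 3 ? 0 : -1;       /* mimic probes that stopped responding */
      }

      /* Stub for the long-range radio modem link back to Southampton. */
      static void relay_to_server(const ProbeReading *r)
      {
          printf("relay: probe %d t=%lu temp=%d\n",
                 (int)r->probe_id, (unsigned long)r->timestamp,
                 (int)r->temperature_c10);
      }

      int main(void)
      {
          uint32_t now = 0;
          for (int cycle = 0; cycle < 2; cycle++, now += SAMPLE_INTERVAL_S) {
              for (uint8_t id = 0; id < NUM_PROBES; id++) {
                  ProbeReading r;
                  if (poll_probe(id, now, &r) == 0)
                      relay_to_server(&r);
                  /* Unreachable probes are skipped, much as several GlacWeb
                   * probes dropped out of radio contact.                    */
              }
          }
          return 0;
      }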

  • "Edgy About Blades"
    Computerworld (05/02/05) P. 24; Mitchell, Robert L.

    Blade servers reduce cable clutter, are easy to manage, and save floor space in the data center, but companies are still hesitant about adopting the relatively immature technology; IT managers say blade servers generate excessive amounts of heat, lack architectural standards, and in many cases cost more than normal rack servers. Still, IT research firms IDC and Gartner predict rising sales of blade servers over the next several years. Power and cooling issues associated with the blade format drive up total cost of ownership, says Embarcadero Systems technical services manager Umesh Jagannatha, who uses blade servers for a port security application but nothing else. IBM BladeCenter director Tim Dougherty says increasing power and heat is the result of a general trend in computing technologies, and that IBM is redesigning its blade server technology to deal with those issues, possibly using liquid cooling. That prospect adds to customer hesitation about blade investments, says Gartner analyst Jane Wright, while KeyCorp server engineering vice president Robert Kreitzer says IBM representatives admit, when pressed, that completely filling racks with blade servers could cause overheating problems. IBM opened its I/O specification last fall so that third-party I/O vendors could make best-of-breed modules, although those components still must conform to IBM's proprietary chassis design. IDC notes that IBM and Hewlett-Packard control over three-quarters of the blade server market and will have to cooperate on standards, possibly including InfiniBand or Ethernet. Another factor affecting blade adoption is virtualization technology, which makes large multiprocessor servers more attractive. In the end, industry observers say blade technology will likely be just one option suited for certain applications.
    Click Here to View Full Article

  • "The View From California"
    Issues in Science and Technology (04/05) Vol. 21, No. 3, P. 21; Barbour, Heather

    Heather Barbour, an Irvine Fellow with the New America Foundation's California Program, writes that the flip side of California's vaunted status as a science and technology (S&T) pioneer is its poor management of S&T policymaking. She describes the current California legislature as ill-equipped to handle S&T policy for a number of reasons: Shortened term limits mean that too often legislators end their service before they can become familiar with S&T issues, and there is no formal mentoring or training program for incoming legislators. Since S&T-related bills are passed through several standing committees with multiple jurisdictions and conflicting goals, analysis and evaluation of such legislation is for the most part superficial. Moreover, legislators willing to sponsor or advocate S&T bills or issues over the long term are engaging in a thankless task, as there is no institutional reward system for such actions. Barbour argues that any remedy for this situation requires compliance with fundamental standards of good governance, including the provision of expert, nontechnical advice on S&T issues to state policymakers; citizens' easy identification of and access to elected officials directly responsible for S&T policy; more effective information management; and rewards for public officials who champion S&T policy in the form of personnel, authority, and media coverage. The author points to several former and current federal S&T governance institutions as models for improving the state's S&T policy infrastructure, recommending the establishment of standing S&T committees in the California Assembly and Senate, the hosting of a team of volunteer scientist-fellows based on the State Department's Jefferson Science Fellows program, non-partisan S&T literacy training for new members at the state and local levels, the formation of a California Office of Technology Assessment, and the appointment of an S&T advisor to the governor.

  • "Silver Bullets for Little Monsters: Making Software More Trustworthy"
    IT Professional (04/05) Vol. 7, No. 2, P. 9; Larson, David; Miller, Keith

    Professors David Larson and Keith Miller of the University of Illinois at Springfield argue that while a single solution for all software development problems may be a pipe dream, existing "silver bullets" can solve a few problems in the near term as long as developers select them carefully. This allows developers to address specific software defects and then confirm that such defects have been eliminated. The authors detail three defects for which solutions already exist: memory leaks, buffer overflows, and files that remain open when a program terminates (the first and third are illustrated in the sketch following this item). Larson and Miller recommend the eradication of memory leaks before the rollout of commercial software, and list a broad range of solutions, including generic tools such as mtrace, YAMD, and Valgrind; the C Model Checker, which can scan the actual code without a programmer-developed model; and a pointer analysis framework proposed by Bernhard Scholz, Johann Blieberger, and Thomas Fahringer that detects leaks via static analysis of program behavior. The more memory leaks are eliminated, the more trustworthy and reliable the software becomes. Larson and Miller cite several papers detailing techniques for identifying conditions that give rise to buffer overflows, including the application of "randomly deformed data streams," analysis of errors injected into the software, and checking of basic-block signatures to confirm that executing instructions are not malicious. The unclosed-file problem can be addressed with available static analysis tools that are comparatively cheap in terms of programming or computer time. One such tool is Microsoft's Slam, which takes a C program as input and verifies its properties against a rule written in the Specification Language for Interface Checking.
    Click Here to View Full Article
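
    The sketch referenced above gives a concrete, generic picture of two of the defect classes Larson and Miller single out: a memory leak and a file left open on an early return. It is not taken from the article or from the tools it cites; a leak checker such as Valgrind or mtrace would flag the first version.

      /* Generic illustration of two defect classes discussed above: a
       * memory leak and an unclosed file on an error path.  Not from the
       * article; tools such as Valgrind would flag the leaky version.   */
      #include <stdio.h>
      #include <stdlib.h>

      /* Buggy version: if allocation or fgets() fails, the function
       * returns without closing the file or freeing the buffer.      */
      static int read_first_line_leaky(const char *path)
      {
          FILE *f = fopen(path, "r");
          if (!f)
              return -1;
          char *buf = malloc(256);
          if (!buf || !fgets(buf, 256, f))
              return -1;        /* leak: f never closed, buf never freed */
          printf("%s", buf);
          free(buf);
          fclose(f);
          return 0;
      }

      /* Fixed version: every exit path releases what was acquired. */
      static int read_first_line(const char *path)
      {
          int rc = -1;
          FILE *f = fopen(path, "r");
          if (!f)
              return rc;
          char *buf = malloc(256);
          if (buf && fgets(buf, 256, f)) {
              printf("%s", buf);
              rc = 0;
          }
          free(buf);            /* free(NULL) is a harmless no-op */
          fclose(f);
          return rc;
      }

      int main(void)
      {
          read_first_line_leaky("/etc/hostname");
          return read_first_line("/etc/hostname");
      }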

  • "10 Emerging Technologies"
    Technology Review (05/05) Vol. 108, No. 5, P. 43; Talbot, David; Jonietz, Erika; Savage, Neil

    Among a bevy of promising technologies is a software-driven airborne network of planes that removes the need for ground-based air traffic controllers; the system could yield short-term benefits such as shorter flights and lower fuel consumption, and long-term benefits such as accommodating more aircraft in the sky without additional infrastructure and personnel. Nanotube-based quantum wires promise to conduct electricity more efficiently and over greater distances than conventional copper wires, while their higher strength and lighter weight would allow existing towers to support fatter and more capacious cables than steel-reinforced aluminum cabling. Nanotubes also play a key role in Nantero's effort to develop ultradense "universal" memory systems in which individual bits are encoded by the physical orientation of nanoscale structures; potential applications of such a breakthrough include instant-on PCs and the elimination of flash memory. Light-emitting silicon would enable the transmission of exponentially more data than copper wiring while also eliminating electrical interference issues, and University of Rochester professor Philippe Fauchet envisions an electrically powered silicon laser as a critical ingredient in optoelectronic communications. The nascent discipline of biomechatronics focuses on the development of robotic prostheses that read the user's neural signals. Drug development could be substantially improved through 3D imaging of atomic- and molecular-level structures via magnetic-resonance force microscopy, while the emerging field of environmatics, spawned by new data and the wider adoption of sensing, simulation, and mapping tools, could lead to more reliable environmental forecasts and more effective urban growth management. One of the more sinister emerging technologies is malware transmitted by cellular telephones, which could open the door to all kinds of cyber-mischief as cell phones become capable of more sophisticated applications.
    Click Here to View Full Article


 