
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of ACM. To send comments, please write to [email protected].
Volume 7, Issue 862:  Wednesday, November 2, 2005

  • "Why Are Tech Gizmos So Hard to Figure out?"
    USA Today (11/02/05) P. 1B; Baig, Edward C.

    In response to the proliferation of features embedded in gadgets that are often poorly explained by instruction manuals and counter-intuitive to use, an emerging body of usability advocates is attempting to influence the way technology companies design their products. The Usability Professionals' Association has declared Thursday World Usability Day, seeking to promote design based on users' needs and calling on users to tell companies what those needs are. "We as an industry need to do a better job," said University of Maryland professor Ben Shneiderman. "The public needs to be banging on the table and saying it should be better than it is." Technology experts acknowledge that it is difficult to make products easy to use. While the functions of many devices defy conventional logic, such as shutting off a computer by clicking on "start," Apple's iPod stands in stark contrast, demonstrating the profitability of a product that is easy to use and performs a single function. Self-explanatory products typically sell better and demand less of an investment in tech support from the manufacturer. Many companies sacrifice usability to meet the pressures of getting a product to market quickly, and others neglect older users when considering questions of usability. The convergence of audio, video, and television has made home entertainment a complex proposition, while the miniaturization of mobile devices has required individual buttons to perform multiple functions. The preponderance of features has also undermined the popularity of many gadgets, as features are often added to address demands that never existed. In response, many companies are conducting usability tests that gauge the merit of a product by which members of a test group can complete a given task and how quickly.
    Click Here to View Full Article

  • "Internet Postings Targeted in Court"
    Baltimore Sun (11/02/05) P. 1A; Smitherman, Laura

    A Maryland appeals court will hear arguments today on whether an Internet user's identity can be revealed when he makes disparaging remarks in chat rooms or on message boards, in a case that may add to the growing body of law concerning free speech and the Internet. The case originates from the efforts of an Arizona drug company to subpoena the names of subscribers to a Rockville, Md.-based financial newsletter. The ACLU, the Electronic Privacy Information Center, and other advocacy groups have taken up the fight to preserve online anonymity. As online financial discussion groups become more popular, companies increasingly worry that, under the cloak of anonymity, speculators could use the forums to manipulate stock prices and competitors could post defamatory claims. By driving down the price of a company's stock, speculators can profit by selling short, that is, by selling borrowed shares and repaying the loan with shares bought later at a lower price. Online postings have also altered the way political campaigns are run. While message boards often have rules banning defamatory content, hosts enforce them inconsistently because they are under no legal obligation to do so, and often choose not to because controversial postings draw more visitors. Bruce Fischman, an attorney who has represented many companies in cases against anonymous posters, says computer analysis and subpoenas can obtain identities, which can then be compared with employee and stock-transfer records to determine whether a poster is a shareholder or an employee. Some companies are also posting their own content in response to defamatory posts.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "Sony CD Protection Sparks Security Concerns"
    CNet (11/01/05); Borland, John

    Sony uses a cloaking tool known as a rootkit to shield the copy protection software on its CDs, a technique that is not inherently dangerous but is often used by virus writers to obscure their activities on a computer. While the threat that rootkits on CDs pose to computers is still largely theoretical, many in the software security community have voiced concerns, and the discovery has also breathed new life into the debate between digital rights management and fair use. The rootkit's creator, First 4 Internet, says the cloaking is designed to make it difficult to hack the contents of CDs or other products, and that it worked closely with Symantec and other antivirus companies to ensure it was secure. Sony has said the software can be uninstalled easily, and First 4 Internet says it has not heard of any malware incidents in the eight months that CDs with the rootkit have been on the market. Rootkits are designed to embed themselves deep within an operating system to mask the existence of certain programs, and are ordinarily difficult to remove. Because it remains in a computer's memory, the rootkit could be exploited by virus writers, though many security experts dismiss that threat as theoretical. The controversy over protection techniques goes to the heart of the balance the entertainment industry is trying to strike between security and digital rights. At present, commercial CDs can be copied onto backup discs or ripped onto a computer, with the caveat that such activities are intended for personal use.
    Click Here to View Full Article

  • "Computer Scientists Create 'Light Field Camera' Banishing Fuzzy Photos"
    LinuxElectrons (11/01/05)

    A team of Stanford researchers has developed a light field camera that can create photographs whose subjects appear in sharp focus regardless of their depth. Presented at the 2005 ACM SIGGRAPH conference in August, the light field camera is a modified version of a traditional camera that augurs advances in scientific microscopy, surveillance, and commercial and sports photography by surmounting existing problems with high-speed and low-light conditions. The camera offers greater flexibility than existing technologies because it captures a large amount of information and allows focusing decisions to be made after the exposure has been taken. Developed by engineering professor Pat Hanrahan and computer science student Ren Ng, the camera adds to a conventional camera a microlens array containing almost 90,000 miniature lenses that separate the converging light rays. Processing software then produces a synthetic image drawn from the many different depths where the various rays would have landed. The design disentangles the relationship between depth of field and aperture size, which traditionally entailed a tradeoff between scope and clarity: The microlens array yields the benefits of a large aperture without compromising the clarity or depth of the image. Surveillance cameras, which frequently produce grainy images with poorly defined shapes, could be improved significantly by the technology. At night, when a security camera is attempting to focus on a moving object, it is difficult for the camera to follow it, particularly if there are two people moving around, said Ng. "The typical camera will close down its aperture to try capturing a sharp image of both people, but the small aperture will produce video that is dark and grainy."
    Click Here to View Full Article
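
    The refocusing idea can be illustrated with a short sketch. The Python fragment below is a hypothetical illustration, not the Stanford team's actual pipeline: it performs simple "shift-and-sum" refocusing, in which each sub-aperture view recorded behind the microlens array is shifted in proportion to its position and the views are averaged, so a single slope parameter moves the synthetic focal plane after the exposure.

    import numpy as np

    def refocus(lf, slope):
        """Shift-and-sum refocusing of a light field.
        lf: array of sub-aperture views, shape (U, V, H, W); assumed layout.
        slope: controls where the synthetic focal plane lands."""
        n_u, n_v = lf.shape[:2]
        cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
        out = np.zeros(lf.shape[2:], dtype=float)
        for u in range(n_u):
            for v in range(n_v):
                dy = int(round(slope * (u - cu)))  # integer shifts keep the
                dx = int(round(slope * (v - cv)))  # sketch dependency-free
                out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (n_u * n_v)

    # Toy example: a random 9x9 grid of 64x64 views, refocused at two depths.
    lf = np.random.rand(9, 9, 64, 64)
    near, far = refocus(lf, 1.5), refocus(lf, -1.5)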

  • "A Not-So-Simple Matter of Software"
    HPC Wire (10/28/05) Vol. 14, No. 43; Dongarra, Jack

    Despite the numerous and significant achievements of computer science throughout the 20th century, the field is still in its infancy. The software and algorithms at the core of the discipline are the areas most ripe for growth and will require the community's attention, despite the natural lure of hardware. To fully integrate modeling and simulation into the scientific method, the community will have to devote its attention to algorithms and to the software in which they are encoded. Translating algorithms into code typically claims the lion's share of a project's funding and brings together computer scientists, domain scientists, and applied mathematicians. The software created through this process draws on a lexicon of protocols, mathematical libraries, and system software, and usually outlives the hardware on which it runs. Domain scientists have an ambitious vision of the future of computing in which highly integrated applications run at the petascale with optimal performance and automatically work through the processor failures that occur routinely at such a scale. To realize this vision, many aspects of the software environment will require greater development and funding. The most fundamental mathematical and computational research problems must be resolved concurrently with the formulation of the next generation of scientific software. Collaboration and research will be the engine of this evolution, which will require the support of both government and industry to create new software research centers.
    Click Here to View Full Article

  • "Data Security Laws Seem Likely, So Consumers and Businesses Vie to Shape Them"
    New York Times (11/01/05) P. C3; Zeller, Tom

    In response to a rise in data security breaches this year, more than a dozen bills have been introduced in Congress, but the data brokering industry and consumer and data privacy groups disagree about how far-reaching any new federal regulations should be. Companies that compile, trade, and store consumer data, while largely resigned to the idea that new legislation will hold them to a higher standard for security, want to minimize the impact of any new law, maximize their discretion when it comes to notifying consumers of breaches, and limit their liability when they do spring leaks. However, consumer and data privacy groups want strict new security standards that would require notification whenever data is inappropriately viewed or acquired and give individuals more control over how their information is stored and used. While the industry wants to ensure that any new federal law, which is likely to be less restrictive than existing state and local laws, would pre-empt state laws, privacy groups want to preserve the ability of state and local governments to make and maintain tough laws. The United States Public Interest Research Group's Edmund Mierzwinski says, "Industry hopes to use the furor over breaches as a way to pass a modest federal reform that just happens to also permanently restrict the states from passing virtually any financial privacy or identity theft laws." The lack of agreement could prevent any legislation from passing this year, even though 47 state attorneys general sent a letter to Congress last week urging lawmakers to pass a tough, comprehensive bill.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "Broadcast Flag Bill Writers Run Drafts Up the Pole"
    Reuters (11/01/05); Boliek, Brooks

    The authors of broadcast flag legislation that would prohibit the uploading of digital content onto the Internet and regulate its conversion to analog format are circulating among the House Judiciary Committee drafts of three bills that would authorize federal agencies to impose the proposed restrictions. Two of the bills concern digital broadcasts, while the third bars companies from trading in devices that convert copyright-protected digital broadcasts into analog programs. That so-called "analog hole" and the uploading of digital content onto the Internet are critical issues in the conversion to all-digital broadcasts. The entertainment industry is lobbying actively for restrictions, claiming that without adequate copyright protection, high-value content will no longer be freely available over the airwaves and will instead shift to a fee-based environment. However, opponents of the bills say they give too much power to copyright holders and take away consumer rights. Public Knowledge's Art Brodsky says the bills "give the FCC and the Patent and Trademark Office wide-ranging power and control over the development of technology while depriving consumers of rights they have enjoyed for years. These are unwarranted technological mandates." Last May, a federal appeals court threw out a broadcast flag regulation adopted by the FCC after Public Knowledge challenged it. The committee is scheduled to hear testimony from leaders in the entertainment industry this week.
    Click Here to View Full Article

  • "GMU's Harry Wechsler"
    Technology Research News (10/31/05); Smalley, Eric

    IEEE Fellow Harry Wechsler, director of George Mason University's Distributed and Intelligent Computation Center, is interested in increasing computer intelligence by training machines to recognize patterns, often through learning by example. He foresees both positive and negative trends in science and technology: Ubiquitous, anywhere/anytime computing and communication on the one hand, and reduced security, privacy infringement, and intrusive surveillance on the other. Wechsler sees an urgent need for pattern recognition researchers to experiment on real and large data sets, maintain honesty in reporting and competitive assessments, exploit the temporal dimension and incorporate change and drift, and make allowances for noise, distortion, and occlusion. He also perceives an overreliance on standard data sets in the pattern recognition research community, and recommends that "testing should be on unseen data and should lack ground truth information." Wechsler says the last four decades have seen virtually no progress in the effort to build machines capable of effective performance in unfamiliar, dynamic surroundings, and he suggests the computer vision community should begin again. Pattern recognition and machine perception technologies hold great promise for health care, education, and dialog, but Wechsler anticipates an erosion of security and personal privacy; his answer is to discard, or never collect, personal information in the absence of a court order. Wechsler considers artificial neural networks "a dead end," given how little is known about the brain's mechanics. He expects the complexity of life and the amount of regulation to increase over the next 10 to 50 years, and argues that competition and a minimum of government intervention are necessary if important social challenges related to cutting-edge technologies are to be tackled.
    Click Here to View Full Article

  • "U.S. Mulls New Digital Signature Standard"
    CNet (11/01/05); McCullagh, Declan

    A shockwave reverberated through the data security community earlier this year when a team of Chinese researchers exposed a flaw in the 10-year-old Secure Hash Algorithm, or SHA-1, which has long served as the official standard for creating and verifying digital signatures. The National Institute of Standards and Technology (NIST) is considering the matter, though the agency's John Kelsey said it is likely to involve other organizations should it adopt a new standard. While the vulnerability is still only of theoretical value, that is likely to change as computing speeds increase. NIST is weighing both an update to SHA-1 and scrapping the standard altogether and undertaking the long process of testing and selecting a replacement; there is some concern that SHA-1 variants could be susceptible to the same vulnerabilities as the current version. Hash algorithms take data of any kind and produce a compact fingerprint that is supposed to change completely if even a single character of the input changes. The danger arises when an attacker can produce two different inputs with the same fingerprint, known as a hash collision, which could enable a criminal to drain a bank account or sign a contract on someone else's behalf. Should NIST opt to replace SHA-1, hundreds of protocols used by Web browsers, remote logins, and VPNs, among others, would have to be restructured to embrace the new standard. While its decision in the immediate future is uncertain, NIST has declared its intention to abandon SHA-1 by 2010 at the latest.
    Click Here to View Full Article
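
    For readers unfamiliar with hashing, the fingerprint property described above is easy to see in a few lines of Python (illustrative only; this shows normal hash behavior, not the collision attack itself): changing one character of the input yields a completely different SHA-1 digest, and a collision attack is the art of finding two different inputs that nonetheless share a digest.

    import hashlib

    for msg in (b"Pay Alice $100", b"Pay Alice $900"):
        print(msg, hashlib.sha1(msg).hexdigest())
    # The two digests bear no resemblance even though the inputs differ by one
    # character; a forged document that *collides* with a signed one would
    # carry the same digest, and therefore the same signature.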

  • "Rensselaer Researcher Awarded DARPA Funding to Improve Terrain Maps"
    Rensselaer News (10/31/05)

    The Defense Advanced Research Projects Agency (DARPA) has awarded Rensselaer associate professor of electrical, computer, and systems engineering W. Randolph Franklin $845,000 to produce better terrain models of the Earth and other surfaces, with a broad variety of potential applications, such as the installation of radio towers on the moon and the strategic placement of soldiers. Existing techniques often produce inaccurate maps, which Franklin hopes to improve by developing better methods of compressing the elevation data gathered by radar and laser scanning. Within the project, known as Geo, an abbreviation of GeoSpatial Representation and Analysis, there will be a particular emphasis on improving the navigational abilities of unmanned aerial vehicles. One facet of Franklin's research will focus on identifying the vantage points from which soldiers would have the greatest visibility, an analysis that could also be applied to the strategic placement of cell phone towers. Radio relays positioned on the ground with a line of sight between them could enable the installation of radio towers on the moon or Mars, neither of which has the ionosphere required to support long-distance radio, said Franklin.
    Click Here to View Full Article
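
    The visibility analysis mentioned above can be sketched in a few lines. The Python fragment below is an illustrative toy, not Franklin's method: it marches along the straight line between an observer and a target cell on an elevation grid and reports the target as visible only if no intermediate cell rises above the sight line.

    import numpy as np

    def visible(elev, obs, target, obs_height=2.0):
        """True if `target` can be seen from `obs` on elevation grid `elev`."""
        (r0, c0), (r1, c1) = obs, target
        eye = elev[r0, c0] + obs_height
        steps = max(abs(r1 - r0), abs(c1 - c0))
        if steps == 0:
            return True
        target_slope = (elev[r1, c1] - eye) / steps
        for i in range(1, steps):                 # intermediate cells only
            t = i / steps
            r = round(r0 + t * (r1 - r0))
            c = round(c0 + t * (c1 - c0))
            if (elev[r, c] - eye) / i > target_slope:
                return False                      # terrain blocks the sight line
        return True

    elev = np.random.rand(50, 50) * 30.0          # toy 50x50 terrain
    print(visible(elev, (0, 0), (49, 49)))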

  • "Q&A: Kris Pister"
    EDN (10/27/2005); Schweber, Bill

    While current RFID technologies only indicate an item's location the last time it was checked, Dust Networks co-founder Kris Pister believes instant-identification RFID sensors are only a few years away, and with them will come a torrent of attention from a business community interested in real-time tracking. Sensor networks can now detect environmental conditions such as temperature, vibration, motion, and light. As the cost of wireless sensors plummets, they will see wider use in a growing number of industrial and commercial settings, with a particular emphasis on mobility. For any technology to enjoy broad, immediate appeal, it must emerge from the lab with an instantly popular application; otherwise its adoption will be gradual and less dramatic, said Pister. The growing level of abstraction has made it harder for children in school to get practical, physical experience with circuits and processors, though Pister acknowledges that inexpensive eight-bit design kits still provide basic hands-on training. He also laments the absence of fun in most engineering curricula, which focus so heavily on the fundamentals of math and physics.
    Click Here to View Full Article

  • "IRC Channel as Startup Incubator"
    Wired News (11/01/05); Andrews, Robert

    The Internet Relay Chat channel #Winprog has spawned some of the most cutting-edge software created in recent years. The channel helped Shawn Fanning refine preliminary versions of Napster, and served as a consultancy for Gnutella creator Justin Frankel as he wrote Winamp. Many involved with the channel have gone on to high-profile technology companies, and the channel often assists Microsoft's own Windows developers with their operating system. While many incubators are denigrated for providing services of marginal value, #Winprog offers consistently useful technical advice and support. When they joined #Winprog, Fanning and Frankel teamed up with other programmers, businesspeople, and marketers who helped them prepare their projects for public release. The channel draws up to two dozen Microsoft developers each day, who use it to give and receive technical advice. There are roughly 140 participants in the channel, each of whom prides himself on providing brutally honest advice. While many incubators are reluctant to give startup projects honest feedback, #Winprog "smacks you in the face with your failure," said Microsoft's Ben Knauss. The channel has a reputation for thoroughly dissecting bad ideas, an intolerance of inane questions, and stringent expectations of its members.
    Click Here to View Full Article

  • "The Computer of the Future"
    NewsFactor Network (10/31/05); Germain, Jack M.

    Computer experts believe the home depicted in the 1960s cartoon "The Jetsons" could soon become the norm, given the rapid growth of computing power and miniaturization. "Imagine all sorts of appliances that know when to turn themselves on and off, toasters that respond to a spoken command or phones that automatically search electronic Yellow Pages for a pizza parlor and then place your order," says the Millennium Group's Gerald Flournoy. Wireless Ultra Wideband (UWB) technology will eliminate connection cables and bring the Bluetooth effect of instant activation to computer peripherals, says Alereon's James Lansford, who adds that wireless docking stations for UWB-enabled devices will be available next year. Brian Young, vice president of IT at Creighton University, expects to see wireless devices embedded in clothing that users can command through diction and voice modules, and says students at the Omaha, Neb., school already recharge laptops with the aid of solar cells that line their backpacks. Ubiquitous hotspots and new wireless specifications such as WiMAX will spell the end of desktop computers, but laptops will remain popular, says Lenovo's Howard Locker. And trusted computing, a set of specifications for improving security, will protect users' data as it is shared and replicated from devices to networks, and will limit access. "The PC of the future will provide a root of trust and will no longer rely on a user ID and password for security purposes because all authentication will be done machine to machine," says Wave Systems CEO Steven Sprague.
    Click Here to View Full Article

  • "Google Will Return to Scanning Copyrighted Library Books"
    Wall Street Journal (11/01/05) P. B1; Delaney, Kevin J.; Trachtenberg, Jeffrey A.

    Google will return to scanning copyrighted library books despite opposition from several publishers, after suspending its scanning operations in August to give publishers a chance to ask that their works not be scanned. The move has created tension and controversy over how copyright laws apply to online content. Lawyers say Google's book-scanning plan, which uses digital scanners for new and out-of-print books so customers can search for specific passages in the text, could lead to a legal challenge that sets a precedent for "fair use" rights in the Internet age. Google says it will soon resume scanning book collections at Stanford University and the University of Michigan. Google's decision to scan out-of-print copyrighted material without permission from publishers and authors has been met with strong criticism, and the Association of American Publishers filed a federal lawsuit against Google last month in New York. "I feel this is a potential disaster on several levels," says Michael Gorman, president of the American Library Association and librarian at California State University, Fresno. "They are reducing scholarly texts to paragraphs." However, other librarians argue that what matters most is that information is accessible on all levels. John Wilkin, associate university librarian at the University of Michigan, says, "We think what Google is doing is legal and consistent with copyright law because copyright law is about striking a balance between the limited rights of the copyright owner and the long-term rights of the public."
    Click Here to View Full Article

  • "Diversification of the IT Department"
    Network World (10/24/05) Vol. 22, No. 42, P. 87; Leung, Linda

    The number of racial minorities and women filling information technology-related positions remains low, but companies are actively looking for such candidates to join their workforces. Media company Scripps Networks introduced a policy last year that ties 5 percent of senior managers' bonuses to their ability to attract and hire qualified minorities and women. At Scripps, which is willing to relocate new recruits, women and minorities now account for 31 of its 57 IT employees and six of its 13 IT managers. HSBC does not relocate candidates, but the Prospect Heights, Ill., company is closely involved with organizations such as the National Black MBA Association, the National Society of Hispanic MBAs, and Inroads, which train and develop potential minority candidates. HSBC has 3,400 IT workers; 17.4 percent of its managers are ethnic minorities and 27 percent are female. Such efforts come at a time when the proportion of minorities and women in IT jobs has fallen. The Information Technology Association of America reported in June that the share of women in the IT workforce has declined 18.5 percent since 1996, to 32.4 percent in 2004. The share of black workers fell from 9.1 percent to 8.3 percent, while the share of Hispanic workers rose from 6.4 percent to 12.9 percent.
    Click Here to View Full Article

  • "Attack of the Quantum Worms"
    New Scientist (10/29/05) Vol. 188, No. 2523, P. 30; Anderson, Mark

    Researchers say the emergence of quantum malware is inevitable, but serious debate about protecting computers from such programs has only recently begun, compared with the decades of research and billions of dollars already committed to quantum computer development. Quantum computers have yet to be fully realized, but a "quantum Internet" composed of optical fiber and free-space point-to-point networks dedicated to channeling quantum information already exists. This prompted University of Toronto researchers Lian-Ao Wu and Daniel Lidar to author a 2005 paper detailing a defense against quantum malware. Lidar says a quantum communication network will invite interference like any other network, and hackers could "decohere" a quantum bit's phase information and cause the output to randomize. Wu and Lidar recommend that quantum systems be kept offline as long as possible, and they propose a backup scheme in which every networked quantum computer has an ancillary register of qubits equal in size to the computer's memory; the register is isolated whenever the computer is linked to the network to prevent direct infection. All members of a network share a secret sequence of very brief run-time intervals, which must be considerably shorter than the periods during which the calculations are stored in the ancillary qubit register. Quantum computer networks in which more than a few kilometers separate the computers require "quantum repeater" boxes, which could be hijacked; Lidar suggests an alternative device he and Wu conceived that uses the simplest optical components installed at regular intervals along the optical fiber.

  • "DARPA Scientists Seek Next Generation of Wireless Data Networking"
    Military & Aerospace Electronics (10/05) Vol. 16, No. 10, P. 1; Keller, John

    The high-tech industry has until November 2, 2005, to submit proposals to the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., for the development of a control-based wireless network. Such next-generation networking would enable users to access wireless resources according to their goals and operational constraints. DARPA's project, Control-Based Mobile Ad-Hoc Networking (CBMANET), seeks to address issues such as availability of resources, fairness, connectivity, quality-of-service assurance, mission-adaptive multilevel precedence and priority, and operation under jamming, interception, or detection. Tech firms will initially have to produce software models, and then demonstrate an integrated solution for evaluation. Current research in mobile ad-hoc networking tends to focus on adapting standard Internet protocol routing for wireless use in both static and dynamic topologies, with factors such as node motion adding to the dynamics. DARPA wants an overall improvement in the performance of mobile ad-hoc networking.
    Click Here to View Full Article

  • "Using Machine Learning to Support Quality Judgments"
    D-Lib Magazine (10/05) Vol. 11, No. 10; Custard, Myra; Sumner, Tamara

    Myra Custard and Tamara Sumner of the University of Colorado at Boulder's Department of Computer Science outline a methodology to automate quality assessments of digital library resources and collections using machine-learning techniques. The authors undertook a pilot study to devise an initial computational model of quality and associated machine learning algorithms that can spot variations of quality in digital library resources. Their investigation focused on what characteristics of resources might function as quality indicators and whether machine-learning techniques can be applied to identify such indicators while also being sensitive enough to detect quality variations in existing resources. The research also studied how the classification of a resource into a specific quality band is positively or negatively affected by specific indicators. Custard and Sumner identified 16 aspects of resources or metadata that could potentially be used to recognize quality variations across resources: Cognitive authority, site domain, element count, description length, metadata currency, resource currency, advertising, alignment, word count, image count, link count, multimedia, the World Wide Web, annotations, cost, and functionality; these indicators were assigned to the categories of provenance, description, content, social authority, and availability. The researchers' methodology was tested in two experiments. The first experiment, which covered two of the three selected classification categories, yielded an assessment accuracy of 93.75 percent, and the second, which encompassed all three categories, reached 76.67 percent accuracy. In both experiments, metadata currency was rated the single best quality indicator.
    Click Here to View Full Article
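
    As a rough illustration of the approach, each resource can be reduced to a vector of indicator values and fed to an off-the-shelf classifier. The fragment below is a hypothetical sketch using a handful of the indicators named above and invented training data, not the authors' actual model or corpus.

    from sklearn.tree import DecisionTreeClassifier

    # Subset of the quality indicators listed above (all values are invented).
    features = ["description_length", "metadata_currency", "link_count",
                "image_count", "advertising", "cost"]
    X_train = [[120, 0.9, 14, 3, 0, 0],
               [ 15, 0.1,  1, 0, 1, 1],
               [ 80, 0.7,  9, 2, 0, 0],
               [ 10, 0.2,  0, 0, 1, 0]]
    y_train = ["high", "low", "high", "low"]    # hand-labeled quality levels

    clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
    print(clf.predict([[95, 0.8, 11, 1, 0, 0]]))   # -> ['high']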

  • "Why Your Data Won't Mix"
    Queue (10/05) Vol. 3, No. 8; Halevy, Alon

    A critical step in building applications that share data is the resolution of semantic heterogeneity among multiple database systems, a problem compounded by the presence of semi-structured data. Reasons include the fact that applications built on such data usually exchange it among multiple parties; the greater flexibility and likelihood of variation in semi-structured data schemas; and the ease with which new attributes can be inserted into the data at will. Few solutions exist for reconciling schema heterogeneity because data sets are devised independently, so differing structures are used to represent identical or overlapping concepts. Resolving schema heterogeneity is a difficult and time-consuming challenge given the need for domain and technical expertise, and the use of standard schemas only succeeds in domains with very strong incentives to agree on standards. Semi-automated schema-matching systems have so far been fragile because they consider only the evidence present within the two schemas being matched and do not take past experience into account. One alternative approach involves machine learning, in which the schema matcher learns models of the domain of interest from a series of training examples. There is a pronounced need to manage data rife with uncertain values, attribute names, and semantics. Looking ahead, the central challenges involve contending with dramatically bigger schemas as well as more complicated data-sharing environments, termed "dataspaces." Mapping larger schemas will require schema-mapping tools that include sophisticated information visualization techniques, as well as a schema search engine that maintains a set of indexes on the elements of a given schema and accepts schema elements, schema fragments, or combinations of schema fragments and data instances as input.
    Click Here to View Full Article
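
    One of the simplest signals a semi-automated matcher can use is name similarity between attributes, as in the hypothetical sketch below; real systems combine many such signals and, as the article notes, increasingly learn from past matches rather than relying on the two schemas alone.

    from difflib import SequenceMatcher

    schema_a = ["cust_name", "cust_addr", "order_total"]        # invented schemas
    schema_b = ["CustomerName", "ShippingAddress", "TotalAmount"]

    def similarity(a, b):
        """Crude string similarity after normalizing case and underscores."""
        return SequenceMatcher(None, a.lower().replace("_", ""), b.lower()).ratio()

    for col in schema_a:
        best = max(schema_b, key=lambda other: similarity(col, other))
        print(f"{col:12s} -> {best}  ({similarity(col, best):.2f})")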


 