
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of ACM. To send comments, please write to [email protected].
Volume 7, Issue 798:  Wednesday, June 1, 2005

  • "EU Studies Impact of Software Patents on Open Source"
    IDG News Service (05/27/05); Sayer, Peter

    University of Maastricht researchers have been enlisted by the European Commission to study the economic, technical, and legal ramifications of software patents on innovation, although the results of the analysis may ultimately have little sway on the Commission's plans to introduce controversial legislation to bring software patenting into alignment throughout the European Union. The EU's current hodgepodge patent system treats software patent applications in a variety of ways, in accordance with member states' individual laws. The Commission's proposed draft law for harmonizing software patenting was sharply criticized by the European Parliament, which called for a restart to the drafting process; despite this resistance, a heavily revised version of the draft passed its first reading, and will be presented for a second reading soon. The Maastricht study will not be completed until late 2007, and Maastricht researcher Rishab Aiyer Ghosh finds it odd that the Commission would want a report on how software patent legislation affects innovation while still planning to go forward with such legislation without waiting for the results. The Commission announced this week that it will underwrite another study focusing on how the use and development of free, "libre," and open-source software (FLOSS) impacts regional economies. The EU will provide $829,000 in funding to EU researchers and partners from Brazilian, Argentinean, South African, Malaysian, Chinese, Croatian, and Bulgarian institutions to complete the two-year study, dubbed Flossworld, while other participating institutions will have to supply their own funding.
    Click Here to View Full Article

  • "Domain System Creator Honoured"
    BBC News (06/01/05)

    Nominum chief scientist Dr. Paul Mockapetris has received a Lifetime Achievement Award from ACM's Special Interest Group on Data Communication (SIGCOMM) for developing the Domain Name System (DNS) over 20 years ago. The DNS plays a critical role in the operation of the Internet because it converts Web address names into numerical addresses that Net routers can comprehend, and its contribution to applications ranging from Web searches to Web mail to VoIP phone calls is vital. Mockapetris developed the DNS during his tenure at the University of Southern California's Information Sciences Institute. "The Domain Name System [DNS] lies at the heart of every user experience with the Internet," declared SIGCOMM Chair Jennifer Rexford. Mockapetris is a former chair of the Internet Engineering Task Force, and he designed the initial deployment of the Internet's Simple Mail Transfer Protocol. He said the award was a "tremendous honor" and comes at a time when the Internet's societal effects are unprecedented, but cautioned: "While the Internet has come a long way there is still much to do in terms of making the network more useful and secure."
    Click Here to View Full Article
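    The name-to-address mapping described above can be illustrated with a toy resolver that walks a host name's labels from the root zone down, as real resolvers do. This is a minimal sketch: real DNS is a distributed, hierarchical database queried over the network, and the zone data and addresses below are invented for illustration.

```python
# Toy sketch of hierarchical DNS resolution. Labels are processed from
# the right (the root) toward the left, descending one zone per label.
# The zone tree and the address strings here are made up.

ROOT = {
    "com": {
        "example": {"www": "93.184.216.34", "": "93.184.216.34"},
    },
}

def resolve(name):
    """Map a dotted host name to a numerical address string."""
    node = ROOT
    for label in reversed(name.split(".")):
        node = node[label]              # descend one zone per label
    return node if isinstance(node, str) else node[""]

print(resolve("www.example.com"))       # the camera-ready numeric form
```

    In practice a program would simply call the operating system's resolver (in Python, for example, via the standard library's socket module) rather than walk zones itself.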

  • "Federal ID Act May Be Flawed"
    Los Angeles Times (05/31/05) P. A19; Menn, Joseph

    The Real ID Act President Bush signed on May 11 as part of an omnibus spending bill is supposedly designed to make it harder for impersonators to acquire driver's licenses, but critics say the act will allow both authorized and unauthorized people to access considerably more personal data about citizens, and would actually raise the likelihood of identity theft and other abuses. Privacy Journal publisher Robert Ellis Smith says the Real ID scheme "builds a purported real identity on a document that we all know is built on fragile documents." The act requires driver's licenses to be standardized as machine-readable documents that contain personal identifiers, but opponents expect this measure will encourage many more merchants to scan customer licenses and supply this information to data brokers such as LexisNexis and ChoicePoint, whose databases were recently broken into by identity thieves. ACLU legislative counsel Timothy Sparapani calls attention to the fact that, under the Real ID Act, "We will have all this information in one electronic format, in one linked file, and we're giving access to tens of thousands of state DMV employees and federal agents." Even some opponents of the law's provisions acknowledge that it will make licenses harder for impostors to obtain, though they caution that the specter of fraud will not be entirely dispelled. For instance, applicants can still obtain licenses without providing a photo ID, and foreign passports are valid as proof of identity. University of Pittsburgh professor Joseph Eaton says it is relatively easy to send authentic U.S. birth certificates to the wrong people through chicanery. A 2002 study from the National Academy of Sciences concludes that it would be easier to create a national ID system than to maintain its security.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

    For information on ACM's activities regarding the Real ID Act, visit http://www.acm.org/usacm.

  • "Linux Powers Airborne Bots"
    Wired News (06/01/05); Poulsen, Kevin

    University of Essex researchers are building a fleet of miniature robotic helicopters that exhibit swarm intelligence and cooperative function by running the Linux 2.6 kernel and communicating wirelessly with one another via Bluetooth. Researcher Owen Holland expects the UltraSwarm project to yield a "flock" of helicopters that "will be autonomous individually and as a swarm, and they will be gathering and processing visual data in [a] distributed way." Such technology could one day be applied to military surveillance: For instance, a fleet of drone aircraft with video cameras could capture hostile terrain from multiple perspectives and process the images locally while airborne. The UltraSwarm team credits the flocking patterns of birds as the inspiration for the project. Holland believes a powerful computer can be configured out of small, lightweight components by tapping wireless communications and distributed computing. However, University of Essex doctoral candidate Renzo de Nardi says devising the software that pilots the mini-helicopter is a tough challenge, since the job requires the integration of highly disparate disciplines, such as aerodynamics and visual processing. The researchers will detail their work at next week's IEEE Swarm Intelligence Symposium, and Holland expects the system to be operational in a matter of weeks.
    Click Here to View Full Article

  • "Camera Sees Behind Objects"
    Technology Research News (06/08/05); Patch, Kimberly

    A digital camera that can see from the point of view of a light source has been developed by Stanford University and Cornell University researchers. The system includes a digital projector that beams a series of black and white pixels at a scene, while the camera captures how the light bounces off objects; a computer program monitors the data and adjusts the patterns in order to acquire the necessary information. In one demonstration, the system read a playing card facing away from the camera; the technology could eventually find use in the special effects industry, where it could be employed to re-light movie scenes and realistically splice actors and computer graphics together. A long-term goal of this research is the generation of a photorealistic virtual reality training environment, not unlike the "Star Trek" holodeck. "In the card experiment, the camera cannot see the card directly, but it can see the surface of the book [behind the card]; the light from the projector bounces off the card, then bounces off the book and hits the camera," explains Stanford electrical engineering researcher Pradeep Sen. Among the technical challenges the system needed to address were finding a way to rapidly measure pixel changes, handling noise and accommodating black pixels, and compensating for the camera's low dynamic range. The research, which will be presented at ACM's upcoming SIGGRAPH conference, was sponsored by the Defense Advanced Research Projects Agency, the National Science Foundation, Nvidia, and Germany's Max Planck Institute.
    Click Here to View Full Article
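    The relationship the system exploits can be summarized as a light-transport matrix from projector pixels to camera pixels; by reciprocity, the transpose of that matrix describes light traveling the opposite way, yielding the scene as seen from the projector's viewpoint. A minimal numpy sketch of the idea, using a small random synthetic matrix rather than measured scene data:

```python
import numpy as np

# Sketch of the dual-photography relationship: if T maps projector
# illumination to camera measurements, then T's transpose maps a
# hypothetical "camera-side" illumination to what the projector's
# viewpoint would see. T here is random synthetic data, not a scene.

rng = np.random.default_rng(0)
n_proj, n_cam = 16, 12                  # projector / camera pixel counts
T = rng.random((n_cam, n_proj))         # measured light-transport matrix

p = rng.random(n_proj)                  # a projector illumination pattern
primal = T @ p                          # what the camera records

c = rng.random(n_cam)                   # illumination from the camera side
dual = T.T @ c                          # the scene from the projector's view
```

    The adaptive black-and-white patterns described in the article are, in this framing, a strategy for estimating T with far fewer measurements than one pattern per projector pixel.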

    For more information on SIGGRAPH, or to register, visit http://www.siggraph.org/s2005/.

  • "EU Highlights Grid Computing as 'Pay-as-You-Use Solution' for Europe"
    PublicTechnology.net (06/01/05); Reding, Viviane

    Grid technology adoption in European commercial and public sectors will boost that region's competitiveness and quality of life by the end of the decade, says European Commission Information Society head Viviane Reding. European researchers at CERN and other institutions are leading grid technology development just as they paved the way for widespread adoption of the World Wide Web in the 1990s. Like the Web, grid technology will have an enormous impact on business and society because it will make massive computing resources available to more end users. Information and communications technology, especially grid technology, has been identified as key to achieving EU goals such as the Lisbon Strategy and the 7th EU Framework program, but Europe currently lags in IT research spending. The European Council in 2002 set a goal of raising IT research spending to 3 percent of GDP by 2010, up from 1.9 percent of GDP. To better direct research efforts, the GridCoord project was initiated to coordinate national and EU grid research efforts, while public-private partnerships have also been formed to more closely align research with business goals and therefore spur greater industry adoption of grid technology. Service-oriented architecture is another critical aspect in encouraging commercial grid adoption because it fits with the overall shift from selling products to providing on-demand services. Grid technology is already delivering real benefits in Europe: Small video-production firms, for example, can now tap grid computing to operate virtual 3D studios in post-production. Car manufacturers are also using grids to provide holistic design, simulation, and testing tools that can be easily accessed by people throughout the design chain, including suppliers, while grids are also playing a role in natural disaster response and crisis management.
    Click Here to View Full Article

  • "Maryland Adopts Computer-Recycling Fee"
    Washington Post (06/01/05) P. D4; ElBoghdady, Dina

    Several U.S. states are attempting to deal with the mounting problem of electronic waste by implementing computer-recycling programs, and Maryland is the latest participant with Gov. Robert Ehrlich Jr.'s (R) recent signing of a bill requiring computer makers producing over 1,000 computers annually to pay an initial recycling fee of $5,000 by Jan. 1 of next year. Manufacturers will then have to pay just $500 in subsequent years, provided they have "takeback" programs that let consumers return products for recycling free of charge. California, on the other hand, charges consumers a $6 to $10 disposal fee on every computer or television set purchased, while Maine requires manufacturers to pick up the entire recycling tab for their products, and to partially cover the recycling cost for products of unknown origin. Computer manufacturers are relieved that Maryland opted for a much less burdensome e-waste recycling scheme, while environmentalists say the measure is a positive step forward. Computer makers have the option of paying the $5,000 fee, recycling the products themselves, or outsourcing recycling to a third party. However, the state has not yet determined how many companies qualify for the recycling program, and therefore has no idea of how much money it will raise; an initial estimate of $400,000 was probably excessively optimistic, according to officials. Panasonic's Mark Sharp is skeptical that Maryland will raise enough money to cover the recycling costs, arguing that a much better long-term solution is to incorporate a recycling fee into the purchase price.
    Click Here to View Full Article

  • "Java Sets Sail for the Final Frontier"
    VNUNet (05/25/05); Sanders, Tom

    Java creator and Sun Microsystems vice president James Gosling says the programming language will enable radical new applications as it becomes more widely deployed on the edge of the network. Java-coded sensors at the bottom of San Francisco Bay could be used to build predictive models, or sensors could be used for an airplane's control system, while Java-enabled devices at the network edge could also help financial services companies make small business loans to rural farmers in developing countries, or make health care more efficient by enabling effective global resource allocation. Gosling says he envisioned none of these applications when he created Java, but knew from experience that inventive people would use unrestrictive technology to do unexpected things. Another consideration in early Java development was the realization that the technology would support functions ordinary people use every day, and Gosling says the thought of his children using Java-dependent devices convinced him not to compromise on security or reliability. Still, Gosling says system testing needs to be less expensive and complex; he cites the example of the Federal Aviation Authority, which uses an extremely difficult and costly testing regime. Sun Microsystems focuses on managing the growing number of Java libraries and the virtual machine, ensuring reliability, performance, and scalability. Java needs to be able to deal with large multi-processor chips that are emerging, and part of that challenge means improved development tools; the Java Development Kit 5.1 makes instrumentation always available for debugging code when it is actively being deployed. Gosling predicts the WS-* protocols will be more important in an engineering sense.
    Click Here to View Full Article

  • "Setting the Stage for China's Tech Future"
    TheDeal.com (05/20/05); Swanson, K.C.

    Major technology vendors are stepping up research collaborations with Chinese universities in order to curry favor with government policymakers, tailor products for the local market, develop the local technology base, and tap local talent. IBM China Research Laboratory director James Yeh notes IBM has operations in China as well as India, Israel, Japan, and Switzerland because it wants to capture innovations coming from many different places. Another benefit is access to top Chinese academics, who often sit on government policy panels and serve as senior advisers with more influence than university professors have in the West; companies that develop relationships with these people benefit from knowing government stances on topics. The size of university collaborations is growing and intellectual property has been protected, with chief scientists taking responsibility for any leaks as in the United States. Some significant collaborations include Microsoft's work with five different universities, each of which focuses on different technology aspects. Tsinghua University's Microsoft partnership deals with multimedia and networking while the partnership with Zhejiang University involves computer vision and graphics, for instance. IBM China university relations manager Qiu Xiaoping notes that companies involved with budding Chinese technologists have an advantaged position in the burgeoning Chinese market; they are able to introduce technology products and shape curricula. Chinese researchers tend to think differently than their U.S. counterparts do, and students are given less creative freedom, but State University of New York provost Denis Simon says it is essential for multinational companies to capitalize on global talent pools.
    Click Here to View Full Article

  • "'Patient Zero' Pinpointed in PC-Worm Outbreak"
    New Scientist (05/27/05); Knight, Will

    University of California, Berkeley, researchers Vern Paxson and Nicholas Weaver and Georgia Institute of Technology researcher Abhishek Kumar detail in a paper how they applied a "telescope analysis" method to reconstruct the propagation trail of the "Witty worm" epidemic of March 2004 and find the first PC, or "patient zero," responsible for kick-starting the outbreak. Telescope analysis monitors parts of the Internet that are typically inactive, but which receive data packets when malware starts randomly generating traffic. "Within the overwhelming mass of observed data lies a very structured process that can be deciphered and understood, if studied with the correct model," the researchers write. By analyzing the Witty worm's code, the researchers discovered that it produced new network addresses to target via a "pseudo random number generator," and from this data they precisely calculated which addresses were viable targets. Patient zero was identified as the earliest machine to be manually infected through analysis of traffic data compiled by the network telescopes. The researchers think this strategy could be used to help law enforcers unmask the culprits responsible for spreading malware across the Internet. Symantec Security Response chief researcher Eric Chien says Paxson, Weaver, and Kumar's work could be valuable, but notes that worm authors will usually launch an infection using another computer, while sorting through the volume of data collated by telescope analysis could be an arduous process.
    Click Here to View Full Article
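    The replay step hinges on the fact that a pseudo-random number generator is deterministic: anyone who knows the algorithm and a state value can regenerate exactly which addresses a given host scanned. A minimal sketch using a linear congruential generator; the constants below are a common textbook family chosen for illustration, not necessarily the Witty worm's actual parameters.

```python
# Deterministic target generation of the kind the researchers exploited:
# a linear congruential generator (LCG) emits a fully reproducible
# pseudo-random sequence of 32-bit values, here standing in for the
# addresses a worm would scan. Illustrative constants, not Witty's.

M = 2**32

def lcg(state):
    """One LCG step: state' = (a * state + c) mod m."""
    return (214013 * state + 2531011) % M

def targets(seed, n):
    """Replay the first n pseudo-random 'addresses' from a given seed."""
    out, s = [], seed
    for _ in range(n):
        s = lcg(s)
        out.append(s)
    return out

# Two replays from the same seed are identical, which is what lets an
# observer reconstruct a host's scan sequence from partial observations.
assert targets(12345, 5) == targets(12345, 5)
```

    In the actual analysis, the researchers inverted the worm's generator from the packets a telescope happened to observe, recovering each infected host's internal state rather than starting from a known seed.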

  • "LinuxWorld Summit: Professor Predicts Open Source Revolution"
    SearchEnterpriseLinux.com (05/31/05); Loftus, Jack

    In the final keynote panel at this year's LinuxWorld Summit, Free Software Foundation general counsel and Columbia University professor Eben Moglen told attendees that the SourceForge.net open source software development Web site is an outstanding resource for free computer-science research and information. He noted that typical SourceForge contributors are IT professionals with a decade's worth of experience, who devote between 10 and 11 hours to a selected project every week. Moglen said the results of his own research project revealed that the open source community is an effective vehicle for research, which supported his conclusion that software development will undergo a dramatic transformation by 2010. He predicted that by then, "the real work of industrial software [companies will be] editing existing open source software from the outpouring of millions of minds, and weeding out what really matters to provide a product to the software customer." Moglen pointed to the existence of software purists who are firmly opposed to the slightest modification of code that cancels its ability to be freely distributed. "So not only is a large part of the community doing QA work...but a large part of the community is doing your legal review as well," he said, adding that these services cost nothing. Moglen also argued that there should be fewer open source software licenses, and projected that licenses will branch along two different paths: Permissive licenses, where the developer acknowledges the software's copyright but can use it as he wishes, and "copyleft" licenses, where a portion of the software is developed under the General Public License and then incorporated into proprietary software.
    Click Here to View Full Article

  • "Machines' Way With Words"
    BBC News (05/28/05); Evans, Stephen

    Customer support and other services that employ voice recognition systems are benefiting from research to determine the most effective kinds of voices that automated agents should possess. Stanford University professor Clifford Nass of the Laboratory for Communication Between Humans and Interactive Media conducts tests designed to ascertain how people react to voices of different accents, ages, and tones. The tests generally show that people feel male voices are more persuasive, energetic, authoritarian, and sincere than female voices, although male voices are more likely to antagonize people. Nass has also learned that annoyed motorists are more likely to be calmed down by a subdued voice on the navigation system. Conversely, low-key voices with a steady, unchanging pitch can aggravate calm drivers. Nass is developing a system where the machine's voice changes in accordance with how the user addresses it. To talk, machines stitch together recordings of words and syllables spoken by a human actor into coherent sentences, according to the machine's interpretation of the human speaker's questions or statements. Machines' ability to recognize more accents and variations is improving all the time, as is their ability to talk back without sounding mechanical.
    Click Here to View Full Article

  • "AI Draws Stanford Students"
    Wired News (05/31/05); Poulsen, Kevin

    Student test subjects at Stanford University's Virtual Human Interaction Lab responded favorably to digital avatars that mimicked the subjects as they pitched a notional university security policy. Half of the sessions, which were conducted in a 3D virtual-reality environment, involved avatars programmed to precisely imitate the students' head movements with a four-second delay, while the other half used programs whose head movements were recorded from earlier subjects. Only eight out of 69 students detected the mimicry, while the rest preferred the mimicking agents to the recorded agents, which they rated as friendlier, more honest, persuasive, and interesting; not only did the mimicking agents hold the students' attention more, but they were also more likely to sell the students on their viewpoint. Virtual Human Interaction Lab director Jeremy Bailenson says test subjects "across the board" found the mimicking agents to be more convincing, and notes that such mimics could be a powerful influence in virtual worlds, where their subliminal imitation could be applied to many people at once. In an earlier experiment, the lab had test subjects complete a survey of how they felt toward the presidential candidates while viewing side-by-side photos of Bush and Kerry--but one-third saw photos with their own faces morphed with Bush's, another one-third had their faces morphed with Kerry's, and the last third saw unaltered pictures. Bailenson says Bush won by 15 points among the first third, Kerry won by 6 percent among the second third, and the control group favored Bush by roughly the same three-point margin as the national election; none of the test subjects detected the doctored pictures.
    Click Here to View Full Article
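    The four-second mimicry delay described above amounts to a time-shifted buffer of tracked head poses. A minimal sketch under invented data structures; the lab's actual tracking pipeline is not described in the article.

```python
from collections import deque

# Sketch of a mimicry delay line: the avatar replays the subject's head
# pose from DELAY seconds earlier. Timestamps and the pose format here
# are synthetic, chosen only to illustrate the buffering.

DELAY = 4.0

class MimicBuffer:
    def __init__(self):
        self.samples = deque()           # (timestamp, head_pose) pairs

    def record(self, t, pose):
        self.samples.append((t, pose))

    def avatar_pose(self, now):
        """Return the newest recorded pose at least DELAY seconds old."""
        pose = None
        while self.samples and self.samples[0][0] <= now - DELAY:
            _, pose = self.samples.popleft()
        return pose                      # None until DELAY has elapsed

buf = MimicBuffer()
for t in range(10):                      # one synthetic sample per second
    buf.record(t, {"yaw": t * 2.0})
print(buf.avatar_pose(9.0))              # the pose recorded at t = 5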

  • "Standardizing Middleware: More Than Meets the Eye"
    Wireless Week (05/15/05) Vol. 11, No. 11, P. 28; Brown, Karen

    Standardizing middleware could simplify the creation of mobile business products for wireless systems developers and enable business owners to more easily adopt mobile applications, but not all wireless systems providers agree that such an approach is viable. Opponents claim a uniform middleware standard cannot accommodate the numerous business computer systems and wireless gadgets involved, nor can it account for the fact that many organizations still rely on custom products to align with their own mobility needs. Sun Microsystems is working to establish Java as a uniform middleware standard, given the wide adoption of Java application standards; the Java 2 Platform, Enterprise Edition (J2EE) spec manages application servers, while the Java 2 Platform, Micro Edition (J2ME) spec supplies mobile device plug-ins. However, Sun's Ken Drachnik admits that both standards are relative newcomers, and their wide adoption will take time. JP Mobile's Lori Williams says relying on Java as a middleware standard may be harder than people think: For one thing, mobile products must be designed to operate regardless of whether the device has access to a wireless link, although Drachnik claims the J2ME spec can accommodate unconnected devices. Meanwhile, the Integration Consortium has organized a wireless subcommittee to create a standard middleware platform that integrates with the wireless system, monitors and manages the devices plugging into the network, handles security, facilitates the addition or modification of applications by providing development plugs, and supplies the client software on the devices to interact with the overall network and manage data. And IBM is seeking to expand its Websphere enterprise computer system platform into the wireless domain by calibrating new wireless services systems with Websphere and employing standards-based application programming interfaces.
    Click Here to View Full Article

  • "NASA Looks Beyond Wheels for Rover Locomotion"
    Aviation Week & Space Technology (05/30/05) Vol. 162, No. 22, P. 48; Morring Jr., Frank

    NASA engineers are working on small robotic vehicles for planetary exploration whose capabilities should surpass those of wheeled rovers. Under development is the prototype of a tetrahedral rover that uses a system of telescoping struts in a pyramid configuration to shift its locomotion to a slithering or tank-tread-like mode, depending on what the terrain calls for. Such locomotion could one day propel autonomous nanotechnology swarms (ANTS) that could assemble themselves into a nearly infinite number of shapes in order to better traverse various surfaces and repair any damage they sustain. The software for such swarms is still under development, but principal investigator Steven Curtis is confident that this goal is reachable. The ANTS project calls for a robotic swarm to be driven by a synthetic neural system that has a dual-level operational scheme: One level is reserved for autonomic functions, while the other supports higher-intelligence operations such as environmental perception and response. "We combined both those functionalities in what we call a neural basis function, and, by adding up a bunch of those with specialized capabilities, you can get almost arbitrarily complicated behavior," Curtis says. Clusters of hundreds of Beowulf processors are being used to initially model the neural system, and Curtis predicts that a single Beowulf cluster--which currently fills an entire room--should fit on a computer chip within two decades. This will allow autonomous tetrahedral nodes to reconnect in numerous ways and operate collaboratively in many functional scenarios.
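    The "adding up" of neural basis functions that Curtis describes can be sketched as a weighted sum of specialized behaviors, each mapping rover state to a motion command. The behaviors, weights, and state fields below are invented for illustration; the actual ANTS control software is far richer.

```python
# Sketch of combining specialized behaviors into one command by weighted
# summation, the composition idea behind Curtis's "neural basis
# function" description. All names and numbers here are hypothetical.

def avoid_obstacle(state):
    """Steer away in proportion to how close an obstacle is."""
    return -2.0 * state["obstacle_proximity"]

def seek_goal(state):
    """Steer toward the goal direction."""
    return 1.0 * state["goal_direction"]

BASIS = [(avoid_obstacle, 0.7), (seek_goal, 0.3)]

def combined_command(state):
    """Weighted sum of basis behaviors yields one steering command."""
    return sum(w * f(state) for f, w in BASIS)

state = {"obstacle_proximity": 0.5, "goal_direction": 1.0}
print(combined_command(state))   # 0.7 * (-1.0) + 0.3 * 1.0 = -0.4
```

    Adding more basis behaviors, or tuning the weights, changes the aggregate behavior without rewriting any individual behavior, which is what makes "almost arbitrarily complicated behavior" reachable by composition.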

  • "Privacy Matters"
    Washington Technology (05/23/05) Vol. 20, No. 10, P. 1; Lipowicz, Alice

    Privacy advocates' growing push to surround the collection, storage, and sharing of personal information with stronger protective measures has raised expectations of a legislative mandate for more rigorous controls over personal information. However, it remains uncertain how the government plans to balance the often antagonistic goals of privacy rights and national security. "The question is: How do you do what you need to do while minimizing the damage to civil liberties and rights?" says consultant Ramon Barquin. Better data security alone does not adequately address privacy concerns, which have been key factors in the delay, reassessment, or cancellation of high-profile anti-terrorism projects such as the Transportation Security Administration's CAPPS II airline passenger screening initiative, the Pentagon's Total Information Awareness data mining program, and the Justice Department's Terrorist Information and Prevention System. Homeland Security officials insist that their department's privacy office has stepped up efforts to address privacy issues earlier; DHS Privacy Officer Nuala O'Connor Kelly earned some credibility with a report on certain improprieties of TSA staff during the early development of CAPPS II that probably helped hasten the program's termination, yet many say her office does not carry sufficient clout. "The chief privacy officer needs the independence and adequate authority to properly evaluate the privacy concerns of the department, outside political pressures," noted the House Homeland Security Committee's Rep. Bennie Thompson (D-Miss.) last month. Congress is mulling a batch of proposals to reduce ID theft while strengthening privacy protections, including the establishment of a national privacy and civil rights oversight board.
    Click Here to View Full Article

    For more on privacy, security, and public policy, visit http://www.acm.org/usacm.

  • "Embedded-System Programmers Must Learn the Fundamentals"
    EDN Magazine (05/26/05) Vol. 50, No. 11, P. 91; Hyde, Randall

    Software programmers need to start taking code optimization seriously as Moore's Law yields smaller performance gains, writes consultant Randall Hyde, author of "Write Great Code: Understanding the Machine." Though Moore's Law is likely to continue increasing transistor density, those gains are becoming more and more difficult to translate into added performance: New chips today feature 15 percent higher clock frequencies, compared with 200 percent increases seen with new chip generations 10 years ago, for instance; this means embedded-system programmers need to learn machine organization and other fundamentals in order to optimize their code for performance. Too often, university classes that deal with machine organization and assembly-language programming are passed off as irrelevant, and students come away with a dependency on abstract programming languages such as Java. A large part of the problem is the misconception that optimization is bad, an idea that stems from the misinterpretation of British computing pioneer Sir Tony Hoare's dictum that "premature optimization is the root of all evil"; taken in context, the quote refers to small efficiencies that are not worth spending time on. However, Charles Cook says system-level bottlenecks should be of concern for all system designers. To write more efficient code, programmers do not have to resort to tweaking designs in assembly language, but simply should apply their understanding of low-level computer-system operation when writing high-level code; they should know the implications of different high-level source statements, for instance. Companies can encourage optimized programming by suggesting assembly language skills for positions and by pairing new programmers with experienced veterans. Another way to promote efficient code is to eliminate the culture that depends on continued hardware performance gains.
    Click Here to View Full Article
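    Hyde's prescription, applying knowledge of low-level operation while writing high-level code, holds even outside embedded C. One illustration in Python: because CPython strings are immutable, building a long string with repeated concatenation can copy the growing buffer over and over, while a single join allocates once from known sizes.

```python
# Knowing the runtime's data representation changes how you write a hot
# loop. Repeated "+=" on strings is roughly quadratic in the worst case
# (each append may copy everything accumulated so far), while
# "".join() makes one pass and one allocation.

def build_slow(parts):
    s = ""
    for p in parts:                 # each += may copy the whole buffer
        s += p
    return s

def build_fast(parts):
    return "".join(parts)           # one allocation, one pass

parts = [str(i) for i in range(1000)]
assert build_slow(parts) == build_fast(parts)
```

    The high-level source statements look interchangeable; only an understanding of what the runtime does underneath reveals that one scales and the other does not.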

  • "Hacker Hunters"
    BusinessWeek (05/30/05) No. 3935, P. 74; Grow, Brian; Bush, Jason

    To counter the growing threat of professional, profit-driven cyber-criminals, enforcement agents or "hacker hunters" are combining the latest cybercrime deterrents with traditional tactics such as infiltration and the Internet equivalent of wire-tapping to topple and successfully prosecute online crime rings. The need to prevent cybercrime has never been more crucial, as the damage caused by hackers is growing steadily worse, while enforcement agencies are underfunded and underequipped. The urgency of the situation has not only helped cultivate smarter federal, state, and local agencies, but greater collaboration between them; in addition, cybercrime legislation is being pursued more aggressively. The highly publicized takedown of the ShadowCrew hacker gang by the Secret Service is a case study in how both the nature of cybercrime and anti-cybercrime strategy are changing. ShadowCrew's suspected ringleaders allegedly ran shadowcrew.com as an international clearinghouse for stolen credit cards and identity documents, and the gang reportedly had 4,000 members worldwide: Two people administered the Web site and recruited members; "moderators" hosted online forums where members could share tips on hacking and ID theft; "reviewers" obtained and tested merchandise; and "vendors" bought and sold on the site, mostly through online auctions. The Secret Service enlisted an insider to act as an informant, created and used a gateway to locate gang members, and coordinated an international crackdown on ShadowCrew by state and local police and authorities in six foreign countries. The biggest obstacle law enforcement faces in curbing cybercrime is its worldwide scope. Countries with weak hacking laws and flimsy enforcement are havens for cyber-criminals, who can also tangle up the trail for investigators by keeping servers in a separate country.
    Click Here to View Full Article

  • "Designing for the Virtual Interactive Classroom"
    Campus Technology (05/05) Vol. 18, No. 9, P. 20; Boettcher, Judith V.

    Online learning offers flexibility and convenience, but synchronous online collaboration tools can support richer and more spontaneous, natural, and efficient interaction. Designing for Learning founder Judith Boettcher recommends that educators choose at least two to three synchronous collaboration tools for the various applications that students, faculty, and staff will quickly find for synchronous, real-time interaction. She writes that faculty should design online and blended courses to support three synchronous scenarios--small group meetings of two to six people, interactive class meetings of 10 to 30 people, and large class meetings of more than 100 people. Small group meetings such as tutorials and study groups usually involve a combination of live audio and video feeds, and tools for such events should support basic capabilities such as application sharing, text chat, voice-over-IP chat, meeting documentation, and "presence" software. Interactive class meetings involve geographically scattered participants in situations that may feature several video streams and numerous audio streams; synchronous tools for this scenario must provide the capabilities required in small group meetings, along with interoperability, ease of use, excellent customer support, and the ability to pilot a tool before committing to it. Tools for large class meetings feature software that delivers a high-bandwidth video downstream and audio channels from participants, and supports interactive communication with large groups of students with the aid of an assistant who can filter and sequence questions. Microphones can be of particular benefit to both small group and large class meetings.
    Click Here to View Full Article
