ACM TechNews sponsored by AutoChoice Advisor: Looking for a NEW vehicle? Discover which ones are right for you from over 250 different makes and models. Your unbiased list of vehicles is based on your preferences and years of consumer input.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to [email protected].
Volume 6, Issue 668:  Wednesday, July 14, 2004

  • "E-Voting Backup Is Demanded"
    Salt Lake Tribune (07/14/04); Burr, Thomas

    Some 23 "Computer Ate My Vote" rallies were held across the United States on July 13, with advocates calling for the deployment of voter-verifiable paper ballots, arguing that electronic-voting systems lack the security needed to ensure accurate recounts. In Salt Lake City, Utah election officials were given a 2,000-signature petition calling for new e-voting systems to be backed up with a paper trail. The local rally drew 50 people, including former ACM President Barbara Simons, who advised state election officials not to rush into purchasing e-voting equipment because "there are other technologies coming." Last week, officials opened the bidding for vendors to compete for $20.5 million in federal funds to provide new voting technology, although the bid request does not stipulate that the systems have to be electronic. Democratic senatorial candidate Paul Van Dam said at the Utah rally that it is vital for election officials to guarantee the security of the voting system; "Let's not allow democracy to be hacked," he entreated. In related news, the Associated Press reported that over 100 "Computer Ate My Vote" activists in Maryland demanded that printers be installed in voting machines before the presidential election, while some 20,000 petitions for paper trails were delivered to Florida's Division of Elections. Florida officials countered that touchscreen systems offer more accuracy than older voting systems such as punch cards. The "Computer Ate My Vote" campaign was organized by VerifiedVoting.org, which estimated that rallies were held in 19 U.S. states.
    Click Here to View Full Article
    Barbara Simons is co-chair of ACM's U.S. Public Policy Committee

  • "Hacktivism and How It Got Here"
    Wired News (07/14/04); Delio, Michelle

    The term "hacktivism" was not coined until 1998, when several members of the Cult of the Dead Cow (cDc) hacker organization held an online discussion of how hacking could be used to promote political freedom in China after the Tiananmen Square incident. Professor Ronald Deibert of the University of Toronto's Citizen Lab explains, "The combination of hacking in the traditional sense of the term--not accepting technologies at face value, opening them up, understanding how they work beneath the surface, and exploring the limits and constraints they impose on human communications--and social and political activism is a potent combination and precisely the recipe I advocate to students and use to guide my own research activities." He adds that increasing numbers of mainstream human rights activists and major foundations are embracing hacktivism, and singles out cDc in particular for its often irreverent, ethical, and ingenious tactics. The cDc leverages the section of the UN Declaration of Human Rights stating that freedom of opinion and expression without interference and through any media is a universal human right. Oxblood Ruffin, a member of cDc, says the group has been establishing relationships with grass-roots and traditional human rights organizations. One cDc group, Hacktivismo, has devised tools that permit people to access and exchange information marked as undesirable by their government. Patrick Ball, who directs human rights programs at the nonprofit Benetech, says "hacktivism is an opportunity for engaged young programmers to do cool and socially beneficial stuff with their technical skill and curiosity--instead of getting in trouble."
    Click Here to View Full Article

  • "Computer, Heal Thyself"
    Salon.com (07/12/04); Williams, Sam

    Berkeley researcher and ACM President David Patterson and Stanford scientist Armando Fox's Recovery Oriented Computing (ROC) project focuses on the design of computer systems that can rapidly bounce back from malfunctions. The initiative is just one of many "autonomic computing" projects that are sweeping academic and corporate research facilities. Fox says modern systems are plagued with software bugs that programmers have had to contend with since "the beginning of time," and he and Stanford doctoral student George Candea have co-authored a series of papers that probe "micro-rebooting," a strategy in which system managers simply reboot the malfunctioning elements of a computing network; Candea says this approach often fixes the bug faster than tracking down and correcting the root cause. He and Fox have also devised recursive restartability, a preventative maintenance process whereby an automated network manager reboots each branch of a network's node tree, while Candea is focusing on the integration of micro-rebooting and fault injection, a strategy he calls crash-only computing. The doctoral student has created a Java applications server split into a management element that periodically queries the software system and looks for any indications of bad data, and a monitoring element that assesses the error path and malfunctioning component and triggers a micro-reboot. The National Science Foundation has funded University of Virginia researcher David Evans' project, which mimics biological systems more closely by having modules in a software network communicate in a manner modeled after chemical diffusion. Each module is programmed to construct and maintain a 3D superstructure, after which various modules are exposed to destructive data and purged from the system when they fail; the network is designed to replace the lost modules by tapping a distributed memory or "signal" of each component's position and function.
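    The micro-rebooting idea described above can be sketched as a supervisor that restarts only the failed modules of a service while everything else keeps running. This is an illustrative sketch, not code from the ROC project; the component names are invented:

```python
import time

class Component:
    """Toy stand-in for one module of a larger service."""
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.start_time = time.time()

    def reboot(self):
        # Restart only this component; the rest of the system keeps running.
        self.healthy = True
        self.start_time = time.time()

def micro_reboot_cycle(components):
    """Reboot each unhealthy component instead of restarting the whole system."""
    rebooted = []
    for comp in components:
        if not comp.healthy:
            comp.reboot()
            rebooted.append(comp.name)
    return rebooted

# Simulate a fault in one module of a three-module service.
service = [Component("auth"), Component("cart"), Component("search")]
service[1].healthy = False
print(micro_reboot_cycle(service))  # restarts only the failed "cart" module
```

    The appeal of the approach is visible even in this toy: recovery time is proportional to the size of the failed module, not of the whole system.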

  • "Erecting Borders on the Web"
    Philadelphia Inquirer (07/11/04) P. E3; Jesdanun, Anick

    Improving geolocation technology is enabling companies to divide the Web and control its accessibility by detecting users' whereabouts with greater accuracy, but privacy advocate Jason Catlett is most concerned that the technology will encourage Web sites to deceive visitors. "The technical possibilities do allow a company to be two-faced or even 20-faced, based on who they think is visiting," he warns. RealNetworks recently started offering soccer games and movies restricted to certain nations, and Art.com's Web site was coded to automatically present pricing in dollars for American visitors and euros for German visitors. The erection of artificial borders in the Web is generally the province of commercial companies rather than governments, but Alan Davidson with the Center for Democracy and Technology is worried that governments will attempt to use geolocation technology for staking digital real estate where their laws apply. Supporters of geolocation insist that the technology will increase the Internet's reach to a global audience: This means more efficient traffic distribution for ISPs such as AOL, the construction of demographic profiles by integrating census data with zip codes, and more customizable news, advertisements, and other content. Sports leagues and movie studios can use geolocation to distribute content that would otherwise be banned online because copyright and licensing laws can vary between countries. Leading geolocation companies claim their products are 80 percent accurate or higher for city-level data and 99 percent for country targeting, although troublesome addresses are usually excluded. AOL is still problematic, as are anonymizing services that mask a user's true location and ID.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)
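    Geolocation lookups of the kind the article describes typically work by binary-searching a table that maps IP address ranges to countries, then branching on the result (for pricing, licensing, and so on). A minimal sketch; the three-entry range table here is invented for illustration, whereas commercial databases hold millions of entries:

```python
import bisect
import ipaddress

# Hypothetical IP-range-to-country table: (first address of range as int, country).
RANGES = [
    (int(ipaddress.ip_address("3.0.0.0")), "US"),
    (int(ipaddress.ip_address("78.0.0.0")), "DE"),
    (int(ipaddress.ip_address("200.0.0.0")), "BR"),
]
STARTS = [start for start, _ in RANGES]
CURRENCY = {"US": "USD", "DE": "EUR", "BR": "BRL"}

def country_for(ip):
    """Binary-search the table for the last range starting at or before ip."""
    n = int(ipaddress.ip_address(ip))
    i = bisect.bisect_right(STARTS, n) - 1
    return RANGES[i][1] if i >= 0 else None

# A visitor from a German address would be shown prices in euros.
visitor = country_for("78.46.1.1")
print(visitor, CURRENCY[visitor])
```

    The accuracy figures quoted in the article reflect how well such tables track the real-world assignment of address blocks, which is exactly why AOL's proxied traffic and anonymizing services defeat them.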

  • "'Virtual Clay' Brings the Act of Sculpting to the Virtual World"
    UB News Services (07/12/04); Contrada, John Della

    A virtual clay sculpting system invented by researchers at the University of Buffalo's Virtual Reality Lab will give product designers or artists the power to sense 3D virtual objects through touch, and shape and manipulate them in real time as if they were actual material, according to lab director Thenkurussi Kesavadas. The researchers developed a "ModelGlove" that records the exertion of force by the sculptor's hand when manipulating a block of clay, and this information, along with hand position and fingertip movement speed, is sent to a PC where a virtual slab of clay is shaped accordingly. The ModelGlove currently sports one tactile sensor on the tip of the index finger, which is represented on the computer screen as either a sharp tool for poking holes, a medium-sized implement for measuring or molding clay, or a large diameter tool for rough surface shaping; a next-generation ModelGlove will give users complete finger control of the virtual clay by embedding sensors in all the fingers as well as the palm. The UB researchers say the system builds on freeform Nonuniform Rational B-spline (NURBS) surface modeling methods, which they describe as the only technology that can transfer tactile sensation directly from the user's hand to the electronic shape. "Our technology is far more intuitive than click-and-drag virtual prototyping tools currently in use," Kesavadas boasts. He posits that virtual counterparts for any manual instrument used by artists, even a chisel or potter's wheel, can be created. "Most virtual-reality technologies to this point have focused on 3D visualization, but the sense of touch may be the most powerful way to make virtual reality more real," says Kesavadas. The UB Virtual Reality Lab is home to some of the leading haptic technology researchers, who are focusing on other touch system applications such as surgical enhancement.
    Click Here to View Full Article

  • "Messenger Taps Social Nets"
    Technology Research News (07/21/04); Patch, Kimberly

    University of Michigan researcher Jun Zhang noticed that when he queried friends and colleagues via instant messaging and email to get answers to questions, they would often defer to one of their own friends or colleagues; with funding from the National Science Foundation and Intel, Zhang has automated this process with the Small-World Instant Messaging System (SWIM). Zhang notes that SWIM supports a more complex user profile than the majority of instant messengers: The system not only permits a user to manually enter his expertise and interests, but can build a keyword vector to represent the user's information identity by automatically mining his homepage and browser bookmarks, while the information-querying process is automatically coordinated by a referral agent. The referral agent receives the user's question and relays it to all of the user's friends' agents, and an agent in each friend's messenger checks its information identity profile to see if that person can answer the question. When a probable match is determined, that person sees the question and the route it has followed, and can either reply immediately or later if he is too busy. Cornell University computer science professor Jon Kleinberg says that Zhang's system must meet two key challenges: locating people with relevant information, which is accomplished by embedding navigational methods into the software; and motivating people to participate, a harder challenge because the further along a social network an inquiry travels, the less familiar the inquirer is with the people the system refers to for answers. Zhang says SWIM is one component of a larger project to see how productivity is impacted by information streams. SWIM was detailed at the Association for Computing Machinery's Computer-Human Interaction (CHI) 2004 conference.
    Click Here to View Full Article
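    The matching step an agent performs, comparing a question against each contact's keyword-vector "information identity," can be approximated with a cosine-similarity comparison over bags of words. This is an illustrative sketch, not SWIM's actual code; the profile contents are invented:

```python
import math
from collections import Counter

def vector(text):
    """Bag-of-words keyword vector, the kind SWIM might mine from a homepage."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two keyword vectors (0.0 = no overlap)."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(question, profiles):
    """Pick the contact whose information identity best matches the question."""
    q = vector(question)
    return max(profiles, key=lambda name: cosine(q, profiles[name]))

profiles = {
    "alice": vector("java compilers virtual machines garbage collection"),
    "bob": vector("wireless networks routing protocols sensors"),
}
print(route("who knows about garbage collection in java", profiles))
```

    In the real system this comparison runs on the friend's side against that friend's private profile, so the inquirer never sees the profile itself, only the referral.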

  • "UW Professor 'Knows It All' About Search Engines"
    UW Daily Online (07/14/04); Campbell, Taj

    KnowItAll is a search engine currently under development by professor Oren Etzioni of the University of Washington-Seattle's computer science and engineering department that seeks out data on the Web and arranges it in a list. Etzioni explains that "KnowItAll extracts information on your behalf, so that you can be interested in broader questions without having to narrow lists." The search engine takes an input noun and looks for sentences on Web sites containing the noun while also seeking words that often come after it, so that the results are not nonsensical. KnowItAll, which is being funded by the National Science Foundation and Google, was inspired by MIT's Open Mind artificial intelligence project and Etzioni's own MULDER project. Open Mind is supposed to learn as volunteers type sentences into a Web interface, but Etzioni figured that such a program could be run more efficiently. MULDER was designed as a reverse query question-and-answer system, and Etzioni playfully refers to KnowItAll as "MULDER on steroids." The search engine's creator doubts that KnowItAll will be publicly released in its current incarnation, since the code was not intended for a professional application when it was written. Etzioni believes the project's greatest contribution will be the papers and ideas it inspires.
    Click Here to View Full Article
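    KnowItAll's technique of pairing an input noun with the words that commonly surround it resembles lexical pattern extraction, which can be sketched roughly as follows. This is an illustrative approximation, not Etzioni's code, and it omits the statistical step KnowItAll uses to assess candidates:

```python
import re

def extract_instances(noun, text):
    """Find candidate members of a class via a 'NOUN such as X, Y and Z' pattern."""
    pattern = re.compile(
        re.escape(noun) + r"\s+such as\s+((?:[A-Z][\w-]*(?:,\s*|\s+and\s+)?)+)"
    )
    instances = []
    for match in pattern.finditer(text):
        # Split the captured list on commas and "and" to get individual names.
        for name in re.split(r",\s*|\s+and\s+", match.group(1)):
            if name:
                instances.append(name.strip())
    return instances

text = ("Many tourists visit cities such as Paris, Rome and Berlin every year. "
        "Other cities are quieter.")
print(extract_instances("cities", text))
```

    Run over millions of Web sentences rather than one paragraph, even a handful of such patterns yields long candidate lists, which is why a later assessment step is needed to weed out nonsensical matches.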

  • "France Lends Support to New Open-Source License"
    IDG News Service (07/09/04); Sayer, Peter

    French government researchers have created the first in a family of open-source software licenses they say will better fit the needs of the French legal system than do current U.S.-centric open-source licenses. The CeCILL license serves the same purpose as the Free Software Foundation's GNU General Public License (GPL), but differs in the areas of copyright and product liability to serve the needs of French law. Unlike in the United States, where developers can deny any responsibility for their software products, French developers must be held accountable. CeCILL tries to reconcile not-for-profit open-source development with this responsibility by stating that licensed products are meant only for knowledgeable users and that developers do take some responsibility. National Research Institute for Computing and Automation (INRIA) development and industrial partnerships head Gerard Giraudon says this claim will reassure users. French software is covered by artistic and literary copyright, not by commercial intellectual property, and that fact influenced another variation in CeCILL; software that incorporates CeCILL code must also adopt that license, similar to how GPL "contaminates" software incorporating it. But the French researchers drafting CeCILL specifically allowed GPL software to incorporate CeCILL software code without having to adopt that license. The researchers, who come from three government-funded research institutions, also plan to release other French versions of open-source licenses, including the LGPL (Lesser GPL) and BSD license, to allow for different combinations of open-source and proprietary code.
    Click Here to View Full Article

  • "Movie and Software File Sharing Overtakes Music"
    New Scientist (07/12/04); Knight, Will

    A July 12 report from the Organization for Economic Cooperation and Development (OECD) suggests that more video and software files than music files are being swapped over the Web, although audio files are still the most frequently traded kinds of files: 49 percent of the data traded globally via file sharing networks consisted of music files in 2003, compared to 62 percent the year before. The OECD report also finds that video and software file trading is more commonplace in Europe than the United States. The increase in the amount of traffic taken up by software and video is partly due to the larger size of such files. The OECD report draws no line between trading freely available content and illegally sharing copyrighted material, but the film industry is clearly worried: A separate study released by the Motion Picture Association of America (MPAA) on July 13 alleges that online movie piracy has proliferated widely, as 25 percent of American Internet users interviewed admitted to illegally downloading a movie online. The trend was characterized by the MPAA as a "growing global epidemic" that cost movie studios $6.3 billion in 2003. Sandvine, an American firm that supplies network monitoring services for ISPs, also reports increased movie swapping on trading networks, and European marketing manager Chris Coleman attributes this growth to the spread of broadband Internet connections, noting that the size of movie files makes them a bigger headache for ISPs than music files. Meanwhile, a Business Software Alliance report estimates that 36 percent of all software installed on computers globally has been copied unlawfully, and contends that file sharing networks are the channel through which most of this copying takes place.
    Click Here to View Full Article

  • "We, Robots"
    SiliconValley.com (07/12/04); Poletti, Therese

    Researchers at Stanford Research Institute (SRI) International's Artificial Intelligence Center are working on robots that can swim, run, jump, and climb using artificial muscles, which are thought to be less computationally taxing and more efficient than mechanical limbs. These muscles consist of hollow rolls of electroactive polymers, while the machines that use these polymers are modeled after insects: The eight-legged Flex resembles a spider both in form and function, while its 10-year-old predecessor, Hex, boasts six legs. A snake-like robot developed by SRI could be used to explore tight spaces and repair pipelines, while the more advanced Centibots can map out their surroundings with Web cameras and laser scanners and communicate with each other via onboard wireless networks. The Centibot project was underwritten by the Defense Advanced Research Projects Agency, and the machines have proven their viability for reconnaissance and security operations. The diminutive, wheeled Centibots are built from commercially available parts and use the Linux operating system as their software component. Roboticists' most formidable challenge is developing the AI software needed to enable robots to mimic the human brain. "The robots we have nowadays are really not that adaptable," observes Reid Simmons of Carnegie Mellon University's Robotics Institute. "Most robotic systems don't actively learn from their experiences."
    Click Here to View Full Article

  • "New Tools to Study Forest Fires, Traffic Jams, & Other Problems"
    Newswise (07/09/04)

    Researchers at Carleton University in Ottawa are refining the simulation of complex physical systems such as forest fires and traffic flow. A new toolkit developed at the university takes bulky software code used in another application and shrinks it, allowing new parameters to be factored in easily. Carleton researchers built upon work done at the University of Arizona, which is a key academic research partner with Carleton, by applying a higher-level modeling language to roughly 50 pages of code, allowing users to address the problem directly instead of worrying about the specifics of how the code worked. Carleton systems and computer engineering professor Gabriel Wainer says the new toolkit makes the fire simulation program more effective: "Practically, someone could add a couple of lines that introduces different factors such as 'What if it is raining?' or 'What if a firefighter is placed here at this point and time?'" he says. In order to determine whether a town is in danger or where best to deploy firefighters, users need to be able to quickly evaluate different scenarios, factoring in wind speed, wind direction, terrain, slope, and firefighter resources. The University of Arizona software is founded on work done by USDA Forest Service researcher R.C. Rothermel in 1972, whom Wainer describes as the "godfather" of fire simulation research. The simulation research can also be applied to other fields, including the management of traffic flow using programmable traffic signals, or in finding optimal configurations in ad hoc wireless networks.
    Click Here to View Full Article
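    Fire-spread simulators of this kind are commonly built on cell-based models, where adding a factor such as rain really can be a couple of lines. A toy forest-fire automaton (an illustrative sketch, not the Carleton toolkit; the spread probabilities are invented) shows the idea:

```python
import random

TREE, FIRE, ASH = "T", "F", "."

def step(grid, rain=False):
    """One timestep: fire turns to ash and may spread to adjacent trees."""
    rows, cols = len(grid), len(grid[0])
    spread_prob = 0.4 if rain else 0.8  # the "what if it is raining?" knob
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == FIRE:
                new[r][c] = ASH
                # Try to ignite each of the four neighboring cells.
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == TREE:
                        if random.random() < spread_prob:
                            new[rr][cc] = FIRE
    return new

random.seed(1)
grid = [[TREE] * 5 for _ in range(5)]
grid[2][2] = FIRE  # ignite the center cell
for _ in range(4):
    grid = step(grid, rain=True)
print("\n".join("".join(row) for row in grid))
```

    Wind, slope, or a firefighter placed at a given cell would each be another local rule on `spread_prob`, which is precisely the kind of extension Wainer describes.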

  • "Framing the Reuse of Digital Cultural Heritage"
    IST Results (07/14/04)

    The European Museums' Information Institute Distributed Content Framework (EMII-DCF) project, funded by the Information Society Technologies program, set out to determine the needs of digital content users and to clarify the requirements surrounding intellectual property rights and unified technical standards. The result is a set of guidelines intended to maintain interoperability and safeguard digital cultural resources while making the processes for generating and capitalizing on cultural content simpler and more accessible. Many museums are reluctant to distribute digital copies of material that researchers need, for instance, because they fear they will lose control over the material, or that the researchers will need special software to access the content. EMII-DCF was designed to address such concerns, explains Gordon McKenna of Britain's Museum Documentation Association, the EMII-DCF project leader. EMII was established as a virtual network linking major EU cultural institutions that promotes and shares best practices and the use of information management standards among EU member states and kindred nations. Intellectual property law has been adjusted and altered to deal with the increasing ease and low cost of copying digital material. EMII-DCF guidelines suggest a three-step strategy: the auditing of cultural resources to ascertain copyright holders; negotiation with copyright holders to secure permission to use the resources; and management and protection of intellectual property rights through the documentation and updating of the above data. Advised measures to secure copyrighted material include the display of statements about terms of use, digital encryption, low-resolution images, password protection, Web-casting, and watermarking. McKenna says the DCF "has made considerable steps in guiding content holders through the maze of standards and sets out a model that...will ensure that digital resources are easily accessible by everyone."
    Click Here to View Full Article

  • "Mind Power: Scientist Turns Thoughts Into Actions, Literally"
    Boston Globe (07/13/04); Goldberg, Carey

    Researchers were impressed with the results of an experiment conducted by Brigham and Women's Hospital scientist Seung-Schik Yoo in which people moved a computer cursor through a maze by their thoughts, which were read by a powerful MRI scanner. The scanner picks up changes in blood oxygen levels in various areas of the brain triggered by specific thoughts, and translates them into cursor movements: For example, thinking the calculation "50-3=47" causes the cursor to go up, while envisioning the tapping of right-hand fingers induces a move to the right. "This is a new use of a very new technology that is a baby step toward mind-reading, toward being able to read out the cognitive functions of human beings in real time, noninvasively," boasted Stanford University researcher Christopher deCharms. He said he hoped Yoo's breakthrough will help lay the foundation for inexpensive, faster, and improved mind-directed technology for paralyzed patients. Yoo thinks that this method can be used to detect 20 or more classes of thought, which would enable a patient to operate a virtual typewriter. The process is not simple--each cursor movement eats up 12 MB of data, and completing the maze takes about 30 minutes. Duke University brain imaging researcher James T. Voyvodic doubts that Yoo's process will seriously compete with other brain-computer interface techniques given how slow, costly, and inconvenient it is. Yoo noted that his method could be used to enhance other brain-computer interfaces: It could, for instance, help researchers find the best places to implant or attach electrodes that tap into the brain's electrical impulses.
    Click Here to View Full Article

  • "Social Lives of a Cell Phone"
    Technology Review (07/12/04); Bender, Eric

    A number of researchers around the world are taking advantage of the ubiquity and capabilities of mobile phones: The free Dodgeball service that has already taken hold in New York City is poised to spread to nine other U.S. locales, enabling people to move the type of social networking programs they use on desktops to their mobile phones, says co-founder Dennis Crowley. Dodgeball lets people query the central database for a list of friends or friends of friends who are within a 10-block radius, as well as profiles of those people; users can send messages to specific people or an entire group, but Crowley says Dodgeball is building in privacy protections such as the ability to make a person invisible to another user. In Japan, more than 500,000 people have already subscribed to a location-enabled social networking application called Imahima, or "Are you free now?" Many other mobile phone applications are gaining grassroots adoption, such as mobile weblogs and sharing snapshots taken on a mobile phone. In Europe, people looking for anonymous sexual encounters are using the Bluetooth detection feature on their phones to discreetly contact interested persons nearby. Microsoft is developing more structured, and more useful, mobile phone services by integrating a bar code scanner that enables people to retrieve information on products, including Web search results and comments left by other people about the same item. The Aura project would revolutionize shopping, for example, but would also pose serious threats to individual privacy since frequent users would leave a digital trail of their daily activity. Microsoft is also working with University of Michigan researcher Paul Resnick on a carpooling scheme that lets mobile phone owners see who on their list of trusted contacts is in the vicinity and could give them a ride.
    Click Here to View Full Article

  • "Computer Design Approach Scraps Quest for Perfection"
    EE Times (07/12/04) No. 1329, P. 41; Brown, Chappell

    Computer scientists in California are working to make computer systems more resilient to failure so that users will not be significantly affected by glitches. The Recovery-Oriented Computing (ROC) effort takes a much different approach to computer science than traditional reliability efforts because it does not aim for the perfect operation of computer systems. Because increasingly complex computers will experience more and more failures, the researchers from the University of California, Berkeley, and Stanford University say the shift in mindset is necessary. In past decades, computer hardware designers have focused on increasing the operating speed of their machines by roughly 4 percent per month, in keeping with Moore's Law, and software engineers have had to deal with significantly more software code in each product iteration. Reduced instruction-set computer (RISC) inventor David Patterson is working on the ROC project and says the RISC concept was also intended to mitigate computer failure; because minicomputers at the time dealt with so many bugs in microcode, Patterson sought to reduce the instruction set and the chance for faulty code. Patterson, currently serving as ACM president, later began work on blade-style computers for large computing operations in order to reduce the cost of maintaining those systems. He says many organizations are increasing their maintenance costs even while they reap cost and performance gains. Stanford researcher George Candea is also working on the ROC project and has developed software fuses and system parameters that prevent computers from performing unpredictable operations and possibly crashing. ROC work also includes partitioning systems so that "micro-reboots" can be performed on a single module without seriously disrupting the entire system.
    Click Here to View Full Article

  • "No Where to Hide"
    PC Magazine (07/13/04); Cohen, Alan

    The spread of effective and cost-efficient database technology means that more and more personal information is becoming available, which can be both a benefit and a hazard: On the one hand, technologies that keep tabs on us can increase the convenience of our daily lives, but at the same time they raise fears of privacy infringement, identity theft, and other kinds of exploitation. These worries are reinforced by analysis showing that the United States lags far behind Europe, Canada, and Japan in terms of strong privacy safeguards. The usefulness and convenience of technologies such as radio-frequency identification has bred a reluctance to restrict them, but falling prices are hastening their proliferation, which could lead to rampant data exploitation without effective oversight. "The key to privacy is that users need to be in control of their data," argues Center for Democracy and Technology associate director Ari Schwartz. Electronic Frontier Foundation staff attorney Lee Tien comments that the law does not ascribe any privacy value to people's movements: "The way they see it, location-tracking technology is the equivalent of having a lot of innocent bystanders see what you're doing," he explains. Having personal information accessible to businesses and potential criminals may seem bad enough, but federal data-mining initiatives supposedly designed to promote homeland security have provoked fears of Orwellian monitoring and the targeting of innocents. Supporting the advantages of such technologies while addressing privacy concerns is a formidable challenge, one that is currently hampered by a lack of privacy rules to accompany the rapid rollout of systems such as the CAPPS II high-risk air traveler screening system. The U.S. supports a patchwork of individual privacy statutes rather than the all-in-one model adopted by Europe, but the incorporation of privacy by organizations that develop technical standards offers a ray of hope.
    Click Here to View Full Article

  • "The Coming Robot Revolution"
    Computerworld (07/12/04) Vol. 32, No. 28, P. 26; Mearian, Lucas

    In this article a panel of experts discusses how robot technology will continue to progress, and the technological difficulties that need to be surmounted. Chuck Thorpe, director of Carnegie Mellon University's Robotics Institute, says it is a tricky proposition to get robots to perform seemingly simple domestic chores such as making a bed as well as people, and he thinks a bipedal, humanoid domestic robot will remain a staple of science fiction for several decades. Thorpe says the institute is focusing on research into robots that are very similar to humans, and robots that are better than humans; Robotics Institute projects include search-and-rescue robots, machines that can remove paint off ships, snake-like robots that can move through tight spaces, and teams of robots playing soccer. "Where we've had our greatest success is when we let the robots be robots and do what they're best at and let humans be humans and do what they're best at," Thorpe remarks. He predicts that within the next two decades or so robots will be used as surgical assistants and at-home caretakers for the elderly, and replace humans in unpleasant, mundane jobs. MobileRobots.com CEO Jeanne Dietsch says robots can help IT, and uses her company's PatrolBot as an example: The machine is essentially a wheeled computer with temperature sensors that can model the heat in the facility it patrols in three dimensions, and is directed by the computer or a person to check on problems. She explains that her company is moving toward the development of robots that function outdoors. University of Pennsylvania professor Vijay Kumar believes the entertainment sector offers the biggest opportunity for robots, but he also foresees a time when robots will fight battles to keep human casualties to a minimum.
    Click Here to View Full Article

  • "Redefining Radio With Software"
    Electronic Business (06/04) Vol. 30, No. 6, P. 58; Testa, Bridget Mintz

    Non-interoperability between radios is a staggering problem for the U.S. military and an even tougher dilemma for thousands of local public safety agencies lacking the benefit of central organization. Software-defined radio (SDR) is being pursued by both military contractors and electronics companies as a broad-spectrum, multi-protocol solution that employs software or software-driven silicon such as field programmable gate arrays (FPGAs) and digital signal processing (DSP) chips. The U.S. military's SDR overhaul of its communications infrastructure represents a lucrative opportunity for the semiconductor industry, according to Venture Development analyst Chad Hart. David Squires of Xilinx's DSP Center of Excellence reports that the worldwide civilian SDR market could be equal to the international military SDR market, which should be worth twice as much as the U.S. military market. Semico Research's Bob Merritt notes that third-generation mobile phone technology could greatly benefit from SDR, which would allow more processing power to be bundled into phones without increasing size, weight, power consumption, or cost. Forward Concepts founder Will Strauss says the U.S. military is the only SDR market right now, while commercial wireless SDR rollouts will probably move faster than those for the fragmented public safety market. The commercial wireless market is focusing on two products, base stations and handsets, with base stations receiving greater attention from companies because they are easier to make software-defined. Solving the formidable technical challenges of SDR will hinge on the development of a dynamically reconfigurable chip and on figuring out how dynamic configuration capabilities should be apportioned between the software and the silicon.
    Click Here to View Full Article
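    The core idea behind SDR, that the same digitized signal can be demodulated under different protocols simply by swapping software, can be illustrated with a deliberately minimal sketch. This is a toy example (not any vendor's implementation, and real SDRs push this work onto FPGAs or DSP chips): the same kind of complex baseband sample stream is treated as AM or FM purely by choosing a different demodulation routine.

    ```python
    import numpy as np

    def am_demod(iq):
        """Envelope detector: in AM, the message rides on the amplitude."""
        env = np.abs(iq)
        return env - env.mean()  # strip the DC carrier level

    def fm_demod(iq):
        """Phase discriminator: in FM, the message rides on the
        instantaneous frequency, i.e., the sample-to-sample phase change."""
        return np.angle(iq[1:] * np.conj(iq[:-1]))

    # Synthesize a 1 kHz test tone sampled at 48 kHz.
    fs = 48_000
    t = np.arange(fs) / fs
    msg = np.sin(2 * np.pi * 1_000 * t)

    # Same notional front end, two waveforms -- the "protocol" is software:
    am_iq = (1.0 + 0.5 * msg) * np.exp(2j * np.pi * 5_000 * t)  # AM, 5 kHz offset
    fm_iq = np.exp(2j * np.pi * np.cumsum(2_000 * msg) / fs)    # FM, 2 kHz deviation
    ```

    Here `am_demod(am_iq)` and `fm_demod(fm_iq)` each recover a scaled copy of `msg`; changing waveforms requires no hardware change, only a different function, which is the premise that makes a single SDR radio multi-protocol.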

  • "MDA: Revenge of the Modelers or UML Utopia?"
    Software (06/04) Vol. 21, No. 3, P. 15; Thomas, Dave

    Bedarra Research Labs and OpenAugment Consortium co-founder Dave Thomas writes that model-driven agile development can be an effective software development technique, although he is uncertain about the Model Driven Architecture (MDA) proposed by the Object Management Group (OMG); Thomas argues that embracing or rejecting the MDA without appropriate analysis and assessment by the broader software community is folly. The author outlines the good and bad aspects of the Unified Modeling Language: Its primary benefit is the provision of a common and practical visual notation for characterizing many software artifacts employed in current object-oriented analysis, design, and development, while its drawbacks include a paltry effort by the OMG to establish tool interoperability, and an especially poor language architecture attributable to apathetic vendors. Thomas notes that model-driven development treats models as partial descriptions that people turn into programs via tools, and points out that agile development stresses the important role people play in both the modeling and transformation processes. New concepts can be easily added with metamodels, but there is no guarantee that the concepts will be semantically sound, or that different metamodels will be orthogonal or uniform. Successful MDA tooling is impossible without platform-independent models (PIMs) and platform-specific models (PSMs) for target platforms such as J2EE, J2ME, MS .NET Windows and Compact Frameworks, and Linux Apache. It is relatively simple to build PIMs and PSMs for restricted behaviors, but building solid PSMs for J2EE, .NET, and similar platforms is immensely difficult, given the massive number of poorly aligned or inadequately documented application programming interfaces, not to mention the huge amount of middleware and enterprise software that lacks any comprehensive model.
    Click Here to View Full Article
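    The PIM/PSM distinction at the heart of MDA can be made concrete with a deliberately tiny sketch. Everything here is hypothetical, a dictionary standing in for a real model and two trivial "model compilers" standing in for OMG tooling, but it shows the shape of the idea: one platform-independent description, multiple platform-specific artifacts generated from it.

    ```python
    # A platform-independent model (PIM): pure structure, no platform detail.
    pim = {"entity": "Customer", "fields": [("name", "string"), ("age", "int")]}

    JAVA_TYPES = {"string": "String", "int": "int"}
    SQL_TYPES = {"string": "VARCHAR(255)", "int": "INTEGER"}

    def to_java(model):
        """PIM -> PSM transformation targeting a Java class."""
        fields = "\n".join(
            f"    private {JAVA_TYPES[t]} {name};" for name, t in model["fields"]
        )
        return f"public class {model['entity']} {{\n{fields}\n}}"

    def to_sql(model):
        """PIM -> PSM transformation targeting a relational schema."""
        cols = ",\n".join(f"    {name} {SQL_TYPES[t]}" for name, t in model["fields"])
        return f"CREATE TABLE {model['entity']} (\n{cols}\n);"
    ```

    Thomas's caution fits this picture: the transformations above are easy precisely because the behavior is restricted to data layout; a PSM generator that must target the full breadth of J2EE or .NET APIs is a vastly harder proposition.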

  • "ACM Professional Development Centre Adds Online Books Program"
    ACM (7/15/04) Francis, Caryn

    ACM's Professional Development Centre has introduced a new online books service powered by Books24x7. Student and professional members now have free, unlimited access to 395 online volumes selected from the ITPro Collection. Members can look for books to help with their PD Centre online courses or to read up on new subjects. Topic areas include C++, C#, ASP, SQL, PHP, Java, Linux, .NET, Visual Basic, data structures, data mining, networking, security, and Web design. This virtual library can be read, bookmarked, and searched by keyword, author, title, ISBN, or publisher. A personalized bookshelf allows for quick retrieval; members can view as many books as they like, as often as they like. An upgrade to the full ITPro Collection of over 3,000 volumes will be available soon at a special ACM rate of $249.
