
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of ACM. To send comments, please write to [email protected].
Volume 7, Issue 769:  Wednesday, March 23, 2005

  • "Faster XML Ahead?"
    CNet (03/23/05); LaMonica, Martin

    The World Wide Web Consortium (W3C) Advisory Committee and director are set to decide on a committee recommendation for a binary XML standard, and if the proposal is approved, a vote on a binary XML standard could occur this summer; if formed soon after, a working group could take up to three years to complete the specification. Binary XML would bolster XML adoption in mobile communications and embedded computing, where the bulky text format eats up battery power in cell phones and slows critical embedded applications in Air Force jets, for example. In February, W3C meeting attendees argued against the creation of a new standard, saying existing binary XML implementations were sufficient or that other measures could solve the problem without introducing a questionable specification. Current XML performance is not so bad and complaints about XML processing speed are similar to early complaints about the World Wide Web being too slow, says Iona Technologies CTO Eric Newcomer. Among the options for making text-based XML faster is completely rewriting the parser programs used to process XML data, and Sun Microsystems is working on a Fast InfoSet project that would reportedly accelerate XML anywhere between two and 10 times. There are more than a dozen industry-specific binary XML efforts already in use or in development, and a W3C standard might not sufficiently meet the needs of those applications, says Microsoft SQL Server database program manager Michael Rys. Another concern is that a binary XML standard would not be widely adopted; Rys notes that XML 1.1 has not met expectations and that Microsoft has not yet supported the specification because of backwards-compatibility fears.
    Click Here to View Full Article
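
    To make the trade-off concrete, the sketch below (a toy illustration, not the Fast InfoSet format or any W3C proposal) encodes the same few readings as text XML and as a simple length-prefixed binary record stream; the binary form is smaller and can be decoded with fixed-format unpacking rather than character-by-character parsing, which is the kind of saving binary XML advocates are after.

        # Toy comparison of text XML vs. a simple binary encoding (illustrative only;
        # real binary XML formats such as Fast Infoset use tokenized vocabularies).
        import struct
        import xml.etree.ElementTree as ET

        readings = [("temp", 21.5), ("temp", 22.1), ("temp", 21.9)]

        # Text XML: every element spelled out as characters.
        root = ET.Element("readings")
        for name, value in readings:
            ET.SubElement(root, name).text = str(value)
        text_xml = ET.tostring(root)

        # Toy binary form: a record count followed by fixed-size (tag id, float) records.
        TAG_IDS = {"temp": 1}
        binary = struct.pack("!H", len(readings))
        for name, value in readings:
            binary += struct.pack("!Bf", TAG_IDS[name], value)

        print(len(text_xml), "bytes as text XML")
        print(len(binary), "bytes in the toy binary encoding")

        # Decoding the binary form is a fixed-format unpack, with no text parsing.
        (count,) = struct.unpack_from("!H", binary, 0)
        decoded = [struct.unpack_from("!Bf", binary, 2 + i * 5) for i in range(count)]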

  • "Supersmart Security"
    Computerworld (03/21/05) P. 46; Anthes, Gary H.

    University of California-Berkeley computer science professor and ACM President David Patterson describes computer security problems as "glaring" because security measures follow an outdated prevention-oriented rather than repair-oriented model. The measures are also unreliable in detecting threats, since they are generally designed to identify established rather than novel threats. However, a number of prototype and commercial security products offer a glimmer of hope. Sana Security's Primary Response intrusion-prevention software, modeled after biological immune systems, uses software agents to profile an application's routine behavior, monitors the program's execution for signs of deviation, and responds to aberrations by blocking system call executions; Sana says the software can identify and permit valid code changes because it is continually learning. University of New Mexico researchers are developing Randomized Instruction Set Emulation, a tool inspired by biological diversity that makes each system unique--and less vulnerable to intrusion--by randomly modifying some code so that malware would have a harder time spreading. Another University of New Mexico project, Responsive Input/Output Throttling, imitates biological defenses by combining distinct defense mechanisms: It throttles the rate at which a machine links to other computers, and blends this technique with agents that learn the normal behavior of specific combinations of users, machines, and applications to add flexibility. Some computer security experts are developing systems that can mitigate the effects of an attack while keeping systems operational. Patterson and colleagues in Berkeley's recovery-oriented computing project use logic to monitor processes and trigger fast "microreboots" at signs of trouble to prevent system crashes.
    Click Here to View Full Article
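
    The general idea can be sketched in a few lines. The code below is a minimal, hypothetical illustration of immune-system-style anomaly detection on system-call traces (in the spirit of the UNM work and of products like Primary Response, not their actual algorithms): learn the short call sequences seen during normal runs, then flag execution windows that contain too many previously unseen sequences.

        # Minimal sketch of system-call anomaly detection: profile short call
        # sequences observed during training, then score new traces by the
        # fraction of sequences never seen before. Threshold is a tuning choice.
        from collections import deque

        def ngrams(calls, n=3):
            window = deque(maxlen=n)
            for call in calls:
                window.append(call)
                if len(window) == n:
                    yield tuple(window)

        def build_profile(training_traces, n=3):
            profile = set()
            for trace in training_traces:
                profile.update(ngrams(trace, n))
            return profile

        def anomaly_score(trace, profile, n=3):
            grams = list(ngrams(trace, n))
            unseen = sum(1 for g in grams if g not in profile)
            return unseen / len(grams) if grams else 0.0

        normal = [["open", "read", "write", "close"], ["open", "read", "close"]]
        profile = build_profile(normal)
        suspect = ["open", "read", "exec", "socket", "write", "close"]
        if anomaly_score(suspect, profile) > 0.5:
            print("block: call pattern deviates from learned behavior")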

  • "Tool Turns English Into Code"
    Technology Research News (03/20/05); Patch, Kimberly

    MIT researcher Hugo Liu believes the Metafor program for translating natural-language descriptions of software into scaffolding code--a program "skeleton"--could be practically applied to software brainstorming in less than two years; later applications could include a programming teaching tool for kids and enhanced storytelling. Liu explains that Metafor renders the innate structure of English as a fundamental programmatic architecture of class objects, properties, functions, and if-then rules. "The basic ingredients for the program are there--the noun phrases are objects, the verbs are functions, [and] the adjectives are object properties," he notes. MIT researchers crafted a parser that deconstructs text into subject, verb, and object roles, and programming semantics software that maps data from these English-language constructs to basic code structures; this data is then used for the real-time interpretation of skeleton code in any of seven programming languages. Metafor displays four panels to the user: one for entering sentences, and three showing, respectively, canned dialogue that verifies the user's statement, debugging data, and the programming-language version of the code. Running a mouse over a code object produces a pop-up window displaying an English explanation of the code, says Liu. The researchers studied the performance of groups of intermediate and beginning programmers using Metafor, and found that the program accelerated brainstorming by about 10 percent for intermediates and 22 percent for beginners. Liu observes that Metafor encouraged programmers to use simple, declarative language, which in turn made the system more effective for brainstorming and outlining.
    Click Here to View Full Article
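
    As a rough illustration of the noun-to-object, verb-to-function mapping Liu describes, the toy sketch below (not Metafor's parser; it assumes the English has already been reduced to subject-verb-object triples) emits a Python class skeleton from a couple of parsed sentences.

        # Toy version of the noun->object, verb->function mapping described above.
        # Assumes parsing is already done; Metafor's real parser and semantics
        # are far more sophisticated.
        def scaffold(triples):
            classes = {}
            for subject, verb, obj in triples:
                methods = classes.setdefault(subject.capitalize(), [])
                methods.append(f"{verb}_{obj}")
            lines = []
            for name, methods in classes.items():
                lines.append(f"class {name}:")
                for m in methods:
                    lines.append(f"    def {m}(self):")
                    lines.append("        pass")
                lines.append("")
            return "\n".join(lines)

        # "The bartender makes drinks" / "The bartender checks ids"
        print(scaffold([("bartender", "make", "drink"),
                        ("bartender", "check", "id")]))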

  • "A Tool Box for Building Real-Time Embedded Systems"
    IST Results (03/22/05)

    IST's OMEGA project has made a stride toward the development of real-time embedded systems by integrating cutting-edge validation techniques and tools into the Unified Modeling Language (UML), which in turn spurred the creation of a tool box for simulating and analyzing UML models and checking them against their properties to study usability, scalability, and effectiveness in embedded real-time software. The OMEGA UML profile and verification tool box was employed in a number of feasibility case studies carried out in industrial settings. One case study was the National Aerospace Laboratory's proposed Medium Altitude Reconnaissance System, which offsets image-quality defects induced by an aircraft's forward motion. Another was a flight control mechanism that deploys "sensor voting" and "sensors monitoring" operations in a typical flight control system, the product of a collaboration involving Israel Aircraft Industries. "Generally there was a lot of enthusiasm from the industrial users who provided the case studies and who used both commercial UML CASE tools as well as the tools developed in the project," notes OMEGA project coordinator Susanne Graf. She says that because no UML CASE tool is completely aligned with the standard, the profile sometimes had to be adapted to the current tools rather than to the desired features. Graf also admits that another obstacle the project has yet to surmount is "the difficulty to motivate tool providers to spend money and effort on integration of tools that are appreciated by only a small number of clients." The OMEGA partners currently face the challenges of improving existing tool features and fully incorporating UML 2.0.
    Click Here to View Full Article

  • "Computers Gain Power, But It's Not What You Think"
    Chicago Tribune (03/20/05); Van, John

    Computers are no match for a four-year-old human when it comes to intelligence, but they can appear to operate intelligently thanks to increasing computing power that enables pattern recognition. Northwestern University professor Kristian Hammond has rejected the common view of intelligence shared by artificial intelligence researchers, which subscribes to a model in which people have a clear idea of what they are thinking. "Our model is that there's never a clear idea; often it's just a collection of ideas in a context," explains Hammond, who co-founded Intellext, a company that markets pattern recognition software that can find and flag relevant information by contextualizing terms without knowing their meaning. NICE Systems, a New Jersey-based firm that monitors call center conversations, has developed software that tracks the emotions of callers and recognizes specific words, such as "cancel." The software can relieve clients of the burden of unnecessary callbacks and other problems that slow down call center operations, while simultaneously facilitating more timely responses to callers who are truly in need of assistance. IBM senior research manager Steve White says a truly intelligent computer must be modeled after the human brain. He says, "They're very slow as computing elements, but their connections make them very powerful and intelligent. We might build a different kind of computer modeled after that." Redwood Neuroscience Institute founder and author Jeff Hawkins is working on such a machine, which he expects to be ready by 2010. In his book, "On Intelligence," Hawkins theorizes that genuinely intelligent machines will be able to perform tasks beyond the abilities of both humans and contemporary computers, rather than merely execute functions humans already perform.
    Click Here to View Full Article

  • "How the Problem of Sight Could Help Servers"
    Extreme Tech (03/22/05); Hachman, Mark

    A paper presented by IBM researchers at the International Conference on Adaptive and Natural Computing Algorithms on March 22 details their attempt to learn "the fundamental principles of brain functions...and operate technology to solve problems in much the same way systems solve them" by abstracting the neurons and axons of the neural cortex, according to researcher James Kozloski. The researchers focused on the brain's process for parsing, characterizing, and recognizing an object. Lead author Charles Peck notes that an object such as a box is deconstructed into elemental traits in an attempt to ascertain what the eye is viewing. The researchers studied how the interactions of minicolumns--the basic unit of the brain's computational power--would apply to an artificial neural network, and found that the network assigns each minicolumn node the job of scanning for the presence of a vertical or horizontal edge, with perhaps another set of minicolumns assigned to look for the presence of any edge within a specific space. Once that is done, the minicolumn or node must attempt to reassemble the image from its component characteristics, a subject not covered by the paper. The study, which feeds into IBM's "autonomic computing" effort to make computers capable of identifying and responding to changing conditions, also highlights a considerable gap between artificial neural networks and their biological counterparts. Peck says that both system managers and vertebrate organisms adapt to situations by learning the relationships between needs, actions, and environment.
    Click Here to View Full Article
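
    A bare-bones way to picture the division of labor Peck describes is to treat each "minicolumn" node as a small filter tuned to one edge orientation. The sketch below is purely illustrative (the IBM model is far richer and is not specified at this level in the article): two nodes scan a small patch, one responding to vertical edges and one to horizontal edges.

        # Illustrative only: each "node" is a filter that responds to one edge
        # orientation in a small grayscale patch.
        import numpy as np

        patch = np.array([[0, 0, 1, 1],      # a vertical light/dark boundary
                          [0, 0, 1, 1],
                          [0, 0, 1, 1],
                          [0, 0, 1, 1]], dtype=float)

        vertical_edge   = np.array([[-1, 1]])        # responds to left-right change
        horizontal_edge = np.array([[-1], [1]])      # responds to top-bottom change

        def response(image, kernel):
            kh, kw = kernel.shape
            h, w = image.shape
            total = 0.0
            for r in range(h - kh + 1):
                for c in range(w - kw + 1):
                    total += abs(np.sum(image[r:r+kh, c:c+kw] * kernel))
            return total

        print("vertical-edge node:  ", response(patch, vertical_edge))    # strong
        print("horizontal-edge node:", response(patch, horizontal_edge))  # zero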

  • "IBM Embraces Bold Method to Trap Spam"
    Wall Street Journal (03/22/05) P. B1; Forelle, Charles

    Efforts to block spam are getting more aggressive, as the fight moves from passive spam filters to counterattacking measures such as "teergrubing," in which spammers are trapped by tying up their servers. Although open-source counterattacking software has been available for a while, new products from IBM and Symantec have made the practice less problematic for corporate users. A new service from IBM that sends junk email directly back to the machine identified as the spammer is scheduled to debut on March 22. The system, which is based on IBM's FairUCE technology, scans incoming data packets bearing email and checks their point of origin against a continually updated database of established spamming machines, routing the data back to the sender if the source is in the database. The zealousness of the response is proportional to the amount of spam received. The system can also delay rather than unequivocally reject data packets originating from a computer that is probably but not definitely spamming. Symantec, meanwhile, released a product in January that uses "traffic shaping" to slow links from suspected spamming machines: Data streams that appear to be coming from a spammer are throttled down so that data moves slowly; Symantec's Carlin Wiegner says the product is designed to "slow [spammers] down so much that it is more interesting for them to spam some small business or some other country." Both IBM's and Symantec's products are geared toward large companies with enough email traffic to realize significant savings from less spam. The products do not break anti-hacking laws that criminalize unauthorized entry to a remote system, even to protect another system; but they can boost network traffic, which is generally unwanted. "Yes, we are adding more traffic to the network, but it is in an effort to cut down the longer-term traffic," argues IBM corporate security strategy director Stuart McIrvine.
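
    A stripped-down sketch of the tarpit and traffic-shaping ideas appears below; it is a hypothetical illustration, not IBM's FairUCE or Symantec's product. The handler rejects mail from known spam sources outright and answers suspected sources only after a delay that scales with a suspicion score.

        # Hypothetical tarpit / traffic-shaping sketch: reject known spammers,
        # slow down suspected ones so their delivery attempts become expensive.
        import time

        KNOWN_SPAMMERS = {"203.0.113.7"}          # example addresses (RFC 5737 ranges)
        SUSPECT_SCORES = {"198.51.100.9": 0.6}    # 0.0 = clean, 1.0 = certain spam

        def handle_connection(sender_ip):
            if sender_ip in KNOWN_SPAMMERS:
                return "554 rejected"             # bounce it straight back
            score = SUSPECT_SCORES.get(sender_ip, 0.0)
            if score > 0.0:
                time.sleep(10 * score)            # tarpit: delay scales with suspicion
            return "250 ok"

        print(handle_connection("203.0.113.7"))
        print(handle_connection("198.51.100.9"))  # answered, but after a 6-second delay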

  • "CERN Readies World's Biggest Science Grid"
    IDG News Service (03/21/05); Niccolai, James

    CERN engineers recently announced that more than 100 sites in 31 countries have joined a computing grid for storing and processing the massive volumes of data that will be generated by the Large Hadron Collider (LHC), effectively creating the largest international science grid in the world. The particle accelerator, which will go into operation in mid-2007, is expected to yield an estimated 15 petabytes (over 15 million gigabytes) of data annually, and the sites making up the grid have committed over 10,000 processors' worth of computational power, along with hundreds of millions of gigabytes in disk and tape storage. Project leader Les Robertson says the grid infrastructure "is not very specialized; we're just creating a virtual clustered system, but on a large scale and with a very large amount of data." CERN built the grid using the Globus Alliance's Globus Toolkit, scheduling software from the University of Wisconsin's Condor project, and tools created under the auspices of the European Union's DataGrid initiative. Much of the LHC data will reside in Oracle databases, and a small number of the sites involved in the grid employ commercial storage systems. Robertson notes that commercial tools are chiefly designed to construct enterprise grids that might more accurately be described as clusters, whereas grid computing as defined by the LHC project interconnects computing capacity across multiple sites so that everyone can access that capacity on an as-needed basis. In addition to middleware, CERN must develop an operational architecture that not only guarantees the availability of these resources when needed, but also gives participating institutions enough room to run other projects and applications. "The grid has got to be democratic, you can't have someone in the middle like a dictator," Robertson explains.
    Click Here to View Full Article

  • "Justices to Weigh Key Copyright Case"
    National Law Journal (03/21/05); Coyle, Marcia

    The U.S. Supreme Court is set to revisit the Sony Corp. of America v. Universal City Studios decision of 1984, in which technology vendors were deemed not liable for possible copyright infringement if their products had significant legitimate uses. The new MGM Studios v. Grokster case has drawn more than 50 friend-of-the-court briefs from government, the technology industry, and academia because the ruling will help shape copyright law in the Internet age. Lawyers have clashed over contributory liability and vicarious liability, the second of which entails financially benefiting from user copyright infringement. In the Sony case, the Supreme Court cleared Sony of contributory infringement because its Betamax VCRs could be used for substantial non-infringing purposes. Media industry lawyers say P2P software firms engineered their products so that they would not have control over user activity, but that copyright protection technology would be simple to implement. More sophisticated technology means finer lines should be drawn when deciding liability, says University of Richmond copyright scholar James Gibson. Other groups warn that entertainment industry demands would stifle technology innovation, and note that both the technology and entertainment industries have prospered since the Sony ruling, which Electronic Frontier Foundation counsel Cindy Cohn calls the technology industry's Magna Carta. The entertainment industry is trying to reverse the effects of the digital age and technologies such as P2P, but a better approach would be to develop new ways of compensating artists and copyright owners; "We need to find a different way to ensure artists get paid instead of trying to control copies," says Cohn. The Bush administration has filed an amicus brief supporting MGM, although Cohn says it does not support MGM's position on vicarious liability.
    Click Here to View Full Article

  • "Careers for Women in IT Is at Risk"
    MC Press Online (03/01/05); Stockwell, Thomas M.

    MC Press Online editor in chief Thomas Stockwell notes that women advanced in the workplace in general and in IT in particular up to 1996, when 41 percent of IT workers were female and pay scales for male and female IT professionals were nearing parity. However, the National Science Foundation estimates that the female IT workforce shrank 15 percent between 1996 and 2002, while the percentage of women receiving bachelor's degrees in computer science fell from 37 percent to 28 percent between 1985 and 2001. Caroline Slocock, CEO of Britain's Equal Opportunities Commission, reasons that this decline could be attributable to a scarcity of promotional prospects for entry-level female IT workers, as well as a lower pay scale. Stockwell sees IT outsourcing and H-1B visas also playing a significant role in the erosion of women in IT. Some analysts suggest companies could use the lower salaries of H-1B visa holders to circumvent published human resources guidelines for hiring IT professionals and compress the wages of medium-salary workers; these workers would disproportionately be women, who historically earn less than men in IT. Meanwhile, outsourcing is squeezing out workers caught between staffers with seniority and imported lower-wage earners, and analysts raise the possibility that more women than men are being laid off because of de facto "structural" discrimination. These trends have subsequently discouraged female college students from studying computer science and pursuing IT careers. Most analysts concur that management must not only understand what structural, cultural, and financial factors are responsible for creating these IT inequities, but also find ways to identify these factors before they can negatively impact other departments.
    Click Here to View Full Article

    For information on ACM's Committee on Women in Computing, visit http://www.acm.org/women.

  • "Decrypting the Future of Security"
    Globe and Mail (CAN) (03/18/05); Kirwan, Mary

    Lawyer, writer, and IT security expert Mary Kirwan notes that there was "universal agreement" among speakers and panelists at the recent RSA Security Conference that innovation is a fundamental component of IT, that security is important, and that something must be done to improve security; from there the debate over what to do devolved into a blame game in which most fingers were pointed at software vendors. Vendors, most clearly represented by a panel of lawyers, warned that imposing liability and subjecting them to government regulation would choke innovation and lead to higher prices, arguing that the burden of security belongs to users. One panelist disagreed, noting that customers are demanding better software licensing terms, as well as input into the code development lifecycle, greater transparency, and code escrowing in the event vendors are unavailable when customers need them. In-house Microsoft lawyers dominating the panel implied that the legal concept of "intervening criminal act" would spare vendors from being found guilty of negligence, and raised the possibility that consumers could be charged with contributory negligence. The audience, however, generally favored legislation mandating software quality assurance and liability for code development as long as it improved IT security and weeded out vaporware providers. Security guru Bruce Schneier, former U.S. cybersecurity czar Richard Clarke, former U.S. House representative Rick White, and ITAA President Harris Miller formed a panel debating software regulation. White and Miller, representing the industry, argued that government intervention is "highly undesirable," with Miller damning widely adopted European Union software security liability laws as globally out of touch. Clarke, meanwhile, reflected the attitude of many senior government officials who have lost patience with the IT industry.
    Click Here to View Full Article

  • "What Users Want"
    Builder AU (03/21/05); Yates, Ian

    Usability and user interface design are integral to software success and can be achieved through relatively simple means. User interface design consultants can usually work out the basic architecture of an application in three or four days, providing a framework for programmers to fill in and implement, says Performance Technology Group managing director Craig Errey; the University of Technology, Sydney, is working with Errey's company to determine the appropriate level of integration between computer science and usability disciplines. Major software vendors IBM and Microsoft devote significant research resources to increasing the usability of their software, but developers of mundane corporate applications tend to invest far less in usability. Sometimes usability has little effect on software's popularity, even in the mass market: The early version of Napster was terribly designed, but was popular because it provided a one-of-a-kind service. TASKey managing director Neil Miller says software applications that automate existing functions are usually straightforward to design, whereas applications that require users to learn new processes are more problematic. Simplicity and usability go hand in hand, and applications that contain only core functions are easier to use than feature-rich applications, says Miller. Red Square senior analyst Steve Baty, whose firm does Web development, says Web site developers should ask basic questions and assume users do not know anything about the company; Web site development, especially, requires at least some consideration of usability issues. Developers often wrongly imagine that usability work involves high costs and teams of specialists, says Baty.
    Click Here to View Full Article

  • "Let's Focus on the Theft, Not the Identity"
    Boston Globe (03/21/05) P. C3; Bray, Hiawatha

    Highly publicized cases of identity theft at database companies and universities are worrisome, but even more disturbing is research by Carnegie Mellon University associate computer science professor Latanya Sweeney, writes Hiawatha Bray. She created a set of programs that collects Social Security numbers via the Google search engine and then emails people to alert them about the exposure. Sweeney proposes a service called Internet Angel that would monitor the Internet for Social Security numbers, which are one component companies and institutions use to verify people's identities. Identity theft is too easy, and the supply of personal data will never be staunched completely, since every conceivable authentication method can be faked--even biometric authentication. Technologists, business leaders, and lawmakers are focusing on how to protect identities when a more effective approach would be to restrict what criminals can do with stolen identities, says computer security expert Bruce Schneier. With credit cards, banks and businesses take very few measures to ensure the security of the card number and instead monitor card activity using sophisticated computer programs; suspicious card activity triggers alerts, and the bank will call the cardholder to verify certain purchases. This system is not perfect, but it is easier than confirming identities. A "do-not-issue" proposal before Congress would similarly limit identity thieves' ability to open new charge accounts by requiring credit reporting agencies to get individuals' approval before releasing their information to banks and merchants. Instead of making it more difficult to steal identities, business and government should make it harder to exploit stolen identities, suggests Bray.
    Click Here to View Full Article
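
    The activity-monitoring approach Schneier favors can be illustrated with a deliberately crude rule: flag a charge that falls far outside a cardholder's usual spending or comes from an unfamiliar location. The sketch below is a toy example with made-up thresholds; real issuer systems use far more sophisticated statistical models.

        # Toy activity-monitoring rule: flag outlier amounts or unfamiliar locations,
        # rather than trying to prove identity up front. Thresholds are illustrative.
        from statistics import mean, stdev

        def is_suspicious(history_amounts, new_amount, new_country, usual_country):
            if new_country != usual_country:
                return True
            if len(history_amounts) >= 2:
                mu, sigma = mean(history_amounts), stdev(history_amounts)
                return new_amount > mu + 3 * (sigma if sigma > 0 else mu)
            return False

        history = [42.0, 18.5, 60.0, 25.0]
        print(is_suspicious(history, 55.0, "US", "US"))    # False: routine purchase
        print(is_suspicious(history, 2400.0, "US", "US"))  # True: amount is an outlier
        print(is_suspicious(history, 30.0, "RO", "US"))    # True: unfamiliar location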

  • "Behind the Digital Divide"
    Economist Technology Quarterly (03/05) Vol. 374, No. 8417, P. 22

    Establishing rural information and communications technology (ICT) centers in developing nations, ostensibly to help bridge the digital divide and improve the standard of living for impoverished people, has largely been sidelined in favor of initiatives with more obvious benefits, such as better medicine and sanitation in poor regions. Determining precisely how the Internet and other ICTs can benefit the world's poor is difficult because there is little actual data and few solid methodologies for compiling it; but obtaining rural residents' own views on ICTs can yield insights. Interviews with rural, illiterate, and low-caste Indian villagers reveal a general obliviousness to the existence of ICTs, and the complete absence of technology as a desirable development priority. People on the top rungs of the socioeconomic ladder--the young, the wealthy, the literate--derive the most benefit from rural ICTs: Knowledge Centers set up by the M.S. Swaminathan Research Foundation to serve Indian villages are frequently used by students to look up test scores, search for jobs, and build computer skills, while farmers with substantial assets use the centers to obtain crop prices and veterinary information. The value of ICT investments resides not so much in whether such investments can help people as in whether their general advantages exceed those of investments in medicine or education, for example. Indian Institute of Technology professor Ashok Jhunjhunwala says the "deciding factor" in determining whether the digital divide will ever be eliminated is cost, and he and several colleagues are developing inexpensive devices for accessing useful data and services. An argument could be made for linking older technologies such as radio and printed newsletters to Internet hubs for the purpose of disseminating important information to rural residents.
    Click Here to View Full Article

  • "How to Save the Internet"
    CIO (03/15/05) Vol. 18, No. 11, P. 70; Berinato, Scott

    CIO Magazine has tapped key figures in the information security community to suggest "Big Ideas" for dramatically improving the security of the Internet, excluding technological band-aids and "generic truths" such as user education. The results are diverse and intriguing: National Security Agency Information Assurance Directorate director Daniel Wolf proposes a massive, government-funded mobilization of his directorate, the Defense Department, private-sector and academic researchers, national research labs, and foreign partners to collectively tackle the problem. Appointing the information security equivalent of a surgeon general who reports to the secretary of the Homeland Security Department is another popular suggestion, while Oracle CSO Mary Ann Davidson suggests that eliminating all coding errors should be a primary goal of the development process, even if it involves imposing restrictions on coders' freedom and creativity. AT&T CISO Ed Amoroso proposes shifting control of security settings away from end users and back to network providers, a potentially profitable strategy for his firm. James Whittaker considers licensing programmable PCs, while Motorola CISO Bill Boni believes a cyber-Interpol could facilitate both domestic and international cybercrime investigations and speed up response time as well. Other suggestions include labeling Web sites with XML and metadata so that visitors can clearly see whether the sites are secure or not, or requiring all specification documents for software applications to include actions the software must not carry out. One of the most sweeping Big Ideas suggested is the construction of a secure Internet that exists parallel to the old one, and onto which all Net users will eventually be transferred.
    Click Here to View Full Article

  • "Artificial Intelligence Marches Forward"
    Scientist (03/14/05) Vol. 19, No. 5, P. 18; Spinney, Laura

    Robot technology development is increasingly influenced by physiology and neuroscience, and the time may come when robots complement research in those disciplines. Current artificial intelligence efforts focus on imbuing robots with anthropomorphism, which is more likely to make machines capable of learning in a human-like manner and acquiring intelligence that can be applied practically. Giulio Sandini of the University of Genoa's Laboratory for Integrated Advanced Robotics will supervise a sweeping international project to develop RobotCub, an anthropomorphic learning machine designed to shed more light on human cognition and human-machine interaction. MIT, Delft University of Technology, and Cornell University made headlines last month with the debut of bipedal robots that walk using passive dynamics, but Rolf Pfeifer of the University of Zurich's Artificial Intelligence Laboratory says more effective walking robots can be realized by powering and directing motion in a more human-like way: He notes that "The brain doesn't control the trajectory of the joints; rather, it initiates that trajectory and controls the material properties of the muscles." Luciano Fadiga of the University of Ferrara believes equipping RobotCub with a system modeled after the "mirror neurons" that allegedly enable people to understand and attribute meaning to the actions of others will give it a mechanism for learning. Meanwhile, Osaka University's Minoru Asada is applying developmental psychology to robots by designing self-developing structures linked to or embedded in artificial neural networks, which allows a robot to adapt to increasingly complicated tasks within its environment. Such breakthroughs are yielding new insights into human development and neurological disorders such as autism and hereditary cerebellar ataxia.
    Click Here to View Full Article
    (Access to this article is available to paid subscribers only.)

  • "Computing the Right Pitch"
    Computerworld (03/14/05) P. 36; Monash, Curt A.

    Analyst Curt Monash describes predictive analytics as "a replacement phrase for 'data mining' [that] roughly equates to 'applications of machine learning and/or statistical analysis to business decisions.'" Business decisions, as defined in most current and short-term applications, are forms of small-group marketing, and Monash lists questions that business analytics attempts to address, such as which customers are likely to churn; what types of offers will attract new customers or retain old ones; which prospective customers are most likely to be profitable, unprofitable, or to churn; and what content should be shown to a particular Web surfer when the next page is served. Monash says the information used to answer such questions can be culled from a diverse array of sources, including transactional data, customer contacts, and third-party data. The difficulty resides in the mathematical methods employed to address predictive questions: The process involves formalizing the problem as one of clustering or classification, and the answer as an algorithm that places each customer or prospect into one of a limited number of buckets. Data on previous prospects and customers serves as the evidence used to build the algorithm. Producing such algorithms is beyond the capabilities of conventional statistical techniques, and calls for methods that include neural networks, support-vector machines, and linear algebra. Monash says the best algorithm for any given problem usually consists of a "complex hierarchy of 'elementary' algorithms."
    Click Here to View Full Article
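
    The classification framing Monash describes can be reduced to a toy example: use past customers with known outcomes to place a new customer into a "likely to churn" or "likely to stay" bucket. The sketch below uses a simple nearest-centroid rule over made-up features; as the article notes, production systems rely on much more elaborate hierarchies of algorithms.

        # Toy churn classifier: bucket a new customer by distance to the centroid
        # of past churners vs. past stayers. Features and data are invented.
        def centroid(rows):
            cols = zip(*rows)
            return [sum(c) / len(rows) for c in cols]

        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        # Features: [months as customer, support calls last quarter, monthly spend]
        churned = [[3, 7, 20.0], [5, 9, 15.0], [2, 6, 25.0]]
        stayed  = [[24, 1, 55.0], [36, 0, 70.0], [18, 2, 60.0]]

        buckets = {"likely to churn": centroid(churned),
                   "likely to stay":  centroid(stayed)}

        new_customer = [4, 8, 22.0]
        label = min(buckets, key=lambda k: distance(new_customer, buckets[k]))
        print(label)   # -> "likely to churn"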

  • "Extending the Service-Oriented Architecture"
    Business Integration Journal (02/05) Vol. 7, No. 1, P. 18; Papazoglou, Michael P.

    The Extended Service-Oriented Architecture (xSOA) addresses SOA deficiencies in such areas as management, security, service choreography and orchestration, and service transaction management and coordination, writes Michael P. Papazoglou, computer science chair at Tilburg University and director of the university's INFOLAB. Papazoglou says the xSOA logically separates functionality according to the need to isolate basic service capabilities from more sophisticated service functionality, and proposes a new approach for constructing distributed applications in which services can be published, discovered, orchestrated, and managed to provide value-added service solutions and composite application mechanisms. The xSOA tries to boost the value of integrated process solutions by coordinating and managing the interaction of applications across enterprise borders. The xSOA is arranged as a pyramid, with managed services forming the top layer, composition services the middle layer, and service description and basic SOA constructs the bottom layer. The managed services in the top layer fall into the complementary categories of service operations management and open service marketplace management. Grid services employed in this tier generate an infrastructure for systems and applications that need services to be integrated and managed in dynamic virtual marketplaces. End-to-end quality of service and the resolution of critical application and system management concerns can potentially be provided through grid services. The xSOA delivers a design-oriented solution that reuses proven resources to provide integrated business processes that satisfy continuously shifting business needs; the architecture could simplify and accelerate enterprises' creation of value-added applications that reuse existing functionality and address business problems on demand as circumstances warrant.
    Click Here to View Full Article
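
    The layering can be pictured schematically: basic services at the bottom, a composition that orchestrates them in the middle, and a management wrapper that adds monitoring on top. The sketch below uses hypothetical order-processing services purely to show how the layers stack; it is not drawn from Papazoglou's article.

        # Schematic sketch of the three xSOA layers with invented services.
        import time

        def check_inventory(order):          # bottom layer: basic service
            return {"in_stock": True}

        def charge_customer(order):          # bottom layer: basic service
            return {"charged": order["amount"]}

        def place_order(order):              # middle layer: composition/orchestration
            if check_inventory(order)["in_stock"]:
                return charge_customer(order)
            return {"charged": 0}

        def managed(service):                # top layer: managed services (monitoring)
            def wrapper(order):
                start = time.time()
                result = service(order)
                print(f"{service.__name__} took {time.time() - start:.4f}s")
                return result
            return wrapper

        managed_place_order = managed(place_order)
        print(managed_place_order({"amount": 99.0}))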


 