Association for Computing Machinery
Welcome to the July 24, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


This Small Change Could Make a Big Difference for Accessible Technology
The Washington Post (07/23/15) Hayley Tsukayama

Yahoo, Facebook, and several other technology companies on Thursday announced they will develop standard job-listing language informing applicants that accessibility knowledge is "preferred," as part of a joint initiative between disability proponents, schools, and the tech industry to make all technology accessible from the outset. Consultant Henry Claypool says such language sends the message to applicants and universities that knowledge of accessibility issues is critical. "You can easily argue that if you don't have [accessible technology] that people are being excluded," he says. "And that's dangerously close to discrimination." Yahoo has constructed an accessibility lab on its campus, where employees and guests can experience first-hand what it is like to use technology with a disability, with the goal of raising awareness. The lab features computers and mobile devices with software that will read onscreen content, while various gloves, goggles, and other tools simulate disabilities. Meanwhile, Facebook's "empathy lab" enables employees to test accessibility products for the social media site. The collaboration with schools also helps increase the number of future tech designers who receive accessibility training. "Our hope is that together we can tackle this systemic challenge and find ways to make accessibility fundamental to one's learning path in technology," says Facebook's Jeffrey Wieland.


Object Recognition for Robots
MIT News (07/23/15) Larry Hardesty

Members of Massachusetts Institute of Technology professor John Leonard's team presented a paper last week at the Robotics Science and Systems conference on how simultaneous localization and mapping (SLAM) can enhance object-recognition systems for future robots. The system employs SLAM data to augment existing object-recognition algorithms, and its performance should continue to improve as computer-vision researchers create better recognition software and roboticists produce better SLAM software. The system also outperforms object-recognition systems that try to identify objects in still images by fusing information captured from different camera angles. The three-dimensional nature of a SLAM map means it can better differentiate between objects that are close to each other than single-perspective analysis, and the system uses the map to direct the segmentation of camera-captured images before feeding them to the object-recognition algorithm. The SLAM data enables the system to correlate the segmentation of images captured from different angles, improving performance. Object recognition could help tackle the challenge of "loop closure," in which a robot needs to identify previously visited locations so it can combine mapping data obtained from different perspectives. "This work shows very promising results on how a robot can combine information observed from multiple viewpoints to achieve efficient and robust detection of objects," says University of Washington professor Dieter Fox.
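The multi-view fusion step described above can be sketched in a few lines. This is a hypothetical simplification, not the MIT team's actual system: it assumes each viewpoint's detections have already been associated with segments of a shared SLAM map, and it simply averages per-class confidence scores across the viewpoints that observed each segment.

```python
from collections import defaultdict

def fuse_detections(view_detections):
    """Combine per-view object-recognition scores over a shared map.

    view_detections: list of dicts, one per camera viewpoint, mapping a
    SLAM map segment id to {class_label: confidence} scores.
    Returns the best class label per segment after averaging each
    label's score across the viewpoints that observed the segment.
    """
    totals = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for view in view_detections:
        for segment, scores in view.items():
            counts[segment] += 1
            for label, conf in scores.items():
                totals[segment][label] += conf
    fused = {}
    for segment, scores in totals.items():
        avg = {label: s / counts[segment] for label, s in scores.items()}
        fused[segment] = max(avg, key=avg.get)
    return fused
```

A detection that is ambiguous from one angle (say, mug vs. bowl) can be resolved once a second viewpoint weighs in on the same map segment, which is the intuition behind the performance gain reported above.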


Researchers Look to Bots, Big Data to Fix Software Flaws
eWeek (07/22/15) Robert Lemos

The problem of insecure code is growing worse, as increasing numbers of developers with no training in security build more and more software and apps using large code libraries that often harbor vulnerabilities. As a result, researchers increasingly are searching for automated solutions. Massachusetts Institute of Technology (MIT) researchers have developed a system called Code Phage that can take code from one program that counters a vulnerability and graft it into another piece of software. The researchers presented Code Phage last month at the ACM SIGPLAN conference on Programming Language Design and Implementation in Portland, OR. The technique focuses on flaws in image-reading software, and the researchers say they have successfully used Code Phage to fix 10 errors in seven programs, with each repair taking two to 10 minutes. "The long-term vision is that you don't ever have to write a piece of code that someone else has written, because we will find it and integrate them all together," said MIT's Martin Rinard. Another group looking for automated solutions to vulnerable software is the non-profit Draper Laboratory, which unveiled DeepCode this year. The DeepCode tool automatically assesses code to determine whether it is strong and secure, or weak and full of bugs.


MSU Gets $22.5M to Continue Evolution Research
Detroit News (07/22/15) Kim Kozlowski

Michigan State University (MSU) researchers are studying modern evolution in humans, nature, and computers at the BEACON Center for the Study of Evolution in Action. The goal of the center is to bring biologists, computer scientists, and engineers together to study evolution in real time and apply that knowledge to solve real-world problems. Areas of study include evolution of disease, understanding and reducing the evolution of antibiotic resistance, and evolution of "digital organisms" in computer environments. Headquartered at MSU, the BEACON Center comprises about 600 people ranging from senior professors to undergraduates doing summer research, says center director Erik Goodman. At any time, about 100 projects are underway in pure and applied research, along with education and outreach. Partners include North Carolina A&T State University, the University of Idaho, the University of Texas at Austin, and the University of Washington. The U.S. National Science Foundation (NSF) awarded MSU its first five-year grant of $25 million in 2010, and in July 2015 officials renewed the grant for an additional five years. "The renewal is because they have made such fantastic progress and have really changed the landscape in evolutionary computation and evolutionary biology," says NSF's George Gilchrist.


Computer Security Tools for Journalists Lacking in a Post-Snowden World
University of Washington News and Information (07/22/15) Jennifer Langston

Researchers at the University of Washington (UW) and Columbia University recently conducted a study of the computer security habits of 15 journalists across two continents and found security weaknesses in their technological tools and ad-hoc workarounds. The weaknesses included computer security tools that go unused because they introduce roadblocks to information gathering, inadequate solutions for basic tasks, and a failure to consider potential risks from cloud computing and other common practices. "Addressing many of the security issues journalists face will require new technical solutions, while many existing secure tools are incompatible with the journalistic process in one way or another," says Columbia professor Susan McGregor. The researchers interviewed journalists from the U.S. and France about how they communicate with sources, what strategies they use to organize notes and protect sensitive information, and their use of existing information security tools. They found although some reporters took steps to lessen certain types of security risks, others did not. "It's not just a matter of giving journalists information about the right tools to use--it's that the tools are often not usable," says UW professor Franziska Roesner. The researchers will present their study next month at the 24th USENIX Security Symposium in Washington, D.C.


Tackling Power and Resilience at Exascale
HPC Wire (07/21/15) Tiffany Trader

The exascale challenges of power and resiliency are the focus of two projects that recently received funding from the U.S. Department of Energy (DOE) Early Career Research Program. The funding recipients are University of Chicago professor Hank Hoffmann and Oak Ridge National Laboratory's Christian Engelmann. "We are going to build computer systems that are so complicated that very few people can optimize them, so we need to make the machines intelligent to handle some of this, or we'll have machines that are not useful," Hoffmann says. He earned his award for work on Constraint Language and Optimizing Runtime for Exascale Power Management, whose goal is enabling an exascale system to intelligently distribute power to optimize application performance via self-awareness. "A computer runs on a model, and self-aware computing lets it manipulate that model," Hoffmann notes. "It can change its behavior as it is running." Engelmann received his DOE award for work in resilience design patterns for extreme-scale high-performance computing (HPC), whose underlying concept involves matching repeatedly occurring resilience problems with flexible fault management across hardware and software. Upon identification of these patterns, researchers will produce reusable programming templates that enable resilience portability across different HPC system architectures.
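The kind of self-aware feedback loop Hoffmann describes can be illustrated with a generic proportional controller; this is an assumption-laden sketch, not his actual runtime. A node nudges its power cap toward whatever level meets a performance target, within hardware limits:

```python
def adjust_power_cap(cap, perf, target, gain=0.5, lo=50.0, hi=300.0):
    # One step of a self-aware feedback loop (illustrative only):
    # raise the node's power cap (watts) when measured performance
    # falls short of the target, lower it when there is headroom,
    # and clamp the result to hypothetical hardware limits lo/hi.
    error = target - perf
    new_cap = cap + gain * error
    return max(lo, min(hi, new_cap))
```

Calling this once per measurement interval lets the system "change its behavior as it is running," in Hoffmann's phrase, without a human hand-tuning the power budget.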


Research Reveals How Advertisers Play the Online Bidding Game
Cornell Chronicle (07/17/15) Bill Steele

Researchers led by Cornell University professor Eva Tardos have proposed a method for estimating what advertisers feel an ad is worth, based on what computer scientists call "no-regret learning." The approach involves a computer reviewing past performance and concluding, "if we'd done it that way, we'd be happier now." The researchers presented their method last month at the 16th ACM Conference on Economics and Computation (EC15) in Portland, OR. Their findings were based on analyzing a Bing search engine dataset. The researchers focused on nine advertising accounts that changed their bid several times per hour, indicating the decisions were being made by computers rather than humans. Because each auction played out differently, they averaged over a number of auctions, treating each one as a replay of the same game. On the assumption that bidding algorithms seek to minimize regret, the way advertisers adjusted their bids then provides a clue to the value they place on an ad. "Advertisers modify their bids reacting to what gets attention and what causes users to buy things," Tardos says. "Our methods help the platforms use the bidding data to better understand their advertisers." The research was selected for the conference's Best Paper Award.
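A toy version of the regret calculation makes the idea concrete. The sketch below illustrates the general no-regret principle rather than the paper's estimator: in a repeated second-price auction, it measures how much better one fixed alternative bid would have done in hindsight, and infers the value for which the observed bids look closest to regret-free.

```python
def utility(value, bid, competing_bid):
    # Single-item second-price auction: win if you outbid the
    # competition, and pay the competing bid when you win.
    return value - competing_bid if bid > competing_bid else 0.0

def max_regret(value, bids, competing_bids, alt_bids):
    # Largest average gain the bidder could have had in hindsight by
    # replacing every observed bid with one fixed alternative bid.
    actual = sum(utility(value, b, c) for b, c in zip(bids, competing_bids))
    best_alt = max(
        sum(utility(value, a, c) for c in competing_bids) for a in alt_bids
    )
    return (best_alt - actual) / len(bids)

def infer_value(bids, competing_bids, candidates, alt_bids):
    # A bidder running a no-regret algorithm should look near
    # regret-free at its true value, so pick the candidate value
    # whose hindsight regret is smallest.
    return min(
        candidates,
        key=lambda v: max_regret(v, bids, competing_bids, alt_bids),
    )
```

For a bidder who always bids 1.0 against competing bids it sometimes loses to, a candidate value of 1.0 yields zero regret while lower or higher values leave money on the table, so 1.0 is inferred.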


How Playing Computer Games Can Make the World Safer
BBC News (07/21/15) Paul Rubens

It is estimated there are five bugs in every thousand lines of code in commercially available software, yet only a handful of people have the skills to perform the mathematical verification needed to confirm a piece of software is error-free. However, the U.S. Defense Advanced Research Projects Agency (DARPA) has funded a program to find ways of crowdsourcing the software verification problem. One of the solutions the program proposes is to turn the verification problem into puzzle games. One such game is Binary Fusion, developed by SRI International in partnership with the Air Force Research Laboratory and the University of California, Santa Cruz. The game presents players with colored balls that represent good and bad values. The player then has to select "filters" generated by the game to separate the good and bad balls, assisting the verification process as they play. People who have played the game, including game designer Simone Castagna, say it is fun and note they had no idea until they were told that it was part of an effort to verify software integrity. The DARPA effort has so far produced six such games, which can be played on its Verigames.com website.


The Next Frontier for Artificial Intelligence? Learning Humans' Common Sense
ZDNet (07/17/15) Anna Solana

Ramon Lopez de Mantaras, director of the Spanish National Research Council's Artificial Intelligence Research Institute (IIIA), is dubious of predictions that strong, human-like artificial intelligence (AI) is only a decade or two away. He says fundamental changes in computing hardware will be necessary before researchers can even begin to imagine creating AI that would come close to approximating human intelligence. However, he says there also are more basic challenges that must be met first, such as determining how to imbue AI with a common-sense understanding of how the world works. In order to interact with the world effectively, robots and AI will need to be capable of understanding basic concepts, such as the fact that if you want to move an object attached to a rope, you need to pull and not push on the rope. Lopez de Mantaras and his team at IIIA are working to develop this sort of AI common sense by teaching a robot how to play a musical instrument called the Reactable. Developed at Pompeu Fabra University in Barcelona, the Reactable is played by manipulating various physical objects on a large, round multi-touch table. Developing the robot's capacity to learn how to play the Reactable through trial and error will be crucial for enabling AI to learn other skills in the future.


How Lesbians Who Tech Is Driving LGBTQ and Gender Equality in Technology
Tech Republic (07/21/15) Lyndsey Gilpin

Just a few years after its founding by Leanne Pittsford, Lesbians Who Tech has become an influential organization in the tech industry. Pittsford spent years working for gay rights organizations, whose biggest donors were mostly men. After she joined the tech sector by starting her own digital consulting agency, she found tech conferences were predominantly male, and she became determined to carve out a place for lesbian women in tech. Pittsford began organizing Lesbians Who Tech happy hour meetups in San Francisco in 2012. The meetups were immediately popular and Pittsford soon realized the potential for something larger. The first Lesbians Who Tech Summit was held in 2014, and there already are plans for European and East Coast events. In June, the group was awarded a $165,000 grant from the Arrillaga-Andreessen Foundation, enabling it to form a non-profit arm. The grant money will be used to fund two programs. The first is "Bring a Lesbian to Work Day," in which participants are matched with mentors in the tech field for one-day on-site events. The second is a coding scholarship fund, which will subsidize tuition for women who need financial assistance to attend coding academies.


The 2015 Top 10 Programming Languages
IEEE Spectrum (07/20/15) Stephen Cass

In cooperation with computational journalist Nick Diakopoulos, IEEE Spectrum has released its second annual ranking of programming languages. Forty-eight languages were included this year and are ranked based on weighting and combining 12 metrics from 10 data sources, including the IEEE Xplore digital library and GitHub. The rankings are weighted to broadly represent the interests of IEEE's members, but users can apply filters to tailor the rankings to their specific situation using the Top Programming Languages Web app. The first half of the top 10 remains the same as last year, with Java ranked number one, followed by C, C++, Python, and C#. The biggest move in the top 10 is statistical computing language R, which jumped from number nine last year to number six this year. R is useful for analyzing and visualizing big data, and its jump in the rankings reflects the growing importance of such capabilities. Meanwhile, PHP, JavaScript, and Ruby each fell one place to seven, eight, and nine, respectively. Matlab retained its number 10 spot. The 2015 rankings saw the addition of seven languages, including Apple's Swift and Nvidia's CUDA, and the removal of ASP.NET, which IEEE Spectrum determined did not meet its definition of a programming language.
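The weighting-and-combining step behaves roughly like the sketch below, with made-up metric names, values, and weights; the actual IEEE Spectrum methodology uses 12 metrics and its own normalization scheme.

```python
def rank_languages(metrics, weights):
    """Score languages by a weighted sum of min-max normalized metrics.

    metrics: {language: {metric_name: raw_value}}
    weights: {metric_name: weight}
    Returns language names sorted by descending combined score.
    """
    names = list(weights)
    lo = {m: min(v[m] for v in metrics.values()) for m in names}
    hi = {m: max(v[m] for v in metrics.values()) for m in names}

    def score(vals):
        total = 0.0
        for m in names:
            span = hi[m] - lo[m]
            # Normalize each raw metric to [0, 1] before weighting so
            # that large-scale metrics do not dominate small ones.
            norm = (vals[m] - lo[m]) / span if span else 0.0
            total += weights[m] * norm
        return total

    return sorted(metrics, key=lambda lang: score(metrics[lang]), reverse=True)
```

Changing the weights is exactly what the Web app's filters do: the same raw data yields a different ordering for, say, an embedded developer than for a web developer.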


Expect More Prize Competitions to Address Tough IT, High-Tech Challenges
Network World (07/21/15) Michael Cooney

The U.S. government's use of competitions or crowdsourcing to address sometimes-complex problems has been very successful, and those accomplishments could lead to many more such contests in the future. In the five years since the America COMPETES Act was enacted, Challenge.gov has prompted more than 400 public-sector prize competitions, awarding about $72 million in prizes. The U.S. Office of Science and Technology Policy (OSTP) recently issued a report on the use of 97 such competitions during fiscal year 2014. The report found in fiscal year 2014, 79 percent of challenges were designed to achieve multiple goals, and 56 percent of all prizes and challenges leveraged partnerships to expand their reach, impact, and scope. In addition, 38 percent of all prizes and challenges conducted in fiscal year 2014 sought software or analytics solutions such as applications, data visualization tools, and predictive models and algorithms. The Obama administration, in an attempt to build on this momentum, will hold an event this fall to emphasize the role prizes play in solving important national and global issues. Going forward, several new competition techniques could emerge, according to the OSTP report.


Stanford's Christopher Manning and a Columbia Colleague Discuss Talking, Thinking Machines
Stanford University (07/20/15) Kim Martineau

In an interview, Stanford University professor Christopher Manning and Columbia University professor Julia Hirschberg discuss natural language processing (NLP), which Hirschberg describes as "using computational techniques--algorithms and statistical analysis--to learn, understand, and produce human language content." Teaching language to computers is challenging, given the constant evolution of exceptions to standard rules and new expressions, according to Manning. He stresses training computers on large bodies of text--the big data approach--has taught them to learn language much faster. "If we collect lots of parallel text--in our target language and translated from our native language--we can use statistics to infer which words typically follow other words, and how words are translated in context, to produce new translations," Hirschberg says. Manning predicts neural machine translation methods, in which algorithms process information in layers to obtain deeper understanding, will greatly improve computers' ability to infer context and recognize similar meanings. However, Manning does not think the advance of NLP will enable computers to replace writers, noting writing is an art form that computers cannot duplicate. "We value the original ideas that only a human author can provide," he says.
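The statistical intuition Hirschberg describes, inferring how words are translated from parallel text, can be illustrated with a toy co-occurrence model. This is a deliberate oversimplification: real systems estimate translation probabilities with alignment models or, as Manning notes, neural networks.

```python
from collections import Counter, defaultdict

def translation_table(parallel_pairs):
    # Toy word-translation model: for each source-language word, count
    # which target-language words co-occur with it across aligned
    # sentence pairs, and keep the most frequent co-occurring word.
    cooc = defaultdict(Counter)
    for src_sentence, tgt_sentence in parallel_pairs:
        for s in src_sentence.split():
            for t in tgt_sentence.split():
                cooc[s][t] += 1
    return {s: counts.most_common(1)[0][0] for s, counts in cooc.items()}
```

Even this crude counting separates "le" from "chat" after a few sentence pairs, because each source word ends up co-occurring most often with its own translation.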


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe