Association for Computing Machinery
Welcome to the October 14, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE

How Perfect Is Too Perfect? Research Reveals Robot Flaws Are Key to Interacting With Humans
Robots and Us
Obama Won't Seek Access to Encrypted User Data
Report: Federal Cybersecurity Education Programs Must Expand
An Algorithm Might Save Your Life: How the Amazon and Netflix Method Might Someday Cure Cancer
Black Engineers Join Forces to Boost Diversity
When Hackers Talk, This Research Team Listens
Solving the Internet's Identity Crisis
Researchers Aim to Refocus Wandering Minds
UT Arlington Computer Scientist Using Deep Web Mining to Make Browsing Easier
SHA-1 Hashing Algorithm Could Succumb to $75K Attack, Researchers Say
'Psychic Robot' Will Know What You Really Meant to Do
Is Your Digital Information More at Risk Today Than 10 Years Ago?

How Perfect Is Too Perfect? Research Reveals Robot Flaws Are Key to Interacting With Humans
University of Lincoln (10/13/15) Elizabeth Allen

University of Lincoln researchers have found humans have more successful interactions with robots when the robots exhibit some of the same foibles as humans. The researchers, Ph.D. student Mriganka Biswas and professor John Murray, examined whether introducing cognitive biases could make robots' interactive behavior more human. "We have shown that flaws in their 'characters' help humans to understand, relate to, and interact with the robots more easily," Biswas says. The researchers gave two robots--ERWIN (emotional robot with intelligent network), which can express five emotions, and Keepon, a small robot designed to study social development in children--the ability to demonstrate "misattribution of memory" and the "empathy gap," two common cognitive biases. They programmed ERWIN to make mistakes when remembering simple facts, and programmed Keepon to show extremes of "happiness" or "sadness" during its interactions. Human participants reported more meaningful interactions with the robots when they exhibited these behaviors. Participants "paid attention longer and actually enjoyed the fact that a robot could make common mistakes, forget facts, and express more extreme emotions, just as humans can," Biswas says. The researchers presented their findings this month at the International Conference on Intelligent Robots and Systems in Hamburg.
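For illustration only, a minimal Python sketch of how a "misattribution of memory" bias like ERWIN's might be programmed; the facts, wrong answers, and error rate below are invented, not taken from the study:

    import random

    # Toy fact store; ERWIN's actual knowledge and dialogue system are
    # not described in the article, so these entries are hypothetical.
    FACTS = {"favorite_color": "blue", "last_game": "chess"}
    WRONG_ANSWERS = {"favorite_color": "green", "last_game": "checkers"}

    def recall(key, error_rate=0.2):
        # Misattribution of memory: occasionally return a plausible
        # but wrong answer -- the deliberate, humanlike slip.
        if random.random() < error_rate:
            return WRONG_ANSWERS[key]
        return FACTS[key]

    print(recall("favorite_color"))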


Robots and Us
MIT News (10/13/15) Peter Dizikes

The dream of full automation, as in Google's ambitions for its self-driving cars, is outdated and does not lead to the best outcomes, writes Massachusetts Institute of Technology professor David Mindell in his new book, "Our Robots, Ourselves." Mindell reaches back into history to make his case, noting that for decades promises have been made about the potential of full automation in areas ranging from space exploration to air travel, yet a happy balance between automation and human control has always prevailed. Cases cited by Mindell include the Apollo moon landing program and, more recently, modern commercial air travel. Although today's passenger planes are highly automated, they still require highly trained pilots who monitor the systems, make minor corrections, and take control of the plane when necessary. Mindell says automation exists on a scale of 1 to 10, in which 10 is full automation, and systems that achieve what he calls a "perfect 5," in which automation is balanced with human control, have consistently proven more effective than systems at either end of the scale. "There's an idea that progress in robotics leads to full autonomy," he says. "That may be a valuable idea to guide research...but when automated and autonomous systems get into the real world, that's not the direction they head."


Obama Won't Seek Access to Encrypted User Data
The New York Times (10/10/15) Nicole Perlroth; David E. Sanger

The Obama administration has decided not to compel U.S. technology companies to give law enforcement and intelligence agencies access to user data encrypted on digital devices. The White House is bowing to experts' argument that doing so would place millions of citizens' information in danger from hostile hackers. Computer scientist Peter G. Neumann lauds the decision, but warns law enforcement will still exert heavy pressure for access. "The [U.S. National Security Agency] is capable of dealing with the cryptography for now, but law enforcement is going to have real difficulty with this," he says. A study co-authored by Neumann contended installing a back door into encrypted communications would inevitably open that information to exploitation by Russian and Chinese intelligence agents, cybercriminals, and terrorist organizations, a conclusion shared by the White House's Office of Science and Technology Policy. President Barack Obama and his aides also are concerned that such a policy would set a precedent China and other countries could imitate, requiring U.S. technology companies to grant them the same access, according to officials. The U.S. National Security Council's Mark Stroh says his agency is collaborating with the private sector "to ensure they understand the public safety and national security risks that result from malicious actors' use of their encrypted products and services."


Report: Federal Cybersecurity Education Programs Must Expand
EdTech Magazine (10/08/15) D. Frank Smith

Federal cybersecurity education programs need to reach out to more institutions to fortify security in the future, according to a new National Academy of Public Administration report. "A well-trained cybersecurity workforce is essential to both government and private industry," the report says. "With cyberthreats growing, however, the United States faces a severe shortage of properly trained and equipped cybersecurity professionals." The study evaluated the effectiveness of two federal cybersecurity education programs offered at higher education institutions across the country--the National Centers of Academic Excellence in Information Assurance/Cyber Defense (CAE) and the CyberCorps: Scholarship for Service (SFS) programs. The report makes a series of recommendations, including bolstering both programs' hands-on education component, as well as identifying, tracking, and using performance indicators for the programs. In addition, the report advises expanding the SFS program to automatically cover the whole public sector, and including qualified two-year programs irrespective of their association with a four-year institution. The report's final recommendation is to stress to the U.S. Department of Defense's senior leadership the value of the CAE program in building the federal cybersecurity workforce.


An Algorithm Might Save Your Life: How the Amazon and Netflix Method Might Someday Cure Cancer
Salon.com (10/10/15) Pedro Domingos

Machine-learning algorithms underlying Amazon and Netflix recommendation engines might one day have a transformative effect on humanity, writes University of Washington professor Pedro Domingos in his book, "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World." Domingos says keeping machine learning opaque raises the specter of abuse and error, which makes it essential to understand the technology and its capabilities so it can be better controlled. Not all learning algorithms follow the same operational pattern, and those functional differences have consequences. For example, Netflix's recommender uses knowledge culled from subscriber tastes to suggest obscure films and TV shows instead of more popular choices, because customers' subscription fees are not enough to pay for steering everyone toward blockbusters. By contrast, Amazon's algorithm simplifies logistics and taps buyer preferences to direct customers toward more familiar products. Machine-learning researchers split into schools of thought, each subscribing to its own vision of a general-purpose master learning algorithm for discovering knowledge in any subject, but the shared thrust of their efforts is a universal algorithm that can extract all the knowledge in the world from data. This Master Algorithm could theoretically be the key to building domestic robots or curing cancer. In the latter case, the algorithm would sequence a tumor's genome, determine which drugs will work against it without harming the patient, and possibly design a new drug tailored to that patient.
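For flavor, a minimal Python sketch of the collaborative-filtering idea behind such recommenders; the toy ratings are invented, and neither company's actual system works this simply:

    # Toy user-item ratings; real recommenders learn from millions.
    ratings = {
        "ann":  {"Alien": 5, "Heat": 4},
        "bob":  {"Alien": 5, "Heat": 5, "Brazil": 4},
        "cara": {"Heat": 4, "Brazil": 5},
    }

    def recommend(user):
        # Weight each unseen item by the ratings of users whose
        # tastes overlap with this user's.
        seen = set(ratings[user])
        scores = {}
        for other, theirs in ratings.items():
            if other == user:
                continue
            overlap = len(seen & set(theirs))
            for item, rating in theirs.items():
                if item not in seen:
                    scores[item] = scores.get(item, 0) + overlap * rating
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend("ann"))  # ['Brazil']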


Black Engineers Join Forces to Boost Diversity
USA Today (10/07/15) Jessica Guynn

Software engineer Makinde Adeagbo officially launched the /dev/color nonprofit last week to address the scarcity of African-Americans in the technology industry. The group has recruited African-American engineers from top tech firms to mentor other black engineers, serve as role models, and elevate the next generation, according to Adeagbo. "We are a community that helps one another, and part of that is that younger people get to see these role models: black software engineers who are getting into management, or trying to start their own companies, or are becoming real experts in their technical domain," Adeagbo says. "Those examples help lead someone to believe: I can do this because someone like me is doing this." The nonprofit pairs each member with another to offer guidance and establish goals. In an online community, members build the skills and connections needed to advance in the tech industry, while social events give black engineers the opportunity to engage with a white-dominated sector. The nonprofit joins a growing list of African-American-founded organizations committed to solving the tech industry's racial diversity problem, including Black Girls Code, Code 2040, and the Hidden Genius Project.


When Hackers Talk, This Research Team Listens
National Science Foundation (10/08/15) Robert J. Margetta

Research being conducted at the University of Arizona (UA) and funded by the U.S. National Science Foundation seeks to combat cyberattacks by better understanding the people who launch them. The team is led by UA professor Hsinchun Chen, whose previous work includes developing systems that help ferret out drug smuggling and terrorism by examining the digital footprints criminals and terrorists leave behind. Chen and his team now are bringing the same approach to the underworld of hackers. Their work revolves around collecting as many "artifacts" of the hacker world as they can, primarily by automatically scraping hacker forums and IRC chat logs. They then perform automated text mining and sentiment analysis on the data to distill it into high-value intelligence about possible targets and likely threats. Although Chen and his team are still developing their techniques, they say they already have learned a great deal about the hacker subculture. For example, they have found a culture of "honor among thieves," in which a given hacker's reputation for, say, prompt payment affects how others treat him. Chen and his team hope to further develop their methods into tools that can automatically generate actionable threat intelligence from the conversations hackers conduct on the Dark Web.
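As a rough sketch of the kind of keyword-based text mining involved (the UA team's trained models are far more sophisticated), assuming posts have already been scraped into strings; the posts and term lists below are invented:

    import re

    posts = [
        "selling fresh CC dumps, escrow accepted, fast payment",
        "new exploit for the bank portal, PM me for price",
        "scammer alert: user x0r never paid, avoid",
    ]

    # Hypothetical keyword lists standing in for trained classifiers.
    THREAT_TERMS = {"exploit", "dumps", "ransomware", "botnet"}
    REPUTATION_TERMS = {"scammer", "avoid", "ripper"}

    def score(post):
        words = set(re.findall(r"[a-z0-9]+", post.lower()))
        return {"threat_hits": len(words & THREAT_TERMS),
                "reputation_hits": len(words & REPUTATION_TERMS)}

    for p in posts:
        print(score(p), "-", p)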


Solving the Internet's Identity Crisis
Georgia Tech News Center (10/08/15) Tara La Bouff

New tools developed by researchers at the Georgia Institute of Technology (Georgia Tech) should enable Internet service providers (ISPs) to better verify the true owner of a network and the legitimate paths its traffic should take. The team will develop new protocols as part of Resource Public Key Infrastructure, a multi-year project funded by the U.S. National Science Foundation. ISPs are responsible for routing billions of users to the right destinations every day, but there is a weakness in the trust relationship between routing protocols. Georgia Tech professor Russ Clark says the protocols are not designed to recognize impostors, especially not fake ISPs. The researchers plan to add a new type of server to the routing infrastructure and update the software inside the routers. They will gradually deploy the changes via the Southern Crossroads Internet Exchange, documenting their observations and creating a template for others across the U.S. to follow. Clark says network operators are aware of the solution but have been reluctant to try it out of concern that it would slow traffic during the transition. "We're going to prove that it's possible, work through the pains, and show others how to do it," he says.
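The core mechanism here is route-origin validation: checking a routing announcement against signed records of which network is authorized to originate which address block. A toy Python sketch, with made-up prefixes and autonomous system (AS) numbers:

    import ipaddress

    # Hypothetical Route Origin Authorizations (ROAs): each says "this
    # AS may announce this prefix, up to this prefix length."
    ROAS = [
        {"prefix": ipaddress.ip_network("203.0.113.0/24"), "asn": 64500, "max_len": 24},
        {"prefix": ipaddress.ip_network("198.51.100.0/22"), "asn": 64501, "max_len": 24},
    ]

    def validate_route(prefix, origin_asn):
        net = ipaddress.ip_network(prefix)
        covered = False
        for roa in ROAS:
            if net.subnet_of(roa["prefix"]):
                covered = True
                if roa["asn"] == origin_asn and net.prefixlen <= roa["max_len"]:
                    return "valid"
        # Covered by a ROA but announced by the wrong AS: an impostor.
        return "invalid" if covered else "unknown"

    print(validate_route("203.0.113.0/24", 64500))  # valid
    print(validate_route("203.0.113.0/24", 64666))  # invalid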


Researchers Aim to Refocus Wandering Minds
Notre Dame News (10/08/15) William G. Gilroy

University of Notre Dame researchers led by professor Sidney D'Mello say they have developed a prototype system that can detect when the mind of a student in a science, technology, engineering, or math (STEM) class is wandering. The system uses a commercial eye tracker and webcam to monitor a person's eye movements, facial features, and interaction patterns. If it determines the student's mind is wandering, the system can pause the session, notify the person, highlight the content, display the missed content in another format, or tag the content for future study. The researchers note the system also has the potential to assess course materials based on how well they hold students' attention. The team wants to develop a user interface intelligent enough to spot waning attention and take action. The U.S. National Science Foundation is funding the project, which also has potential applications in business, aviation, and the military. The researchers now are refining the system and testing it in STEM classes at an Indiana high school.
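A toy Python sketch of the detection step; the features, thresholds, and rule are invented for illustration (the actual system relies on models trained on eye-tracker and webcam data):

    def mind_wandering(fixation_ms, off_screen_ratio, blinks_per_min):
        # Tally simple gaze signals; two or more flags suggest the
        # student has tuned out. All thresholds are hypothetical.
        flags = 0
        flags += fixation_ms > 800        # long, "zoned out" fixation
        flags += off_screen_ratio > 0.3   # gaze drifting off content
        flags += blinks_per_min > 25      # elevated blink rate
        return flags >= 2

    if mind_wandering(fixation_ms=950, off_screen_ratio=0.4, blinks_per_min=18):
        print("Pause the session and re-present the missed content.")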


UT Arlington Computer Scientist Using Deep Web Mining to Make Browsing Easier
UT Arlington News Center (10/07/15) Herb Booth

University of Texas at Arlington (UT Arlington) researchers, in conjunction with colleagues at Qatar University and George Washington University, are developing a technique to automatically create mobile-friendly versions of websites that lack them. The researchers are building an app that uses Deep Web mining to act as a third-party system, examining the databases behind websites to see what information they contain and then designing a more mobile-friendly way to present it. "This will improve the user experience immensely, and also assist small companies and government entities that do not have the human or financial resources to redo their existing sites," says UT Arlington professor Gautam Das. For example, an airline website might ask a user to type in an airport code, but the app would access the database and automatically create a drop-down menu of available airports, saving time and avoiding the frustration of incorrectly entered information. "The cost savings on the host side, and the ease of use on the consumer side, could make this work beneficial to all parties," says UT Arlington College of Engineering dean Khosrow Behbehani.
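A toy Python sketch of the drop-down idea: mine the distinct values a field accepts from the site's backend data, then render them as choices instead of a free-text box (the airport codes below are invented):

    # Values a Deep Web miner might extract from the airline's database.
    airport_codes = ["ATL", "DFW", "LAX", "ORD"]

    def render_dropdown(name, options):
        # Emit a mobile-friendly <select> menu for the mined values.
        items = "".join('<option value="{0}">{0}</option>'.format(o)
                        for o in options)
        return '<select name="{0}">{1}</select>'.format(name, items)

    print(render_dropdown("origin_airport", airport_codes))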


SHA-1 Hashing Algorithm Could Succumb to $75K Attack, Researchers Say
IDG News Service (10/08/15) Peter Sayer

Researchers in Singapore, France, and the Netherlands say they have found a new way to attack the SHA-1 hashing algorithm, which is used to sign nearly one in three SSL certificates securing major websites, and they say the algorithm urgently needs to be retired. SHA-1 is a cryptographic hashing function that creates a fingerprint of a document so any modification can be detected. Weaknesses had already been identified in SHA-1, and most modern Web browsers will no longer accept SSL certificates signed with it after Jan. 1, 2017, a date selected based on the declining cost of the computing power required to attack the algorithm. The researchers who developed the latest attack say SHA-1 should be phased out sooner because it already costs just $75,000 to $120,000 to mount a viable attack using freely available cloud-computing services. Intel researcher Jesse Walker had previously estimated the cost would not reach that level, which he suggested was well within the reach of criminal syndicates, until 2018. Meanwhile, the Certification Authority/Browser Forum is examining a proposal to continue issuing SSL certificates signed with SHA-1 beyond the previously agreed cut-off date; the researchers strongly recommend against that proposal.
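To see what a hash fingerprint is (this illustrates the function, not the researchers' attack), a minimal Python sketch using the standard library:

    import hashlib

    def sha1_fingerprint(data):
        # The SHA-1 digest of a document, as a hex string.
        return hashlib.sha1(data).hexdigest()

    original = b"Pay to the order of Alice: $100"
    tampered = b"Pay to the order of Alice: $900"

    # Any edit changes the fingerprint, which is how signed documents
    # reveal tampering; a collision attack defeats this by finding two
    # DIFFERENT documents with the same fingerprint.
    print(sha1_fingerprint(original))
    print(sha1_fingerprint(tampered))
    print(sha1_fingerprint(original) == sha1_fingerprint(tampered))  # False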


'Psychic Robot' Will Know What You Really Meant to Do
UIC News Center (10/06/15) Jeanne Galatzer-Levy

Researchers at the University of Illinois at Chicago (UIC) are developing technology they say could lead to "psychic robots," which would be able to accurately anticipate a person's next action and behave accordingly. UIC graduate student Justin Horowitz says the team has developed an algorithm that can "see" intention, enabling robots to know how people are moving and to understand the underlying intent. As a result, a robot would be able to follow through on an ordinary action if it is interrupted. Horowitz says an artificial-intelligence system in a car could potentially use the algorithm to correct the vehicle's course if it swerves on ice--and react much faster than the driver. "The computer has extra sensors and processes information so much faster than I can react," he says. "If the car can tell where I mean to go, it can drive itself there." The researchers say the technology also could be used in prosthetics to interpret what a person wants to do even if their body cannot do it. "If you know how someone is moving and what the disturbance is, you can tell the underlying intent, which means we could use this algorithm to design machines that could correct the course of a swerving car or help a stroke patient with spasticity," Horowitz says.
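As a stand-in for the UIC algorithm (which models human reaction dynamics and is not detailed in the article), a toy Python sketch of the general idea of discounting a sudden disturbance to recover the underlying intent:

    # Observed positions from a steady reach, with a jolt injected
    # partway through (all values invented).
    observed = [0.0, 0.1, 0.2, 0.9, 0.4, 0.5, 0.6]

    def estimate_intent(samples, alpha=0.4):
        # Exponential smoothing: brief spikes are discounted, so the
        # estimate keeps tracking where the motion was headed.
        estimate, intent = samples[0], []
        for x in samples:
            estimate = alpha * x + (1 - alpha) * estimate
            intent.append(round(estimate, 2))
        return intent

    print(estimate_intent(observed))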


Is Your Digital Information More at Risk Today Than 10 Years Ago?
UNM Newsroom (10/06/15) Karen Wentworth

Researchers at the University of New Mexico and Lawrence Berkeley National Laboratory say data breaches are not happening any more frequently than they did a decade ago, and are not, in general, growing in size. A paper detailing their research won the Best Paper Award at the Workshop on the Economics of Information Security in June. The researchers used data published by the Privacy Rights Clearinghouse to construct a statistical model for analyzing trends and predicting future breaches. The data showed breaches are twice as likely to be caused by negligence as by malicious action, and that a breach exposing more than 5 million records is virtually certain to occur in the next three years. The model estimates the cost of data breaches over the next three years will total $180 billion. However, the model also shows breaches are not, on average, getting bigger or more frequent; rather, a "long tail" in the distribution of breach sizes is liable to distort public perception. The researchers say their study is a useful counterpoint to industry studies on data breaches, which often rely on private data and opaque analysis methods that are hard to fact-check.
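For flavor, a minimal Python sketch of fitting a heavy-tailed (here log-normal) model of breach sizes; the sample sizes are invented, and the study's actual model is more careful:

    import math

    # Hypothetical breach sizes (records exposed); the study drew on
    # the Privacy Rights Clearinghouse dataset.
    sizes = [1200, 5000, 33000, 150000, 2000000, 78000000]

    # Fit a log-normal: sizes whose logarithms look roughly normal
    # produce exactly the "long tail" that skews public perception.
    logs = [math.log(s) for s in sizes]
    mu = sum(logs) / len(logs)
    sigma = (sum((x - mu) ** 2 for x in logs) / (len(logs) - 1)) ** 0.5

    def prob_exceeds(threshold):
        # P(size > threshold) under the fitted log-normal.
        z = (math.log(threshold) - mu) / (sigma * math.sqrt(2))
        return 0.5 * math.erfc(z)

    print(prob_exceeds(5_000_000))  # tail probability of a 5M-record breach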


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe